Meet Harpal Khing: AI, intuition and how to 'power up' next-gen VCs

We sat down with Harpal to talk end-to-end AI integration, algorithmic bias, and how our tech stack is giving VCs superpowers.

Credit: Dan Taylor

Harpal Khing is one of our Senior Machine Learning Engineers, working across data analytics and NLP research.

To give you an insight into his work, we sat down to talk end-to-end AI integration, algorithmic bias, and how Moonfire’s proprietary stack is giving VCs new superpowers. You can find out more about Harpal on his bio page here.

You’ve worked across tech at all levels – from startups to giants like Samsung. How did those experiences lead you to VC?

Founding my own AI startup, SpeakEasy, in 2017 was like taking a masterclass in launching impactful products. The experience was invaluable, and it set me on the path to where I am today.

I’ve always been fascinated by machine learning, and when SpeakEasy was acquired back into Samsung I joined their core AI R&D Lab and went even deeper into NLP and transformer-based language models. Moving back to the UK at the outset of the pandemic in 2020, I was looking for a way to apply this tech to solve real-world problems again – to recapture that startup buzz. That was when I realised that VC could give me that same sense of accomplishment and contribution, on an even bigger scale.

What attracted you to Moonfire in particular?

Moving into VC as a Senior Machine Learning Engineer offered me the opportunity to take my knowledge and experience and apply it in a new way – helping other founders to grow their own startups and shape the future. That said, not all VCs are approaching AI with the same goals or level of sophistication.

Once I discovered Moonfire, it was clear that they were best-in-class, and the best match for my experience and philosophy. What was really exciting was that the AI tech I’ve been developing over the years – transformer models – fits so well with what Moonfire wants to do. The team is applying these models to VC in a way no one else is. I was so impressed by their depth of knowledge and their ability to execute on this technology; the vision just made sense. So now I get the value of helping other founders move forward, in a way that lines up with my technical expertise. It’s a genuine win-win.

How does this technology – these transformer models – show up in Moonfire’s stack? What does it offer cutting-edge VCs?

It’s everywhere! Even before the first interaction with a founder, it’s part of our screening and evaluation pipeline. We source information on founders and companies from across Europe, from a huge range of data sources. How can we work out which of these are actually good investment opportunities for Moonfire in terms of sector, scale, investment thesis and so on? These are questions that our transformer models answer beautifully. They take all this data, parse the essence of a company against our investment strategy, and help us understand how closely they line up. That’s just one way we're integrating these models into the very earliest stages of the venture pipeline.
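Moonfire hasn’t published its models or data pipeline, but as a rough illustration of the idea – scoring how closely a company lines up with an investment thesis – here is a minimal sketch using an off-the-shelf sentence-transformer and cosine similarity. The model name, thesis and example companies are purely illustrative assumptions.

```python
# Illustrative sketch only – not Moonfire's actual pipeline or models.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

# A made-up investment thesis and a few made-up inbound companies.
thesis = (
    "Pre-seed and seed-stage European software companies applying "
    "machine learning to healthcare, finance or developer tools."
)
companies = {
    "Acme Health": "AI triage assistant for primary-care clinics in the UK.",
    "BrickWorks": "Marketplace for reclaimed building materials.",
    "LedgerBot": "LLM copilot that automates bookkeeping for small businesses.",
}

thesis_emb = model.encode(thesis, convert_to_tensor=True)
names = list(companies)
company_embs = model.encode([companies[n] for n in names], convert_to_tensor=True)

# Cosine similarity as a rough "thesis fit" score for ranking the inbound pipeline.
scores = util.cos_sim(thesis_emb, company_embs)[0]
for name, score in sorted(zip(names, scores.tolist()), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

In practice a production system would combine far more signals – team, traction, sector, stage – than a single description embedding, but ranking by similarity to the thesis is the core of the idea described here.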

AI is fantastic at working with huge amounts of data in a logical fashion, but a good VC also has to trust their conviction. How do you see technology interacting with the intuitive side of investing?

The beauty of these models is that, because we've built them in-house, they're ours to tune and tweak as we see fit. Our AI helps us make good decisions, but our investors help our AI get better too.

Let’s say that we have a company come through the pipeline with a lot of green flags, but we choose not to invest based on the intuition of the investor team. We can take the reason for that intuitive rejection and feed it back into the machine learning process, so the next time the model identifies a similar company, it can grade it with this updated intuitive function baked in.
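The details of this feedback loop aren’t public, so treat the following as a hedged sketch of the general pattern: when the team overrides the model on intuition, that override becomes a fresh labelled example for an incrementally trained classifier. The feature vectors, labels and scikit-learn classifier choice here are all assumptions made for illustration.

```python
# Illustrative sketch of an intuition-feedback loop – not the actual system.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(3, 384))   # stand-in for company embeddings / features
y_hist = np.array([1, 0, 1])         # made-up history: 1 = invested, 0 = passed

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_hist, y_hist, classes=[0, 1])  # initial pass on historical decisions

# Later: the model liked a company, but the team passed on intuition.
# That override becomes a new labelled example and nudges the model.
x_override = X_hist[:1]
clf.partial_fit(x_override, np.array([0]))       # team's decision: pass

print(clf.predict_proba(x_override))             # updated probability of "invest"
```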

So you see AI as complementing humans rather than competing with them?

Absolutely. There is a lot of fear around AI, but I don’t see it as this big scary thing. VC and PE are still cottage industries in that they are relationship-driven, and therefore human-driven. Technology isn’t going to eat those jobs; everyone still needs humans to make decisions and be accountable at the end of the day.

Instead, it’s about giving those people – our investors – superpowers. If we can massively reduce the number of hours they spend on day-to-day activities, particularly the non-investing ones, then we are adding real value. We want our investors to spend the majority of their time actually investing – which is what they enjoy doing!

We're not replacing people; we're just trying to power them up.

Another subject you’ve done a lot of work on in the past is algorithmic bias. How does that inform your work at Moonfire?

That’s a really important subject, and one that manifests itself at Moonfire in a unique way, on a number of levels. Firstly, through our Investment Committee meetings: every single member of the team – myself included – discusses every single investment that we make. So I might put a greater weight on a founder’s educational background, for example, whilst another investor on the team will bring other ways of thinking, and other biases. We all work together to make decisions, and therefore train the AI collectively too.

Programmatically, there are ways to reduce and remove biases using mathematics. But actually learning from the investors, and having the investors learn from us, in these investment meetings is probably the most productive way to improve our human and machine decision making in real time. We have the technology, but at the end of the day it’s still about people.
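For context on the mathematical side of that answer, one textbook technique is to remove the component of each feature vector that lies along an estimated bias direction, so that attribute can no longer drive the score. This is a generic sketch of that projection step, not Moonfire’s method; the data and bias direction are random placeholders.

```python
# Generic debiasing-by-projection sketch – not Moonfire's approach.
import numpy as np

def remove_bias_direction(X: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Project each row of X onto the subspace orthogonal to bias_dir."""
    b = bias_dir / np.linalg.norm(bias_dir)
    return X - np.outer(X @ b, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # toy feature vectors for five companies
bias_dir = rng.normal(size=8)      # toy estimate of an unwanted direction

X_debiased = remove_bias_direction(X, bias_dir)
b = bias_dir / np.linalg.norm(bias_dir)
print(np.allclose(X_debiased @ b, 0))  # True: no variation left along that direction
```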

How is this tech benefiting not only your in-house work, but the founders in your portfolio too?

The most obvious way is that, because we’re machine learning and AI experts – and this is such a hot topic right now – we can offer not only abstract advice but practical support in architecting end-to-end machine learning solutions. I can say: I’ve done this at my own startup, and here’s how. So they can come to us with ideas around product-market fit and the AI integrations they want to build, and we can literally tell them how to execute on that.

Secondly, if our investors aren't bogged down in their inbox for hours a day, they have more time to actually put towards portfolio support. We’re building automations to help them cope with the inundation of stimuli they face each day – email, WhatsApp, socials and other sources – and get the most value from their time. Imagine a spam filter for all of your communication channels that’s hyper-focussed on enabling you to be the best possible VC!

Our tools help to clear our investors’ desks, as well as giving them status alerts on our companies and so on. That’s an area we’re working on constantly – using AI to unlock our investors’ time, so they are free to do what they do best, including supporting our portfolio companies.
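As a toy version of that "spam filter for everything", here is a minimal relevance classifier over message text – a bag-of-words model trained on a handful of made-up examples. The messages, labels and model choice are illustrative assumptions, not the production automation.

```python
# Toy "VC-relevance filter" sketch – not the production automation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training examples: 1 = needs the investor's attention, 0 = can wait.
messages = [
    "Deck attached – raising a seed round for our ML infra startup",
    "Board meeting moved to Thursday, please confirm",
    "Webinar invite: 10 productivity hacks for busy professionals",
    "Your software licence is about to expire – renew today",
]
labels = [1, 1, 0, 0]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(messages, labels)

new_msg = ["Founder intro: two engineers building an LLM evaluation tool"]
print(triage.predict_proba(new_msg)[0][1])  # probability it deserves attention now
```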

What are you seeing at the frontier of AI today that will be indispensable to VCs in five years?

These large language models (LLMs) that everyone is excited about are great, but they're still missing a key factor: there is no switch for truthfulness. If you're writing a novel then being truthful is not so important, and you can still add a lot of value creatively using AI. But if you want to build LLMs that an investor can speak to and get factual, verifiable information – that just isn't here today.

There should be a lever you can pull that tells the AI: feel free to fabricate and be creative; and another that tells it to shut off that creativity and stick to factual knowledge with strong verifiability. These hallucinations can cause problems because they can sound incredibly convincing to someone who isn’t a domain expert. There are a lot more steps involved in ensuring the veracity of the claims a model makes, and that’s what I’m looking out for right now in this space. Again, it’s about giving humans that upgrade.
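That "factuality lever" doesn’t exist as a product feature today, but one crude approximation is to check each generated claim against a trusted corpus and flag anything unsupported for human review. The sketch below uses embedding similarity as the support test; the threshold, model and example documents are assumptions, and real verification would need far stronger evidence than a similarity score.

```python
# Crude "stick to verifiable facts" guardrail – an assumption about how such a
# lever might work, not an existing product feature.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Trusted corpus: documents we treat as verifiable (filings, data room, CRM notes).
corpus = [
    "Acme Health raised a £2m seed round in March 2023.",
    "LedgerBot has 40 paying customers as of Q2.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

def supported(claim: str, threshold: float = 0.6) -> bool:
    """Surface a model-generated claim only if it sits close to a trusted source."""
    claim_emb = model.encode(claim, convert_to_tensor=True)
    return bool(util.cos_sim(claim_emb, corpus_emb).max() >= threshold)

for claim in [
    "Acme Health closed a £2m seed round in 2023.",        # paraphrases the corpus
    "Acme Health has signed a partnership with the NHS.",  # no support – flag it
]:
    print(claim, "->", "show" if supported(claim) else "flag for human review")
```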