Large language models (like GPT-3, BERT, and their friends) grab headlines for their ability to generate fluent text from a short prompt, or to understand text exceptionally well and answer questions about it. But these kinds of deep learning models can also be powerful tools for VCs.
At Moonfire, we maintain our own large language model to help us find new companies that align with our investment philosophy, and to help our founders hire and partner with the best people. Here’s our thinking behind it and what we’re doing with it.
How does a large language model help in VC?
Many applications of machine learning, in essence, take a real-world concept, convert it into a mathematical representation, do some maths on that representation, and then convert the result back into the real-world domain. Language models use language – words – as their real-world input.
We make heavy use of text embeddings. An embedding is a mathematical representation of a piece of text as a point in high-dimensional space – in other words, a function converts the text into a point in geometric space. Imagine a room in which every word in any language can be found at a specific point. Words that are close to each other are contextually similar, and you can use the spatial relationships between the points to do all sorts of fun things. This is how a lot of modern natural language processing works but, instead of the three numbers you’d need for a point in 3D space, we use 2048 numbers to represent a point in a 2048-dimensional space!
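To make that concrete, here’s a toy sketch in Python – with made-up four-dimensional vectors standing in for real 2048-dimensional embeddings – of how cosine similarity measures closeness in embedding space:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors: 1 means
    pointing the same way, 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" (real ones would have 2048 dimensions).
cat = np.array([1.0, 0.9, 0.1, 0.0])
kitten = np.array([0.9, 1.0, 0.0, 0.1])
invoice = np.array([0.0, 0.1, 1.0, 0.9])

print(cosine_similarity(cat, kitten))   # close to 1: contextually similar
print(cosine_similarity(cat, invoice))  # close to 0: unrelated
```

The vectors here are invented for illustration; in practice the numbers come from a trained model, but the geometry works the same way.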
Why do this? Because once you’ve represented lots of different things as points in that space – that could be individual words, pieces of content, people, companies – you can start to join the dots. If two things are closer together, they’re similar in some way. If they’re further apart, they’re more different.
We apply this to look for interesting new companies that fit with our philosophy.
How we use language models at Moonfire
We start with our investment thesis. We set out, in words, our perspective on a given industry – where it was before, where it is now and where it’s headed. We answer the same questions for the verticals within that industry, before drilling down into specific tools: what are the attributes of successful tools? What are the future models? Finally, we add a few one- or two-sentence descriptions of companies we think are interesting in the space.
We then use natural language processing to look at all of our theses. We’re not keyword matching, but creating numerical representations of our entire investment philosophy – a series of points, a geometric shape that represents our outlook across all of our different focus areas. You can picture it as something like a manifold.
With that mathematical model of our philosophy in place, we can use it to evaluate new companies, plotting them against this meta-thesis and determining how closely they align. And when one seems like a good match, we can suggest it to our investors.
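As a rough sketch of the idea – not our actual pipeline, and with tiny made-up vectors in place of real embeddings – you could score a company by its best similarity to any point in the thesis:

```python
import numpy as np

def normalise(m: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length so dot products become cosines."""
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

def thesis_alignment(thesis_points: np.ndarray, company_vec: np.ndarray) -> float:
    """Score a company by its best cosine similarity to any thesis point."""
    sims = normalise(thesis_points) @ normalise(company_vec)
    return float(sims.max())

# Toy 3-dimensional embeddings: two thesis statements, two companies.
thesis = np.array([
    [1.0, 0.0, 0.0],   # e.g. a digital-health thesis point
    [0.0, 1.0, 0.0],   # e.g. a gaming thesis point
])
company = np.array([0.9, 0.1, 0.0])      # sits near the first thesis point
off_thesis = np.array([0.0, 0.0, 1.0])   # far from every thesis point

print(thesis_alignment(thesis, company))     # high: close to the thesis
print(thesis_alignment(thesis, off_thesis))  # low: far from the thesis
```

Taking the maximum over thesis points is one simple choice; an average, or a weighted blend across focus areas, would be equally plausible.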
We won’t dismiss a company that doesn’t match up, but it helps us prioritise those that are closer to what we’re looking for out of the thousands of companies our sourcing engine gives us each week.
But it can also help us find the unexpected. It might suggest a company that aligns with our thesis, but that combines things we haven’t thought of combining. Say one part of our investment thesis covers digital tools that help with depression, and another covers innovative games. The model might surface a company developing a gamified platform that helps people deal with depression more effectively than traditional methods. It can help you stumble on founders who share your outlook, but in a way you hadn’t thought of before.
Choose your own adventure
And it’s useful beyond deal flow. If you think of the domain model of venture capital, you have companies, people, jobs, investors, funding rounds – all rich with words. If you create vector representations of all these things and do some maths to connect them up, you can start doing interesting stuff with them. Choose your own adventure!
For example, we use this embedding space for hiring, both for ourselves and our founders. We can help our founders explore the talent space they’re operating in, finding people who fit their existing team and mission or, say, finding them a CTO who’s similar to another successful CTO they admire.
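A minimal sketch of that kind of lookup – again with toy vectors in place of real profile embeddings – ranks candidates by cosine similarity to an admired CTO’s embedding:

```python
import numpy as np

def rank_by_similarity(reference: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return candidate row indices, most similar to the reference first."""
    ref = reference / np.linalg.norm(reference)
    cands = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = cands @ ref                # cosine similarity per candidate
    return np.argsort(-sims)          # negate to sort descending

# Toy 3-dimensional profile embeddings.
admired_cto = np.array([1.0, 1.0, 0.0])
people = np.array([
    [0.9, 1.1, 0.1],   # person 0: very similar profile
    [0.0, 0.2, 1.0],   # person 1: different background
    [1.0, 0.8, 0.2],   # person 2: fairly similar profile
])

print(rank_by_similarity(admired_cto, people))  # person 0 ranked first
```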
We’re also using it to build a sort of funding pathfinder for founders. Our founders can log into a dashboard and say “I want to raise $6m on my Series A” and then, using our language model, we can suggest investors that have backed companies like them, and who they should talk to at that firm based on the content that person posts on social media and the sorts of companies and geographies that they’ve invested in previously.
Language models can also go beyond assessing similarity. Picture the space in which all these points are plotted as a room. You could start to see how you move across that room to another position. For example, you could say “I want to become an awesome founder like X”, and then plot a path through the people and the content that connects you to help you move toward that position. You’re not only mapping your similarity to a thing, but also seeing the path you need to carve to move in that direction.
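One simple way to sketch such a path – assuming plain linear interpolation between two toy embedding points, where a real system would be more sophisticated – is to generate waypoints you could then match people and content against:

```python
import numpy as np

def path_waypoints(start: np.ndarray, target: np.ndarray, steps: int = 4) -> list:
    """Linearly interpolated waypoints from one embedding toward another."""
    ts = np.linspace(0.0, 1.0, steps + 1)
    return [start + t * (target - start) for t in ts]

# Toy 2-dimensional embeddings: where you are, and the founder you admire.
you = np.array([0.0, 0.0])
founder_x = np.array([1.0, 2.0])

for point in path_waypoints(you, founder_x, steps=4):
    print(point)  # each waypoint is a position in the embedding "room"
```

Matching each waypoint against the embeddings of people and content (nearest neighbours, as above) would then suggest who to meet and what to read at each step along the way.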
Basically, if there are words involved in the analysis – and VC has plenty of them – it can go through this pipeline. So there’s a lot more left to explore and plenty more use cases for these models, both for us and our founders, and we’re excited to keep exploring. If you’re interested, get in touch!