Moonfire · 13 June 2023

Meet Jonas Vetterle: Using language models to evaluate startups, embracing LLMs, and the next big thing after AI

We sat down with Jonas to talk about building a programmatic investment pipeline, how he’s using LLMs, and why he’s reading up on quantum computing.


Jonas Vetterle is our Staff Machine Learning Engineer, focusing on our data-driven sourcing and evaluation pipeline. To give you an insight into his work, we sat down to talk about building a programmatic investment pipeline, how he’s using large language models (LLMs), and why he’s reading up on quantum computing. You can find out more about Jonas on his bio page.

How did you get into machine learning and VC?

In a past life, I was an economist working with organisations like the European Investment Bank, Deutsche Telekom, and the London Stock Exchange. On one project I looked into Google’s business model in great detail. That involved a lot of machine learning, which first sparked my interest in the topic. I started by doing online courses in my spare time and enjoyed them so much that I decided to go back to university to do a master’s in machine learning.

At first, I wasn’t sure I could do it because I don’t have a programming background, but it turned out I could teach myself a lot of it. The maths side had a lot in common with the econometrics work I’d done in my old job.

From there, I joined the AI startup Kortical, where I spent three years as a Machine Learning Engineer helping to build a machine learning SaaS product with a heavy focus on natural language processing. After that, I worked at BenevolentAI, building cutting-edge NLP models for finding and disambiguating biomedical entities such as diseases and proteins.

What is Moonfire doing with AI?

Of course, we’re investing in AI companies because it’s a technology we see disrupting industries in the future. But it goes much deeper than that. From the start, Moonfire was created as a new kind of fund that uses AI from the ground up. We’ve reimagined what a VC firm looks like and how it operates, integrating AI into how we work at every stage. So as well as investing in other companies, we’re also a kind of early-stage startup ourselves, building the software we use to pick winning companies.

What aspect of Moonfire’s AI work do you focus on?

One focus is our data-driven sourcing and evaluation pipeline. Traditionally, narrowing down a list of investment targets involves a lot of manual work. There are so many opportunities out there that it’s impossible for anyone to look at them all, so we build software to do it much more quickly.

The pipeline uses a mixture of rules-based and machine learning methods. For example, we can screen out companies that have already raised a Series A – because we’re a pre-seed and seed fund – then use natural language processing to match how a company describes itself against our investment thesis in that sector or sub-category. We might also use in-house language models to work out how close the company’s description is to what we think is a winning hand in the future.
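As a rough illustration of the two stages Jonas describes, here’s a minimal sketch in Python: a rules-based screen followed by embedding similarity between company descriptions and a sector thesis. The field names, thesis text, example companies, and choice of sentence-transformers model are all illustrative assumptions, not Moonfire’s actual setup.

```python
# Illustrative two-stage screen: a hard rules filter, then embedding
# similarity between each company description and a sector thesis.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Made-up records; a real pipeline would draw on a company database.
companies = [
    {"name": "Acme Health", "stage": "pre-seed",
     "description": "AI triage assistant for primary care clinics."},
    {"name": "BigCo AI", "stage": "series_a",
     "description": "Enterprise ML platform for large retailers."},
]

# Stage 1: rules-based screen, e.g. drop anything that has raised a Series A.
candidates = [c for c in companies if c["stage"] in {"pre-seed", "seed"}]

# Stage 2: score the remaining descriptions against the investment thesis.
model = SentenceTransformer("all-MiniLM-L6-v2")
thesis = "Software that automates clinical workflows in healthcare."
thesis_emb = model.encode(thesis)

for c in candidates:
    score = float(cos_sim(thesis_emb, model.encode(c["description"])))
    print(f"{c['name']}: {score:.3f}")
```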

I also built a Chrome extension that our investors use to source opportunities. When they’re on a company website, it shows all the information that has already been gathered on it, and allows the investor to add the company to the pipeline or market map. This is also hooked up to our AI models in the background so we can make predictions on the fly, for example when we encounter companies that aren’t already in our database.

How has GPT-4 affected your work?

It’s been really pivotal for us. We built the first version of the pipeline over the last few years and trained the initial models, and since then we’ve kept replacing them with better ones. I recently spent three months building a model that gave us a five-percentage-point increase in performance. But the same week I finished it, GPT-4 came out and blew my model out of the water: we got another 20% performance increase just by using GPT-4 out of the box. So it’s a step change in our line of work.

The last time something comparable happened was in 2017, when the Transformer architecture was introduced. Natural language processing practitioners have been using pre-trained Transformer models like BERT ever since; they’re much smaller than current LLMs but architecturally very similar. We’re now going through our pipeline and replacing components with LLMs like GPT-4 wherever they beat what we currently have.
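To give a flavour of what swapping a component for GPT-4 could look like, here’s a hedged sketch of zero-shot classification through the OpenAI API, standing in for a bespoke trained classifier. The prompt, thesis wording, and example description are illustrative, not Moonfire’s actual pipeline.

```python
# Zero-shot thesis-fit check via GPT-4, replacing a bespoke trained classifier.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

description = "AI triage assistant for primary care clinics."
prompt = (
    "Does the following company fit a thesis of 'software that automates "
    "clinical workflows in healthcare'? Answer yes or no, then give one "
    f"sentence of reasoning.\n\nCompany: {description}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the classification as deterministic as possible
)
print(response.choices[0].message.content)
```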

There are also new use cases. We’re exploring a chat-based agent that our investors can use to be more efficient. For example, they could ask it to update the CRM with some information they got in an email from a founder. At the moment that might involve a chain of events – taking the information, inputting it into the right part of the CRM, maybe sending a Slack message to someone else – but with the agent all of that would happen in the background.
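One way such an agent could be wired up is with the OpenAI function-calling interface: the model decides which tool to invoke and supplies the arguments, and application code executes the call. The `update_crm` tool and its fields below are hypothetical placeholders, not Moonfire’s actual CRM integration.

```python
# Sketch of a tool-using agent: the model chooses to call a (hypothetical)
# update_crm tool with extracted arguments; our code then executes it.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "update_crm",
        "description": "Update one field of a company record in the CRM.",
        "parameters": {
            "type": "object",
            "properties": {
                "company": {"type": "string"},
                "field": {"type": "string"},
                "value": {"type": "string"},
            },
            "required": ["company", "field", "value"],
        },
    },
}]

def update_crm(company: str, field: str, value: str) -> None:
    # Placeholder: a real implementation would call the CRM's API here,
    # and perhaps send a Slack message as a follow-up action.
    print(f"CRM updated: {company}.{field} = {value}")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content":
               "A founder emailed to say Acme Health has hired a CTO. "
               "Update the CRM accordingly."}],
    tools=tools,
)

# Execute whichever tool calls the model decided to make.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    update_crm(**args)
```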

How does your work benefit your portfolio companies?

Many companies are investigating whether to use GPT or other machine learning tools in their products. Previously, you had to write a load of code to train the models yourself. Now you can just make an API call. Because we’ve been working with machine learning for many years, we’re in a position to help portfolio companies that are newer to the game.

We also want to expose more models to our community. We’re building a private app for interacting with our models and we want to give access to founders, LPs, and maybe also our angel network. That’ll allow us to do things like matching our portfolio companies to potential Series A investors by comparing the investment history of those investors with a description of what the company does.
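The matching Jonas describes could be sketched with the same embedding-similarity approach as above: embed a summary of each investor’s past deals and rank by similarity to the company’s description. The fund names and deal histories here are invented for illustration.

```python
# Rank potential Series A investors by how closely a summary of their past
# deals matches a portfolio company's description. All data is made up.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")

company = "AI triage assistant for primary care clinics."
investors = {
    "Fund A": "Series A investments in digital health and clinical software.",
    "Fund B": "Series A investments in fintech infrastructure and payments.",
}

company_emb = model.encode(company)
ranked = sorted(
    investors,
    key=lambda name: float(cos_sim(company_emb, model.encode(investors[name]))),
    reverse=True,
)
print(ranked)  # investors in descending order of fit
```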

Which other technologies are you excited about?

AI is the big one. Beyond that, there are so many exciting things outside our investment focus, like fusion energy, quantum computing, and space travel. It’d be very exciting if there were a breakthrough in even one of these during our lifetime, but it seems like there’s a lot of progress being made in all of them.

They also all have the potential to enhance each other. That’s how breakthroughs have worked historically – they all compound. So if you’re a technologist, and an optimist, it’s hard not to be excited about the future at the moment.

I’m especially enjoying learning more about quantum computing. It’s a bit early for us to invest in the space because the hardware isn’t mature enough, but I want to be ready for the day when we have quantum computers that are large and stable enough to run meaningful computations. That’s why I started reading up on it two years ago. I’m also a contributor to Qiskit, IBM’s open-source quantum computing framework. I feel like writing actual quantum code helps me understand the underlying concepts a lot better and stay ahead of the curve.
