Newsletter – 29/08/2024
Differentiation in the AI hype cycle, how to hire engineers as an early-stage startup, and why machines learn.
Hello and welcome to the August newsletter.
This month Mike talks about how founders can differentiate themselves amid the hype and uncertainty of AI. As we approach hiring season, we're also re-upping Mike's advice for how early-stage startups should approach hiring engineers.
If you missed it, we asked some of the brilliant entrepreneurs we've invested in about what brought them together with their co-founders, and how their individual skills combine to make a winning team. Watch their answers here.
Enjoy.
Mattias and the Moonfire team
Here's a quick roundup of cool stuff we saw this month:
For the first time in software, people are building products and platforms where future progress hinges on scientific progress, not just engineering and product development. This makes it harder to predict progress from past experience, because reasoning about the pace and direction of fundamental scientific discovery, as opposed to things we already know how to build, is hard. It's all too easy to look at the progress made so far – think GPT-3 to GPT-4o – and assume it will continue unabated.
How do you, as a startup founder, differentiate yourself amid this hype and uncertainty?
This was one of the topics of a panel at our latest Pulse Summit – with me; Megan Quinn, investor, builder and board director; Mehdi Ghissassi, Director and Head of Product at DeepMind; Fabian Roth, Co-founder and CEO of fore ai; and Rosalyn Moran, CEO of Stanhope AI – and I wanted to explore some of the points that we discussed.
Transformers have been true to their name, dominating the field for good reason. They're incredibly powerful and have a lot of road left to run, both in terms of model improvement and new use cases.
The risk, however, is that people's expectations of progress might include things not natively provided by transformers – like memory and active learning. And such misaligned expectations could drive divestment and push us into another AI winter.
That said, there's already a lot of great work happening to augment the transformer architecture and address some of its limitations, such as mitigating the quadratic scaling of self-attention to reduce computational cost while increasing context sizes.
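To make the quadratic-scaling point concrete, here is a minimal NumPy sketch of standard self-attention; the function, shapes, and numbers are illustrative assumptions on my part, not anything discussed on the panel. The scores matrix has one entry per pair of tokens, so compute and memory grow with the square of the sequence length – exactly the cost that newer attention variants try to avoid.

```python
# Minimal sketch of standard self-attention, showing where the quadratic cost comes from.
import numpy as np

def self_attention(q, k, v):
    # q, k, v have shape (seq_len, d).
    # The scores matrix is (seq_len, seq_len): O(seq_len^2) compute and memory.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

seq_len, d = 4096, 64
q = k = v = np.random.randn(seq_len, d)
out = self_attention(q, k, v)  # materialises a 4096 x 4096 score matrix
```

Doubling the context from 4,096 to 8,192 tokens quadruples that score matrix, which is why so much current work focuses on chunked, sparse, or linearised attention.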
In parallel, others are exploring approaches that can address aspects of artificial intelligence not natively provided by transformers. Our portfolio company Stanhope AI draws inspiration from human intelligence, working from the neural architectures and computational underpinnings of the brain. They are bringing the concept of active inference from neuroscience into AI as the foundation for software that allows robots and embodied platforms to make autonomous decisions in the way the human brain does.
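For readers new to the term, here is a rough sketch of the standard formulation from the active inference literature (not Stanhope AI's specific model): an agent maintains beliefs q(s) about hidden states s and acts and perceives so as to minimise variational free energy given its observations o,

$$
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o),
$$

so minimising F both improves the agent's model of the world and, when actions are chosen to reduce expected free energy, drives it towards the observations it prefers.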
The idea is not to replace transformers, but to complement them. As Rosalyn explained, while deep learning is effective for tasks like image generation or MRI analysis, applications requiring reasoning about dynamic environments, such as autonomous systems, could really benefit from human-like intelligence.
But progress and differentiation in AI also involve getting the basics right. "You've got to walk the walk, and build specific applications that are actually bringing value to what you're doing," said Fabian. Companies need to be willing to invest the time and resources to bring AI prototypes to production quality.
As Fabian noted, a trustworthy solution built with transformers requires trustworthy, accessible data to reason over and function effectively. A lot of value can be created just by getting the plumbing and integration right. "I think being able to glue together APIs from different applications on your phone or computer and automate that with natural language is going to be extremely powerful very soon."
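As a loose illustration of that "glue" idea – entirely hypothetical; the tool names and the llm() placeholder below are not a real product or API – the pattern is usually that a language model chooses which existing application API to call and with what arguments, and ordinary code does the rest.

```python
# Hypothetical sketch of natural-language API glue. The llm() call and the
# "tools" below are placeholders for illustration, not a real library.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {
    "get_calendar": lambda day: f"Meetings on {day}: standup at 09:00",
    "send_message": lambda to, text: f"Sent to {to}: {text}",
}

def llm(prompt: str) -> dict:
    # In a real system this would ask a language model to choose a tool and
    # its arguments; here the response is hard-coded so the sketch runs.
    return {"tool": "get_calendar", "args": {"day": "Monday"}}

def handle(request: str) -> str:
    plan = llm(f"Pick a tool for: {request}. Available: {list(TOOLS)}")
    return TOOLS[plan["tool"]](**plan["args"])

print(handle("What's on my calendar on Monday?"))
```

The differentiation then comes from the integrations you own and the data you can reach, not from the model itself.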
Really, the key to differentiation in AI is timeless: build products that people want. Start with the user and their needs, not the technology. That doesn't mean you need to solve a "hair-on-fire" problem. Look at ChatGPT, the fastest-growing consumer application in history. It didn't address a hair-on-fire issue at launch and, as the next big thing so often does, it felt like a toy. But giving people the ability to rapidly extract information at high throughput is as important a problem as any that has come before.
However, a big difficulty right now is predicting where the big players are going with the technology. A lot of startups are building on top of these platforms, only to have their lunch eaten. "As a startup, you need to solve a very specific problem and find the right business case around it, that is specific enough that it's not going to be solvable by a generic system," advised Fabian.
It's also about thinking ahead – envisioning the new ways of doing things that AI can enable or augment. "As these things develop and you improve latency, cost, quality, [...] there will be things that we just can't think of now," said Mehdi. "Think about Google Maps: Uber was only possible because of Google Maps."
This means thinking carefully about the composition of your founding team. You need at least one person who understands the field and has good, practical experience in it. "If you're not able to have someone in the team who has that muscle, it's going to be hard to know what's really possible," said Mehdi. You also need to understand the specific user needs, workflows and risks of the vertical you're going after, and adapt your team's expertise to it.
Right now, LLMs can be applied to more creative, administrative, and financial tasks if you're willing to accept the risk of errors. But areas like healthcare, defence, or any domain where human lives are at stake are much riskier: get a prediction wrong and the cost can be fatal.
We need people building in these high-risk spaces, but doing so requires a deep understanding of the risks and of how to mitigate – or ideally avoid – them. Be prepared for a longer, more challenging path, even if those risks also make for a less crowded market.
Differentiation lies not just in the technology itself, but in how you apply it to a given product domain to do something that wasn't possible before. Think critically about which new fundamental cognitive capabilities are available to you, what they're useful for, and how you can apply them to your domain to create a genuinely differentiated, useful experience.
Mehdi captured it nicely: "If you're building a company now, and you're in it for the next decade, in five years some of the limitations will have been solved, and you'll benefit from that. It's a hard balancing act: how much risk do you take and how do you see the future? Perhaps you should double down on the view of the world that you have and that you think is unique."
– Mike
My advice from a couple of years ago on how early-stage startups should approach hiring engineers is still relevant here.
I've received feedback from folks saying it's a surprisingly involved interview process for a startup, but I think that's exactly what you need, especially for senior engineering hires early on, who have the potential to be very influential and move into CTO or VP roles.
In part I, I talk about how to establish a robust, repeatable process that balances technical assessment with cultural alignment. The key is to design an interview loop that is both rigorous and respectful of candidates' time, with enough flexibility to accommodate varying profiles. By focusing on these principles, you can build an engineering foundation that supports your long-term vision.
In part II, I break down each step of the technical assessment process. I outline how to effectively evaluate candidates' coding skills, problem-solving abilities, and technical depth through structured interviews and take-home challenges. The goal is to identify engineers who not only meet the technical requirements but also align with your team's culture and values. That's how you build a team that's both technically sound and cohesive.
– Mike
"The Social Radars: Replit Co-Founders, Amjad Masad & Haya Odeh"
Great episode with the co-founders â and husband and wife â of coding platform Replit. Amjad and Haya talk about the early days and the vision they couldn't let go of: bringing coding to everyone.
"Why Machines Learn: The Elegant Maths Behind Modern AI" by Anil Ananthaswamy
The current AI revolution is founded on mathematics that dates back centuries. The groundwork was laid long ago, but it took the rise of computer science and 1990s video game chips to drive the latest wave. By demystifying the maths behind AI, Anil reveals that both natural and artificial intelligence may share the same fundamental rules. To realise AI's full potential, we need to understand that shared heritage – and the potential limits set by that mathematical foundation.
That's all for this month.
Until next time, all the best,
Mattias and the Moonfire team