The challenges of investing in AI

Published on
April 10, 2024
Toby Coppel

This is not AI’s first “moment”. At Mosaic, founded in 2014, we experienced the resurgence of interest in AI in 2013-2015, triggered by the success of ImageNet in 2012 and Google’s acquisition of DeepMind in January 2014. This period underscored deep learning’s potential, leading us to invest in novel computer vision-based applications such as Auterion, Nexar, Vortexa, and Veriff.

The evolution continued. By 2018, a conversational AI revolution was under way, driven by advancements in NLP and the emergence of open-source libraries like TensorFlow. This prompted us to back NLP-driven innovations such as Shipamax, Mavenoid and Keelvar. Meanwhile, pre-trained transformer models were growing exponentially in size. ChatGPT’s launch in November 2022 was an inflection point, highlighting the transformative power of large-scale Generative AI (GenAI) models.

My partner Simon Levene and I began our careers in 1996 in Silicon Valley, shortly after the launch of Netscape – a time when the world began to understand the potential of the internet – and we were having many of the same debates then as we are now about whether the new technology was a genuine paradigm shift. That was the most exciting time in my career: a true explosion of ideas; a new generation of young, idealistic founders; and a new communications platform with untold potential. Today, the new capabilities unleashed by LLMs seem poised to surpass even the internet in terms of impact. And personally, I am enjoying my work more than ever before – the intellectual challenge of peering through the fog to evaluate transformative AI-first applications, and of collaborating with founders to navigate the shifting landscape. It is an exhilarating time but also an extremely challenging one. The mist is both dense and dynamic, as new technical breakthroughs arrive at a speed I have not seen in 25 years. The unknowns make it challenging to pick early when making long-term bets on applied AI companies.

At Mosaic, we are not actively investing in the foundation model layer, which is too capital intensive for a boutique early-stage fund. It is unclear to us whether any of the large-scale models (GPT-4, Claude, Gemini, Cohere, etc.) will maintain a sustainable performance advantage over the others. Open-source models are also catching up on performance (Llama 3, Mistral, DBRX), and there is a genuine question as to whether a fine-tuned small language model (SLM) can effectively compete with a large, generalized model – notwithstanding the fact that an SLM is cheaper and easier to operate. Beyond this, we are seeing a number of Mosaic-backed founders using multiple models to drive their products, selecting different models for training vs inference, and at the query level based on quality vs cost.

We have been 100% focused over the past 18 months on applied AI businesses; recent investments include Coram, Parloa, and Podcastle. My partner Chandar recently shared our investing framework with regard to LLM applications. We continue to look for applications with characteristics such as: novel proprietary datasets, domain-specific models, deep integrations into existing tools and workflows, or those that re-imagine business processes using agents. Since the competitive moat in an LLM-powered world is likely to be lower than in the world of traditional SaaS products, we think new software products and services should also deliver delightful user experiences, fast ingestion of unstructured data, and an unfair advantage in their go-to-market approach. They may not even be using LLMs as their core differentiator, but rather as an enabling technology that delivers a 10x better experience for their customers.

We are exploring investments in both co-pilots, where AI can scale and augment high-value, specialized or scarce labor, and agents that take actions and handle tasks independently. We find the second category particularly attractive where an application is replacing a services vendor (in Parloa’s case) or there is no incumbent at all. These businesses are “selling work” to their customers and being paid by the task or the hour, with a very high ROI for their customer.

Despite the received wisdom that the large tech incumbents will be tough to beat, we are genuinely interested in native AI-based horizontal applications, e.g. sales & marketing, customer service, supply chain or software development. We believe that if you can re-imagine an entire workflow with LLMs, reducing friction and delivering value to customers, you can build a great business. And we are spending time on specific vertical applications, where unique datasets may be easier to obtain and where solving a customer problem with full end-to-end automation underpins a robust business case for the buyer.

Over the next couple of months, we are hosting a series of entrepreneur-led roundtables in sales & marketing, software development, ecommerce and marketplaces, finance ops, productivity and healthcare ops, to discuss and share learnings on common issues and challenges. We continue to learn and update our views on the space - we usually have more questions than answers - and we find that regular brainstorming discussions with founders can help improve visibility around the AI landscape. What qualities to look for in a long-term defensible AI-first application remains the key topic for us. If you are building in this area and would be willing to share your perspective, we would love to hear from you. 

And if you enjoy studying the history of AI and discussing whether it truly is the next platform shift, my partner Benedict Evans’ recent presentation is a must-read.