LLM agents: the next platform shift in B2B software

Published on September 25, 2023 by Chris Rainville

B2B software has been on a relentless growth trajectory over the last two decades, made possible by the rise of AWS and the hyperscalers in the mid-2000s. Since 2010, software has grown from ~7% of the S&P 500 to ~20%, and the “digital transformation” of the enterprise is well under way. Cloud infrastructure has made it vastly easier to deploy and maintain B2B applications, spurring on the shift towards SaaS.

While we’re still early in the transition to the cloud, the next platform shift might already be here: high-performing pre-trained models open to all. We recently shared our investing framework for generative AI applications: many of these are next-generation B2B software products that use LLMs to interpret or generate work products in a way that wasn’t possible before. We see this rise of AI-native B2B software as a secular trend, which will also have second-order effects on how SaaS companies work.

LLMs augment software products in two fundamental ways – they enable:

i) better understanding and processing of unstructured data;

ii) more scalable creation of content and work product.

The first notable B2B applications of LLMs are assistants to human users (e.g. Copilot for code, Jasper for copy) – but we hypothesise that AI agents will deliver the biggest step-change in value creation. While an assistant can make a single worker more efficient, an agent-based application could theoretically play the role of an entire team or function.

AI agents: Infinitely scalable workers

An LLM’s abilities are amplified when combined with emerging agent frameworks like BabyAGI, AutoGPT and ChatArena. These frameworks break down workflows into a series of discrete roles, each handled by an AI agent, which are given a defined objective and work together to automate a complex task.
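The decomposition described above can be sketched in a few lines. This is a hedged illustration, not how BabyAGI or AutoGPT are actually implemented: `call_llm`, the role names and the pipeline are all hypothetical, and a real system would back `call_llm` with an actual model API.

```python
def call_llm(role: str, objective: str, context: str) -> str:
    """Stand-in for a real LLM call; returns a canned response."""
    return f"[{role}] output for objective '{objective}' given: {context}"

class Agent:
    """One discrete role in the workflow, with a defined objective."""
    def __init__(self, role: str, objective: str):
        self.role = role
        self.objective = objective

    def run(self, context: str) -> str:
        return call_llm(self.role, self.objective, context)

def run_pipeline(task: str, agents: list) -> str:
    """Chain the agents: each works on the previous agent's output."""
    context = task
    for agent in agents:
        context = agent.run(context)
    return context

pipeline = [
    Agent("planner", "break the task into steps"),
    Agent("researcher", "gather relevant information"),
    Agent("writer", "draft the final work product"),
]
result = run_pipeline("summarise Q3 procurement spend", pipeline)
print(result)
```

The key design choice is that the orchestration logic is ordinary code: the "team" is just a list of role-scoped calls, which is what makes these systems easy to reconfigure.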

Initially, this will take the form of human-AI collaboration. Take call centres, for example. Traditionally, a first-line agent handles triage, and routes the customer to a more specialised department. In the near term, a software agent might take on the first of these steps, then hand off to a human (or help that human work more effectively). Taking this further, a fully agent-based or autonomous system would require no hand-off whatsoever. One recent investment of ours enables exactly that – more on this to come...

Early agent-based LLM applications are tackling complex tasks with increasing autonomy – including automating complex procurement negotiations (see Keelvar), solving hard data science problems and replicating the functions of rudimentary equities trading firms. While still early, it’s not hard to imagine an end state where large parts of organizations are fully autonomous. Our best guess is that in the medium term we’ll see enterprise functions staffed by a combination of AI agents and human users – an arrangement similar to the human-machine cooperation frequently seen in multiplayer video games, in call centres, and increasingly in defense (enabled by Auterion).

We believe LLMs will enable a new era of autonomous, highly flexible software. Until now, workflow software has always been opinionated: it rigidly mandates a process that the business must adhere to. Now, things are different. AI agents can conceivably map out a company’s processes and can change the configuration of software dynamically (a vision pioneered by Qatalog). A world in which armies of implementation partners who customize software are replaced with AI agents is closer than we might imagine.

The trade-off with this flexibility is that, combined with the recent improvements in developer efficiency, the moats for software companies have never been shallower. It’s not clear what the moats for AI-software companies will be, but our best guesses are verticalization, access to proprietary datasets and brand.

Second-order effects of AI agents

This new paradigm will have important knock-on effects on B2B SaaS. First, AI agents are likely to lead to new go-to-market strategies for SaaS companies. With potentially fewer humans in the loop, there is likely to be a further decoupling of seats/users and contract value. To capture value, LLM applications will need a new scaling factor for pricing: variable value-based pricing is a plausible solution.
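The seat-to-usage decoupling above can be made concrete with some simple arithmetic. All rates and volumes here are invented for illustration: the point is that once agents do the work, the billing metric has to track output rather than headcount.

```python
# Traditional per-seat model vs. a variable, value-linked model.
# SEAT_PRICE and PRICE_PER_TASK are hypothetical rates.
SEAT_PRICE = 100.0        # $/seat/month
PRICE_PER_TASK = 0.05     # $ per agent-completed task

def seat_invoice(seats: int) -> float:
    """Monthly invoice under seat-based pricing."""
    return seats * SEAT_PRICE

def usage_invoice(tasks_completed: int) -> float:
    """Monthly invoice under usage-based pricing."""
    return tasks_completed * PRICE_PER_TASK

# A team that shrinks from 20 seats to 5 cuts seat revenue by 75%,
# but if its agents complete 40,000 tasks a month, a usage metric
# captures that value instead.
print(seat_invoice(5))        # 500.0
print(usage_invoice(40_000))  # 2000.0
```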

Moreover, AI-native software has well-defined inputs but probabilistic outputs – a significant departure from B2B software built for a specific task. At the core of AI-native software is one or several “black box” LLMs managed by a third-party model provider, which effectively reduces control over the end product. SaaS companies will need to adapt as their products behave stochastically rather than deterministically. The role of the product manager will change, to include validating outputs, observing anomalies and ensuring consistent UX. The role of AI teams may also widen significantly. These teams were typically responsible for a clearly defined feature like a product recommendation engine or search capability – but now ML is much more closely tied to user experience, with LLMs being employed as a user interface, a key feature and occasionally an orchestration layer. The increased importance of AI within software companies also implies an increased emphasis on model validation, governance and control (see our latest thinking on this topic here).

Lastly, the cost structure of SaaS businesses will change. Open-access LLMs are delivered as a service, and represent an additional direct cost for SaaS businesses. Early LLM applications we’ve spoken to are therefore highly conscious of their “LLM token burn” and are careful to optimize their model usage to maintain SaaS-like gross margins. Similar to the multi-cloud strategies adopted by enterprises, companies are using multiple models of varying sizes, trading off performance for cost. This variable cost is yet another reason to adopt usage-based pricing. Another solution is to intermediate user interactions with LLMs with a quality control or routing model (see our LLMs for healthcare and education blogpost for a deeper look at strategies adopted in those regulated markets).
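The multi-model strategy above amounts to a routing policy: send each request to the cheapest model rated for it, and escalate only when needed. This is a minimal sketch under invented assumptions: the model names, per-token costs and the integer "difficulty" rating are all hypothetical placeholders for whatever quality signal a real router would use.

```python
# Models ordered cheapest first: (name, $ per 1K tokens, max difficulty handled).
MODELS = [
    ("small-model",  0.0005, 1),
    ("medium-model", 0.003,  2),
    ("large-model",  0.03,   3),
]

def route(difficulty: int) -> str:
    """Pick the cheapest model rated for the request's difficulty."""
    for name, _cost, max_difficulty in MODELS:
        if difficulty <= max_difficulty:
            return name
    return MODELS[-1][0]  # fall back to the most capable model

def token_cost(name: str, tokens: int) -> float:
    """Direct cost of serving `tokens` tokens on a given model."""
    cost_per_1k = {n: c for n, c, _ in MODELS}[name]
    return tokens * cost_per_1k / 1000

# Easy requests burn 60x fewer dollars per token than hard ones:
print(route(1), token_cost(route(1), 2000))
print(route(3), token_cost(route(3), 2000))
```

The same structure also accommodates the quality-control intermediary mentioned above: the routing function is simply replaced by a small model that scores each request before dispatch.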

Beyond the second-order effects above, there are new questions to be answered:

  • Software companies’ value is traditionally tied to stickiness and lock-in. This usually comes from teams of humans adopting new tools and learning new processes. Could AI-native software shrink the overall market because there are fewer moats and lower friction to switch?
  • If we shift away from contractual recurring revenue towards value- or consumption-based pricing, does this result in more volatile revenue for software companies? (This was observed earlier this year with a handful of recently listed public companies.)
  • What are the implications for the workforce? How will teams be trained to work alongside AI agents, so that they can be more productive than ever?
  • What impact will AI agents have on services-heavy business models, systems integrators and consultants?

Much is uncertain, but we are confident LLMs will fundamentally change B2B software. The productivity gains from upcoming LLM and agent-based applications will enable entirely new B2B products that alter the day-to-day lives of many software users.

Just as the cloud defined the last two decades, LLMs could be the defining platform shift for the next 20 years of software. If you're building an agent-based B2B application, we'd love to hear from you.