
What Managed AI Investments Actually Look Like in a Portfolio

  • Apr 8
  • 5 min read

The AI companies worth backing are not the ones with the best models. They are the ones with the best delivery. That distinction is becoming clearer across enterprise markets. AI adoption is broad, but durable value creation is still narrow. McKinsey’s 2025 global survey found that 88% of organizations report using AI in at least one business function, yet only a minority have scaled it in ways that materially change enterprise economics. High performers separate themselves not by access to models alone, but by redesigning workflows, setting governance, and building operating models that convert AI into measurable business outcomes.


For investors, that gap matters. It means “AI exposure” is not a thesis by itself. In a portfolio, managed AI investments should look less like a collection of impressive demos and more like a disciplined set of businesses that can repeatedly deploy, govern, integrate, and monetize AI in real operating environments. That is especially true in applied deep tech, where technical sophistication only matters if it survives implementation.

At Azafran Capital Partners, that is how we think about AI. We are not underwriting model theater. We are underwriting delivery. In our view, the durable AI winners in MedTech, IoT, and enterprise B2B will not be defined by raw model novelty alone. They will be defined by defensible intellectual property, workflow fit, implementation discipline, and the ability to create value inside environments where buyers care about reliability, accountability, and time to impact. That is what managed AI investments actually look like in a serious portfolio.

Model quality gets attention; delivery quality creates returns 

The last two years trained the market to ask, “How good is the model?” That is an incomplete question. Enterprise buyers are already moving beyond one-model thinking. Andreessen Horowitz’s 2025 survey of 100 enterprise CIOs found that organizations increasingly use multiple models and are becoming more pragmatic about where value actually comes from. They are optimizing for budget, deployment flexibility, vendor fit, and use-case performance rather than simply defaulting to the largest or most talked-about model.


That behavior compresses advantage at the model layer for many B2B companies. If the customer can swap models, the moat is not the model. The moat is what sits around it: proprietary workflows, domain-specific data, integration depth, compliance architecture, deployment speed, and customer outcomes. 

This is why managed AI investing has to be portfolio-aware. A fund should not just ask whether a company has AI. It should ask whether the company can repeatedly deliver AI into production and capture value after the pilot phase. McKinsey’s 2025 research points to the same conclusion: organizations that create enterprise value from AI are much more likely to redesign workflows, assign senior ownership, and define how human validation and risk controls work.


The real portfolio question is not “Who has AI?” It is “Who can scale it?” 

This is where many AI portfolios get messy. They end up overexposed to narrative and underexposed to implementation. BCG’s research captures the problem cleanly. In its 2025 study of more than 1,250 companies, only 5% were achieving AI value at scale, while 60% reported minimal revenue and cost gains despite substantial investment. Another 35% were scaling efforts and seeing some returns, but had not yet achieved full transformation. That is a staggering dispersion of outcomes for a market that often gets discussed as though value creation were automatic.


For LPs and institutional investors, that should change how AI exposure is evaluated. A credible AI portfolio should not be stuffed with companies whose value depends on perpetual future adoption. It should be weighted toward businesses that can move from experimentation to scaled deployment with repeatability. In other words, portfolio construction in AI should reward operating evidence, not just category adjacency.

That logic becomes even more important in applied deep tech. In MedTech, IoT, and enterprise B2B, deployment is not a light-touch software event. It often involves regulated workflows, legacy systems, hardware environments, security requirements, and cross-functional change management. The delivery burden is higher. Precisely for that reason, the moat can be stronger. 


Managed AI investments are built on delivery systems, not AI claims 

A managed AI investment, in our view, has four characteristics. First, it solves an expensive problem in a constrained environment. Buyers do not fund AI because it is interesting. They fund it because it reduces labor intensity, improves throughput, lowers risk, strengthens decision quality, or unlocks a revenue path that was previously too costly or too manual.

Second, it has implementation architecture that survives contact with reality. That means data integration, workflow design, governance, and human oversight are part of the product, not cleanup work left to the customer. Deloitte’s enterprise AI reporting shows organizations are increasingly focused on turning use into value, with productivity and efficiency gains cited most often by those already realizing benefits.


That is a delivery story as much as a technology story. Third, it creates reusable deployment capability. The market is starting to recognize the value of high-touch implementation models when they are built on reusable primitives rather than one-off services. Andreessen Horowitz’s analysis of “forward-deployed” enterprise AI delivery makes the point directly: embedded teams create value when they build on repeatable platforms, opinionated workflows, and standardized components, not when they reinvent the product for every client.

Fourth, it becomes more defensible as it goes deeper into the customer environment. That is the hallmark of applied deep tech. The company gets stronger not because the narrative gets louder, but because the workflow becomes harder to replace. 


Why this matters for Azafran’s AI thesis

Azafran’s view of AI is not horizontal and generic. It is applied, operational, and selective. We are interested in enabling technologies such as voice, acoustics, imagery, machine learning, and augmented AI when they are tied to real deployment paths and defensible IP. We prioritize long-term partners over transactional capital, because AI value is rarely unlocked by funding alone. It is unlocked by delivery discipline.


That is also why the Azafran Catalyst matters. Capital without operational acceleration is often insufficient in AI. The portfolio companies most likely to outperform are the ones that can align product, implementation, governance, and commercial execution early, before complexity turns into drag. McKinsey’s and BCG’s findings both reinforce that point from different angles: a small minority of organizations are producing scaled value, and the differentiator is not hype. It is management practice, operating discipline, and execution maturity.


For LP relations, this is an important distinction. A managed AI portfolio should not promise exposure to every AI trend. It should demonstrate judgment about where AI can actually be delivered, defended, and monetized. That is a stronger posture than broad thematic enthusiasm because it reflects underwriting discipline. 


The next cycle will reward delivery, not spectacle 

The market still spends too much time discussing AI as if the battle will be won at the model layer alone. That is unlikely to be true for most B2B companies. 

As enterprise buyers become more sophisticated, model access will continue to broaden, multi-model strategies will continue to grow, and expectations around ROI will keep rising. That shifts advantage toward companies that can deliver AI in production, prove value quickly, and build defensibility through execution.


That is what managed AI investments should look like in a portfolio: fewer abstract AI stories, more businesses with real deployment capability; fewer claims about intelligence, more proof of delivery; fewer companies built around model dependence, more companies built around workflow ownership and operational value. 

Our position is straightforward. The AI companies worth backing are not the ones with the best models. They are the ones with the best delivery. In the next cycle, that will not just be an operating truth. It will be an investment separator. 
