How to Prevent AI Commoditization: 3 Tactics for Running Successful Pilot Programs

With the rise of open-source AI models, the commoditization of this breakthrough technology is imminent. It’s easy to fall into the trap of pointing a newly released model at a target market and hoping it catches on.

Building a moat when so many models are easily accessible is a dilemma for early-stage AI startups, but leveraging deep relationships with customers in your domain is a simple yet effective tactic.

The real moat is a combination of AI models trained on proprietary data, as well as a deep understanding of how an expert performs their day-to-day tasks to solve nuanced workflow problems.

In highly regulated industries where results impact practice, data storage must pass a high standard of compliance checks. Customers typically prefer companies with established track records over startups, which fosters an industry of fragmented datasets in which no single player has access to all the data. The result is a reality where players of all sizes hold datasets behind highly compliant, walled-garden servers.

This creates an opportunity for startups with existing relationships: they can approach potential customers who would normally outsource their technology and start a pilot of their software that solves a specific customer problem. These relationships can arise through co-founders, investors, advisors or even previous professional networks.


Sharing references with clients is an effective way to build trust. Positive indicators include team members from a university known for its AI experts, a strong demo in which the prototype lets potential clients visualize results, or a clear business case analysis of how your solution will help them save or earn money.

A mistake founders often make at this stage is assuming that modeling customer data is sufficient for product-market fit and differentiation. In reality, finding PMF is much more complex: simply throwing AI at a problem raises issues of accuracy and customer acceptance.

In highly regulated industries, the bar is set by seasoned experts with an intricate understanding of day-to-day changes, and it is a daunting one to clear. Even AI models properly trained on data can lack the accuracy and nuance of expert domain knowledge or, more importantly, any connection to current reality.

A risk-detection system trained on a decade of data may be oblivious to conversations with industry experts or to recent news that completely defuses a widget previously considered “risky.” Another example is a coding assistant suggesting completions for an earlier version of a front-end framework that has since shipped a rapid succession of breaking feature releases.

In situations like these, startups are better served by the familiar pattern of launching and iterating, even with pilots.

There are three main tactics in managing pilots:
