AI that works: generate value
The oft-quoted anecdote that 80% of AI projects fail is familiar to digital transformation folks. Now, in the rush to harness Generative AI, many more projects are likely headed that way, generating everything but value. We explain how to reliably deploy AI that works using a multi-disciplinary approach, called Holistic AI, that addresses every success factor an enterprise must attend to in order to deliver sustainable value.
Injecting hardcore AI into an enterprise can be like injecting high-octane fuel into a worn-out car with dirty plugs and no service love. It might fly like the wind for a few hundred yards, but quickly stutter to a halt.
A common measure of AI is some benchmark, like the number of documents classified per dollar. But this can be like reading the speedometer during the high-octane surge.
CEOs have become tired of fast-slow-stop transformation, followed by more YAPP: Yet Another Pilot Project. Whilst early results are always “directionally useful” (beware this euphemism), come the QBR crunch, well, we’ll say no more.
Complex, not Complicated
An enterprise is complex, not complicated.
With a complicated airplane, we know exactly what role each part has in relation to flying. But within an enterprise, we never quite know which initiative is moving the needle.
Many AI programs fail in the “real” org as opposed to the oversimplified “imaginary” one where project leaders conveniently ignore complex interactions.
The solution to deploying AI that works is to utilize all four of the following approaches:
- Design thinking – solving the right problem.
- Systems thinking – having the right impact.
- Model thinking – using the right AI solution in the right way.
- Product-centric operations – systematically delivering the right outcome.
Combined, we call this Holistic AI.
There are good Complexity Theory reasons to justify our approach, but we hope that the four methods outlined below will speak for themselves without theoretical justification.
Design Thinking: AI that works on the right problem
Whilst design thinking (DT) is now widely practiced, be honest: when did you last use it on any AI initiative?
The essence of DT is to uncover the real org versus the imaginary one. For example, in the imaginary one, sales folks are given dashboards, which we suppose they will use. But in the real org, dashboards often don’t work. We would go so far as to call this the norm.
DT would attempt to discover the real value and usage of dashboards within the context of daily routines, goals and incentives. All of these matter – e.g. the best dashboard gets no love if there are no incentives to use it.
Whilst DT is often used to discover customer-facing pain, its superpower for transformation is in identifying employee/process pain-points, like dashboard dysfunction.
If you want AI that works, you must target pain-points with accuracy and actionable insights. Be aware: the insights are seldom what you think.
As the Chief Data Officer of Jaguar Group discovered, much to his surprise, the reason for failed “data initiatives” was not, as expected, the specter of “data quality”. Instead, user research (aka DT) revealed that employees couldn’t use their data assets to get the intended results. This was poor design, an anti-pattern for AI that works.
The essence of DT is solving the right problem for the right reason.
Systems Thinking: AI that works in context (right impact)
A complex enterprise requires special attention when trying to identify leverage points, such as which sales initiative to run or which process to automate.
ST holds that leverage points are often counterintuitive – you need less of the thing you think you need more of. The canonical example from Donella Meadows (“Godmother of ST”) is how low-income housing can increase, not decrease, poverty.
ST analysis concerns causal loops: identifying what plausibly causes what. Clearly, sales performance has more than one cause, yet initiatives typically oversimplify by looking at causes in isolation.
For example, perhaps a particular defense strategy for competitive sales seems obvious. Early results seem convincing, so actions are amplified. However, lo and behold, come the QBR, results are hard to find, perhaps even negative.
One explanation is that focussing in one area causes another to suffer.
Another explanation is that over-emphasis of a particular solution has precluded the sales prospect from learning of other solutions. Those others might be more profitable in the long run, as in lifetime value (LTV). Yet such considerations might be absent without ST.
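The fast-then-fade dynamic described above can be sketched as a toy causal-loop simulation. Every coefficient below is invented for illustration; the point is only the shape of the curve, not the numbers.

```python
# Toy causal-loop model of "amplify what seems to work".
# Wins are converted from a pipeline stock; pouring more focus into the
# hot initiative converts faster, but it also crowds out prospecting,
# so the stock (and eventually results) collapses.

focus = 1.0        # effort on the hot initiative; amplified every quarter
pipeline = 10.0    # stock of future opportunities
results = []

for quarter in range(6):
    wins = 0.2 * focus * pipeline          # conversion from the pipeline
    prospecting = max(0.0, 3.0 - focus)    # over-focus starves new pipeline
    pipeline = max(0.0, pipeline - wins + prospecting)
    focus *= 1.5                           # "early results convince us to double down"
    results.append(round(wins, 2))

print(results)  # wins rise for a few quarters, then fall away
```

Plotted quarter by quarter, wins climb at first (the “convincing early results”), then sag as the starved pipeline runs dry – exactly the QBR surprise the narrative describes.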
The value of ST is in helping to consider the whole problem – the real problem – not just a simplified proxy. With complex systems, simplification is often not the answer: it’s more of a coping mechanism. Beware of “low hanging fruit”.
The essence of ST is that it ensures that the right problem is solved within context.
Model Thinking: AI that works optimally
AI and its broader cousin, Machine Learning (ML), come in different flavors. Understanding those flavors helps to understand what kind of solution to use for what kind of problem. More valuably, it helps workers to identify use cases for AI/ML.
It’s all about fitting the art of the AI possible to the problem space.
Consider the familiar concept of Critical Thinking. To master CT, one first has to learn the types of fallacies and then practice how to spot them in a situational example.
Ditto with AI: one has to learn the patterns of AI, the art of the possible, and then learn how to spot commensurate opportunities in the enterprise.
If you don’t believe that AI solutions affect how we think about problems, take a look at Christoph Molnar’s book Modeling Mindsets.
At Frontier, we advocate widespread awareness of AI in order to utilize MT. Just as the spreadsheet has become the de facto numerical modeling tool, AI/ML tools need to become similarly commonplace. This is especially so in the age of Generative AI, where much of the technical heavy lifting can be done using AI itself (to build AI/ML solutions).
Many workers, with the right tools and MT know-how, could apply AI directly to their work without the need for data scientists and AI PhDs. We call this “Direct AI” and it is part of the MT approach. It is also highly scalable.
The essence of MT is helping to identify the right solution for the job.
Product-Centric Operations: AI that works to produce outcomes
PCO is what binds the above approaches into a holistic whole. Many AI initiatives fail due to a lack of an operating model. Failure can often be as simple as not having a process for adoption of the shiny new AI tool. This is remarkably common.
A product-centric approach helps operations to focus on outcomes, not tools.
Product management includes a theory called “Jobs To Be Done”. In the case of a dashboard, a user might “hire” the dashboard to do a job: “allow me to predict which product configuration the customer is likely to buy”.
The dashboard might suggest product configurations via some widget. Assuming the widget is usable (often not), what if the SKU codes don’t align with a usable bill of materials (BOM)? If the salesperson cannot produce a quotation due to misalignment with the configure-price-quote (CPQ) tool, then the dashboard has not delivered on its intended outcome.
Tracking is vital. Can we track suggestions from the dashboard, through the CPQ and down the sales pipeline? In product management, measuring outcomes is a must.
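A minimal sketch of what such tracking amounts to: follow each dashboard suggestion through the quote stage to a closed deal, and judge the dashboard on closed deals rather than on suggestions made. The event records and field names here are hypothetical.

```python
# Hypothetical event log: one record per dashboard suggestion, flagging
# whether it made it into a quote (CPQ) and whether the deal closed.
events = [
    {"suggestion_id": 1, "quoted": True,  "closed": True},
    {"suggestion_id": 2, "quoted": True,  "closed": False},
    {"suggestion_id": 3, "quoted": False, "closed": False},
    {"suggestion_id": 4, "quoted": True,  "closed": True},
]

suggested = len(events)
quoted = sum(e["quoted"] for e in events)
closed = sum(e["closed"] for e in events)

# Conversion at each funnel stage: where suggestions leak tells you
# whether the dashboard, the CPQ handoff, or the pipeline is at fault.
print(f"suggested -> quoted: {quoted / suggested:.0%}")
print(f"quoted    -> closed: {closed / quoted:.0%}")
```

The design point is that the outcome metric (closed deals attributable to suggestions) is computed end to end, not stage by stage in isolation.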
Methodologies like Lean can ensure efficient and timely confirmation of outcomes whilst avoiding the classic mistake of taking months to build elaborate solutions that fail to deliver. An Agile framework makes delivery efficient, controllable and measurable.
Agile dictates the use of retrospectives to encourage open and honest diagnosis of initiatives in a blame-free fashion, which is essential to long-term success.
The essence of PCO is systematically delivering the right and best outcome.
A Bundle of AIs that work together
In a fable, a man asks his children to break twigs, which they easily do. But once the twigs are bundled together, the children cannot break them.
There is a similar notion in AI concerning weak learners. Each, by itself, is weak, barely producing results. But when combined, they produce a strong effect.
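The weak-learner effect is easy to demonstrate with a toy simulation: if each learner is only slightly better than a coin flip, a majority vote over many independent learners is far more accurate than any one of them. The probabilities and counts below are illustrative, not drawn from any real system.

```python
import random

random.seed(0)

def majority_vote_accuracy(p_correct: float, n_learners: int, n_trials: int) -> float:
    """Empirical accuracy of a majority vote over independent weak learners.

    Each learner is right with probability p_correct on every trial;
    the ensemble predicts whatever the majority predicts.
    """
    wins = 0
    for _ in range(n_trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_learners))
        if correct_votes > n_learners / 2:
            wins += 1
    return wins / n_trials

single = majority_vote_accuracy(0.6, 1, 10_000)    # one weak learner: ~60%
bundle = majority_vote_accuracy(0.6, 25, 10_000)   # 25 bundled learners: much stronger
print(f"single: {single:.2f}, bundle: {bundle:.2f}")
```

The caveat, as with the twigs, is independence: twenty-five learners that all make the same mistakes gain nothing from being bundled.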
With complex enterprises, this principle applies at the macro level. We could go into some theory, but the idea should seem obvious and scalable: empower employees to deploy as many local AI solutions as possible (using holistic approaches) such that the combined effect at the organizational level is robust.
This is only scalable via Direct AI, and will only work when following the above four approaches. Of course, there are lots of details to consider, which you can read about in our handbook or discuss in a 1:1 consultation.