The New Innovation: Right-brained AI
Frontier Team
6/15/2023
GenAI is still in its infancy, yet many leaders already want to know what they should do differently beyond exploring tactical use-cases. Our Head of AI, Paul Golding, offers a more strategic outlook through a familiar lens: innovation, but with a right-brained twist.
We want technology innovation
CEOs want innovation, not technology. Indeed, many are wary of techno-promises, like “Big Data” or “Cloud”. Many waves have failed to deliver the promised gold.
In the age of GenAI, three modes of innovating will become indispensable:
- Design Thinking (DT)
- Systems Thinking (ST)
- Model Thinking (MT)
At their heart, these are right-brained activities. I add that remark as a counterpoint to the trend towards overly left-brain orgs capsizing under the weight of inflexible analytical thinking. This includes dogmatic prioritization of “methodology” over underlying wisdom, such as “doing Agile” versus being agile.
AI-first? Or design-first?
If GenAI results in abundance, how do we avoid a race to the bottom through sheer proliferation? The means to reliably generate continual differentiation becomes paramount. That kind of generation is missing from GenAI.
Orgs must strive towards anti-fragility as a defensive moat against the brittleness of an ecosystem revved up on GenAI. Yet, as no one can specify how to obtain anti-fragility, we offer a solution: design.
Good designers are propositional, not analytical – they see the pattern before the data, so to speak. It is a different type of insight than the one that mythically emerges from data-driven analytics.
Corporatization of DT has turned it into a box-ticking analytical exercise of sticky-note madness. It lacks inductive spirit and the real freedom to innovate. Insights have become relegated to phantoms found in Tableau charts.
Product meaning has disappeared, confused with pixel-positioning frameworks (cunningly renamed “Patterns”) or some kind of gloss.
Consider this: the biggest woot at Google I/O was for… “dark mode” (sigh).
The reduction of DT to an analytical, deductive approach is ironic because it insists upon the logic of left-brain thinking in what is a distinctly right-brained discipline. This is how we have ended up with an insistence that AI should merely replace things: copywriters, artists, coders, administrators, etc.
What we need is design-first, AI-first, although admittedly that’s a bit less of a snappy slogan. Instead of asking which existing things we can do faster and cheaper with AI, the winners will ask: what can we do differently because of AI?
Design-driven Innovation
Consider this: many of the greatest architects and furniture designers propose a single idea. Great photographers take a single shot. Legendary directors make a single take from a first-draft script. Many of the greatest poets pen a single draft. This is because truly creative acts are driven by percepts, not concepts or marketing fads.
Propositional creative acts spring from the holistic right-brain perception of the world, able to comprehend “the big picture” without decomposing into meaningless parts. Mozart heard symphonies in his head, not notes. Design is Gestalt, not graphs.
This is the irony of Verganti’s book “Design-Driven Innovation”, in which he attempts to describe the interpretative, propositional act of design. He struggles when he attempts codification, trying to fit design into the framing of “business process”. This is why many missed the point and returned to the ritual of sticky notes.
It is still an excellent book. And, the answer to codification is simple: hire lots of designers. Hire them as first-class citizens, not disposable contractors. Learn design. Train design. Worship design.
The response of many to GenAI overabundance will be to double down on the wrong thing. They will do everything the same, only turbo-charged and cheaper. But this is not a sustainable competitive strategy. It is more of a knee-jerk reaction.
Consider this: how many of those tasked with “AI strategy” have invited designers to the discussion?
Note: one of our consulting partners is Rick Lewis, a designer. Another, Geoff McGrath, is an expert in innovation through AI and has led many design-first businesses at scale. Learn more about our team.
Systems Thinking: Using Levers
We live in an era of accelerated entropy – more information, but more diffuse, less clear. At the same time, conventions are being challenged and many of the bedrocks that undergirded careers are crumbling.
We are suffering the legacy of too many left-brained operating theories from the 80s and 90s. We are suffering the weight of an overly analytical management class, many of whom still believe that a spreadsheet can “explain” the organization. None of this works in an era of unprecedented complexity accelerated by GenAI.
The true multiverse is the one we live in: one where many potential parallel worlds hide in plain sight, invisible to those of us blinded by top-down analytical thinking. The idea of a single core narrative that might fit inside the head of the CEO is defunct.
In an age of GenAI abundance, many organizational heuristics, based upon finding ways to simplify the org, will no longer work. We must, as Einstein pleaded, think anew.
GenAI, with all its combinatorial glory, will accelerate organizational complexity geometrically. As complexity accelerates, so will instabilities, unintended consequences and hard-to-predict side effects. In some cases, blindly using GenAI to amplify existing processes will speed up the collapse of the org. Indeed, this is what Systems Thinkers warn: beware of turning the wrong knobs!
Systems thinkers teach us the value of leverage points in complex systems. These are interventions that can have a disproportionate effect upon the system.
In her seminal work, Donella Meadows identified different levels of leverage from low (e.g. tweaking parameters) to high (e.g. paradigm shifts). However, the challenge with complex systems is that interventions often have the opposite effect. Meadows cites how increasing low-income housing can make people poorer and weaken ecosystems.
In other words, where we might instinctively think to add, we should subtract, and vice versa. The right move, if there is one, must come from Systems Thinking, backed by Design Thinking. And this is how we should approach innovation through AI.
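To make the leverage-point idea concrete, here is a toy sketch (my own illustration, not a model from Meadows): a single stock with an inflow and a leak. Tweaking a parameter nudges the outcome; changing the feedback structure transforms it.

```python
# Toy stock-and-flow model: illustrative only, not a real org model.
# Compares a low-leverage intervention (tweaking a parameter) with a
# higher-leverage one (changing the feedback structure).

def simulate(steps=50, inflow=10.0, leak_rate=0.1, feedback_gain=0.0):
    """Return the final stock level after `steps` iterations.

    inflow        -- constant addition per step (a 'parameter')
    leak_rate     -- fraction of the stock lost per step
    feedback_gain -- reinvestment of the stock into its own inflow
                     (a change to the feedback loop, i.e. higher leverage)
    """
    stock = 100.0
    for _ in range(steps):
        stock += inflow + feedback_gain * stock - leak_rate * stock
    return stock

baseline    = simulate()                        # settles at ~100
tweak_param = simulate(inflow=12.0)             # low leverage: +20% inflow, settles at ~120
change_loop = simulate(feedback_gain=0.12)      # high leverage: reinforcing loop outpaces the leak

print(f"baseline:             {baseline:9.1f}")
print(f"parameter tweak:      {tweak_param:9.1f}")
print(f"feedback-loop change: {change_loop:9.1f}")
```

The numbers are arbitrary; the point is the shape of the result. The parameter tweak moves the system proportionally, while the structural change sends it somewhere else entirely, which is exactly why choosing the wrong knob can be so costly.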
In an age of accelerated complexity, only those orgs that can identify appropriate higher-order levers will win. And, in case you didn’t see it coming, one instrument for finding levers is design-driven systems thinking. This combo has been recognized as a possible solution to handling complexity (e.g. see the Systems-Shifting Design report).
One can argue that the core of Apple’s innovation flowed from the combination of the propositional design instinct, per Jony Ive, with systems thinking, namely the re-invention of manufacturing and the supply chain.
Model Thinking: Simulating Futures
With GenAI, we are approaching an era where whole organizations will become amenable to computation. Yet, it is strikingly obvious that many orgs are ill-equipped to understand what to compute and how.
They don’t know how to model systems at this scale.
Well, nobody does – yet.
Consider the humble legal contract. By its literal properties, it is a document that states terms. Yet, in what ways does it become a different entity once amenable to semantic processing in a long chain of interwoven GenAI pipelines?
In a world where we can explore a sales deal via all kinds of models and simulated futures, a “computed” contract might be the essential difference between profit and loss.
We need models to predict this. But what are they?
Few know. Fewer still know where to look.
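For a flavor of what a “computed” contract might look like, here is a hedged sketch of a chained pipeline. The call_llm stub, the prompts and the scenarios are placeholders of my own, not a particular product or API; a real pipeline would add retrieval, validation and human review.

```python
# Hedged sketch of a "computed contract": a contract's text flows through a
# chain of GenAI steps instead of sitting inert in a document store.
# `call_llm` is a placeholder for whatever model endpoint you actually use.

from typing import Callable, List

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real model call from any provider."""
    return f"[model output for: {prompt[:60]}...]"

def computed_contract(contract_text: str,
                      scenarios: List[str],
                      llm: Callable[[str], str] = call_llm) -> List[str]:
    # Step 1: turn prose into structured terms the rest of the chain can use.
    terms = llm(f"Extract payment terms, penalties, and exit clauses:\n{contract_text}")

    # Step 2: stress-test the terms against hypothetical futures.
    assessments = []
    for scenario in scenarios:
        assessments.append(
            llm(f"Given these terms:\n{terms}\n"
                f"Assess profit-and-loss exposure under this scenario: {scenario}")
        )
    return assessments

if __name__ == "__main__":
    report = computed_contract(
        "Sample sales agreement text...",
        scenarios=["supplier prices rise 30%", "customer churns at month 6"],
    )
    for line in report:
        print(line)
```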
Consider the increasing brittleness of forecasting and long-range planning. Now imagine a world in which more information becomes computable via GenAI. The possibilities are mind-boggling.
At a certain point, the answer isn’t forecasting, but simulation. The org that can simulate its futures will win. Others will be flying blind, still looking for insights inside of Tableau, after the fact.
Yet barely any orgs use simulation, except for specialist purposes like chip design or aerodynamics.
Simulation requires data approaches that are way beyond the current trend of meshes and fabrics. It will need complete event histories (like those available via the so-called event-sourcing pattern). It will need the ability to “fork” the org into an array of “alt universe” versions of its future. This is a radically different proposition to today’s data methodologies.
Such systems don’t yet exist.
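Still, the event-sourcing and forking idea can be sketched at toy scale. In the hypothetical snippet below, the org’s history is an immutable event log, state is derived by replaying it, and a “fork” is simply the shared history extended by a simulated future.

```python
# Minimal event-sourcing sketch: the org's history is a list of immutable
# events; current state is derived by replaying them, and a "fork" is the
# same log extended into an alternative future. Illustrative only.

from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class Event:
    kind: str      # e.g. "deal_signed", "hire"
    amount: float  # simplistic payload for the sketch

def replay(events: List[Event]) -> Dict[str, float]:
    """Fold the event log into a (very simplified) state."""
    state = {"revenue": 0.0, "headcount": 0.0}
    for e in events:
        if e.kind == "deal_signed":
            state["revenue"] += e.amount
        elif e.kind == "hire":
            state["headcount"] += e.amount
    return state

def fork(history: List[Event], alternative_future: List[Event]) -> Dict[str, float]:
    """Replay an 'alt universe': shared history plus a simulated future."""
    return replay(history + alternative_future)

history = [Event("deal_signed", 1_000_000), Event("hire", 25)]

# Two simulated futures branching from the same history.
future_a = [Event("deal_signed", 500_000)]
future_b = [Event("hire", 40), Event("deal_signed", 2_000_000)]

print("as-is:    ", replay(history))
print("future A: ", fork(history, future_a))
print("future B: ", fork(history, future_b))
```

The design point is that snapshots are not enough: only a complete event history lets you rewind to any moment and branch forward into as many “alt universe” futures as you care to compute.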
The engineering challenges are enormous: I was privileged to work on a “simulated org” platform for a stealth start-up at UC Berkeley. The two visionaries in their 20s who designed it were, ironically, refused an NSF grant: they “lacked the experience to build such a system”. What a classic left-brained response: putting the parts before the vision.
Org futures: Red-pill or blue-pill
In a hyper-complex, matrix-embedded world, organizational “sense-making” will become the number one defensive moat. It will become increasingly impervious to analytical methods, like the aptly named analytics. It will need design, systems and model thinking, including simulation in which the entire org is mirrored by a model, sometimes called a Digital Twin.
We will also need to let go of our dogmatic obsession with precision and determinism, much of which, like the fabled blue-pill world, is an illusion. Our clinging to these ideas is a vestige of left-brained management theories traceable to an industrial philosophy wherein we believed everything reducible to cogs and springs, each perfectly explainable.
The explainable org is over. We must embrace the interpretable org and use a number of tools to interpret it: design, systems and model thinking.
Consider our obsession with explainability, as if we could ever really explain what’s going on in our complex orgs. How can we expect this of inordinately complex AI models with trillions of parameters? Philosophers have struggled for millennia to explain language. Yet, somehow, we insist that models that are arguably more complex than language itself will yield to simplistic explanations. Perhaps, but probably not.
The GenAI world will soon destroy many of our corporate intuitions. Things that were not imagined computable, except in sci-fi, look like they’re going to become computable in all kinds of unexpected ways. AI models are already popping up like mushrooms. Innovators are hacking AIs together (“chaining”) on a daily basis in unintended ways.
There is great power in the absurd, but also great danger. One risk will be our underestimation of the new order, clinging to old ideas. And by old, I mean left-brained.
We do indeed have to think anew, likely in ways strange, or even alien. The good news is that designers, systems thinkers and modelers have an uncanny knack for making the strange possible.
And, just as the power of GenAI lies in combination, so too does the power of deploying new modes of innovation. Systems thinking needs modeling needs design needs modeling needs systems thinking, and so on, and so on, and so on.
Which precedes which?
Well, that is a left-brain question and I am willing to bet that the answer is a right-brained one.