Innovation: What makes AI unique? Towards model thinking.

Frontier Team

6/15/2023

In this first of a series of articles about AI and innovation, our Director of AI, Paul Golding, explores opportunities for innovation by asking: what makes AI unique?


What follows is a discussion of mental models: how to think about AI. We call this Model Thinking. Like Design Thinking, it is necessary for the discovery of AI-first innovations. It’s part of a broader set of approaches that we collectively call Holistic AI.

Mental models shape what we do. By failing to adopt good mental models about AI, many leaders have missed crucial innovation opportunities and might continue to do so.

In this article, we introduce some foundational mental models for thinking about AI. 

AI’s Superpower: Turns Silicon into Innovation

It has escaped the attention of many business leaders that we live in an age of abundant computation. Apart from Big Tech and a few outliers, few treat this abundance as a strategic opportunity.

Leaders could ask: How might our business be different with “unlimited” computation? But most do not, thus overlooking critical opportunities for innovation.

It is plausible that any company serious about this question could have invented the Transformer, the tech behind ChatGPT.

Why would they?

Because language is the core operating system of business, not Windows or Linux. Hence, if one is seeking major competitive advantage, then one might seek to excel at language processing.

But most leaders don’t think in these fundamental terms. If they came up the MBA or CFO ladder, such frameworks are most likely entirely alien.

Luckily, we have a framework for innovating with computational abundance. It is called AI. Seen with the right mental models, unique AI opportunities begin to emerge.

Mental Model 1: Seeing the wood for the trees (aka decoding complexity)

In a complicated system, like an airplane, the role of every part in achieving flight is completely known. But in a complex system, like an enterprise, the causal role of parts is hard to identify. (The word “complexity” comes from the Latin “plexus”, meaning entwined, which indicates non-separability of components.)

Enterprise complexity exists because an enterprise runs most of its apps, human and software, on a “language operating system”, and language itself is complex. Galileo pointed this out: a few symbols (an alphabet) can generate an infinite variety of utterances, a fact he considered astonishing.

An enterprise is a large set of hidden complex functions whose inputs are encoded as language. We cannot write these functions down. Moreover, if we attempted to plot their inputs on a graph, the dimensions would be so vast as to evade conventional mathematical analysis.

But there is a discovery mechanism, called AI, or Deep Learning.

It exploits a principle called the Manifold Hypothesis: real-world, high-dimensional data tends to lie near a much lower-dimensional surface. For an enterprise problem, this suggests:

  1. A solution probably exists, because human activity has innate structure
  2. That solution can be usefully approximated in some lower-dimensional space, which AI is uniquely able to find

That lower-dimensional surface has a fancy name: a manifold, shown here in 3D. For any meaningful problem it will have many more dimensions and look a lot messier than this contrived illustration (see the code sketch below).

Image: a 3D surface plot of a contrived manifold.
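
For the curious, here is a minimal sketch of how such a contrived illustration can be generated. The surface function is an arbitrary choice for display purposes, not a learned manifold:

```python
# Minimal sketch: plot a contrived 2D surface ("manifold") embedded in 3D.
# Real manifolds learned by deep networks have far more dimensions and far
# messier geometry; this is purely illustrative.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)

# An arbitrary smooth surface: two Gaussian bumps standing in for the
# "surface" of a hidden function in a low-dimensional space.
Z = np.exp(-((X - 1)**2 + (Y - 1)**2)) - 0.7 * np.exp(-((X + 1)**2 + (Y + 1)**2))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_title("A contrived manifold: a smooth surface in 3D")
plt.show()
```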

Unique AI: Finding Hidden Structure

Consider language. Our brains ignore surface structure, such as the linear order of words, and apply their own structure, as the illustration shows: the verb attends to the subject, skipping words. Our brains do this reflexively and innately.

There are no supervisory labels like verb and subject, yet our brains somehow pick out these objects and link them, skipping words in between.

The Transformer AI is the first machine that can replicate this innate structure, or “language function”. That’s because Deep Learning can find structure by learning a function that approximates it.
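
To make the mechanism concrete, here is a minimal sketch of scaled dot-product self-attention, the Transformer’s core operation. The tokens, embedding size, and weights below are toy assumptions, not a trained model, so the attention pattern shown is random; training is what shapes it into the verb-to-subject links described above:

```python
# Minimal sketch of scaled dot-product self-attention (the Transformer's
# core mechanism) over a toy sentence, using random (untrained) weights.
import numpy as np

tokens = ["the", "dog", "that", "barked", "runs"]
d = 4  # toy embedding size
rng = np.random.default_rng(0)
E = rng.normal(size=(len(tokens), d))  # one embedding vector per token

# In a real Transformer, these projection matrices are learned.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = E @ Wq, E @ Wk, E @ Wv

# Each token scores every other token, regardless of distance: this is
# what lets a verb link to its subject while skipping words in between.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax

# Attention of the last token ("runs") over the whole sentence. With
# trained weights, "dog" could receive the most weight; here it is random.
print(dict(zip(tokens, weights[-1].round(2))))
```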

Finding hidden structures is a unique AI superpower.

Mental Model 2: Structure is Everywhere

There exist many hidden structures in our enterprises, which we characterize via questions:

  1. Why does a client buy our product?
  2. Which initiatives affect revenue?
  3. Which employees make a difference?
  4. Which products should we make more of?
  5. Which legislation will affect our business?

Structure is everywhere, often hidden in plain sight!

Smart people exploited this insight during the last AI wave. Before Transformers, we had Word2Vec, whose core idea, learned vector embeddings, underpins the mathematics of Transformers.

The idea of Word2Vec was simple: build a neural network to predict a word from its surrounding words (or vice versa). Done right, it learns a low-dimensional manifold that acts as a “word relatedness” function for making such predictions.

For example, it might learn that girl is related to boy, perhaps via some “opposite-sense” dimension, and to woman via some kind of “similar-kinds” dimension, or perhaps an “older-kind” dimension.

Image: words in a vocabulary related via hidden relationship functions, called embeddings, that an AI has discovered by analyzing a corpus of text.

These dimensions tend to generalize: for example, adding the “opposite” vector to the word pretty yields ugly.
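
A minimal sketch of this vector arithmetic, assuming the open-source gensim library and a toy corpus (far too small for the analogy to actually hold; a real corpus of millions of words is needed):

```python
# Minimal sketch of Word2Vec vector arithmetic using gensim.
# The toy corpus below only shows the mechanics; meaningful analogies
# require training on a large real-world corpus.
from gensim.models import Word2Vec

corpus = [
    ["the", "girl", "talked", "to", "the", "boy"],
    ["the", "woman", "talked", "to", "the", "man"],
    ["a", "pretty", "picture", "and", "an", "ugly", "one"],
]
model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, epochs=50)

# Classic analogy pattern: girl - boy + man ~= woman (on a real corpus).
print(model.wv.most_similar(positive=["girl", "man"], negative=["boy"], topn=3))
```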

The folks with the right mental model saw the potential for using this technique to find hidden dimensions (called embeddings) in other data besides words.

Consider Hotel2Vec, in which researchers learned embeddings from hotel data and used them to identify similar hotels more effectively than surface structure (“business rules”) could.
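
The mechanics carry over almost unchanged. As a simplified sketch (the real Hotel2Vec enriches this with hotel attributes, amenities, and geography), one can treat each user’s browsing session as a “sentence” of hotel IDs:

```python
# Minimal "X2Vec" sketch: hotel IDs in user sessions play the role of
# words in sentences. Hypothetical session data, not the Hotel2Vec dataset.
from gensim.models import Word2Vec

sessions = [
    ["hotel_12", "hotel_98", "hotel_12", "hotel_45"],
    ["hotel_98", "hotel_45", "hotel_77"],
    ["hotel_12", "hotel_45", "hotel_98"],
]
model = Word2Vec(sessions, vector_size=8, window=3, min_count=1, epochs=100)

# Hotels that co-occur in similar contexts end up close in embedding
# space: "similar hotels" without any hand-written business rules.
print(model.wv.most_similar("hotel_12", topn=2))
```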

Structure is everywhere, but we have to train ourselves to realize this and develop the habit of using AI to go find it.

Mental Model 3: Let the data speak!

Data-centricity is a powerful mental model, unique to AI. In the AI sense, it means that instead of attempting to identify any formal structure using rules or inspection, we let the data speak for itself.

It is like watching someone play a video game until you begin to spot patterns that suggest the underlying game-play rules.

Why is this so important? 

Because knowledge workers will toil to identify and codify the structure of a business process by hand, yet seldom think to let AI find it.

Returning to Hotel2Vec, imagine we include a computer-vision representation of a guest’s trash can. Perhaps a dimension of latent space comes to represent “room usage”: guests who eat more snacks in their room are more likely to be working and so prefer a bigger desk. Feeding this signal into a recommendation system might suggest showing big-desk photos on the booking website, nudging “snack” guests to upgrade to studio suites with larger desks.

Or maybe trash-can contents turn out to be entangled with floor choice, because guests who snack like to stay on lower floors, nearer the grocery store.

Image: a trash can with imaginary AI embedding vectors superimposed, representing hidden structure found by computer-vision analysis of the trash.

The point is that the AI finds the hidden representations, whether or not they make sense to us. Knowing this is possible allows workers to rely more upon AI to find hidden structure that unlocks new business value, which is the definition of innovation!
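
A minimal sketch of this kind of unsupervised discovery, using synthetic guest data (all features here are hypothetical illustrations, and PCA stands in for the richer representation learning a deep model would do):

```python
# Minimal sketch of "letting the data speak": recovering a hidden latent
# dimension from synthetic guest data with PCA. Entirely hypothetical data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 200
# Hidden driver we never observe directly: how much a guest works in-room.
work_intensity = rng.uniform(0, 1, n)
features = np.column_stack([
    work_intensity * 5 + rng.normal(0, 0.5, n),  # snack wrappers in trash
    work_intensity * 3 + rng.normal(0, 0.5, n),  # hours of desk-lamp use
    rng.normal(0, 1, n),                         # unrelated noise feature
])

pca = PCA(n_components=1)
latent = pca.fit_transform(features).ravel()

# The first principal component recovers the hidden "room usage" driver
# (its sign is arbitrary, hence the absolute value).
corr = abs(np.corrcoef(latent, work_intensity)[0, 1])
print("|correlation| with hidden driver:", round(corr, 2))
```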

What’s your Innovation2Vec?

Let's build your AI frontier.

The field of AI is accelerating. Doing nothing is going backward. Book a 1:1 consultation today.