# AI Magic: How does it actually work?

Frontier Team

7/7/2023

We will explain the basics of how and why AI works, and introduce a few key terms that will help you navigate AI without a PhD. The explanation is simplified and omits details, but it is accurate enough to get you started.

## What is learning?

AI is a type of machine learning. It is software that can discover ("learn") some underlying function of a system *merely by inspecting data*. This is a radical idea. At school, we learn by instruction. We are told the function, such as one of Newton's laws:

`Force = Mass x Acceleration (F=ma)`


But what if we don't have a function? Not because we weren't told about it, but because it doesn't exist. For example, there is no equation for how many dollars a customer will spend on a website. But we might have a bunch of measurements, like customer age, income and time spent browsing. And for each user we can observe total spend during a visit.
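To make this concrete, here is a tiny sketch of the kind of dataset we mean. The column names and every number below are invented purely for illustration:

```python
# A hypothetical dataset: each row is one website visit.
# The measurements and spend figures are invented for illustration.
visits = [
    # (age, income, minutes_browsing) -> observed spend in dollars
    ((34, 55_000, 12.5), 48.20),
    ((22, 31_000, 3.1), 9.99),
    ((51, 82_000, 25.0), 130.75),
]

for (age, income, minutes), spend in visits:
    print(f"age={age}, income={income}, browsing={minutes} min -> spent ${spend}")
```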

Is it possible to discover a function that maps the measurements to the observations? Well, this is the goal of machine learning: to discover hidden (latent) functions by inspecting the data.

## Why is there no function?

Why is there no function for predicted spend? Did we just not learn it in high school?

With highly localized artifacts, such as custom websites or business systems, it is difficult to reduce their intricacies and interactions to a concise mathematical function. Unlike our universal law of force, there is no universal function for websites, nor for any complex business process. The best we can do is try to find an approximation to the localized hidden function that we believe represents the underlying system behavior.

## But is there a hidden function?

How do we know that there's a hidden function in our data? We don't. But if there is one, AI is a highly powerful and versatile technique for searching for it: that's the AI magic.

One definition of a function is that it is a relationship mapper, mapping inputs to outputs.

For our example, we can also write this down using function notation:
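As a sketch, assuming the inputs are the customer measurements from earlier (the names and the candidate weights below are purely illustrative), a function of this shape could look like:

```python
# Function notation: spend = f(age, income, minutes_browsing).
# We don't know the real f; any guess at it simply has this shape.
def example_guess(age: float, income: float, minutes_browsing: float) -> float:
    # One (almost certainly wrong) candidate for the hidden f.
    return 0.001 * income + 0.5 * minutes_browsing

print(example_guess(30, 50_000, 10))  # this candidate's predicted spend
```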

An important assumption that allows the AI magic to happen is that the sample and observations are related in some way, i.e. not totally random. Also, we assume that the complexities of the system (website and user behavior) are reducible to some function simple enough for the AI to guess via a "magic" process we shall explain.

If these assumptions hold, we can use AI to build a simulation, or model, of the website system and attempt to search for a function that approximates the hidden function and mimics the observed behavior in response to the same sample set.

Notice here that the AI (orange box) represents a model of the target system such that when fed with the same sample set, its outputs are an approximation to the original observations. In other words, the AI simulated function now approximates the hidden function.

## Where and how do we search?

When we say that AI can search for an approximate function, what does that mean?

Let's say the AI takes an initial random guess at the function. We want to know how to adjust our guess so that we can search for the function that best mimics the observations.

Remember the childhood game of hide-and-seek whereby the hider calls out "warmer" or "colder" as the searcher gets nearer? AI uses a similar trick. After taking a guess, it generates feedback signals, "warmer" or "colder", to help adjust direction until it finds the best-fitting function.

### What do we mean by direction?

You might be wondering whether direction is just a metaphor. It isn't. AI can use a real mathematical concept of direction (a vector) to generate the "warmer" and "colder" signals.

Recall that a function is a mapper. One generic way to do mapping is to assume that the output is some combination (addition) of scaled inputs:
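In Python, such a scaled-and-summed mapping is a one-liner. The parameter values below are arbitrary placeholders:

```python
# "Combination of scaled inputs": multiply each input by a parameter
# (a weight), then add everything up.
def candidate(inputs, params):
    return sum(x * p for x, p in zip(inputs, params))

params = [0.4, 0.001, 1.5]               # illustrative parameter values
print(candidate([30, 50_000, 10], params))
```

Change the three parameters and you get a different function; that is the whole trick of the next section.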

If we *plot the parameters*, we have a geometrical space. Surprise-surprise, it's called the `parameter space`.

Each point in the parameter space is a particular instance of the function. In other words, our parameter space is our function space.

### Searching the parameter space

Now we have somewhere to search. We can hunt around in the parameter space until we find a point (i.e. a particular function) that best approximates the hidden function of the system that we are trying to model.

To see which is the most accurate, we still need a "warmer-colder" measure of how well any candidate function approximates the hidden website-spending function. Fortunately, we can measure this because in our original dataset we have the observed outputs.

For any given candidate function (i.e. a point in the parameter space) we can plug in all of our input variables (sample) and get a set of outputs. We can see how close this candidate set is to the set of observed values.

Adding up all the distances between the approximated observation points (AI outputs, orange) and the actual observation points (blue) is akin to measuring the distance between the current function guess and the hidden function, *as far as the observations can represent the underlying function*.
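A minimal sketch of such a distance measure, here using absolute gaps (other choices, like squared gaps, are common too):

```python
def distance(candidate_outputs, observations):
    # Total of the absolute gaps between predicted and observed values:
    # this single number is our "warmer/colder" score.
    return sum(abs(y_hat - y) for y_hat, y in zip(candidate_outputs, observations))

print(distance([10.0, 50.0], [12.0, 47.0]))  # gaps of 2 and 3 -> 5.0
```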

### AI magic: warmer or colder?

Now that we have a means to compare our AI-approximated function to our hidden function, how do we tell the AI *which direction* in the parameter space to move in order to get warmer?

This is equivalent to *minimizing* the distance between points.

You might remember from high-school calculus class that if we want to find a minimum, we can use differentiation. Wherever the derivative is zero, we have a minimum, i.e. where the distance between the approximated function and the hidden function is smallest, or practically zero.

If we are not yet at the minimum, then we can wiggle all the values (i.e. the parameters) by small amounts (`δ1, δ2, δ3`) to see which combination of wiggles makes the current guess closer to zero. Whichever set of wiggles gets closest tells us how we adjust (wiggle) our parameters to get closer to the ideal function.

We just keep wiggling and adjusting until the wiggles (`δ1, δ2, δ3`) don't really make much of a difference any more, i.e. we have arrived at the ideal point in the parameter space where our approximate function is as close to the hidden one as we can find. This wiggling and adjusting of the function is a key part of the AI magic.
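The wiggling described above can be sketched as a small probe of the distance score, where `delta` plays the role of the small wiggles (this finite-difference probe is one simple way to generate the "warmer/colder" signal; real AI libraries compute it analytically):

```python
def wiggle_direction(loss_fn, params, delta=1e-4):
    # Nudge each parameter by a small delta in turn and record how the
    # loss responds. A negative slope means "increasing this parameter
    # makes us warmer (loss goes down)".
    base = loss_fn(params)
    slopes = []
    for i in range(len(params)):
        wiggled = list(params)
        wiggled[i] += delta
        slopes.append((loss_fn(wiggled) - base) / delta)
    return slopes

# For a toy loss with its minimum at p = 3, the probe points downhill:
slopes = wiggle_direction(lambda p: (p[0] - 3.0) ** 2, [0.0])
print(slopes)  # roughly [-6.0]: increase the parameter to reduce the loss
```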

## Iterative Learning

We have described a technique for searching for a function that produces output data from our samples that are closest to the observed set. However, we haven't really explained the mechanism yet.

We can propose a procedure (algorithm):

1. Initialize: take an initial guess of the parameters
2. Map the samples to outputs
3. Measure the distance to the observed set
4. Wiggle the parameters to see which combination gets closer to zero distance
5. Adjust the parameters using the wiggles (i.e. subtract `δ1, δ2, δ3`)
6. Go back to 2 and repeat until the distance no longer improves that much
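The steps above can be sketched end-to-end in a few lines. This toy loop recovers a made-up hidden function `2a + 3b`; the step size, epoch count, and wiggle size are arbitrary choices:

```python
def predict(inputs, params):
    # Step 2: map a sample to an output with the current parameters.
    return sum(x * p for x, p in zip(inputs, params))

def loss(params, samples, observations):
    # Step 3: total squared distance between outputs and observations.
    return sum((predict(x, params) - y) ** 2 for x, y in zip(samples, observations))

def train(samples, observations, n_params, step=0.01, epochs=500, delta=1e-5):
    params = [0.0] * n_params                      # step 1: initial guess
    for _ in range(epochs):                        # step 6: repeat
        base = loss(params, samples, observations)
        slopes = []
        for i in range(n_params):                  # step 4: wiggle each parameter
            wiggled = list(params)
            wiggled[i] += delta
            slopes.append((loss(wiggled, samples, observations) - base) / delta)
        params = [p - step * s for p, s in zip(params, slopes)]  # step 5: adjust
    return params

# A made-up hidden function the learner never sees: output = 2a + 3b.
samples = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (0.5, 4.0)]
observations = [2 * a + 3 * b for a, b in samples]
print(train(samples, observations, n_params=2))  # approaches [2.0, 3.0]
```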

This iterative process takes us closer, step by step, until our guessed function is pretty close to the hidden one, or as close as we can get with the available data.

Each iteration of steps 2-6 is called an `epoch`. The process of making the adjustments to the parameters to get closer to the hidden function is what we call machine learning.

After some number of epochs, we hope that the "warmer" signal eventually becomes negligible. This is reflected in how closely the outputs from the AI function model resemble the observed data:

## So, why does AI really work?

The sneaky, yet honest, answer is: because much of the time, with the right data, it just does. However, we have omitted some key details.

As described thus far, it probably won't work. That's because our function model is too crude:

In reality, hidden functions that represent system behaviors, like website spend, cannot be reduced to a single approximate function. A better way to approximate it is via a network of functions that are combined into a single composite function. Each of the contributing functions models just a tiny part of the hidden function. In combination, they stand a better chance of modeling the entire hidden function with better accuracy.

This is the so-called Neural Network: a network of functions. Otherwise, the process is exactly the same, except that we now have a lot more parameters to wiggle, because each sub-function has its own set of parameters to adjust.
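A minimal sketch of a network of functions, where each sub-function is a weighted sum and the composite feeds one layer's outputs into the next (every weight here is an arbitrary illustration):

```python
def sub_function(inputs, weights):
    # One small piece of the network: a weighted sum of its inputs.
    return sum(x * w for x, w in zip(inputs, weights))

def network(inputs, hidden_weights, output_weights):
    # The composite function: run every sub-function on the inputs,
    # then combine their outputs with one more sub-function.
    hidden = [sub_function(inputs, w) for w in hidden_weights]
    return sub_function(hidden, output_weights)

hidden_weights = [[0.5, -1.0], [2.0, 0.3], [-0.7, 0.9]]   # three sub-functions
output_weights = [1.0, 0.5, -2.0]
print(network([1.0, 2.0], hidden_weights, output_weights))
```

Note that there are now 9 parameters (3 sub-functions with 2 weights each, plus 3 output weights) instead of 3, and all of them get wiggled.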

## Why is it called AI?

Why, then, is this particular arrangement of machine learning called Artificial Intelligence? Well, we just gave the clue. The brain is thought to also be a vast network of functions, each called a neuron. Our network of functions is thus attempting to mimic the brain's architecture, the seat of human intelligence. This is why it is called Artificial Intelligence. The name comes from this architectural inspiration, not from any claim that computer programs might exhibit human intelligence (though such claims are part of the broader origins of the term, in the so-called Cognitive Revolution).

The functions we use in the network in our software program are just bits of program code, nothing to do with the biological form of actual neurons. That is, except for one key aspect that we have omitted thus far.

After each of the functions in our artificial neural network, we insert another function, called an Activation Function, whose only job is to limit the output in some way, typically to some range.

## AI Magic Dust: Activation Functions

This part turns out to be a key "trick" *essential for the success of neural networks*. The activation function introduces distortion (called non-linearity) to the output of each functional component in our network.
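The article doesn't name specific activation functions, but two classic examples, sketched in Python, are the ReLU and the sigmoid:

```python
import math

def relu(x):
    # Rectified Linear Unit: pass positives through, clip negatives to zero.
    return max(0.0, x)

def sigmoid(x):
    # Squash any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-3.2), relu(1.7))   # 0.0 1.7
print(sigmoid(0.0))            # 0.5
```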

Imagine that the hidden function is actually some highly complex, messy shape in a high-dimensional space. Something like this, except in way more dimensions (that we can't visualize):

With activation functions, we can think of each function as being able to contribute an irregular sub-shape, like a jigsaw piece in a puzzle, to help build a complex composite shape (function). Without non-linearity, all we can do with each sub-function is add linear elements, and these can never approximate a highly complex function, no matter how many we add.
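A quick demonstration of that last point: composing linear functions only ever yields another straight line, while inserting a non-linearity between them lets the composite bend (the functions here are toy examples):

```python
# Two linear sub-functions composed are still a single straight line:
f = lambda x: 2 * x + 1
g = lambda x: -3 * x + 4
h = lambda x: g(f(x))          # h(x) = -6x + 1, just another line

# Insert a non-linearity between them and the composite can bend:
relu = lambda x: max(0.0, x)
k = lambda x: g(relu(f(x)))

print(h(-2), h(0), h(1))       # 13 1 -5: perfectly linear
print(k(-2), k(0), k(1))       # 4 1 -5: the left side has flattened out
```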

As an analogy, imagine playing a guessing game to identify a hidden object, yet the only hints are "smaller" or "larger", i.e. if you say "mouse" and the hidden object is a zebra, you only get the feedback "larger". But what if you get richer feedback, like "long neck" or "stripes"? With richer clues, you can eventually get the required information to guess that it's a zebra. This is kind of the role of activation functions: to provide richness.

## Deep Learning?

You might have heard of the AI winter. Neural networks were invented many decades ago, yet they struggled to achieve much. The major breakthrough came when folks attempted to use lots of layers with lots of sub-functions (neurons), i.e. the networks became deeper, hence the term `Deep Learning`.

It was really the advent of Deep Learning that heralded the current AI revolution. And this has largely been made possible by two things:

- Computer power
- Lots of data, mostly thanks to the internet

Recall the parameter space mentioned above. Our initial architecture had just 3 parameters. In the network above, we have 9 times 3, or 27, parameters. So, for each iteration (epoch) we have to wiggle 27 parameters after computing the distance for our entire sample set, which is, say, 20 data points. (Very) approximately, that means 27 + 20 computations.

In today's Large Language Models, consider the open source Falcon model. It has 40 billion parameters and was trained using over a trillion data points (called tokens). This took 2 months on a large set of computers running on Amazon's cloud (AWS).

## AI Magic 🧙: Really?

We hope that you've found our gentle non-math introduction to AI useful. So, is it really magic? Yes, and no.

There's nothing magical about the components or the algorithm. It really isn't much more complicated than we've described, though making AI software work does involve a lot of dense mathematical and programming techniques, embodied in libraries like PyTorch. And, at scale, all kinds of clever optimization tricks are required.

However, even now, the real *how* of AI (which parameters are doing what to arrive at the final function) is still hard to pin down. In the case of massive models, like LLMs, it's even difficult to figure out how the networks manage to model language so well. Many aspects of this achievement remain mysterious.

So, yes, there is a kind of AI magic after all 🧙🪄

For more explainers about AI, please check out our Handbook.