How Does AI Learn? Compare Real-Life Examples

by Brad Blood | June 22, 2021

How does AI learn? It’s a question that’s asked often but rarely answered honestly. AI is just not as smart as we’d like it to be.

But let’s not confuse smart with powerful. Or power with compute. At the end of the day, all we really want is for technology to do difficult things quickly and with very little human effort. We also want these things done accurately.

The concept of AI learning is both marketing hype and our worst fears come to life. In all the AI-takes-over-the-world scenarios, the AI has learned to do things without humans knowing about it.

And, I suppose there’s some reality to the fear. If an AI system gained consciousness, life as we know it would be changed forever. But the way AI learns is far from anything similar to the way humans learn.

You’ve probably heard that AI is a “simulation of human intelligence.” Well, that’s simply not true, if for no other reason than that we don’t even fully understand how the human brain works. Today’s “AI” is really algorithms – math – programmed to do a certain thing.

If AI learned the way humans do, it would be an ethics nightmare.


So How Does AI Learn?

AI learns through data inputs to algorithms designed to produce a certain outcome. In some cases, the path to the outcome is highly constrained and transparent.

This is how machine learning works. In other cases, the path is only loosely constrained and a predictable outcome is virtually impossible. This is the reality of deep learning, or convolutional neural networks.

In the case of machine learning, very specific outcomes are achievable. The algorithms are programmed to take specific inputs and provide a 100% predictable output.

There are no surprises with machine learning. It simply performs a difficult task quickly and predictably. Machine learning (AI) “learns” when new inputs are fed into the algorithm and change the way it works.
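To make that concrete, here is a minimal sketch of what “learning” looks like for a simple machine learning model. It assumes scikit-learn (not any particular vendor’s product), and the numbers are purely illustrative.

```python
# A minimal sketch, assuming scikit-learn. "Learning" here just means
# re-running fit() with new labeled examples, which updates the numbers
# inside the model -- nothing more mysterious than that.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]      # illustrative inputs (e.g., machine hours)
y = [10, 20, 30, 40]          # known outputs (e.g., parts produced)

model = LinearRegression().fit(X, y)
print(model.predict([[5]]))   # predictably close to 50

# Feed in a new input/output pair, re-fit, and the model "learns":
X.append([6])
y.append(66)
model = LinearRegression().fit(X, y)
```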

Mystery Black Boxes

With deep learning and neural nets, the algorithm is configured with many hidden layers and steps that each build on one another. The final output comes out of a mystery “black box,” and this is by design.

A neural net’s algorithms are fed a huge amount of input data with the goal of the machine consistently arriving at the correct conclusion (we just don’t have any way of knowing how the conclusion was drawn or if it will work in real life).
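For a sense of what those hidden layers look like in code, here is a minimal sketch assuming Keras/TensorFlow. The layer sizes and the ten-class output are arbitrary choices for illustration; the point is that after training, the model’s “knowledge” is just a huge pile of numeric weights with no human-readable explanation attached.

```python
# A minimal sketch, assuming TensorFlow/Keras: a small network with
# several hidden layers. The trained weights are the "black box" -- we
# can inspect the numbers, but not read off a rule a human would follow.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # output: 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train / y_train would be the huge volume of labeled input data.
# model.fit(x_train, y_train, epochs=10)
```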

Real World Deep Learning / Neural Net Example

Deep learning and neural nets are fascinating and powerful technologies with an intrinsic flaw: they only perform well on data that resembles what they were trained on.

This is best understood through two popular examples. In both, the algorithm (AI) was trained on a huge volume of explicitly labeled samples. All that training is intended to produce a useful outcome when new examples are fed to the algorithm. Ideally, it will produce consistently good results.

First, AI Image Recognition

[Image: orange sheep]

If you upload this picture of orange sheep to Microsoft’s image recognition tool, it will give you the following caption: “a large body of water with a mountain in the background.”

(Image credit aiweirdness.com)

Not bad, right? But not great either. Try this one for fun:

[Image: mountain area]

“A herd of sheep grazing on a lush green hillside.”

(Image credit aiweirdness.com)

So with image recognition, it’s OK, but nothing like an AI that’s going to take over the world! The point here is that because of all the training the algorithm has been given, there are upsides and downsides.

Because a previous image (one that probably had sheep and looked like this one) was labeled “a herd of sheep grazing,” the algorithm incorrectly assumes that the rocks in this image are sheep.

Powerful? Yes. But intelligent? No.
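If you want to poke at this yourself without Microsoft’s tool, here is a rough sketch using a pretrained ImageNet model from torchvision. The file name is hypothetical, and this is not the model behind Microsoft’s captions; the point is that the model can only answer with one of the 1,000 labels it was trained on, whether or not any of them actually fit the photo.

```python
# A minimal sketch, assuming PyTorch/torchvision and a pretrained model.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

# Hypothetical path to the photo above.
image = Image.open("orange_sheep.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# The "answer" is whichever of the 1,000 training labels scores highest --
# the model has no concept of what a sheep (or a rock) actually is.
print(torch.topk(probs, 5))
```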

Second, Healthcare Diagnoses

Another, more compelling example comes from AI researcher Andrew Ng. He and a team of researchers at Stanford University trained a deep learning algorithm to diagnose pneumonia from chest x-rays. Overall, it performed better than radiologists.

Fantastic, right? Well, what Ng discovered is something he calls the “proof-of-concept-to-production gap.” This is the idea that with a constrained test set of data, AI can be trained to produce favorable outcomes, but it cannot produce the same outcomes on a new set of data it has not seen before.

Think about the example of AI learning what’s in a picture. If I submit a photo that the AI has been trained on, it will tell me exactly what’s in it because it has seen it before.

When AI Breaks

The same thing happens with any kind of AI learning software. New data causes unpredictable and sometimes crazy results.

Ng discovered this problem when he took his AI software to another hospital system and fed it their chest x-rays. Slight differences in the machines that took the images — or the way the images were taken — broke the algorithm, and it basically didn’t work.
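Here is a minimal, synthetic sketch of that gap, assuming scikit-learn. “Hospital A” and “Hospital B” are simulated with made-up numbers, and the shift in feature values stands in for differences in imaging machines, but the pattern is the one Ng describes: accuracy looks great on data from the source the model was trained on and falls off on data from a new source.

```python
# A minimal, synthetic sketch of the proof-of-concept-to-production gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two classes separated along one feature; `shift` mimics a different
    # machine or acquisition process at a new hospital.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

X_a, y_a = make_data(1000)              # "Hospital A" (training source)
X_b, y_b = make_data(1000, shift=2.0)   # "Hospital B" (new source)

model = LogisticRegression().fit(X_a, y_a)

print("same source:", accuracy_score(y_a, model.predict(X_a)))   # high
print("new source: ", accuracy_score(y_b, model.predict(X_b)))   # much lower
```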

As a side note, a trained radiologist can look at images from either hospital and still make a good diagnosis. Humans and AI do not learn the same way!


Real World Machine Learning Example

A very easy-to-understand example of how document AI learns is something called document classification. This is a method where software is trained to analyze the pages of a document to determine what kind of document it is.

The machine learning algorithm is fed an input, such as an invoice. Let’s say the machine learning algorithm has not been trained yet, and the system has no idea what type of document it is. It only knows what words and features are on the page.

A user of the software simply clicks a button that trains the AI software that this is an invoice. Immediately, an algorithm runs, looks at everything contained on the document and creates a large data table that contains a list of all words and features that appear on the page.
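Here is a rough sketch of that word-and-feature table, assuming scikit-learn (not Grooper’s actual engine) and a made-up line of invoice text:

```python
# A minimal sketch: turn the words on a page into a row of counted features.
from sklearn.feature_extraction.text import CountVectorizer

invoice_text = "Invoice Number 1042 Bill To Acme Corp Amount Due 500.00"  # illustrative

vectorizer = CountVectorizer()
features = vectorizer.fit_transform([invoice_text])

# One row per document, one column per word the system has seen so far.
print(vectorizer.get_feature_names_out())
print(features.toarray())
```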

If the user submits another invoice to the system that looks generally the same as the other, the software will recognize that yes, this is also an invoice. Classification done. Well, not so fast.

Why AI Learning ≠ Human Learning

What if another document type — like a purchase order — is fed into the system? Chances are there will be many similar words and features. The system will likely call this an invoice as well.

For machine learning to learn, it needs a human to show it at least one example of each type of document it will need to know about. When the system is trained to recognize both an invoice and a purchase order, it will look at either one and give a confidence rating: for example, this is 70% likely to be an invoice and only 30% likely to be a purchase order. In most cases it will be correct.
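Continuing the sketch above, once the system has one labeled example of each document type it can return that kind of confidence rating for a new page. Again, this assumes scikit-learn and made-up page text rather than any real document AI product.

```python
# A minimal sketch: one training example per class, then a confidence
# score for each class on a page the system has never seen.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_pages = [
    "Invoice Number 1042 Bill To Acme Corp Amount Due 500.00",
    "Purchase Order 889 Ship To Acme Corp Quantity 12 Unit Price 40.00",
]
labels = ["invoice", "purchase order"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(training_pages, labels)

new_page = "Invoice 2077 Bill To Globex Amount Due 125.00"   # illustrative
print(classifier.predict([new_page]))         # the most likely class
print(classifier.predict_proba([new_page]))   # the 70%/30%-style rating
```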

This is nothing like the way humans learn. If I’ve never seen a Nuclear Regulatory Commission data intake form and someone puts one in front of me, I’ll know what it is straight away even if it doesn’t say “Nuclear Regulatory Commission Data Intake” at the top.

  • I’ll notice the structure of the document looks like a form
  • I may notice that the address is that of the NRC
  • Or I may see a seal or footer on the document that clues me in. It may say “Submit this form to the NRC” or some such thing

The other possibility is that it is a sample data intake form using the NRC’s form as an example, but not really a form to be used in real life. I would understand that too, from the way the document is laid out.

The point is that machine learning needs human input to learn and to improve.

But What About Chess and Go?

AI systems that learn how to play games like chess and Go represent the power of computers more than an advancement in AI learning. Powerful? Yes. Smart? No.

For example, Google’s AlphaGo cannot play chess, or checkers, or tic-tac-toe. But that isn’t to marginalize what Google created.

AlphaGo represents an AI model that learns by playing itself. Given initial training data and the rules of the game, the software was programmed to avoid failure. And given enough processing power, the software could not only play against itself at super-human speeds, it could also predict moves out into the future to determine a new path to success with each and every move.
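AlphaGo’s actual methods (deep neural networks combined with tree search and self-play) are far beyond a blog snippet, but the core idea of searching future moves can be sketched with plain minimax on tic-tac-toe. This is only an illustration of look-ahead, not how AlphaGo works.

```python
# A minimal sketch of look-ahead search: from a mid-game position, the
# program tries every future line of play before choosing a move.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`, searching the whole game tree."""
    w = winner(board)
    if w == "X": return (1, None)
    if w == "O": return (-1, None)
    if " " not in board: return (0, None)

    results = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = " "
            results.append((score, i))
    return max(results) if player == "X" else min(results)

board = list("XOX"
             "OX "
             "  O")
print(minimax(board, "X"))   # (1, 6): X finds the winning square by looking ahead
```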

A Final Comment on How AI Learns

So you see, AI is really just algorithms (math) and a mix of cunning, creative approaches to working with data. Building an AI system that even remotely approaches human intelligence is something for the sci-fi films or many, many years in the future.

If someone tells you that their AI learns, or learns like a human, they are really saying one of three things:

  • Their algorithms are manually updated
  • Their algorithms work within programmed rules (think AlphaGo)
  • Or they are simply unsure how the software actually works.

If today's AI could learn independently of human input, that would imply that it KNOWS it was wrong in arriving at a decision.

If it knows it's wrong, wouldn't it have made a better choice the first time around?
