Explainable AI is a Necessity
I’ll be the first to admit that gaining an understanding of AI and Machine Learning (ML) has been frustrating. For me, it’s because I’ve always needed to know how things work.
With my car, it’s really simple: I open the hood, and everything I need to inspect is right there. When AI suddenly can’t recognize a sheep (more on that later), I can’t just pop the hood and poke around.
What cleared up my frustrations with AI was discovering that it doesn’t have to be hidden to work.
In fact, this idea of explainable AI opens up a whole new world of discovery.
Getting to the Core of the AI Apple
All AI is just algorithms – math.
In the case of neural nets (a prime example of hidden, “black box” AI, by the way), the algorithms work their magic – one trial at a time – to find the best path through the data.
The results often leave even the best researchers scratching their collective heads as they try to understand why the algorithm behaved the way it did. As it turns out, AI and M.L. algorithms often don’t do what we expect – and certainly arrive at some strange and sometimes dangerous conclusions.
Humans Programmed AI, So Isn’t There An Easy Fix?
Let me back up a bit before we address how to fix AI’s seemingly random and malicious behavior. In order for AI to work reliably, one has to define a narrow problem.
This starts to get into the realm of opinion, politics, and religion if you make these kinds of statements in the wrong company. I’m going out on a limb here repeating what I’ve learned over the past two years about AI.
The bottom line is that “General AI” aka “Strong AI” doesn’t exist, and if it did, it wouldn’t work.
"But wait," you say, “I’ve seen AI work! Fraud!!!”
I know, I said this would be contentious. What you have seen is, in fact, “Narrow AI” or “Weak AI” in action. That doesn’t denigrate its ability – AI corrects my spelling nearly continually.
Yet it still gets some of the same words wrong, doesn’t it? And heaven forbid if I accept a misspelling. Now, my brilliant AI will help me misspell that word in perpetuity.
What Problems is AI Good at Solving and What is It Not Good at?
We tend to be influenced a little too much by Hollywood and the various media outlets. Even Microsoft has a blurb saying their AI is approaching “Human Parity.”
It’s not. They, of course, go no further than the claim, because it helps the narrative that “we all need AI.” And that I can’t dispute. AI is doing some really great things, and those of us here at BIS who use AI regularly with Grooper really are seeing fantastic results.
But it’s not magic. It’s an ALGORITHM. Math.
The predictable, concrete, logical, and infallible nature of mathematics is what’s behind AI. And this is where we run into trouble. We’re trying to use math to simulate human cognition. Humans aren’t logical, even when they think they are!
BIG IDEA: What AI ends up being good at is repeatable, predictable tasks that are well (narrowly) defined.
“This looks like that” works very well. Except when it doesn’t. You see, the algorithms are designed by humans, and they have errors and bugs in them as well.
An Example of AI Errors
As one story around computer vision goes, a neural net (AI) was trained to spot sheep. Virtually all of the training data set showed sheep in a field – because, well, sheep are usually found in fields, right? So the AI claimed it was working: it was finding sheep.
But, as it turns out, AI doesn’t understand. It just finds things that look like the things it was trained with. In this case, when a sheep was shown to the AI all by itself, it wasn’t detected. But if a green field was shown to the AI, it detected a sheep. The AI had associated the field with the recognition, not the actual sheep.
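The sheep story can be reproduced in miniature. The sketch below is entirely hypothetical: it pretends each image boils down to two made-up features (a “green field” fraction and a woolly-texture score) and uses a simple nearest-centroid classifier, not a real neural net. Because every training sheep came with a field, the class averages end up keyed on the field, not the wool:

```python
# Toy illustration of the "sheep in a field" failure. The features and
# numbers are invented for illustration; this is not a real vision model.
import math

# Training data: every "sheep" photo also contains a green field.
training = {
    "sheep":    [(0.90, 0.30), (0.85, 0.35)],   # (green_fraction, wool_score)
    "no sheep": [(0.10, 0.05), (0.15, 0.10)],   # city streets, indoors, etc.
}

def centroid(points):
    """Average feature vector for one class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Nearest-centroid classifier: pick the closest class average."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# A sheep on a plain background: lots of wool, no green field.
print(classify((0.05, 0.80)))  # -> no sheep
# An empty green field: no wool at all.
print(classify((0.90, 0.05)))  # -> sheep
```

The model is doing exactly the math it was given – it just learned the background instead of the animal, because the training data never forced it to tell the two apart.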
This is very common with AI as it turns out. The definition of neural networks is that they have “at least one hidden layer.”
It’s the hidden part that gets you. You feed the AI and if the results aren’t what you expected, you don’t get an explanation of what went wrong. You get “no sheep.”
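To make “hidden layer” concrete, here’s a minimal sketch: a tiny network with one hidden layer that computes XOR, something no single-layer model can do. The weights are hand-picked for illustration, not learned. Notice that the intermediate values h1 and h2 mean nothing obvious on their own – that’s the hidden part:

```python
# A minimal fixed-weight network with one hidden layer, computing XOR.
# Weights are hand-chosen for illustration, not trained.

def relu(v):
    """Rectified linear unit: the standard hidden-layer activation."""
    return max(0.0, v)

def tiny_net(x1, x2):
    # Hidden layer: two neurons, each a weighted sum passed through ReLU.
    h1 = relu(1.0 * x1 + 1.0 * x2)         # fires if either input is on
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)   # fires only if both are on
    # Output layer combines the hidden activations.
    return 1.0 * h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, tiny_net(a, b))  # prints 0.0, 1.0, 1.0, 0.0 -- XOR
```

Even in this four-weight toy, the answer to “why did it say 0?” is buried in intermediate arithmetic. Scale that up to millions of weights and you see why a wrong answer comes back as just “no sheep,” with no explanation attached.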
BIG TIP: When you’re using AI to achieve business outcomes, it makes sense to ensure the system you’re using exposes how the AI reaches its conclusions.
As a result, you can tailor the system for maximum productivity and output.
How Our Company Uses Explainable AI Every Day
Here at BIS, we pride ourselves on the use of explainable AI. We use the TF-IDF (term frequency – inverse document frequency) algorithm quite a bit.
When we do, we show the results in real time. We show the rankings of each term, field, etc. on the document so you know what’s happening.
And we go one step further: we engineered the software with configurable AI. Trained users can adjust and tune the algorithm to maximize results – and, as always, see those results in real time for rapid testing and deployment.
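The appeal of TF-IDF is exactly that it’s explainable: every score is just two countable quantities multiplied together. Here’s a bare-bones sketch on a toy three-document corpus (the documents and terms are invented for illustration; Grooper’s production implementation is certainly more sophisticated):

```python
# Bare-bones TF-IDF scoring over a toy corpus. Each score is fully
# inspectable: term frequency times inverse document frequency.
import math

docs = [
    "invoice total amount due".split(),
    "invoice number and date".split(),
    "purchase order number".split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)           # how often the term appears here
    df = sum(1 for d in corpus if term in d)  # how many documents contain it
    idf = math.log(len(corpus) / df)          # rarer across the corpus = higher
    return tf * idf

# Rank every term in the first document; terms unique to it score highest,
# while "invoice", which also appears elsewhere, scores lowest.
doc = docs[0]
ranking = sorted(set(doc), key=lambda t: tf_idf(t, doc, docs), reverse=True)
for term in ranking:
    print(f"{term:8s} {tf_idf(term, doc, docs):.3f}")
```

That per-term ranking is the kind of thing an explainable system can put on screen: you can see exactly why one term outweighed another, and tune from there.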
Some people call it innovative. We’re just calling it explainable AI. Why would you want it any other way?