I’ll be the first to admit that gaining an understanding of AI and Machine Learning (ML) has been frustrating. For me, it’s because I’ve always had a need to know how things work.
With my car, it’s really simple: I open the hood and the parts are right there in front of me. When AI suddenly can’t recognize a sheep (more on that later), I can’t just pop the hood and poke around.
What cleared up my frustrations with AI was discovering that it doesn’t have to be hidden to work.
In fact, this idea of explainable AI opens up a whole new world of discovery.
All AI is just algorithms – math.
The results often leave even the best researchers scratching their collective heads as they try to understand why the algorithm behaved the way it did. As it turns out, AI and ML algorithms often don’t do what we expect, and they sometimes arrive at strange, even dangerous, conclusions.
Let me back up a bit before we address how to fix AI’s seemingly random and malicious behavior. In order for AI to work reliably, one has to define a narrow problem.
The bottom line is that “General AI,” aka “Strong AI,” doesn’t exist, and anything marketed as such today won’t work reliably.
"But wait," you say, “I’ve seen AI work! Fraud!!!”
I know, I said this would be contentious. What you have seen is, in fact, “Narrow AI” or “Weak AI” in action. That doesn’t denigrate its ability: AI corrects my spelling nearly continually.
Yet it still gets some of the same words wrong, doesn’t it? And heaven forbid I accept a misspelling. Now my brilliant AI will help me misspell that word in perpetuity.
We tend to be influenced a little too much by Hollywood and the various media outlets. Even Microsoft has a blurb claiming its AI is approaching “Human Parity.”
But it’s not magic. It’s an ALGORITHM. Math.
The predictable, concrete, logical, and infallible nature of mathematics is what’s behind AI. And this is where we run into trouble. We’re trying to use math to simulate human cognition. Humans aren’t logical, even when they think they are!
Most of that math boils down to pattern matching: “this looks like that.” It works very well, except when it doesn’t. You see, the algorithms are designed by humans, who introduce errors and bugs, and humans also choose the training data, which can mislead the algorithm in ways nobody intended.
As one story around computer vision goes, a neural net (AI) was trained to spot sheep. Virtually all of the training data set showed sheep in a field, because, well, sheep are usually found in fields, right? So the AI claimed it was working: it was finding sheep. What it had actually learned to recognize was grassy fields. Show it a sheep somewhere unexpected, or an empty green field, and the answers fall apart.
This turns out to be very common with AI. Neural networks, by definition, have “at least one hidden layer” sitting between the input and the output.
It’s the hidden part that gets you. You feed the AI your data, and if the results aren’t what you expected, you don’t get an explanation of what went wrong. You just get “no sheep.”
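To make the “hidden” part concrete, here is a minimal sketch of a one-hidden-layer network in Python. Everything in it, the features, the weights, the threshold, is invented for illustration; the point is that the caller only ever sees the final label, never the reasoning.

```python
import numpy as np

# Toy "sheep detector" with one hidden layer.
# The weights are random stand-ins; a real network learns them from data.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))  # input features -> hidden layer
W_output = rng.normal(size=3)       # hidden layer -> single "sheep" score

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect_sheep(features):
    """Return only a score. The hidden layer's activations stay hidden."""
    hidden = sigmoid(features @ W_hidden)   # the part you never get to see
    return float(sigmoid(hidden @ W_output))

# Four made-up image features (how green, how woolly, and so on).
photo = np.array([0.9, 0.1, 0.4, 0.7])
print("sheep" if detect_sheep(photo) > 0.5 else "no sheep")
```

If that prints “no sheep” for a photo that plainly contains one, your only recourse is to dig into weight matrices that were never meant to be read by a person.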
BIG TIP: When you’re using AI to achieve business outcomes, it makes sense to ensure the AI system you’re using exposes how the AI arrives at its answers.
As a result, you can tailor the system for maximum productivity and output.
Here at BIS, we pride ourselves on the use of explainable AI. We use the TF/IDF algorithm quite a bit.
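For readers who haven’t met it, TF/IDF scores a term by how often it appears in a document (term frequency), discounted by how common it is across all documents (inverse document frequency). Here is a simplified, self-contained sketch in Python, not our production code, that shows why we call it explainable: every score traces back to counts you can verify by hand.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Score every term in every document using plain counts and one logarithm."""
    doc_counts = [Counter(doc.lower().split()) for doc in documents]
    n_docs = len(documents)

    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for counts in doc_counts:
        df.update(counts.keys())

    scores = []
    for counts in doc_counts:
        total = sum(counts.values())
        scores.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return scores

docs = [
    "invoice for wool shipment",
    "invoice for sheep feed",
    "meeting notes about sheep",
]
for doc, score in zip(docs, tf_idf(docs)):
    print(doc, "->", {term: round(s, 3) for term, s in score.items()})
```

If a document scores high on the wrong term, you can see exactly why: the counts and the formula are sitting right there.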
And we go one step further: we engineered the software with configurable AI. Trained users can adjust and tune the algorithm to maximize results and, as always, see the results in real time for rapid testing and deployment.
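As a purely hypothetical illustration of that kind of tuning (the knob shown here, a custom stop-word list, is something I made up for this post, not a description of our product’s actual settings), a user could tell the scorer to ignore boilerplate terms and immediately see the scores shift:

```python
# Hypothetical tuning knob: drop boilerplate terms, then re-score instantly.
# Reuses tf_idf() and docs from the sketch above.
STOP_WORDS = {"for", "about"}

def tf_idf_tuned(documents):
    cleaned = [
        " ".join(word for word in doc.lower().split() if word not in STOP_WORDS)
        for doc in documents
    ]
    return tf_idf(cleaned)

for doc, score in zip(docs, tf_idf_tuned(docs)):
    print(doc, "->", {term: round(s, 3) for term, s in score.items()})
```

Because the whole pipeline is visible, the effect of a change like that is obvious the moment you rerun it.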
Some people call it innovative. We’re just calling it explainable AI. Why would you want it any other way?