A.I. still gets all the hype these days. Even companies without real A.I. have jumped on the bandwagon, telling outright lies. Consider these examples:
- Have you heard of Engineer.ai (now builder.ai)? Their brilliant idea of automating mobile app development turned out to be just a team of human engineers hacking away at code in the background.
- And Expensify - can you guess what their A.I. is? Yep, you got it - manual data entry.
But the plot thickens when you understand that it's Mechanical Turk workers who have access to some very personal and private data.
Scared?
Proponents of this practice - swapping A.I. tasks for much cheaper Human Intelligence Tasks (HITs) - say it gives software companies the time they need to amass the data required for real A.I. (supposing they know how to build it).
And of course, they need your funding.
THE BIG IDEA: If you invested in a product that's supposed to be A.I., and found out it wasn't, what would you do?
And, what if the result is your unprotected data being bounced all over the globe? Equifax hack nightmares?
I know I'll never send anything through Expensify until they can prove their security. Yikes!
So how do you know whether an A.I. solution is the real thing or you're being sold a bill of goods?
Here are 2 Ways:
- Does the A.I. take hours or days to get back to you? (Not A.I. - there are humans somewhere processing your data.)
- Does the software show you how the A.I. is actually working? (Transparent A.I.)
If Both A.I. Requirements are Met, There's 1 Final Factor to Consider:
Who gets access to your data?
If the provider stores and uses your information beyond what your own needs require, that's a red flag that their software cannot function without access to your sensitive data.
At that point, you have to decide whether this kind of exposure is something your business can afford, and if you want to fund their endeavor.
I should point out that not all A.I. can, or should be transparent. Take self-driving cars, for example.
It would be difficult to explain all the algorithms keeping motorists and pedestrians safe, and the bigger question there is "Who is liable?"
The Moral of This Story is All About Ethics and Security
When real A.I. (such as some document recognition A.I.) can't be so transparent because of its complexity, it should:
- Meet ethical standards
- Meet a security threshold to ensure privacy and protection of data