A.I. still gets all the hype these days. Even companies without real A.I. are jumping on the bandwagon and telling outright lies.
Have you heard of Engineer.ai? Their brilliant idea of automating mobile app development turned out to be just a team of human engineers hacking away at code in the background.
And Expensify - can you guess what their A.I. is? Yep, you got it - manual data entry. But the plot thickens when you learn that it was Mechanical Turk workers who had access to some very personal and private data. Scared?
Proponents of this practice of swapping A.I. tasks for much cheaper Human Intelligence Tasks (HITs) say it buys software companies the time they need to amass the data required for real A.I. (supposing they know how to build it…). And of course, they need your funding.
If you invested in a product that's supposed to be A.I., and found out it wasn't, what would you do? And, what if the result is your unprotected data being bounced all over the globe? Equifax hack nightmares?
I know I'll never send anything through Expensify until they can prove their security - yikes!
So how's the average purchaser to know whether they're getting the real thing or being sold a bill of goods? There are two questions to ask:
- Does the A.I. take hours or days to get back to you? (If so, it's not A.I. - there are humans somewhere processing your data)
- Does the software show you how the A.I. is actually working? (Transparent A.I.)
And if both these tests pass, there's a final thing to consider: who gets access to your data? If the provider stores and uses your information beyond your own needs, that's a red flag that their software cannot function without access to your sensitive data - and you have to decide whether that kind of exposure is something your business can afford, and whether you want to fund their endeavor.
I should point out that not all A.I. can, or should, be transparent. Take self-driving cars, for example. It would be difficult to make every algorithm keeping motorists and pedestrians safe understandable to the public - and the bigger question there is "Who is liable?"
Perhaps the moral of this story is really about ethics and security. When real A.I. can't be fully transparent because of its complexity, it should at least meet ethical standards. And all A.I. must meet a security threshold that ensures privacy and protection of data.