A.I. still gets all the hype these days. Even technology without real A.I. is on the bandwagon, telling outright lies such as these:
Proponents of this practice of quietly swapping A.I. tasks for much cheaper Human Intelligence Tasks (HITs) say it buys software companies the time they need to amass the data required for real A.I. (supposing they know how to build it).
And of course, they need your funding.
THE BIG IDEA: If you invested in a product that's supposed to be A.I., and found out it wasn't, what would you do?
And what if the result is your unprotected data being bounced all over the globe? Equifax hack nightmares?
I know I'll never send anything through Expensify until they can prove their security. Yikes!
Who gets access to your data?
At that point, you have to decide whether this kind of exposure is something your business can afford, and whether you want to fund their endeavor.
I should point out that not all A.I. can, or should, be transparent. Take self-driving cars, for example.
It would be difficult to explain all the algorithms keeping motorists and pedestrians safe, and the bigger question there is "Who is liable?"
When real A.I. (such as some document recognition A.I.) can't be so transparent because of complexity, it should: