The intelligence in AI is computational intelligence; a better term might be Automated Intelligence. When it comes to good judgment, AI is no smarter than the human brains that designed it. Many automated systems perform so poorly that you might wonder whether AI stands for Artificial Innumeracy.

Critical systems, such as automated piloting or running a power plant, usually do well with AI and automation, since considerable testing is done before these systems are deployed. But for many mundane tasks, such as spam detection, chatbots, spell checking, detecting duplicate or fake accounts on social networks, detecting fake reviews or hate speech, search engine technology (Google), or AI-based advertising, much progress remains to be made. It works just like in the video below, featuring a drunken robot.

 


Why can driverless cars recognize a street sign, while Facebook's algorithms cannot recognize whether a picture contains text? Why does Alexa fail to understand the command "Close the lights" yet understand "Turn off the lights"? Sometimes the limitations of AI simply reflect the limited knowledge of the people who implement these solutions: they may not know much about the business operations and products, and are sometimes glorified coders. In some cases, the systems are so poorly designed that they can be used in unintended, harmful ways. For instance, some Google algorithms automatically detect and block websites that use tricks to rank at the top of search results pages. But you can apply those same tricks against your competitors to get them blocked, defeating the purpose of the algorithm.
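To see why a command like "Close the lights" can trip up an assistant, consider a minimal sketch of template-based intent matching. Everything here is hypothetical (real assistants use trained natural-language-understanding models, not lookup tables), but a rigid matcher without a synonym layer fails on paraphrases in exactly this way:

```python
# Hypothetical illustration: rigid phrase templates fail on paraphrases
# such as "close the lights" unless synonyms are normalized first.

EXACT_TEMPLATES = {
    "turn off the lights": "lights_off",
    "turn on the lights": "lights_on",
}

# Normalizing common verb synonyms recovers paraphrased commands.
SYNONYMS = {
    "close": "turn off",
    "shut": "turn off",
    "open": "turn on",
}

def match_intent(utterance):
    """Return an intent name, or None if the utterance is not recognized."""
    text = utterance.lower().strip()
    if text in EXACT_TEMPLATES:
        return EXACT_TEMPLATES[text]
    # Rewrite a leading verb synonym into its canonical form and retry.
    for word, canonical in SYNONYMS.items():
        if text.startswith(word + " "):
            normalized = canonical + text[len(word):]
            if normalized in EXACT_TEMPLATES:
                return EXACT_TEMPLATES[normalized]
    return None

print(match_intent("Turn off the lights"))  # lights_off
print(match_intent("Close the lights"))     # lights_off, thanks to the synonym table
print(match_intent("make me a sandwich"))   # None
```

Without the `SYNONYMS` table, "Close the lights" falls through to `None`, which is the behavior the example above complains about: the fix is cheap, but someone has to know the users' vocabulary well enough to add it.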

Why is AI still failing on mundane tasks?

I don't have an answer. But I think that tasks that are not critical to the survival of a business (such as spam detection) receive little attention from executives, and even employees working on these tasks may be tempted to avoid anything revolutionary and keep a low profile. Imagination is not encouraged beyond a limited level. It is a case of "if it ain't broke, don't fix it."


For instance, if advertising dollars are misused by a poorly designed AI system (assuming the advertising budget is fixed), the negative impact on the business is limited. If, on the contrary, it is done well, the upside could be great. The fact is, for non-critical tasks, businesses are not willing to significantly change their routine, especially for projects where ROI is deemed impossible to measure accurately. For tiny companies where the CEO is also a data scientist, things are very different, and the incentive to have well-performing AI (to beat the competition or reduce workload) is high.

Originally posted here. Follow the author on Twitter, at @GranvilleDSC


