Artificial intelligence really isn’t all that intelligent
Otesanya David | April 1, 2022

From self-driving cars to dancing robots in Super Bowl commercials, artificial intelligence (AI) is everywhere. The problem with all of these AI examples, though, is that they’re not really intelligent. Rather, they represent narrow AI – an application that can solve a specific problem using artificial intelligence techniques. And that is very different from what you and I possess.

Humans (hopefully) display general intelligence. We can solve a wide range of problems, including ones we have never encountered before, and we can learn from new situations and new experiences. We understand that physical objects exist in a three-dimensional environment and are subject to physical attributes such as the passage of time. The ability to replicate human-level thinking artificially, known as artificial general intelligence (AGI), simply does not exist in what we today think of as AI.

That’s not to take anything away from the overwhelming success AI has enjoyed to date. Google Search is an outstanding example of AI that most people use regularly. Google can search vast volumes of information at incredible speed and (usually) surface the results the user wants near the top of the list.

Similarly, Google Voice Search lets users speak their search requests. A user can say something ambiguous and get back a result that is properly spelled, capitalized, punctuated, and, to top it off, usually what the user meant.

How does it work so well? Google has historical data from trillions of searches, along with records of which results users chose. From that history, it can predict which searches are likely and which results will make the system useful. But there is no expectation that the system understands what it is doing or any of the results it presents.
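
To make the idea concrete, here is a minimal sketch of that mechanism in Python. It is not Google’s actual system; the click log, queries, and URLs below are invented, and the “ranking” is nothing more than counting historical clicks:

    # Rank results for a query by how often past users clicked them.
    # The data below is entirely made up for illustration.
    from collections import Counter, defaultdict

    # Hypothetical click log: (query, result_clicked) pairs.
    click_log = [
        ("jaguar", "jaguar-cars.example.com"),
        ("jaguar", "jaguar-cars.example.com"),
        ("jaguar", "jaguar-the-animal.example.org"),
        ("python", "python.org"),
        ("python", "python.org"),
        ("python", "ball-python-care.example.net"),
    ]

    # Count clicks per result, grouped by query.
    clicks = defaultdict(Counter)
    for query, result in click_log:
        clicks[query][result] += 1

    def rank_results(query):
        # Pure popularity prediction: no understanding of what the
        # query or the results actually mean.
        return [result for result, _ in clicks[query].most_common()]

    print(rank_results("jaguar"))
    # ['jaguar-cars.example.com', 'jaguar-the-animal.example.org']

The toy version makes the point plainly: the usefulness comes from the statistics of past behavior, not from any comprehension of the content.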

This highlights the need for a huge amount of historical data. That works well in search because every user interaction can create a new training data item. But if the training data needs to be tagged manually, building it becomes an arduous task. Further, any bias in the training set will flow directly into the result. If, for example, a system is developed to predict criminal behavior, and it is trained with historical data that includes a racial bias, the resulting application will have a racial bias as well.
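
The same point can be shown with a toy example. The data below is entirely synthetic, and the “model” is deliberately simplistic (it just memorizes the historical rate of the “high risk” label per group), but it illustrates how bias in training labels passes straight through to predictions:

    # Entirely synthetic historical decisions, deliberately biased
    # against group "B": (group, labeled_high_risk) pairs.
    from collections import defaultdict

    history = [
        ("A", 0), ("A", 0), ("A", 1), ("A", 0),
        ("B", 1), ("B", 1), ("B", 0), ("B", 1),
    ]

    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in history:
        totals[group] += 1
        positives[group] += label

    def predicted_risk(group):
        # "Learned" prediction: just the positive rate in the
        # biased historical labels for this group.
        return positives[group] / totals[group]

    for group in ("A", "B"):
        print(group, predicted_risk(group))
    # A 0.25
    # B 0.75  <- the historical bias is reproduced exactly

A real classifier is more sophisticated than a lookup table, but when group membership is the dominant signal in biased labels, it converges toward the same behavior.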
