Will Artificial Intelligence someday dominate humans?


BY William Meisel

DAILY NEWS CONTRIBUTOR

Monday, January 25, 2016, 2:35 PM


Experts have warned of the dangers of Artificial Intelligence getting so powerful that it dominates humans.


The term “Artificial Intelligence,” or AI, suggests a category of computer technology that challenges humans. Some very smart people, like Elon Musk, Bill Gates, and Stephen Hawking, warn of the dangers of AI getting so powerful it dominates humans.

The basic claim is that these techniques will get smarter as computer power continues to grow, until we reach what has been called a “singularity” — where the “artificial” intelligence is smarter than human intelligence.


But let’s take a deeper look at this category of advanced computing. “Artificial Intelligence” is a term with a long history. It has been an area of academic study since the 1950s. One major early focus was “expert systems,” where a human expert in a field used specialized software tools to express their knowledge as a form of computer code.

Non-experts could then access that knowledge through a software program that reported what the expert would recommend for specific cases in narrow areas of knowledge.


The area of expert systems fizzled in part because the specialized computer languages turned out to be more limited than, and not much easier to use than, general-purpose computer languages.


But, most fundamentally, it turned out that most human knowledge couldn’t be easily reduced to a series of explicit rules — experts often disagree on many points.


Early attempts to express the uncertainty in knowledge led to techniques with names like “fuzzy logic,” but those approaches ran into the same limitations in expressing knowledge as explicit rules, fuzzy or not.

AI has more recently been resurrected in a new generation of technologies that analyze data statistically, with names such as “machine learning.”


Machine learning analyzes data to create mathematical models with names like “deep neural networks,” suggesting a connection to the way humans think. “Big data” is now regularly collected and stored economically, and today’s computers provide the heavy computation and large memory required for statistical analysis of these large databases.

Some software tools we use almost daily, such as web search, depend on such statistical analyses of large databases. Machine learning is also used in tasks such as telling us what books or movies we might like based on what people with tastes similar to ours have chosen.


Calling such techniques “artificial intelligence” is misleading if the implied comparison is to human intelligence. In many ways, data-based computer intelligence goes beyond human intelligence. No human could look at the amount of data that the statistical software does and see the connections well enough to draw the “statistically significant” conclusions that the software can.


On the other hand, human intelligence draws on a lifetime of experience in living in a human body and dealing with other humans. Statistical software only summarizes what is in the data.


The term Artificial Intelligence seems to fit best for technologies like “speech recognition,” where there is an attempt to mimic human understanding of language (e.g., in “personal assistants” like Apple’s Siri, Google voice search, or Microsoft’s Cortana). Technology that understands language mimics capabilities we once thought were unique to humans.

But even those concerned about AI going too far concede that its progress is driven by its doing things for us that we want done.

The large companies investing substantially in AI-style technologies are doing so because those technologies attract users. Personal assistant technology is useful in part because the typical point-and-click interface is difficult to use on a small smartphone screen or while driving.

And we can avoid long and painful navigation through many screens of any digital system if we can get a result in one or two steps by just saying what we want.


With companies such as Microsoft and IBM (with its Watson offerings) providing “machine learning” as a pay-for-usage service, these advanced AI techniques are available to any company. A company doesn’t have to invest in basic research to use the technology, so we are likely to see substantial growth in such systems supporting many areas of our lives, both personal and business.

Other companies are providing tools for directly building specialized digital assistants. A company can use these tools to create an assistant app that helps, for example, with customer service without investing in core technology development.


This trend will lead to most companies providing specialized personal assistants much as they provide websites today. We will find it easier to do a range of activities, from ordering pizza at home to using Salesforce.com software at work.


Like any technology, these advances can be misused. But creating a special category called Artificial Intelligence and looking at it as dangerous when we get “too much computing power” is misleading. We certainly have too many automobile accidents, but blaming that on “too much horsepower” misses the mark.


Advances in language technology and digital assistants can make humans better rather than replacing them. As we learn to connect with digital technology through human language, that technology becomes almost an extension of our human intelligence, one that can always be with us because of mobile devices.


Perhaps “human intelligence” will become “Augmented Intelligence,” aiding us in our lives and our jobs, and providing a new meaning to AI.


William Meisel is an industry analyst covering the commercial uses of speech and language understanding technology. Meisel’s 2013 book, “The Software Society: Cultural and Economic Impact,” discusses the impact of the acceleration of technology development, particularly software advances, on how we live and our economy. Meisel writes a monthly paid-subscription industry newsletter, Speech Strategy News, and organizes the annual Mobile Voice Conference in his role as Executive Director of the Applied Voice Input Output Society. Dr. Meisel began his career as a professor of Electrical Engineering and Computer Science at USC and published the first technical book on “machine learning” (“Computer-Oriented Approaches to Pattern Recognition,” from Academic Press). He wrote a mystery novel in 2015, “Technically Dead.”