AI gets its groove back

After decades of start-and-stop, artificial intelligence is being advanced by major computing firms from Facebook and Google to IBM.

Try this: Go online to translate.google.com.

In the left-hand input box, type, "The spirit is willing, but the flesh is weak." In the right-hand box, decide which language you want it translated to. After it's translated the first time, copy the translated text and paste it into the left-hand box for conversion back into English.

If you don't get exactly the original text back, the back-translation will in all likelihood still reflect at least part of the original thought: that the subject's actions fell short of his or her intentions, and not that the wine was good but the meat was tasteless, which the phrase could mean in a literal translation.

In other words, a machine figured out what you meant, not merely what you said.
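
For readers who want to automate the experiment, the round trip is easy to script. The sketch below is illustrative only: the translate() helper is a hypothetical stand-in for whatever translation service you have access to (Google's Cloud Translation API is one option); the round-trip logic is the point.

    # A rough sketch of the round-trip translation test (Python).
    # translate() is a hypothetical placeholder; wire it to a real
    # translation client before running.
    def translate(text, target_lang):
        raise NotImplementedError("connect this to a real translation service")

    def round_trip(text, via_lang):
        # Translate into another language, then back into English.
        forward = translate(text, target_lang=via_lang)
        return translate(forward, target_lang="en")

    original = "The spirit is willing, but the flesh is weak."
    restored = round_trip(original, via_lang="ru")
    print(original)
    print(restored)  # a good system preserves the meaning, if not the exact words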

"In the 1960s, this was considered impossible," explains Michael Covington, a consultant and retired associate director of the Institute for Artificial Intelligence at the University of Georgia.

For decades the field of artificial intelligence (AI) cycled between two seasons: recurring springs, in which hype-fueled expectations ran high, and subsequent winters, after the promises of spring could not be met and disappointed investors turned away. But now real progress is being made, and it's being made in the absence of hype. In fact, some of the chief practitioners won't even talk about what they are doing.

Seasons old and new

"AI is becoming real," says Jackie Fenn, a Gartner analyst. "AI has been in winter for a decade or more but there have been many breakthroughs [during] the last several years," she adds, pointing to face recognition algorithms and self-driving cars.

Researcher Daniel Goehring, a member of the Artificial Intelligence Group at the Freie Universitaet (Free University), demonstrates hands-free driving during a 2011 test in Berlin. The car, a modified Volkswagen Passat, is controlled by 'BrainDriver' software with a neuroheadset device that interprets electroencephalography signals with additional support from radar-sensing technology and cameras. REUTERS/Fabrizio Bensch

"There was a burst of enthusiasm in the late 1950s and early 1960s that fizzled due to a lack of computing power," recalls Covington. "Then there was a great burst around 1985 and 1986 because computing power had gotten cheaper and people were able to do things they had been thinking about for a long time. The winter came in the late 1980s when the enthusiasm was followed by disappointment," and small successes did not turn into big successes. "And since then, as soon as we get anything to work reliably, the industry stops calling it AI."

In the "early days" -- the 1980s -- "we built systems that were well-constrained and confined, and you could type in all the information that the system would make use of," recalls Kris Hammond, co-founder of Narrative Science, which sells natural-language AI systems. "The notion was to build on a substrate of well-formed rules, and chain through the rules and come up with an answer. That was the version of AI that I cut my teeth on. There are some nice success stories but they did not scale, and they did not map nicely onto what human beings do. There was a very strong dead end."

Today, thanks to the availability of vast amounts of online data and inexpensive computational power, especially in the cloud, "we are not hitting the wall anymore," Hammond says. "AI has reached an inflection point. We now see it emerging from a substrate of research, data analytics and machine learning, all enabled by our ability to deal with large masses of data."

Going forward, "The idea that AI is going to stall again is probably dead," says Luke Muehlhauser, executive director of the Machine Intelligence Research Institute (MIRI) in Berkeley, Calif. "AI is now ubiquitous, a tool we use every time we ask Siri a question or use a GPS device for driving directions."

Deep learning

Beyond today's big data and massive computational resources, sources cite a third factor pushing AI past an inflection point: improved algorithms, especially the widespread adoption of a decade-old algorithm called "deep learning." Yann LeCun, director of Facebook's AI Group, describes it as a way to more fully automate machine learning by using multiple layers of analysis that can compare their results with other layers.

He explains that previously, anyone designing a machine-learning system could not simply feed it data: first they had to hand-craft software to identify the sought-after features in the data, and then hand-craft more software to classify the features that were found. With deep learning, both of these manual stages are replaced by trainable machine-learning components.

"The entire system from end to end is now multiple layers that are all trainable," LeCun says.

(LeCun attributes the development of deep learning to a team led by Geoff Hinton, a professor at the University of Toronto who now works part-time for Google; LeCun was, in fact, part of Hinton's deep learning development team. Hinton did not respond to interview requests.)

Even so, "deep learning can only take us so far," counters Gary Marcus, a professor at New York University. "Despite its name it's rather superficial -- it can pick up statistical tendencies and is particularly good for categorization problems, but it's not good at natural language understanding. There needs to be other advances as well so that machines can really understand what we are talking about."

He hopes the field will revisit ideas that were abandoned in the 1960s since, with modern computer power, they now might produce results, such as a machine that would be as good as a four-year-old child at learning language.

In the final analysis, "About half of the progress in the performance of AI has been from improved computing power, and half has been from improvements by programmers. Sometimes, progress is from brute force applied to get a one percent improvement. But the ingenuity of people like Hinton should not be downplayed," says MIRI's Muehlhauser.

The AI rush

If the spectacle of large corporations investing major sums in a technology is evidence that the technology has gone mainstream, future historians may say that AI reached that point in the winter of 2013-2014.

In January, Rob High, vice president and chief technology officer of the Watson Group, announced IBM's plans to invest $1 billion in AI over the next few years. This includes $100 million as venture capital seed money to invest in Watson-based startups.

IBM has made no secret of its embrace of AI, especially after its Watson natural-language AI system (with access to four terabytes of information) famously won the TV quiz show Jeopardy! against two human champions in 2011.

Michael Rhodin, the new head of IBM's Watson Group, announcing in January that IBM will invest more than $1 billion to establish a new business unit around its Watson cognitive supercomputer. REUTERS/Brendan McDermid

High explains that Watson involves "a major shift from classical AI, which relied heavily on ontology for evaluating questions or answers. Instead we are aggregating multiple technologies and multiple strategies to disambiguate results and enhance fidelity. My wife calls me to say that she will stop at the store on the way home. That is ambiguous, but I have enough history to know what she is talking about."

The result of these aggregated technologies is that Watson can read natural-language material and derive information from it with success approaching that of a human being, he explains.
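
High's anecdote hints at the general pattern: generate candidate interpretations, score each with several independent strategies, and let the combined score disambiguate. The toy sketch below shows only that pattern; the candidates, scorers and weights are invented for illustration and do not describe IBM's actual technology.

    # Toy illustration of aggregating multiple scoring strategies to
    # disambiguate among candidate interpretations. All numbers are made up.
    candidates = ["grocery store", "hardware store", "app store"]

    def history_score(candidate):
        # how often this interpretation matched past behavior
        return {"grocery store": 0.8, "hardware store": 0.3, "app store": 0.1}[candidate]

    def context_score(candidate):
        # how well it fits the current context, e.g. "on the way home"
        return {"grocery store": 0.7, "hardware store": 0.5, "app store": 0.2}[candidate]

    def aggregate(candidate, weights=(0.6, 0.4)):
        return weights[0] * history_score(candidate) + weights[1] * context_score(candidate)

    print(max(candidates, key=aggregate))   # -> "grocery store"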

IBM is exploring the use of Watson in a number of industries, especially medicine, where it could digest all available clinical literature associated with a case. "Doctors see a demo and walk away giddy about how it affects their ability to make decisions," High says.

As a product, Watson will be based in the cloud, but developers will be able to embed access to it in their applications, he adds.

Google, meanwhile, also spent this past winter making significant AI-related investments, building on its March 2013 acquisition of DNN Research, which works in the field of deep neural networks.

In January, Google reportedly paid $400 million or more (reports vary) for DeepMind Technologies, a London-based machine learning firm.

Google spokespersons declined to discuss Google's AI-related actions and plans. Facebook's LeCun, however, is familiar with DeepMind. "They had hired some of my students," he recalls. "They had a presentation where they connected their system to an old video game like Space Invaders and had it try to maximize points by trial and error, learning the game from scratch. After a week it was better than a human," he says.
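
The trial-and-error approach LeCun describes is known as reinforcement learning. The sketch below shows the idea in its simplest tabular form on a toy one-dimensional "game"; DeepMind's system was far more sophisticated, pairing reinforcement learning with deep neural networks that read raw screen pixels.

    import random

    # Minimal tabular Q-learning on a toy game: move left/right along a line
    # and earn a reward for reaching the goal. Learning happens purely by
    # trial and error, with no rules of the game supplied in advance.
    N_STATES, GOAL = 6, 5
    ACTIONS = [-1, +1]                      # move left or move right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    for _ in range(500):                    # play many episodes
        state = 0
        while state != GOAL:
            if random.random() < epsilon:   # explore occasionally...
                action = random.choice(ACTIONS)
            else:                           # ...otherwise exploit what was learned
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == GOAL else 0.0
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

    # The learned policy should now always move right, toward the goal.
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])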

In a related field, Google also acquired several robotics firms, all in the first half of December. These included Boston Dynamics (outdoor robots), Redwood Robotics (robotic arms), Holomni (robot wheels) and Meka Robotics (bipedal robots).

"Google has a sense that AI will have an application not just on the Web but in robotics," LeCun says. "They think it will have an impact in the next 10 years and they have the financial resources to invest that far ahead."

Meanwhile, Google's most public foray into AI to date is its translation page. Instead of having linguists set up translation rules based on dictionaries and grammars, Google acquired millions of documents that had already been translated, and had an AI program look for patterns between the original and translated versions, Muehlhauser explains.

"Previously, even seven or eight years ago, the required computing power would have been too costly," he adds.

Natural-language pioneer -- and now Google employee -- Ray Kurzweil speaks at a Fortune-sponsored technology conference in 2009. REUTERS/Fred Prouser

In 2012, Google hired AI pioneer Ray Kurzweil to work on machine learning and language processing projects. And as previously noted, it hired deep-learning pioneer Geoff Hinton in early 2013.

Also in the first half of December, Facebook hired LeCun to head its AI Group, which had been established in September. Shortly before that, Facebook had acquired Mobile Technologies, a speech recognition and machine translation firm. LeCun declined to discuss Facebook's AI-related plans, as did Facebook spokespersons. However, Facebook CEO Mark Zuckerberg told analysts in October that the idea is to "build services that are much more natural to interact with," and that the acquisition of Mobile Technologies "will help expand our work in the field beyond just photo recognition to voice."

The future: Human-level abilities?

Assuming future progress in AI technology will match past progress, the technology could produce a machine that can emulate a human being -- eventually. Robin Hanson, a professor at George Mason University, explains that he has made a habit of asking people who have been involved with AI research for 20 years or more what progress they have seen as a percentage of how far we need to go to match human ability.

"They say five to 10%, meaning we have two to four centuries to go," he explains.

During that time, he expects that machines will continue to replace people at about the same steady pace they have been replacing them since the Industrial Revolution. (In 1870 as much as 80% of the U.S. population worked on farms, but today fewer than 3% do -- yet unemployment is not 77%.) The implication is that society will have plenty of time to digest the impact of AI.

But no one can rule out a sudden, cataclysmic breakthrough, he adds. Within the next century it might be possible to "port" brain functions to computers, suddenly creating machines as capable as humans for some, or even most, tasks. Assuming the machines are affordable and can be mass-produced, the resulting unbounded supply of inexpensive human-capable labor could trigger a revolution on par with the Neolithic Agricultural Revolution and the more recent Industrial Revolution, in which economic performance rose by a factor of 50 or more during a period of time previously needed for it to double, Hanson says. The world economy is already doubling every 15 years, so such a revolution would lead to it doubling every few months.

Anyone with any ownership of the economy could see their wealth balloon until it reached some plateau, but those whose income comes from their labor rather than from their investments could see themselves marginalized, like today's subsistence farmers or aboriginal foragers, since they will not be able to compete for wages against mass-produced human-emulating machines, Hanson warns.

Others in the AI field are more upbeat about the future. "Things will come to be that we can't think of now -- there will be unexpected revolutions like the Internet," says Patrick Winston, a professor at MIT.

"As the machines become smart, they will make us smarter," agrees Narrative Science's Hammond. "No matter where you are or what you are doing they will get you the information you need, and you will see and hear a richer version of the world." Tapping into a world of information, "everyone will have an augmented memory of everything," he adds.

But, LeCun cautions, "We are still very far from building really intelligent machines." How far? He won't say. "Those kinds of predictions are invariably wrong," he explains.

"The field of AI is trying to understand human-level intelligence, something that took evolution a billion years and more to develop, and it's unreasonable to expect humans to recapitulate that process even in a few decades," adds Jeff Siskind, professor at Purdue University. "That said, I think we're making a huge amount of progress."

Beyond economic impact, futurists have also proposed a singularity, or a moment at which the machines individually or collectively achieve consciousness and turn against humanity. Those in the AI field tend to shrug off the idea.

"We can always pull the plug," MIT's Winston says.

This article, AI regains its footing, was originally published at Computerworld.com.

Lamont Wood is a freelance writer in San Antonio.
