Reality check: The state of AI, bots, and smart assistants

We’ve made a lot of progress in artificial intelligence over the last half century, but we’re nowhere near what the tech enthusiasts would have you believe.

Artificial intelligence—in the guises of personal assistants, bots, self-driving cars, and machine learning—is hot again, dominating Silicon Valley conversations, tech media reports, and vendor trade shows.

AI is one of those technologies whose promise is resurrected periodically, but only slowly advances into the real world. I remember the dog-and-pony AI shows at IBM, MIT, Carnegie Mellon, Thinking Machines, and the like in the mid-1980s, as well as the technohippie proponents like Jaron Lanier who often graced the covers of the era’s gee-whiz magazines like Omni.

AI is an area where much of the science is well established, but the implementation is still quite immature. It’s not that the emperor has no clothes—rather, the emperor is only now wearing underwear. There’s a lot more dressing to be done.

Thus, take all these intelligent machine/software promises with a big grain of salt. We’re decades away from a Star Trek-style conversational computer, much less the artificial intelligence of Steven Spielberg’s A.I.

Still, there’s a lot happening in general AI. Smart developers and companies will focus on the specific areas that have real current potential and leave the rest to sci-fi writers and the gee-whiz press.

Robotics and AI are separate disciplines

For years, popular fiction has fused robots with artificial intelligence, from Gort of The Day the Earth Stood Still to the Cylons of Battlestar Galactica, from the pseudo-human robots of Isaac Asimov’s I, Robot stories to Data of Star Trek: The Next Generation. However, robots are not silicon intelligences but machines that can perform mechanical tasks formerly handled by people—often more reliably, faster, and without demands for a living wage or benefits.

Robots are common in manufacturing and increasingly used in hospitals for delivery and drug fulfillment (since they won’t steal drugs for personal use), but not so much in office buildings and homes.

There’ve been incredible advances lately in the field of bionics, largely driven by the needs of veterans who’ve lost limbs in the wars of the last two decades. We now see limbs that can respond to neural impulses and brain waves as if they were natural appendages, and it’s clear they soon won’t need all those wires and external computers to work.

Maybe one day we’ll fuse AI with robots and end up slaves to the Cylons—or worse. But not for a very long while. In the meantime, some advances in AI will help robots work better, because their software can become more sophisticated.

Pattern matching is today’s focus but often unsophisticated

Most of what is now positioned as the base of AI—product recommendations at Amazon, content recommendations at Facebook, voice recognition by Apple’s Siri, driving suggestions from Google Maps, and so on—is simply pattern matching. 

Thanks to the ongoing advances in data storage and computational capacity, boosted by cloud computing, more patterns can be stored, identified, and acted on than ever before. Much of what people do is based on pattern matching, too: to solve a problem, you first figure out what it resembles among the things you already know, then try the solutions you already know. The faster the pattern matching to the likeliest actions or outcomes, the more intelligent the system seems.
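
To make that concrete, here is a minimal sketch, in Python with invented session data, of the kind of co-occurrence pattern matching behind most of these recommendations: count what was viewed together before, then serve up the most frequent match. Nothing in it understands what the items actually are.

```python
# A minimal sketch of pattern matching as recommendation: recommend whatever
# most often co-occurred with the item being viewed. The session data and
# item names are made up for illustration.
from collections import Counter
from itertools import combinations

sessions = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"camera", "camera_bag"},
    {"tripod", "camera_bag"},
]

# Count how often each pair of items appears together in a session.
co_counts: dict[str, Counter] = {}
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def recommend(item: str, n: int = 2) -> list[str]:
    """Return the n items most often seen alongside `item` -- nothing smarter."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(n)]

print(recommend("camera"))  # e.g. ['sd_card', 'tripod']
```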

But we’re still in early days. There are some cases, such as navigation, where systems have become very good, to the point where (some) people will now drive onto an airport tarmac, into a lake, or onto a snowed-in country road because their GPS told them to, despite all the signals telling them otherwise.

But mostly, these systems are dumb. That’s why, after you look at products on Amazon, many websites you visit feature those same products in their ads. That’s especially silly if you already bought the product or decided not to—but all these systems know is that you looked at product X, so they keep showing you more of the same. That’s anything but intelligent. And it’s not only Amazon product ads; Apple’s Genius music-matching feature and Google Now’s recommendations are similarly clueless about context, so they lead you into a sea of sameness very quickly.

They can actually work against you, as Apple’s autocorrection now does. It epitomizes a failure of crowdsourcing, in which people’s bad grammar, confusion over how to form plurals or use apostrophes, inconsistent capitalization, and typos are imposed on everyone else. (I’ve found that turning it off can result in fewer errors, even for horrible typists like myself.)

Missing is the nuance of more context, such as knowing what you bought or rejected, so you don’t get advertisements for more of the same but instead for another item you may be more interested in. Ditto with music—if your playlists are varied, so should be the recommendations. And ditto with, say, the restaurant recommendations that Google Now makes—I like Indian food, but I don’t want it every time I go out. What else do I like but haven’t had lately? And what about the patterns and preferences of the people I’m dining with?
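
A hedged sketch of what that added context could look like, building on the lookup above: the candidate list, purchase history, and categories here are all hypothetical, but the idea is simply to filter out what you’ve bought or rejected and to vary the category rather than repeat it.

```python
# A sketch of context-aware filtering on top of raw pattern-matched
# suggestions. All names and data here are invented for illustration.
def recommend_with_context(
    candidates: list[str],          # raw pattern-matched suggestions, best first
    already_bought: set[str],       # don't re-advertise these
    rejected: set[str],             # or these
    recent: list[str],              # what you've had lately (cuisines, genres, etc.)
    category_of: dict[str, str],    # item -> category, to avoid a sea of sameness
) -> list[str]:
    recent_categories = set(recent[-3:])  # only the last few count as "lately"
    picks = []
    for item in candidates:
        if item in already_bought or item in rejected:
            continue
        if category_of.get(item) in recent_categories:
            continue  # vary the category instead of repeating it
        picks.append(item)
    return picks

print(recommend_with_context(
    candidates=["indian_place", "thai_place", "pizza_place"],
    already_bought=set(),
    rejected=set(),
    recent=["indian", "indian"],
    category_of={"indian_place": "indian", "thai_place": "thai", "pizza_place": "pizza"},
))  # ['thai_place', 'pizza_place']
```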

Autocorrect is another example of where context is needed. First, someone should tell Apple the difference between “its” and “it’s,” as well as explain that there are legitimate, correct variations in English that people should be allowed to specify. For example, prefixes can be made part of a word (like “preconfigured”) or hyphenated (like “pre-configured”), and users should be allowed to specify that preference. (Putting a space after them is always wrong, such as “pre configured,” yet that’s what Apple autocorrect imposes unless you hyphenate.)
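
As a toy illustration of the kind of user-specifiable rule being asked for here (the prefix list and preference flag are invented, not anything Apple offers): join or hyphenate the prefix according to the user’s stated preference, and never leave the always-wrong space.

```python
# A toy, user-preference-aware prefix correction. The prefix list and the
# "style" preference are hypothetical examples, not a real API.
import re

PREFIXES = ("pre", "non", "semi", "multi")

def fix_prefix(text: str, style: str = "joined") -> str:
    """Rewrite 'pre configured' as 'preconfigured' or 'pre-configured'."""
    sep = "" if style == "joined" else "-"
    pattern = re.compile(rf"\b({'|'.join(PREFIXES)})[ -](\w+)", flags=re.IGNORECASE)
    return pattern.sub(lambda m: f"{m.group(1)}{sep}{m.group(2)}", text)

print(fix_prefix("the pre configured server", style="joined"))      # -> 'the preconfigured server'
print(fix_prefix("the pre configured server", style="hyphenated"))  # -> 'the pre-configured server'
```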

Don’t expect bots—automated software assistants that do stuff for you based on all the data they’ve monitored—to be useful for anything but the simplest tasks until problem domains like autocorrection work. They are, in fact, the same kinds of problems. 

Pattern identification is on the rise as machine learning

Pattern matching, even with rich context, is not enough, because the patterns must be predefined. That’s where pattern identification comes in: the software detects new or changed patterns by monitoring your activities.
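
A rough sketch of the difference: instead of looking up predefined patterns, the code below watches a stream of activities and flags a category whose recent frequency departs sharply from its long-run baseline. The activity log, window size, and threshold are all invented for illustration.

```python
# Pattern identification as opposed to pattern matching: nothing is predefined;
# we flag activities that are much more common recently than historically.
from collections import Counter

def emerging_patterns(activities: list[str], window: int = 20, factor: float = 2.0) -> list[str]:
    """Flag activities far more frequent in the recent window than overall."""
    if len(activities) <= window:
        return []
    baseline = Counter(activities[:-window])
    recent = Counter(activities[-window:])
    total_base = max(len(activities) - window, 1)
    flagged = []
    for activity, count in recent.items():
        recent_rate = count / window
        base_rate = baseline.get(activity, 0) / total_base
        if recent_rate > factor * base_rate and recent_rate > 0.2:
            flagged.append(activity)
    return flagged

log = ["email"] * 40 + ["email", "video_call"] * 10  # video calls suddenly appear
print(emerging_patterns(log))  # ['video_call'] -- a new pattern worth acting on
```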

That’s not easy, because something has to define the parameters for the rules that undergird such systems. It’s easy to either try to boil the ocean and end up with an undifferentiated mess or be too narrow and end up not being useful in the real world. 

This identification effort is a big part of what machine learning is today, whether it’s to get you to click more ads or buy more products, better diagnose failures in photocopiers and aircraft engines, reroute delivery trucks based on weather and traffic, or respond to dangers while driving (the collision-avoidance technology soon to be standard in U.S. cars).

Because machine learning is so hard—especially outside highly defined, engineered domains—you should expect slow progress, where systems get better but you don’t notice it for a while.

Voice recognition is a great example—the first systems (for phone-based help systems) were horrible, but now we have Siri, Google Now, Alexa, and Cortana, which are pretty good for many people and many phrases. They’re still error-prone—bad at complex phrasing and niche domains, and bad at many accents and pronunciation patterns—but usable in enough contexts to be helpful. Some people can actually use them as if they were human transcribers.

But the messier the context, the harder it is for machines to learn, because their models are incomplete or are too warped by the world in which they function. Self-driving cars are a good example: A car may learn to drive based on patterns and signals from the road and other cars, but outside forces like weather, pedestrian and cyclist behaviors, double-parked cars, construction adjustments, and so on will confound much of that learning—and be hard to pick up, given their idiosyncrasies and variability. Is it possible to overcome all that? Yes—the crash-avoidance technology coming into wider use is clearly a step to the self-driving future—but not at the pace the blogosphere seems to think.

Predictive analytics follows machine learning

For many years, IT has been sold the concept of predictive analytics, which has had other guises such as operational business intelligence. It’s a great concept, but requires pattern matching, machine learning, and insight. Insight is what lets people take the mental leap into a new area.

For predictive analytics, that doesn’t go so far as out-of-the-box thinking but does go to identifying and accepting unusual patterns and outcomes. That’s hard, because pattern-based “intelligence”—from what search result to display to what route to take to what moves to make in chess—is based on the assumption that the majority patterns and paths are the best ones. Otherwise, people wouldn’t use them so much.

Most assistive systems use current conditions to steer you to a proven path. Predictive systems combine current and derivable future conditions using all sorts of probabilistic mathematics. But those are the easy predictions. The ones that really matter are the ones that are hard to see, usually for one of two reasons: the context is too complex for most people to get their heads around, or the calculated path is an outlier and thus rejected as such—by the algorithm or the user.
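
For the easy kind, here is a minimal sketch of what combining current and derivable future conditions can mean in practice: weight each route’s normal and disrupted travel times by a forecast probability and pick the lowest expected time. The routes and probabilities are made up.

```python
# A simplified expected-value prediction: choose the delivery route with the
# lowest expected time given a forecast probability. Data is invented.
routes = {
    # route: (time_now_minutes, time_if_storm_minutes)
    "highway": (30, 75),
    "surface_streets": (45, 55),
}
p_storm = 0.4  # a derivable future condition, e.g. from a weather feed

def expected_time(times: tuple[float, float], p: float) -> float:
    normal, disrupted = times
    return (1 - p) * normal + p * disrupted

for route, times in routes.items():
    print(route, round(expected_time(times, p_storm), 1))  # highway 48.0, surface_streets 49.0

best = min(routes, key=lambda r: expected_time(routes[r], p_storm))
print("choose:", best)  # 'highway'
```

Nudge p_storm up to 0.5 and the recommendation flips to the slower-looking surface streets, which is exactly the kind of counterintuitive path that algorithms and users alike tend to reject.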

As you can see, there’s a lot to be done, so take the gee-whiz future we see in the popular press and at technology conferences with a big grain of salt. The future will come, but slowly and unevenly.
