How Microsoft plans to address AI and machine learning challenges

Peter Lee, Microsoft’s research vice president, answers hard questions about artificial intelligence and machine learning

Microsoft, Google and IBM face difficult challenges in winning early enterprise adopters of artificial intelligence (AI) and machine learning (ML). Microsoft AI Research Vice President Peter Lee shed some light on how Microsoft will meet those challenges when we met at this week’s MIT Technology Review EmTech Digital conference.

Lee began with the same explanation he gave to Microsoft CEO Satya Nadella. Nadella, a former research engineer, does not need technology spoon-fed to him, but he does need to understand the framework used to manage the future of the vertical industries placed under the stewardship of Microsoft’s AI group.

It is a big bet: roughly 6,000 employees, or one-fourth of the company, spanning leading industry businesses such as healthcare, education, automotive, finance and retail. A very big bet on the future of AI, indeed.

How Microsoft categorizes AI opportunities

Lee divided ML into three tiers for me, as he did for Nadella:

Tier 0: The customer can use one of Microsoft’s pretrained models, perhaps adding domain-specific categories. This might be as simple as using Microsoft’s machine translation API to translate comments, or adding a domain-specific category, such as a women’s fashion vocabulary, to the translation service, as eBay did to enable shopping across international markets and languages.
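To make Tier 0 concrete, here is a rough Python sketch of calling Microsoft’s pretrained translation service from an application rather than training anything. The request shape follows the public Translator Text REST API, but the subscription key and the optional "category" ID (which would point at a domain-tuned vocabulary such as eBay’s fashion terms) are placeholders to check against Microsoft’s current documentation.

```python
# A rough sketch of the Tier 0 pattern: call a pretrained translation service.
# The key and the "category" ID are placeholders, not working values.
import requests

SUBSCRIPTION_KEY = "<your-cognitive-services-key>"   # placeholder
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text, to_lang="fr", category=None):
    params = {"api-version": "3.0", "to": to_lang}
    if category:                      # optional domain-specific category ID
        params["category"] = category
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    body = [{"Text": text}]
    response = requests.post(ENDPOINT, params=params, headers=headers, json=body)
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

print(translate("slim-fit blazer with notched lapels", to_lang="de"))
```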

Tier 1: This tier puts tools such as Microsoft’s Cognitive Toolkit and cloud hardware optimized for ML in the hands of developers who can build a novel and original domain-specific model. In New Jersey parlance, this "says easy and does hard," because the task requires experts in ML, computer architecture and system optimization.

Lee cited partner UPMC Enterprises, a for-profit division of non-profit healthcare delivery service University of Pittsburgh Medical Center, as an example. Although Lee cannot yet talk about the details of the model that UPMC is building, he did say it will enable physicians to improve their performance and the care they provide to patients.

UPMC recruited ML expert Adam Berger, Ph.D., to build a team and lead the project. After the model is created, trained and proven, UPMC plans to market the solution to other healthcare delivery services, a market where Microsoft has 168,000 productivity and IT customers in 140 countries. Pittsburgh sits at the nexus of a large medical community, with leading teaching hospitals and medical research institutions, and a leading AI and ML community centered around Carnegie Mellon.
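As a rough illustration of what Tier 1 asks of a team, the sketch below trains a tiny model from scratch: made-up features, a hand-rolled logistic regression and a plain gradient-descent loop. None of this is UPMC’s model; a real Tier 1 effort would use a toolkit such as the Cognitive Toolkit or TensorFlow, real clinical data and far more engineering. But the shape of the work, defining features, a model, a loss and a training loop, is the same.

```python
# A minimal, framework-agnostic sketch of Tier 1: build and train your own model
# on your own data instead of calling a pretrained API. The synthetic "risk"
# data and tiny logistic regression are illustrative stand-ins only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # 5 made-up domain features
true_w = np.array([1.5, -2.0, 0.7, 0.0, 3.0])       # hidden rule for the labels
y = (1 / (1 + np.exp(-(X @ true_w))) > 0.5).astype(float)

w = np.zeros(5)                                     # model parameters to learn
lr = 0.1
for step in range(500):                             # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))                  # predicted probabilities
    grad = X.T @ (p - y) / len(y)                   # gradient of the log loss
    w -= lr * grad

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```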

Tier 2: Carnegie Mellon University’s Libratus pokerbot, which beat poker pros, and Google’s AlphaGo bot, which beat Lee Sedol, represent the special case of Tier 2: artificial general intelligence (AGI). AGI is a hypothetical machine intelligence, discussed by researchers and sci-fi writers, that would be able to perform any intellectual human task. These gamebots represent a narrow AGI in the field of gaming, but the technology is unique because the bots are self-trained and not limited by human experience.

Most ML models are trained by scraping human intelligence from photos or other corpora; Google’s language translation, for example, was trained by consuming about a third of the public internet. If Libratus were a more traditional type of intelligence, it would have first observed a million poker players playing a billion hands of poker, then used those observations to train a model of human behavior. Instead, it learned how to win at poker by experimenting with every possible poker hand.

AlphaGo was similarly trained. It startled Sedol with its own invented style of play that was not limited by human intelligence. Lee used Libratus and AlphaGo to explain to Nadella that in narrowly defined cases, AGI is possible.
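The difference Lee is pointing at, learning by self-play instead of from human examples, can be shown with a toy. The sketch below trains a tabular agent to play Nim (take one to three stones; whoever takes the last stone wins) purely by playing against itself, with no human game records. Nim stands in for poker or Go only to illustrate the training loop; the game, rewards and hyperparameters here are my own choices, not anything from Libratus or AlphaGo.

```python
# A toy illustration of self-play learning: no human examples, only game outcomes.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2       # learning rate and exploration rate

def choose(stones):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if random.random() < EPSILON:
        return random.choice(actions)                       # explore
    return max(actions, key=lambda a: Q[(stones, a)])       # exploit

def play_one_game(start=15):
    history = {0: [], 1: []}     # (state, action) pairs per player
    stones, player = start, 0
    while stones > 0:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        if stones == 0:
            winner = player      # taking the last stone wins
        player = 1 - player
    # Credit the winner's moves, penalize the loser's.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state, action in history[p]:
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

for _ in range(50_000):
    play_one_game()

# After self-play the policy should tend to leave the opponent a multiple of 4.
print({s: max((1, 2, 3), key=lambda a: Q[(s, a)]) for s in range(4, 16)})
```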

The hard question facing every company delivering ML to the enterprise

The supply chain for delivering the benefits of ML to the enterprise has a problem. Lee says machine learning models, such as language translation, are created by privileged teams of experts. They are privileged in the sense that they are highly skilled in machine learning, linear algebra and probability, and often hold Ph.D.s from top universities. It is a finite expert labor pool concentrated at just a few companies, such as Facebook, Google, IBM and Microsoft.

The question for Lee is how to navigate this labor shortage and deliver AI to the enterprise. Hard-to-build ML models are first proven by advanced researchers, then optimized by ML and computer-systems experts to run on today’s underpowered hardware. In the last step, very specialized developers rewrite resource-consuming components in fast native code and optimize them to run cost-effectively. ML development tools are complex and methods are immature, which prevents lesser-skilled developers from building models.

How Microsoft plans to deliver the benefits of AI

Tier 0 works today. Microsoft has a few thousand customers that use its cloud portfolio of pretrained models, adding features such as natural language processing, text analytics and language translation to existing applications with the Microsoft Cognitive Services APIs.
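A typical Tier 0 integration looks like a single REST call. The hypothetical sketch below sends one document to a Cognitive Services Text Analytics sentiment endpoint; the resource name, key and API version are placeholders, and the response is printed raw rather than parsed so the example does not assume a particular response schema.

```python
# A hypothetical sketch of adding sentiment analysis to an existing app with one
# Cognitive Services call. Resource name, key and API version are placeholders;
# consult Microsoft's documentation for the current endpoint and response shape.
import requests

KEY = "<your-cognitive-services-key>"                              # placeholder
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder

def analyze_sentiment(text):
    url = f"{ENDPOINT}/text/analytics/v3.0/sentiment"
    headers = {"Ocp-Apim-Subscription-Key": KEY,
               "Content-Type": "application/json"}
    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
    response = requests.post(url, headers=headers, json=body)
    response.raise_for_status()
    return response.json()        # returned raw; schema left to the docs

print(analyze_sentiment("The new checkout flow is fast and painless."))
```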

Tier 1 is not as easy, because expanding the expert labor pool is not an option, and a mature toolchain enabling less-skilled developers to build models has not yet emerged. Lee says Microsoft will leverage its two decades of AI research to help partners build original ML models that solve new problems. He believes this tier is best delivered by partners with both domain expertise and ML expertise, who can create a new and original model to solve a problem that, once proven, could be marketed to an entire industry, much as UPMC plans to sell its model to the healthcare industry.

Lee wants to invest in a portfolio of vertical industry opportunities like UPMC Enterprises. He is willing to bring the right partners’ engineers into Microsoft’s labs to work shoulder to shoulder with Microsoft researchers, creating original models that, once proven, could be stamped out in volume and sold to an entire vertical industry. Lee hopes those industries will use Microsoft’s specialized ML cloud hardware to train and run the models.

Though Lee may have a preference for Microsoft’s open source Cognitive Toolkit, he isn’t opposed to Google’s TensorFlow or Facebook’s Caffe2 libraries if they are a better fit. That’s because success is measured by the impact on Microsoft’s cloud business in the post-Ballmer, open-Microsoft cloud era.

Tier 2 models will remain a research topic for Microsoft, which will watch and measure progress toward the long-term goal of AGI.

Perhaps in the next five years, more computer science students will learn AI and ML and expand the expert pool. Or the toolchain may mature to enable lesser-skilled developers to build new models. In the meantime, the limited pool of experts will be leveraged to deliver the greatest impact.
