Oct 12, 2020
For the second part of this article series, see here.
The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.
If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.
What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field—and society—in the years ahead. Study up now.
1. Unsupervised Learning
The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term “supervised learning” comes from the fact that human “supervisors” prepare the data in advance.)
While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations.
The process of manually labeling thousands or millions of data points can be enormously expensive and cumbersome. The fact that humans must label data by hand before machine learning models can ingest it has become a major bottleneck in AI.
At a deeper level, supervised learning represents a narrow and circumscribed form of learning. Rather than being able to explore and absorb all the latent information, relationships and implications in a given dataset, supervised algorithms orient only to the concepts and categories that researchers have identified ahead of time.
In contrast, unsupervised learning is an approach to AI in which algorithms learn from data without human-provided labels or guidance.
Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: “The next AI revolution will not be supervised.” UC Berkeley professor Jitendra Malik put it even more colorfully: “Labels are the opium of the machine learning researcher.”
How does unsupervised learning work? In a nutshell, the system learns about some parts of the world based on other parts of the world. By observing the behavior of, patterns among, and relationships between entities—for example, words in a text or people in a video—the system bootstraps an overall understanding of its environment. Some researchers sum this up with the phrase “predicting everything from everything else.”
Unsupervised learning more closely mirrors the way that humans learn about the world: through open-ended exploration and inference, without a need for the “training wheels” of supervised learning. One of its fundamental advantages is that there will always be far more unlabeled data than labeled data in the world (and the former is much easier to come by).
In the words of LeCun, who prefers the closely related term “self-supervised learning”: “In self-supervised learning, a portion of the input is used as a supervisory signal to predict the remaining portion of the input….More knowledge about the structure of the world can be learned through self-supervised learning than from [other AI paradigms], because the data is unlimited and the amount of feedback provided by each example is huge.”
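LeCun’s description can be made concrete with a toy example (a minimal sketch for illustration, not any particular model’s implementation): an unlabeled sentence becomes a training pair by hiding one word and asking the model to predict it, so the data supplies its own supervisory signal.

```python
import random

def make_self_supervised_example(tokens, mask_token="<MASK>"):
    """Turn an unlabeled sentence into an (input, target) training pair
    by hiding one word: a portion of the input becomes the label."""
    i = random.randrange(len(tokens))
    target = tokens[i]
    masked = tokens[:i] + [mask_token] + tokens[i + 1:]
    return masked, target

sentence = "the cat sat on the mat".split()
masked_input, label = make_self_supervised_example(sentence)
# e.g. masked_input = ['the', 'cat', '<MASK>', 'on', 'the', 'mat'], label = 'sat'
```

Every sentence in a corpus yields many such pairs for free, which is why the amount of usable training signal is effectively unlimited.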
Unsupervised learning is already having a transformative impact in natural language processing. NLP has seen incredible progress recently thanks to a new unsupervised learning architecture known as the Transformer, which originated at Google about three years ago. (See #3 below for more on Transformers.)
Efforts to apply unsupervised learning to other areas of AI remain at earlier stages, but rapid progress is being made. To take one example, a startup named Helm.ai is seeking to use unsupervised learning to leapfrog the leaders in the autonomous vehicle industry.
Many researchers see unsupervised learning as the key to developing human-level AI. According to LeCun, mastering unsupervised learning is “the greatest challenge in ML and AI of the next few years.”
2. Federated Learning
One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI’s trajectory.
Privacy-preserving artificial intelligence—methods that enable AI models to learn from datasets without compromising their privacy—is thus becoming an increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is federated learning.
The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in federated learning has exploded: more than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.
The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on the data. But this approach is not practicable for much of the world’s data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques.
Federated learning solves this problem by flipping the conventional approach to AI on its head.
Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out—one to each device with training data—and trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these “mini-models” are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.
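The loop just described can be sketched in a few lines (a minimal illustration of federated averaging using a linear model and plain gradient steps; real systems add client weighting, secure aggregation, and a communication layer):

```python
import numpy as np

def local_update(global_params, local_data, lr=0.1):
    """Train a copy of the model on one device's private data.
    Here: one gradient step of linear regression. The raw data never
    leaves the device -- only the updated parameters do."""
    X, y = local_data
    preds = X @ global_params
    grad = X.T @ (preds - y) / len(y)
    return global_params - lr * grad

def federated_round(global_params, devices):
    """One round of federated averaging: each device returns updated
    parameters (not training data); the server averages them."""
    updates = [local_update(global_params, data) for data in devices]
    return np.mean(updates, axis=0)

# Three "devices", each holding private samples of the relationship y = 2*x
rng = np.random.default_rng(0)
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    devices.append((X, 2 * X[:, 0]))

params = np.zeros(1)
for _ in range(100):
    params = federated_round(params, devices)
# params converges toward [2.0], as if trained on the pooled dataset
```

After enough rounds, the aggregated model recovers the true coefficient even though no single party ever saw the combined data.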
The original federated learning use case was to train AI models on personal data distributed across billions of mobile devices. As those researchers summarized: “Modern mobile devices have access to a wealth of data suitable for machine learning models….However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center….We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates.”
More recently, healthcare has emerged as a particularly promising field for the application of federated learning.
It is easy to see why. On one hand, there are an enormous number of valuable AI use cases in healthcare. On the other hand, healthcare data, especially patients’ personally identifiable information, is extremely sensitive; a thicket of regulations like HIPAA restrict its use and movement. Federated learning could enable researchers to develop life-saving healthcare AI tools without ever moving sensitive health records from their source or exposing them to privacy breaches.
A host of startups has emerged to pursue federated learning in healthcare. The most established is Paris-based Owkin; earlier-stage players include Lynx.MD, Ferrum Health and Secure AI Labs.
Beyond healthcare, federated learning may one day play a central role in the development of any AI application that involves sensitive data: from financial services to autonomous vehicles, from government use cases to consumer products of all kinds. Paired with other privacy-preserving techniques like differential privacy and homomorphic encryption, federated learning may provide the key to unlocking AI’s vast potential while mitigating the thorny challenge of data privacy.
The wave of data privacy legislation being enacted worldwide today (starting with GDPR and CCPA, with many similar laws coming soon) will only accelerate the need for these privacy-preserving techniques. Expect federated learning to become an important part of the AI technology stack in the years ahead.
3. Transformers
We have entered a golden era for natural language processing.
OpenAI’s release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.
GPT-3 is just the latest (and largest) in a string of similarly architected NLP models—Google’s BERT, OpenAI’s GPT-2, Facebook’s RoBERTa and others—that are redefining what is possible in NLP.
The key technology breakthrough underlying this revolution in language AI is the Transformer.
Transformers were introduced in the landmark 2017 research paper “Attention Is All You Need.” Previously, state-of-the-art NLP methods had all been based on recurrent neural networks (e.g., LSTMs). By definition, recurrent neural networks process data sequentially—that is, one word at a time, in the order that the words appear.
Transformers’ great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, Transformers rely heavily on an AI mechanism known as attention. Attention enables a model to consider the relationships between words regardless of how far apart they are and to determine which words and phrases in a passage are most important to “pay attention to.”
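The core attention computation can be sketched in NumPy (a minimal single-head version of scaled dot-product attention; real Transformers add learned projections, multiple heads, positional encodings, and feed-forward layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every token attends to every other
    token simultaneously, regardless of distance -- no recurrence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance of all token pairs
    weights = softmax(scores, axis=-1)   # which tokens to "pay attention to"
    return weights @ V                   # weighted blend, one row per token

# Five tokens with 8-dimensional representations, processed in one shot
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
out = attention(X, X, X)   # self-attention: output shape (5, 8)
```

Because the score matrix is computed for all token pairs in one matrix multiplication, the whole sequence is processed in parallel, which is exactly what makes Transformers so much faster to train than RNNs.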
Why is parallelization so valuable? Because it makes Transformers vastly more computationally efficient than RNNs, meaning they can be trained on much larger datasets. GPT-3 was trained on roughly 500 billion words and consists of 175 billion parameters, dwarfing any RNN in existence.
Transformers have been associated almost exclusively with NLP to date, thanks to the success of models like GPT-3. But just this month, a groundbreaking new paper was released that successfully applies Transformers to computer vision. Many AI researchers believe this work could presage a new era in computer vision. (As well-known ML researcher Oriol Vinyals put it simply, “My take is: farewell convolutions.”)
While leading AI companies like Google and Facebook have begun to put Transformer-based models into production, most organizations remain in the early stages of productizing and commercializing this technology. OpenAI has announced plans to make GPT-3 commercially accessible via API, which could seed an entire ecosystem of startups building applications on top of it.
Expect Transformers to serve as the foundation for a whole new generation of AI capabilities in the years ahead, starting with natural language. As exciting as the past decade has been in the field of artificial intelligence, it may prove to be just a prelude to the decade ahead.
Rob Toews is a venture capitalist at Highland Capital Partners.