After decades of experiencing a slow burn, artificial intelligence (AI) innovation has caught fire to become the hottest item on the agendas of the world’s top technology firms.
The fuel for this fire? Necessity. Leading tech companies, as well as scores of startups and researchers, have been racing to develop AI solutions that can provide a competitive advantage by augmenting human intelligence.
Today’s flurry of AI advances wouldn’t have been possible without the confluence of three factors that created the right equation for AI growth: the rise of big data, the emergence of powerful graphics processing units (GPUs) for complex computations, and the re-emergence of a decades-old AI computation model, deep learning.
While we’re still only scratching the surface of what this trio can do together, it’s never too early to look ahead. Prepare for a formula that includes the advent of small data, more efficient deep learning models, deep reasoning, new AI hardware and progress toward unsupervised learning.
Deep Reasoning Enters the Equation
Experts express little doubt that deep learning will serve as a vital variable in the new AI growth equation, much as it did in the current one. However, deep learning has yet to demonstrate a strong ability to help machines with reasoning, a skill they must master to enhance many AI applications.
Reasoning is required for a range of cognitive tasks, including exercising basic common sense, adapting to changing situations, making simple plans and reaching complex decisions in a profession. AI experts agree that we’re still in the early days of teaching systems to deeply reason, with a few examples of progress in narrow applications such as self-driving cars and select professions. Much work remains to reach a level of efficiency that allows reasoning capabilities to scale across a broader swath of applications.
Some AI technologists are optimistic that we’ll figure out the reasoning challenge in the next five to ten years and point out that deep learning might actually be part of the solution.
Deep Learning Gets a Makeover
While deep learning is here to stay, it will likely look different in the next wave of AI breakthroughs. Experts stress the need to train deep learning models much more efficiently in order to apply them at scale across increasingly complex and diverse tasks. The path to this efficiency will be led in part by “small data” and the use of more unsupervised learning.
The Advent of Small Data
The neural networks of deep learning models require exposure to huge amounts of data to learn a task. Training a neural network to recognize an object, for example, could require feeding it as many as 15 million images. Acquiring relevant datasets of this size can be costly and time-consuming, which slows the pace of training, testing and refining AI systems.
And sometimes there simply isn’t enough data available in a particular domain to feed a hungry deep learning model. Researchers are pushing to find ways to train systems on less data and are confident they’ll reach a viable solution. As a result, AI experts expect the “data” variable in the AI growth equation to turn on its head, with small datasets overtaking big data as drivers of new AI innovation.
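One common way to stretch a limited dataset (an illustrative technique on our part; the article doesn’t name specific methods) is data augmentation: deriving several plausible training examples from each labeled original. A minimal sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> list:
    """Derive several training examples from a single labeled image."""
    variants = [image]
    variants.append(np.fliplr(image))                  # horizontal mirror
    variants.append(np.rot90(image))                   # 90-degree rotation
    noisy = image + rng.normal(0.0, 0.05, image.shape) # mild pixel noise
    variants.append(np.clip(noisy, 0.0, 1.0))
    return variants

# One 8x8 "image" becomes four training examples.
img = rng.random((8, 8))
examples = augment(img)
print(len(examples))  # 4
```

Each transformation preserves the original label, so a model sees four examples for every one a human had to collect and tag.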
En Route to Unsupervised Learning
Current deep learning models require datasets that are not only massive but also labeled so that the system knows what each piece of data represents. Supervised learning largely relies on humans to do the labeling, a laborious task that further slows the innovation process, piles on expense and could introduce human bias into systems.
Even with labels in place, systems often require additional human hand-holding to learn. On the other end of the spectrum, unsupervised learning allows raw, unlabeled data to be used to train a system with little to no human effort.
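The contrast can be made concrete with a classic unsupervised method, k-means clustering (our illustrative choice; the article doesn’t single it out). Given raw, unlabeled points, the algorithm discovers groupings purely from geometry, with no human tagging:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two well-separated blobs of unlabeled points -- nobody has tagged them.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),  # blob near (0, 0)
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),  # blob near (5, 5)
])

def kmeans(points, steps=20):
    """Minimal 2-cluster k-means: group raw data by geometry alone."""
    # Deterministic init for the sketch: one seed point from each end of the array.
    centers = points[[0, -1]].astype(float)
    for _ in range(steps):
        # Assign every point to its nearest center...
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([points[assign == c].mean(axis=0) for c in range(2)])
    return assign, centers

assign, centers = kmeans(data)
```

The recovered centers land near the two blobs without any labels ever being supplied, which is exactly the human effort unsupervised learning promises to remove.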
Most AI visionaries cast pure unsupervised learning as the holy grail of deep learning, but admit we’re a long way from figuring out how to use it to train practical applications of AI. The next wave of AI innovation will likely be fueled by deep learning models trained using methods that lie somewhere between supervised and unsupervised learning.
Computer scientists and engineers are exploring a number of such learning methods, some of which offer a triple threat—less labeled data, less data volume and less human intervention. Among them, “one-shot learning” is closest to unsupervised learning. It’s based on the premise that most human learning takes place upon receiving just one or two examples.
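One way one-shot classification is often sketched (an assumption on our part; the article doesn’t specify a mechanism) is nearest-neighbor matching in an embedding space: with a single stored example per class, a new input takes the label of the most similar example. Here the embeddings are toy vectors; in practice a pretrained network would produce them:

```python
import numpy as np

# Hypothetical embeddings: one labeled example per class is all we store.
support = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.1, 0.9, 0.0]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def classify_one_shot(query: np.ndarray) -> str:
    """Label a query by its single most similar support example."""
    def cos(a, b):
        # Cosine similarity between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(support, key=lambda label: cos(query, support[label]))

print(classify_one_shot(np.array([0.8, 0.2, 0.1])))  # cat
```

The system generalizes from one example per class, mirroring the premise that humans often learn a category from just one or two encounters.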
Other promising methods require more supervision, but would still help speed and scale applications of deep learning.
Efficient Algorithms and New AI Hardware
Simplifying the learning process will also help relieve the power crunch that slows both innovation and AI application performance. While GPUs have accelerated the training and running of deep learning models, they’re not enough. With model improvements, experts contend that GPUs will pick up speed and remain an important part of the “computational power” variable in the formula that drives the next AI leaps. However, some AI hardware under development, such as neuromorphic chips or even quantum computing systems, could factor into the new equation for AI innovation.
Ultimately, researchers hope to create future AI systems that do more than mimic human thought patterns like reasoning and perception—they envision these systems performing an entirely new type of thinking. While this might not happen in the very next wave of AI innovation, it’s in the sights of AI thought leaders.