Artificial intelligence technology has taken great strides in power and scope in the past year as a result of the rise of deep learning research, a subfield of artificial intelligence that mimics distinctly human learning processes to engineer highly advanced technologies, according to a recent New York Times article. Harvard scientists have been at the center of these advances in “deep learning” technologies, which can be found in some of society’s newest gadgets—from the iPhone’s Siri program to voice recognition programs in many automobiles.
In 1995, Paul Bamberg ’63 retired from teaching to devote himself to speech recognition research at a startup called Dragon Systems. The work done at Dragon Systems was fundamental to the rapid advancement of the field of voice recognition over the following years. Several acquisitions later, Nuance Communications—the current owner of Dragon Systems’ software—has been involved with a number of high-profile speech recognition software releases, including the iPhone’s Siri and voice recognition systems used in automobiles.
“The recognition technology keeps getting better and better, and it’s getting better with no training data provided by the person who’s being recognized,” said Bamberg, a Senior Lecturer on Mathematics at Harvard. “Part of this is that processing power and memory have become so cheap.”
However, Bamberg cautions against calling what he did “artificial intelligence,” arguing that he only used standard methods of statistical analysis.
“My view is that with artificial intelligence, you write a program and it learns the right thing to do, so you didn’t build your own cleverness into it like neural networks...I don’t see that it’s artificial intelligence—it’s standard probability and statistics,” said Bamberg.
According to Ryan P. Adams, an Assistant Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, the field of artificial intelligence has experienced many changes over the last couple of decades. Recent successes have caused artificial intelligence research to shift away from its traditional focus on mimicking human learning processes to a focus on statistical methods seen as better suited to more practical applications.
However, Adams said that deep learning is taking the field of artificial intelligence back to its roots in building computer programs that actually “learn” just like humans do.
“[Deep learning] is headed back toward the ideas of [artificial intelligence] being a central objective in what machine learning systems need to do,” said Adams. “And so within the last five or six years there have been a couple of nice technical insights [...] that have led to what amounts to it being possible to build large artificial neural networks.”
Artificial neural networks are named for their resemblance to the brain: like the brain, these networks are built from many interconnected “neurons.” It is now possible to build large artificial neural networks that are good at a wide range of tasks, from computer vision to speech recognition to predicting the functions of different proteins, Adams added.
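As a rough illustration (not drawn from the article or from any particular system mentioned in it), the basic structure Adams describes can be sketched in a few lines of Python: a handful of “neurons,” each computing a weighted sum of its inputs followed by a nonlinear activation. The network size and weights here are arbitrary; deep learning systems stack many such layers with millions of learned weights.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Nonlinear activation: squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

class TinyNetwork:
    """A minimal feed-forward neural network: 2 inputs, 2 hidden neurons, 1 output."""

    def __init__(self):
        # Weights start out random; "learning" would adjust them from data.
        self.hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.output = [random.uniform(-1, 1) for _ in range(2)]

    def forward(self, inputs):
        # Each hidden neuron takes a weighted sum of the inputs,
        # then applies the nonlinear activation.
        h = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in self.hidden]
        # The output neuron does the same over the hidden activations.
        return sigmoid(sum(w * a for w, a in zip(self.output, h)))

net = TinyNetwork()
print(net.forward([0.5, -0.3]))  # a value in (0, 1)
```

Real deep networks differ mainly in scale: many layers of thousands of neurons, trained by adjusting the weights to reduce errors on large datasets rather than leaving them random.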
Thinking ahead, Adams noted that artificial intelligence provides the foundation for revolutionary new ways to solve problems in computational biology, as well as in the study of social networks and economic systems.
“These technical insights have been branching out in a lot of different areas, and these advances have been very interesting for thinking about how intelligent systems need to work,” said Adams.