Until now, artificially intelligent (“AI”) programs for machine learning have needed thousands of examples to teach a computer to complete a task correctly. Now, AI researchers at MIT have discovered a new method that cuts the number of examples needed to exactly one.
An incredibly interesting article published on GeekWire breaks the news on this new development in AI research:
The algorithm takes advantage of a probabilistic approach the researchers call “Bayesian Program Learning,” or BPL. Essentially, the computer generates its own additional examples, and then determines which ones fit the pattern best.
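The generate-and-score idea described above can be illustrated with a toy sketch. This is not the researchers’ actual BPL implementation (which builds generative programs over pen strokes); here a “character” is just a short feature vector, “generating examples” is Gaussian perturbation of the single training example, and “fit” is a squared-distance score standing in for likelihood. All of the function and variable names are hypothetical.

```python
# Toy sketch of the generate-and-score idea: from one example per class,
# generate plausible variants, then pick the class whose best variant
# fits the query best. Hypothetical names; not the real BPL algorithm.
import math
import random

random.seed(0)

def generate_candidates(example, n=200, noise=0.3):
    """Perturb the single training example to simulate plausible variants."""
    return [[x + random.gauss(0, noise) for x in example] for _ in range(n)]

def fit_score(candidate, query):
    """Higher is better: negative squared distance, a stand-in for likelihood."""
    return -sum((c - q) ** 2 for c, q in zip(candidate, query))

def classify(query, one_shot_examples):
    """Pick the class whose best generated candidate fits the query best."""
    best_label, best = None, -math.inf
    for label, example in one_shot_examples.items():
        score = max(fit_score(c, query) for c in generate_candidates(example))
        if score > best:
            best_label, best = label, score
    return best_label

# One training example per class -- the "seen it once" setting.
examples = {"A": [0.0, 1.0, 0.0], "B": [1.0, 0.0, 1.0]}
print(classify([0.1, 0.9, -0.1], examples))
```

Running the snippet classifies the noisy query as “A”, since the variants generated from the single “A” example fit it far better than those generated from “B”.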
The researchers behind BPL say they’re trying to reproduce the way humans catch on to a new task after seeing it done once – whether it’s a child recognizing a horse, or a mechanic replacing a head gasket.
“The gap between machine learning and human learning capacities remains vast,” said MIT’s Joshua Tenenbaum, one of the authors of a research paper published today in the journal Science. “We want to close that gap, and that’s the long-term goal.”
Tenenbaum and two colleagues – New York University’s Brenden Lake and the University of Toronto’s Ruslan Salakhutdinov – tested the algorithm by setting it to work on a database of 1,623 handwritten characters drawn from 50 writing systems, including Sanskrit and Tibetan.
In the video below, research collaborator Brenden Lake explains their approach to the challenge and reviews the results that they were able to obtain.
You can read more of the fascinating details of this experiment in the full article on GeekWire.
Source: GeekWire.com – “Bayesian boost for A.I.: Researchers find a quicker way to teach a computer”
Featured Image Credit: Danqing Wang