
Article Posted Tuesday October 27, 2020


AI

There’s a new, faster way to train AI

Researchers have developed a new method to make machine learning more efficient.

The technique, called “less-than-one-shot learning,” can train an AI model to accurately identify more classes of objects than the number of examples it was trained on. That is a major shift from the current expensive, time-consuming approach, which requires thousands of examples of a single object before the model can identify it accurately.

“More efficient machine learning and deep learning models mean that AI can learn faster, are potentially smaller, and are lighter and easier to deploy,” said Ilia Sucholutsky, a PhD candidate at the University of Waterloo’s Department of Statistics and Actuarial Science and lead author on the study introducing the method. “These improvements will unlock the ability to use deep learning in settings where we previously couldn’t because it was just too expensive or impossible to collect more data.

“As machine-learning models start to appear on Internet-of-Things devices and phones, we need to be able to train them more efficiently, and ‘less-than-one-shot learning’ could make this possible,” Sucholutsky said.

The “less-than-one-shot learning” technique builds on previous work Sucholutsky and his supervisor, Professor Matthias Schonlau, have done on soft labels and data distillation. In that work, the researchers managed to train a neural network to classify handwritten images of all 10 digits using only five “carefully designed soft-label examples,” which is less than one example per digit.
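To make the idea concrete: a conventional “hard” label assigns a training example to exactly one class, while a soft label spreads probability across several classes, so one carefully designed example can carry information about more than one digit. The short sketch below illustrates the difference in plain NumPy; the specific probability values are invented for illustration and are not taken from the paper.

    import numpy as np

    # A hard label is one-hot: this training example is the digit "3" and nothing else.
    hard_label = np.zeros(10)
    hard_label[3] = 1.0

    # A soft label is a probability distribution over all 10 digit classes, so a
    # single carefully designed example can teach the model about several digits
    # at once. (These values are illustrative only, not from the paper.)
    soft_label = np.array([0.05, 0.00, 0.00, 0.55, 0.00,
                           0.00, 0.00, 0.00, 0.35, 0.05])
    assert np.isclose(soft_label.sum(), 1.0)  # still a valid probability distribution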

In developing the new technique, the researchers used the “k-nearest neighbours” (k-NN) machine learning algorithm, which classifies a new data point according to the labelled examples it is most similar to. They chose k-NN because it makes the analysis mathematically tractable, but “less-than-one-shot learning” can in principle be paired with any classification algorithm.

Using k-nearest neighbours, the researchers show that it is possible for machine learning models to learn to distinguish objects even when there are fewer training examples than there are classes. In theory, the approach can work for any classification task.
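As a rough illustration of how soft labels let k-NN carve out more classes than there are data points, here is a minimal toy sketch of a distance-weighted, soft-label variant of k-nearest neighbours. This is an assumption: the function name, the hand-picked label values, and the inverse-distance weighting are illustrative choices, not the paper’s exact formulation. Two prototype points carrying soft labels over three classes produce three decision regions, including a third class that has no prototype of its own.

    import numpy as np

    def soft_label_knn(prototypes, soft_labels, query, k=2):
        """Classify `query` by a distance-weighted sum of the soft labels of its
        k nearest prototypes, picking the class with the most total mass.
        (A hypothetical illustrative variant, not the paper's exact method.)"""
        dists = np.linalg.norm(prototypes - query, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-9)  # closer prototypes count more
        combined = (weights[:, None] * soft_labels[nearest]).sum(axis=0)
        return int(combined.argmax())

    # Two prototypes, three classes: fewer examples than classes (M < N).
    prototypes = np.array([[0.0, 0.0],
                           [1.0, 0.0]])
    soft_labels = np.array([[0.6, 0.0, 0.4],    # mostly class 0, partly class 2
                            [0.0, 0.6, 0.4]])   # mostly class 1, partly class 2

    print(soft_label_knn(prototypes, soft_labels, np.array([-0.5, 0.0])))  # -> 0
    print(soft_label_knn(prototypes, soft_labels, np.array([1.5, 0.0])))   # -> 1
    # Midway between the prototypes, class 2 wins even though no prototype
    # is labelled primarily as class 2:
    print(soft_label_knn(prototypes, soft_labels, np.array([0.5, 0.0])))   # -> 2

Because both soft labels reserve some probability mass for class 2, that shared mass dominates only where neither prototype is clearly closest, creating a third decision region between the two points.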

“Something that looked absolutely impossible at the outset indeed has been shown to be possible,” said Schonlau, of Waterloo’s Department of Statistics and Actuarial Science. “This is absolutely astounding because a model can learn more classes than you have examples of, and it’s due to this soft label. So, now that we have shown that it’s possible, work can begin to figure out all the applications.”

A paper detailing the new technique, ‘Less Than One’-Shot Learning: Learning N Classes From M < N Samples, authored by Waterloo Faculty of Mathematics researchers Sucholutsky and Schonlau, is under review at one of the major AI conferences.










