Thursday 24 February 2022

Smaller, Smarter Data Needed to Train and Scale AI

NAB

The prevailing way to train successful AI models is to throw massive data sets at them, but that approach hits a snag with video. The processing power and bandwidth required to crunch video at sufficient volume in current neural networks are holding back progress in computer vision.

That could change if smaller, higher quality data sets, the approach known as “data-centric AI,” were employed, allowing computer vision to scale much faster than it does today.

Data scientist and businessman Andrew Ng says that “small data” solutions can solve big issues in AI, including model efficiency, accuracy, and bias.

“Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system,” he explains in an interview with IEEE Spectrum.

Ng has a track record here, which is why IEEE Spectrum is interested in what he has to say. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s; he cofounded Google Brain in 2011; and he then served for three years as chief scientist at Baidu, where he helped build the Chinese tech giant’s AI group.

“I’m excited about the potential of building foundation models in computer vision,” he says. “I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text.”

The compute power needed to process the large volume of images in video is significant, which is why foundation models emerged first for text and audio, in fields such as natural language processing (NLP). Ng is confident that advances in semiconductor performance could enable foundation models for computer vision.
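To make that bandwidth gap concrete, here is a rough back-of-the-envelope comparison in Python; the resolution, frame rate, and token count are illustrative assumptions, not figures from the interview:

# Rough comparison of raw input volume: one second of video vs. a text passage.
# Assumes 1080p RGB video at 30 fps and a 500-token passage (both hypothetical).
video_values_per_second = 1920 * 1080 * 3 * 30   # ~187 million raw values
text_tokens = 500                                # roughly a long paragraph or two

print(f"video values per second: {video_values_per_second:,}")
print(f"ratio vs. the text passage: {video_values_per_second / text_tokens:,.0f}x")

Even before any modeling happens, a single second of HD video carries several orders of magnitude more raw values than a sizeable passage of tokenized text, which is the gap Ng is pointing to.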

“Architectures built for hundreds of millions of images don’t work with only 50 images,” he says. “But it turns out, if you have 50 really good examples, you can build something valuable. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
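Ng doesn’t prescribe a particular technique in the interview, but transfer learning is one common way to make 50 examples go a long way: start from a backbone pretrained on a large corpus and retrain only the classification head on the small, carefully labeled set. A minimal sketch, assuming PyTorch and torchvision, a hypothetical folder of curated images, and illustrative hyperparameters:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load the small, carefully curated set (one folder per class; path is hypothetical)
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/curated_50_examples", transform=transform)
loader = DataLoader(train_set, batch_size=8, shuffle=True)

# Start from ImageNet weights and freeze the backbone; only the new head is trained
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(20):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

With the backbone frozen, only the few thousand parameters of the new head are fitted, which is why a well-chosen handful of examples can be enough.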

He says the difficulty of scaling AI models is a problem in just about every industry. Using health care as an example, he says, “Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic.

“The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge.”
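One concrete example of such a data-engineering tool, in the data-centric spirit (the record format below is a hypothetical illustration, not Landing AI’s product): flag the examples where annotators disagree, so domain experts can repair the labels rather than redesign the model.

from collections import defaultdict

# Hypothetical annotation records: (example_id, annotator, label)
annotations = [
    ("scan_001.png", "alice", "normal"),
    ("scan_001.png", "bob", "abnormal"),    # disagreement: send for review
    ("scan_002.png", "alice", "normal"),
    ("scan_002.png", "bob", "normal"),
]

# Group the distinct labels assigned to each example
labels_by_example = defaultdict(set)
for example_id, annotator, label in annotations:
    labels_by_example[example_id].add(label)

# Any example with more than one distinct label needs expert review
needs_review = [ex for ex, labels in labels_by_example.items() if len(labels) > 1]
print("examples needing label review:", needs_review)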

Those data-engineering tools are what Ng’s new company, Landing AI, is building for computer vision.

“In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.”

 

