New algorithm discovers language just by watching videos | Hacker News

algiegray's blog


Learning Language from Videos #

This research paper details a new algorithm that can learn to associate sounds with visual objects simply by watching videos. The algorithm uses two contrastive objectives: one that aligns spoken words with the visual objects they describe, and one that aligns ambient sounds with the objects that produce them.

By training the model with these objectives, the researchers showed that it picks up correlations between sounds, spoken words, and visual features in video clips.
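To make "contrastive objective" concrete, here is a minimal sketch of the general technique in the InfoNCE style: audio and visual embeddings from the same clip are pulled together, while mismatched pairs are pushed apart. The feature dimension, batch size, and temperature below are illustrative assumptions, not the paper's actual settings.

```python
# Minimal InfoNCE-style contrastive loss between audio and visual
# embeddings. Illustrative sketch only: the dims, batch size, and
# temperature are assumptions, not the paper's configuration.
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, visual_emb, temperature=0.07):
    """audio_emb, visual_emb: (batch, dim); row i of each comes from clip i."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0))     # matching pairs sit on the diagonal
    # Symmetric loss: audio->visual and visual->audio retrieval.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: random tensors stand in for real encoder outputs.
audio = torch.randn(80, 512)
video = torch.randn(80, 512)
print(contrastive_loss(audio, video).item())
```

Two such losses, one per objective, trained jointly would drive the kind of sound/word/object alignment described above.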

Comparison to Human Learning #

The paper compares the algorithm's learning process to the way babies develop language skills. Both processes involve massive exposure to sound and visual data.

If I understand the paper correctly, the model was trained on 64 million video and audio samples (0.8 million batches x 80 samples per batch). For comparison, over the course of a baby's first two years of life, the baby will get trillions of sound samples (sampled at 44.1 kHz) and billions of frames of visual data (at 30 fps). It's not too hard to imagine that a large AI model trained on comparably vast quantities of audio and video data can learn to recognize objects and words, like the baby.
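Here is a quick back-of-the-envelope check of those figures. Only the 0.8 million x 80 number comes from the paper; the around-the-clock audio and 12-waking-hours-of-vision assumptions are mine.

```python
# Rough exposure estimates for the comparison above. Only the
# 0.8M-batches-x-80-samples figure is from the post; the
# hours-per-day assumptions are illustrative guesses.
SECONDS_PER_DAY = 86_400
DAYS = 365 * 2                                    # first two years of life

model_samples = 800_000 * 80                      # 0.8M batches x 80 samples/batch
baby_audio = SECONDS_PER_DAY * DAYS * 44_100      # 44.1 kHz, around the clock
baby_frames = (SECONDS_PER_DAY // 2) * DAYS * 30  # 30 fps, ~12 waking hours/day

print(f"model : {model_samples:,}")               # 64,000,000
print(f"audio : {baby_audio:,}")                  # ~2.8 trillion
print(f"vision: {baby_frames:,}")                 # ~0.95 billion
```

The output lands in the ballpark of the trillions-of-samples and roughly-a-billion-frames claim above.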

Future Applications #

The authors envision exciting applications for this technology, particularly in robotics.

The most exciting takeaway, for me, is that a bigger model inside a robot body should be able to integrate learning in this manner across all five human sensory modalities -- vision, hearing, smell, touch, taste -- as well as any other modalities for which the robot has sensors -- radar, lidar, GPS position, etc. We sure live in exciting times!
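Purely as speculation, the same pairwise contrastive recipe could extend to any set of co-occurring sensor streams. Every modality name, dimension, and the pairing scheme in this sketch is hypothetical, not anything from the paper.

```python
# Hypothetical extension: contrastively align every pair of co-occurring
# sensor streams. All modality names and dims are speculative stand-ins.
import itertools
import torch
import torch.nn.functional as F

def pair_loss(x, y, temperature=0.07):
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature
    targets = torch.arange(x.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# One embedding per modality for the same moment in time (random stand-ins).
streams = {
    "vision": torch.randn(80, 512),
    "hearing": torch.randn(80, 512),
    "touch": torch.randn(80, 512),
    "lidar": torch.randn(80, 512),
}
total = sum(pair_loss(streams[a], streams[b])
            for a, b in itertools.combinations(streams, 2))
print(total.item())
```

Whether pairwise alignment scales gracefully to many modalities is an open question; this is one plausible starting point, not a settled design.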

Debates and Criticisms #

Some commenters argue that this algorithm is merely learning correlations and not truly "discovering" language. They highlight the need for a deeper understanding of the complexities of language and its relationship to human cognition.

It's a stretch to call this discovering language. It's learning the correlations between sounds, spoken words and visual features. That's a long way from learning language.

Others argue that this approach is an oversimplification of human language acquisition and does not account for the role of social interaction and feedback in learning.

A lot of a baby's language pickup is also based on what other people do in response to their attempts, linguistically and behaviourally. Read-only observation is obviously a big part of it, but it's not "exactly" the same.

Despite these criticisms, many commenters are impressed by the potential of this technology and its implications for AI development. Some believe that this approach could pave the way for future AI systems with more human-like language abilities.

source