The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.

The abstract from the paper is the following:

While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
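To make the training objective concrete, here is a minimal NumPy sketch of the self-distillation setup the abstract describes: a teacher network (an exponential moving average of the student) encodes the full input to produce contextualized targets, while the student encodes a masked view and regresses the teacher's representations at the masked positions. The single-layer `encoder`, the mask ratio, and the zeroing-out used as a mask token are simplifications for illustration, not the paper's actual Transformer, masking scheme, or smooth-L1 loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Stand-in for a Transformer encoder: one linear layer with tanh.
    return np.tanh(x @ W)

def ema_update(teacher_W, student_W, tau=0.999):
    # Teacher weights track the student via an exponential moving average.
    return tau * teacher_W + (1.0 - tau) * student_W

# Toy dimensions (hypothetical): a sequence of 8 tokens/patches, 16-dim features.
T, D = 8, 16
x = rng.normal(size=(T, D))

student_W = rng.normal(size=(D, D)) * 0.1
teacher_W = student_W.copy()  # teacher is initialized from the student

# Teacher sees the FULL input; its outputs serve as contextualized targets.
targets = encoder(x, teacher_W)

# Student sees a MASKED view: some positions replaced by a mask embedding
# (zeros here, for simplicity).
mask = rng.random(T) < 0.5
x_masked = x.copy()
x_masked[mask] = 0.0
preds = encoder(x_masked, student_W)

# Regression loss at masked positions only (the paper uses a smooth L1 loss;
# plain MSE is used here for brevity).
loss = np.mean((preds[mask] - targets[mask]) ** 2)

# After a gradient step on the student (omitted), the teacher is EMA-updated.
teacher_W = ema_update(teacher_W, student_W)
```

Because the targets come from intermediate network states rather than from words, visual tokens, or speech units, the same loop applies unchanged to text, images, and audio; only the input featurization and masking strategy differ per modality.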