Contextual Encoding

Contextual representations encode words, phrases, or sentences together with the surrounding text in which they appear. Unlike static word embeddings such as those produced by Word2Vec, where each word is mapped to a single fixed vector regardless of context, contextual representations capture the meaning of a word or word sequence as it is used in a particular document. The representation of a word can therefore vary with the words around it (for example, "bank" in "river bank" versus "bank account"), allowing a more nuanced treatment of meaning in natural language processing tasks.
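The contrast above can be illustrated with a small numpy sketch. The vocabulary, embedding table, and single unparameterized self-attention pass below are toy constructions (random vectors, no learned weights), not a real trained model; they only demonstrate that a static lookup yields the same vector for "bank" in every sentence, while mixing in context makes the two occurrences differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static (Word2Vec-style) embedding table: every occurrence of a
# word maps to the same row, regardless of the surrounding sentence.
vocab = {"the": 0, "river": 1, "money": 2, "bank": 3}
E = rng.normal(size=(len(vocab), 8))  # 8-dimensional toy embeddings

def static_embed(tokens):
    return E[[vocab[t] for t in tokens]]

def self_attention(X):
    """One unparameterized self-attention pass: each position's output
    is a softmax-weighted mixture of all positions' vectors."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

s1 = ["the", "river", "bank"]
s2 = ["the", "money", "bank"]

# Static embeddings: "bank" (position 2) is identical in both sentences.
b1_static = static_embed(s1)[2]
b2_static = static_embed(s2)[2]
print(np.allclose(b1_static, b2_static))  # True

# After mixing in context, "bank" differs between the two sentences,
# because its representation now blends "river" vs "money" neighbours.
b1_ctx = self_attention(static_embed(s1))[2]
b2_ctx = self_attention(static_embed(s2))[2]
print(np.allclose(b1_ctx, b2_ctx))  # False
```

Real contextual encoders (e.g. Transformer models) stack many learned attention layers, but the mechanism sketched here is the same: the output vector for a token depends on the whole input sequence, not on the token alone.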

References

  • Attention Is All You Need, Vaswani et al., Advances in Neural Information Processing Systems (NeurIPS), 2017.
Q1: How can document-level vector representations be derived from Word2Vec word embeddings?
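One common answer to this question is mean pooling: average the Word2Vec vectors of the words in the document to obtain a single fixed-size vector. The sketch below uses a small random embedding table as a stand-in for real pre-trained vectors (in practice these would be loaded, e.g. with gensim's `KeyedVectors`); the vocabulary and dimensionality are illustrative assumptions.

```python
import numpy as np

# Stand-in for pre-trained Word2Vec vectors: random 100-d vectors here,
# purely for illustration.
rng = np.random.default_rng(42)
word_vectors = {w: rng.normal(size=100)
                for w in ["contextual", "representations",
                          "capture", "meaning"]}

def document_vector(tokens, word_vectors):
    """Mean-pool the vectors of in-vocabulary tokens into one
    fixed-size document representation."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:  # no known words: fall back to a zero vector
        return np.zeros(next(iter(word_vectors.values())).shape)
    return np.mean(vecs, axis=0)

doc = ["contextual", "representations", "capture", "meaning", "oov_word"]
v = document_vector(doc, word_vectors)
print(v.shape)  # (100,)
```

Mean pooling discards word order and weights all words equally; common refinements include TF-IDF-weighted averaging or dedicated document-embedding models such as doc2vec.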

Q2: How did the embedding representation facilitate the adoption of Neural Networks in Natural Language Processing?

Q3: How are embedding representations for Natural Language Processing fundamentally different from ones for Computer Vision?