Lecture 2 – Word Vectors and Word Senses
1. Review: Main idea of word2vec
Word2vec parameters and computations
Word2vec maximizes its objective function by putting similar words nearby in space
2. Optimization: Gradient Descent
Gradient Descent
Stochastic Gradient Descent
Stochastic gradients with word vectors!
1b. Word2vec: More details
So far, we have looked at two main classes of methods for finding word embeddings. The first class is count-based and relies on matrix factorization (e.g., LSA, HAL). While these methods effectively leverage global statistical information, they primarily capture word similarities and do poorly on tasks such as word analogy, indicating a sub-optimal vector-space structure. The other class is shallow and window-based (e.g., the skip-gram and CBOW models); these learn word embeddings by making predictions in local context windows. Such models demonstrate the capacity to capture complex linguistic patterns beyond word similarity, but fail to make use of global co-occurrence statistics.
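As a minimal illustration of the count-based family, the sketch below builds a word-word co-occurrence matrix from small context windows and factorizes it with a truncated SVD (LSA/HAL-style). The toy corpus, window size, and embedding dimension are made-up values for demonstration, not anything prescribed by the lecture.

```python
import numpy as np

def cooccurrence_svd_embeddings(corpus, window=2, dim=2):
    """Count-based embeddings: build a word-word co-occurrence matrix
    from local context windows, then factorize it with a truncated SVD."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            # Count every word within `window` positions of the centre word.
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    X[idx[w], idx[sent[j]]] += 1.0
    # Keep only the top-`dim` singular directions as dense word vectors.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return vocab, U[:, :dim] * S[:dim]

corpus = [["i", "like", "deep", "learning"],
          ["i", "like", "nlp"],
          ["i", "enjoy", "flying"]]
vocab, emb = cooccurrence_svd_embeddings(corpus)
print(dict(zip(vocab, emb.round(2))))
```

Each row of the returned matrix is a dense vector for one vocabulary word; this is the kind of global-statistics factorization that the count-based methods above rely on.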
The skip-gram model with negative sampling (HW2)
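For reference, the negative-sampling loss for a single centre word c and observed outside word o, with K negative words sampled from a noise distribution P(w), takes the standard form below (the exact notation in HW2 may differ):

$$J_{\text{neg-sample}}(v_c, o, U) = -\log \sigma(u_o^{\top} v_c) \;-\; \sum_{k=1}^{K} \log \sigma(-u_k^{\top} v_c)$$

Minimizing this loss pushes $u_o^{\top} v_c$ up for the observed (centre, outside) pair and pushes it down for the K sampled "noise" words, which are typically drawn from a unigram distribution raised to the 3/4 power.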
In comparison, GloVe is a weighted least-squares model that trains on global word-word co-occurrence counts and thus makes efficient use of statistics. The model produces a word-vector space with meaningful sub-structure: it shows state-of-the-art performance on the word analogy task and outperforms other current methods on several word similarity tasks.
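Concretely, the weighted least-squares objective fits word-vector dot products to log co-occurrence counts. In the notation of the original GloVe paper (with $X_{ij}$ the co-occurrence count and $f$ a weighting function that caps the influence of very frequent pairs; the course notes may present a simplified variant without the bias terms) it reads:

$$J = \sum_{i,j=1}^{|V|} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2$$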