Word2Vec For Word Embeddings - A Beginner's Guide

Nibedita Dutta Last Updated : 29 Oct, 2024
6 min read

Word2Vec revolutionized natural language processing by transforming words into dense vector representations that capture semantic relationships. This article explores its fundamentals: What is Word2Vec? Why are word embeddings crucial for NLP tasks? Delve into the mechanics of the CBOW and Skip-gram models, and learn to implement Word2Vec in Python for practical applications.

This article was published as a part of the Data Science Blogathon

What is Word2Vec?

Word2Vec creates vectors of words that are distributed numerical representations of word features; these features could comprise words that represent the context of the individual words in our vocabulary. Through these vectors, word embeddings help establish the association of a word with other words of similar meaning.

As seen in the image below where word embeddings are plotted, similar meaning words are closer in space, indicating their semantic similarity.

[Image: word embeddings plotted in vector space, with similar-meaning words clustered together]

Word2Vec can use two different model architectures to create the word embeddings: the Continuous Bag of Words (CBOW) model and the Skip-Gram model.

Why are word embeddings needed?

Let us consider two sentences: “You can scale your business.” and “You can grow your business.” These two sentences have the same meaning. The vocabulary built from them consists of these words: {You, can, scale, grow, your, business}.

A one-hot encoding of these words would create a vector of length 6. The encodings for each of the words would look like this:

You: [1,0,0,0,0,0], Can: [0,1,0,0,0,0], Scale: [0,0,1,0,0,0], Grow: [0,0,0,1,0,0],

Your: [0,0,0,0,1,0], Business: [0,0,0,0,0,1]

In a 6-dimensional space, each word would occupy one of the dimensions, meaning that none of these words has any similarity with each other – irrespective of their literal meanings.
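
As a quick illustration, here is a minimal, library-free sketch of this one-hot encoding in plain Python (the vocabulary and ordering follow the example above):

# One-hot encoding for the six-word vocabulary above
vocab = ["You", "Can", "Scale", "Grow", "Your", "Business"]

def one_hot(word):
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("Scale"))  # [0, 0, 1, 0, 0, 0]
print(one_hot("Grow"))   # [0, 0, 0, 1, 0, 0]
# The dot product of any two different one-hot vectors is 0, so "Scale" and
# "Grow" look completely unrelated despite having similar meanings.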

Word2Vec, a word embedding methodology, solves this issue: it maps similar words to similar vectors and, consequently, helps bring in context.

How does CBOW work?

Even though Word2Vec is an unsupervised model (you give it a corpus without any label information and it creates dense word embeddings), it internally leverages a supervised classification task to learn these embeddings from the corpus. These embeddings are representations of words in a vector space, which are crucial for various natural language processing tasks and machine learning applications.

The CBOW architecture comprises a deep learning classification model in which we take in context words as input, X, and try to predict our target word, Y.

For example, if we consider the sentence “Word2Vec has a deep learning model working in the backend.”, we can form pairs of context words and target (centre) words. If we consider a context window size of 2, we will have pairs like ([deep, model], learning), ([model, in], working), ([a, learning], deep), etc. The deep learning model then tries to predict these target words based on the context words.
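
As a rough illustration of how such pairs can be built (plain Python, taking one context word on each side of the target, matching the pairs above):

# Build (context words, target word) pairs from the example sentence
sentence = "Word2Vec has a deep learning model working in the backend".split()

pairs = []
for idx in range(1, len(sentence) - 1):
    context = [sentence[idx - 1], sentence[idx + 1]]  # one word on each side
    pairs.append((context, sentence[idx]))

print(pairs[3])  # (['deep', 'model'], 'learning')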

[Image: CBOW context and target word pairs]

The following steps describe how the model works:

  • The context words are first passed as an input to an embedding layer (initialized with some random weights) as shown in the Figure below.
  • The word embeddings are then passed to a lambda layer where we average out the word embeddings.
  • We then pass these averaged embeddings to a dense softmax layer that predicts our target word. The prediction is compared with the actual target word, the loss is computed, and backpropagation updates the embedding layer with each epoch.

Once training is complete, we can extract the embeddings of the words we need from the embedding layer.
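
To make these steps concrete, here is a minimal Keras sketch of such a CBOW classifier; the vocabulary size and embedding dimension are placeholder values, and this is only an illustrative approximation, not the original Word2Vec implementation:

# Illustrative CBOW model: averaged context embeddings -> softmax over the vocabulary
import tensorflow as tf

vocab_size = 10000  # placeholder vocabulary size
embed_dim = 100     # dimensionality of the word embeddings

model = tf.keras.Sequential([
    # embedding layer initialized with random weights
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    # lambda layer that averages the context word embeddings
    tf.keras.layers.Lambda(lambda x: tf.reduce_mean(x, axis=1)),
    # dense softmax layer that predicts the target word
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# After training, the learned word vectors sit in the embedding layer's weights:
# embeddings = model.layers[0].get_weights()[0]  # shape: (vocab_size, embed_dim)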

[Image: CBOW deep learning model]

How does Skip-gram work?

In the skip-gram model, given a target (centre) word, the context words are predicted using word representations. So, considering the same sentence, “Word2Vec has a deep learning model working in the backend.”, and a context window size of 2, given the centre word ‘learning’, the model tries to predict [‘deep’, ‘model’], and so on. This prediction process is fundamental to language modeling and the creation of vector representations of words.

Since the skip-gram model has to predict multiple words from a single given word, we feed the model pairs of (X, Y) where X is our input and Y is our label. This is done by creating positive input samples and negative input samples.

Positive input samples have training data of the form [(target, context), 1], where target is the target or centre word, context represents the surrounding context words, and the label 1 indicates that it is a relevant pair. Negative input samples have training data of the same form: [(target, random), 0]. In this case, instead of the actual surrounding words, randomly selected words are fed in along with the target word, with a label of 0 indicating that it is an irrelevant pair.

These samples make the model aware of which words are contextually relevant and, consequently, it generates similar embeddings for words with similar meanings.
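
Keras ships a small helper that builds exactly such labelled (target, context) pairs with negative sampling; a quick sketch (the sequence of word indices below is a made-up toy example):

# Generate positive and negative (target, context) samples with Keras' skipgrams helper
from tensorflow.keras.preprocessing.sequence import skipgrams

sequence = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # toy sentence encoded as word indices
pairs, labels = skipgrams(sequence, vocabulary_size=11,
                          window_size=2, negative_samples=1.0)

for (target, context), label in list(zip(pairs, labels))[:5]:
    # label 1 -> real context pair, label 0 -> randomly sampled negative pair
    print(target, context, label)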

The following steps describe how the model works:

  • Both the target and context word pairs are passed to individual embedding layers from which we get dense word embeddings for each of these two words.
  • We then use a ‘merge layer’ to compute the dot product of these two embeddings.
  • This dot product value is then sent to a dense sigmoid layer that outputs either 0 or 1.
  • The output is compared with the actual label, the loss is computed, and backpropagation updates the embedding layers with each epoch.

As with CBOW, once training is complete, we can extract the embeddings of the words we need from the embedding layers.
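
Purely as an illustrative sketch of these four steps (again with placeholder vocabulary size and embedding dimension, not the original implementation):

# Illustrative skip-gram model: two embeddings -> dot product -> dense sigmoid
import tensorflow as tf

vocab_size = 10000  # placeholder vocabulary size
embed_dim = 100     # dimensionality of the word embeddings

target_in = tf.keras.Input(shape=(1,))
context_in = tf.keras.Input(shape=(1,))

# individual embedding layers for the target and context words
target_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(target_in)
context_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(context_in)

# merge layer computing the dot product of the two embeddings
dot = tf.keras.layers.Dot(axes=-1)([target_emb, context_emb])
dot = tf.keras.layers.Flatten()(dot)

# dense sigmoid layer scoring the pair as relevant (1) or irrelevant (0)
output = tf.keras.layers.Dense(1, activation="sigmoid")(dot)

model = tf.keras.Model([target_in, context_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy")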

Word2Vec in Python

We can generate word embeddings for our corpus in Python using the gensim module. Below is a simple illustration.

Installing modules

We start by installing the ‘gensim’ and ‘nltk’ modules.

pip install gensim
pip install nltk

Importing libraries

import nltk
nltk.download('punkt')  # tokenizer models needed by sent_tokenize and word_tokenize
from nltk.tokenize import sent_tokenize, word_tokenize
import gensim
from gensim.models import Word2Vec

Reading the text data

We have taken the ‘Amazon Fine Food Reviews’ dataset from Kaggle. We use the ‘Text’ column of the dataset.

import pandas as pd
rev = pd.read_csv("Reviews.csv")
print(rev.head())

Preparing the corpus

We create the list of the words that our corpus has using the following lines of code:

corpus_text = '\n'.join(rev[:1000]['Text'])
data = []
# iterate through each sentence in the file
for i in sent_tokenize(corpus_text):
    temp = []
    # tokenize the sentence into words
    for j in word_tokenize(i):
        temp.append(j.lower())
    data.append(temp)

Building the Word2Vec model using Gensim

To create the word embeddings using the CBOW architecture (sg=0) or the Skip-Gram architecture (sg=1), you can use the following respective lines of code:

model1 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=0)  # CBOW
model2 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=1)  # Skip-Gram
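
Once trained, both models expose the learned vectors through their wv attribute. For example (the words ‘coffee’ and ‘tea’ are simply assumed to occur in the review text):

print(model1.wv['coffee'].shape)                 # (100,) -> the CBOW embedding vector
print(model1.wv.most_similar('coffee', topn=5))  # nearest neighbours in the CBOW space
print(model2.wv.similarity('coffee', 'tea'))     # cosine similarity under Skip-Gram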

Conclusion

Word2Vec is a neural network-based algorithm that learns word embeddings, which are numerical representations of words that capture their semantic and syntactic relationships. Word embeddings are useful for a variety of natural language processing tasks, such as sentiment analysis, machine translation, and question answering.

Frequently Asked Questions

Q1. Is Word2vec a word embedding?

Yes, Word2vec is a word embedding technique commonly used in NLP for generating vector representations of words based on their context in a given corpus of text. It aims to capture semantic relationships between words by placing words with similar contexts closer together in the vector space.

Q2. What is the Word2vec technique?

Word2vec is a technique for generating vector representations of words based on their context in a large corpus of unstructured text data. It allows for efficient computation of word similarities and predictions of words likely to appear in similar contexts.

Q3. Does ChatGPT use Word2vec?

No, ChatGPT does not use Word2vec. It relies on a different architecture and training method to generate word embeddings, but it also aims to capture semantic relationships between words.

Q4. How to use Word2vec in NLP?

To use Word2vec in NLP, you need to train the model on a large corpus of unstructured text data, such as the Google News dataset. Once trained, Word2vec learns to map words to vector representations in such a way that words with similar contexts are placed closer together in the vector space.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Nibedita completed her master’s in Chemical Engineering from IIT Kharagpur in 2014 and is currently working as a Senior Data Scientist. In her current capacity, she works on building intelligent ML-based solutions to improve business processes.

