Word2Vec revolutionized natural language processing by transforming words into dense vector representations that capture semantic relationships. This article explores its fundamentals: what Word2Vec is, why word embeddings are crucial for NLP tasks, how the CBOW and Skip-Gram models work under the hood, and how to implement Word2Vec in Python for practical applications.
Word2Vec creates vectors of words that are distributed numerical representations of word features; these features can capture the context of the individual words in our vocabulary. The resulting word embeddings help establish the association of a word with other similar-meaning words through the created vectors.
As seen in the image below, where word embeddings are plotted, words with similar meanings lie closer together in the vector space, indicating their semantic similarity.
Word2Vec can use two different model architectures to create the word embeddings: the Continuous Bag of Words (CBOW) model and the Skip-Gram model.
Let us consider two sentences: “You can scale your business.” and “You can grow your business.”. These two sentences have the same meaning. If we build a vocabulary from them, it will consist of these words: {You, can, scale, grow, your, business}.
A one-hot encoding of these words would create a vector of length 6. The encodings for each of the words would look like this:
You: [1,0,0,0,0,0], Can: [0,1,0,0,0,0], Scale: [0,0,1,0,0,0], Grow: [0,0,0,1,0,0],
Your: [0,0,0,0,1,0], Business: [0,0,0,0,0,1]
In a 6-dimensional space, each word would occupy one of the dimensions, meaning that none of these words has any similarity with each other – irrespective of their literal meanings.
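A quick way to see this is to compute the cosine similarity between two one-hot vectors: for any two distinct words it is exactly zero. The snippet below is a minimal illustration using NumPy and the toy six-word vocabulary above (the vector construction with np.eye is just one convenient way to build the one-hot encodings).
import numpy as np

# Toy one-hot vectors for the 6-word vocabulary above
vocab = ["you", "can", "scale", "grow", "your", "business"]
one_hot = {word: np.eye(len(vocab))[i] for i, word in enumerate(vocab)}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'scale' and 'grow' mean almost the same thing here, yet their
# one-hot vectors are orthogonal, so the similarity is 0.0
print(cosine_similarity(one_hot["scale"], one_hot["grow"]))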
Word2Vec, a word embedding methodology, solves this issue and enables similar words to have similar dimensions and, consequently, helps bring context.
Even though Word2Vec is an unsupervised model, where you give it a corpus without any label information and it creates dense word embeddings, it internally leverages a supervised classification task to learn these embeddings from the corpus. The embeddings are representations of words in a vector space and are crucial for various natural language processing tasks and machine learning applications.
The CBOW architecture comprises a deep learning classification model in which we take in context words as input, X, and try to predict our target word, Y.
For example, consider the sentence “Word2Vec has a deep learning model working in the backend.” With a context window size of 2, we get pairs of context words and target (centre) words such as ([deep, model], learning), ([model, in], working), ([a, learning], deep) and so on. The deep learning model tries to predict each target word from its context words.
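As a rough sketch, such (context, target) pairs can be generated with a few lines of plain Python. The simple whitespace tokenisation and the convention of taking up to `window` words on each side of the target are assumptions for illustration only; gensim and similar libraries handle this pairing internally.
sentence = "Word2Vec has a deep learning model working in the backend".split()
window = 2

pairs = []
for idx, target in enumerate(sentence):
    # take up to `window` words on each side of the target as its context
    context = sentence[max(0, idx - window):idx] + sentence[idx + 1:idx + 1 + window]
    pairs.append((context, target))

print(pairs[3])   # (['has', 'a', 'learning', 'model'], 'deep')
print(pairs[4])   # (['a', 'deep', 'model', 'working'], 'learning')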
The following steps describe how the model works:
1. The context words are converted into one-hot vectors of size V (the vocabulary size) and passed through a shared embedding layer of size N.
2. The resulting context embeddings are averaged into a single hidden vector.
3. This vector is passed through a dense output layer with a softmax activation, which produces a probability for every word in the vocabulary.
4. The predicted distribution is compared with the one-hot vector of the actual target word, and the error is backpropagated to update the embedding weights.
Once the training is completed, we can extract the embeddings of the required words from the embedding layer.
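For intuition, here is a minimal NumPy sketch of a single CBOW training step on a toy vocabulary: the context embeddings are looked up and averaged, projected to vocabulary-sized scores, and the softmax cross-entropy error against the target word is backpropagated. The vocabulary, embedding dimension, learning rate, and plain gradient descent are all assumptions chosen for illustration; this leaves out batching and the optimisations (such as negative sampling or hierarchical softmax) used in practice.
import numpy as np

vocab = ["word2vec", "has", "a", "deep", "learning", "model", "working", "in", "the", "backend"]
word2idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                          # vocabulary size, embedding dimension
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))     # embedding (input) matrix: one row per word
W_out = rng.normal(scale=0.1, size=(D, V))    # output projection matrix
lr = 0.05

def cbow_step(context_words, target_word):
    global W_in, W_out                        # weight matrices updated in place
    ctx_ids = [word2idx[w] for w in context_words]
    tgt_id = word2idx[target_word]
    # average the context word embeddings into one hidden vector
    h = W_in[ctx_ids].mean(axis=0)
    # scores over the whole vocabulary, turned into probabilities with softmax
    scores = h @ W_out
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    # cross-entropy gradient and gradient-descent updates
    grad_scores = probs.copy()
    grad_scores[tgt_id] -= 1.0
    grad_h = W_out @ grad_scores
    W_out -= lr * np.outer(h, grad_scores)
    W_in[ctx_ids] -= lr * grad_h / len(ctx_ids)
    return -np.log(probs[tgt_id])             # loss for this (context, target) pair

print(cbow_step(["deep", "model"], "learning"))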
In the Skip-Gram model, given a target (centre) word, the context words are predicted using the word representations. Considering the same sentence, “Word2Vec has a deep learning model working in the backend.”, with a context window size of 2 and the centre word ‘learning’, the model tries to predict [‘deep’, ‘model’] and so on. This prediction process is fundamental to language modeling and to building vector representations of words.
Since the skip-gram model has to predict multiple words from a single given word, we feed the model pairs of (X, Y) where X is our input and Y is our label. This is done by creating positive input samples and negative input samples.
Positive input samples have the training data in the form [(target, context), 1], where target is the centre word, context is one of its surrounding words, and the label 1 marks the pair as relevant. Negative input samples have the same form, [(target, random), 0]: instead of an actual surrounding word, a randomly selected word is paired with the target, and the label 0 marks the pair as irrelevant.
These samples make the model aware of the contextually relevant words and consequently generate similar embeddings for similar meaning words.
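Below is a rough, plain-Python sketch of how such labelled (target, context) pairs could be generated. The sentence, window size, and uniform random choice of negatives are assumptions for illustration; real implementations, including gensim's, sample negatives from a frequency-based distribution rather than uniformly.
import random

random.seed(0)
sentence = "word2vec has a deep learning model working in the backend".split()
vocab = sorted(set(sentence))
window = 2

samples = []
for idx, target in enumerate(sentence):
    context = sentence[max(0, idx - window):idx] + sentence[idx + 1:idx + 1 + window]
    for ctx_word in context:
        # positive sample: a word that really occurs near the target
        samples.append(((target, ctx_word), 1))
        # negative sample: a random word that is not in the target's context
        negative = random.choice([w for w in vocab if w != target and w not in context])
        samples.append(((target, negative), 0))

print(samples[:4])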
The following steps describe how the model works:
1. The target word and the paired (context or random) word are each converted into one-hot vectors and passed through their own embedding layers.
2. The two embeddings are merged, typically with a dot product, to produce a single relevance score.
3. This score is passed through a sigmoid layer that outputs the probability of the pair being relevant (label 1) or irrelevant (label 0).
4. The prediction is compared with the actual label, and the error is backpropagated to update the embedding weights.
As with CBOW, once the training is completed, we can extract the embeddings of the required words from the embedding layer.
We can generate word embeddings for our corpus in Python using the gensim module. Below is a simple illustration.
We start by installing the ‘gensim’ and ‘nltk’ modules.
pip install gensim
pip install nltk
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
import gensim
from gensim.models import Word2Vec

# the NLTK tokenizers need the 'punkt' models, which have to be downloaded once
nltk.download('punkt')
We use the ‘Amazon Fine Food Reviews’ dataset from Kaggle, specifically its ‘Text’ column.
import pandas as pd
rev = pd.read_csv("Reviews.csv")
print(rev.head())
We build a list of tokenized sentences, each a list of lowercased words, from the first 1,000 reviews using the following lines of code:
corpus_text = '\n'.join(rev[:1000]['Text'])

data = []
# iterate through each sentence in the corpus
for i in sent_tokenize(corpus_text):
    temp = []
    # tokenize the sentence into lowercased words
    for j in word_tokenize(i):
        temp.append(j.lower())
    data.append(temp)
To create the word embeddings using CBOW architecture or Skip Gram architecture, you can use the following respective lines of code:
# sg=0 selects the CBOW architecture, sg=1 selects Skip-Gram
# (the vector_size parameter was called size in gensim versions before 4.0)
model1 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=0)
model2 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=1)
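After training, the learned vectors are available through the model's wv attribute and can be queried directly, for instance as below. The example words are assumptions (they should occur in the first 1,000 food reviews, but substitute any word from your vocabulary), and the exact similarity values will vary from run to run.
vector = model1.wv['coffee']                     # 100-dimensional embedding for 'coffee'
print(model1.wv.similarity('great', 'good'))     # cosine similarity between two words
print(model1.wv.most_similar('coffee', topn=5))  # five closest words in the embedding space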
Word2Vec is a neural network-based algorithm that learns word embeddings, which are numerical representations of words that capture their semantic and syntactic relationships. Word embeddings are useful for a variety of natural language processing tasks, such as sentiment analysis, machine translation, and question answering.
Word2Vec is a word embedding technique commonly used in NLP to generate vector representations of words based on their context in a given corpus of text. It aims to capture semantic relationships between words by placing words with similar contexts closer together in the vector space.
ChatGPT does not use Word2Vec. It relies on a different architecture and training method to learn its word representations, but it also aims to capture semantic relationships between words.
To use Word2vec in NLP, you need to train the model on a large corpus of unstructured text data, such as the Google News dataset. Once trained, Word2vec learns to map words to vector representations in such a way that words with similar contexts are placed closer together in the vector space.
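For example, gensim's downloader can fetch the pretrained Google News vectors. This is a small sketch assuming you have gensim's downloader module available, an internet connection, and enough disk space (the model is roughly 1.6 GB, so the first call takes a while).
import gensim.downloader as api

# downloads (once) and loads the pretrained 300-dimensional Google News vectors
wv = api.load('word2vec-google-news-300')
print(wv.most_similar('king', topn=3))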
In summary, Word2vec provides a basic idea of generating vector representations of words based on their context in a large corpus of unstructured text data. It allows for efficient computation of word similarities and predictions of words likely to appear in similar contexts.