In human language, a word is often used in more than one way, and understanding these usage patterns is important for many Natural Language Processing (NLP) applications.
The same word can mean different things in different contexts. Since a vast majority of the information online is in English, for the sake of simplicity, let us deal with examples in English only.
Let us take the example of the word “bark”:
One meaning of the word refers to the outer covering of a tree; the other refers to the sound made by a dog. So, here the same word has different meanings.
“Cinnamon comes from the bark of the Cinnamon tree.”
“The dog barked at the stranger.”
Let us now try a sentence that uses both senses:
“The dog was scratching the bark of the tree, when the man approached the dog to make it stop, the dog barked.”
Suppose this sentence is passed to an algorithm for sentiment analysis; “bark” and “barked” might be treated as if they carried the same meaning.
So, we can see that the same word can mean different things depending on how it is used in a particular sentence. Usage tells us a lot about meaning. The problem is that, when dealing with text data in NLP, we need some way to distinguish the different meanings a word can take.
Word Sense Disambiguation (WSD) is an important NLP task in which the meaning of a word, as used in a particular context, is determined. NLP systems often face the challenge of identifying the specific sense in which a word is used, and resolving it correctly has many applications.
Word Sense Disambiguation resolves the ambiguity that arises when the same word is used in different situations.
WSD has many applications across text processing and NLP, but it also faces a number of challenges.
There are four main approaches to implementing WSD. These are:
Dictionary- and knowledge-based methods:
These methods rely on lexical resources such as dictionaries and thesauri. They are based on the observation that words related to each other tend to appear in each other's definitions. The popular Lesk method, which we shall discuss in more detail later, is a seminal dictionary-based method.
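To make the overlap intuition concrete, here is a tiny, self-contained sketch in Python. The function and the shortened definitions are invented purely for illustration; real dictionary-based methods work with full dictionary glosses.

# Toy illustration of gloss overlap: count the words shared by the
# definitions of two senses. The glosses below are shortened,
# made-up stand-ins for real dictionary entries.
def gloss_overlap(gloss_a, gloss_b):
    words_a = set(gloss_a.lower().split())
    words_b = set(gloss_b.lower().split())
    return len(words_a & words_b)

bark_tree = "the tough outer covering of the trunk of a tree"
bark_dog = "the sound made by a dog"
pine = "an evergreen tree with a woody trunk and needle-shaped leaves"

print(gloss_overlap(bark_tree, pine))  # 3 shared words ('a', 'tree', 'trunk')
print(gloss_overlap(bark_dog, pine))   # 1 shared word ('a')

The sense of “bark” whose gloss overlaps more with the gloss of a neighbouring word (here “pine”) wins.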
Supervised methods:
Here, sense-annotated corpora are used to train machine learning models. The drawback is that such corpora are very laborious and time-consuming to create, as sketched below.
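As a hedged sketch of what the supervised setup looks like (the tiny training set and feature function below are invented for illustration, not a real sense-annotated corpus):

# Each training example pairs simple bag-of-words features with a
# manually annotated sense label; a classifier then learns the mapping.
import nltk

def features(sentence):
    return {word: True for word in sentence.lower().split()}

train = [
    (features("cinnamon comes from the bark of the tree"), "bark_tree"),
    (features("the rough bark of the old oak tree"), "bark_tree"),
    (features("the dog let out a loud bark"), "bark_dog"),
    (features("we heard the bark of a dog next door"), "bark_dog"),
]

classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("the bark of the pine tree was rough")))  # expected: bark_tree

Real systems train on corpora like SemCor with thousands of annotated examples, which is exactly why creating such data is so expensive.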
Semi-supervised Methods:
Due to the scarcity of such corpora, many word sense disambiguation algorithms use semi-supervised methods. The process starts with a small amount of annotated data, often created manually.
This is used to train an initial classifier, which is then run on the untagged part of the corpus to create a larger training set. In essence, the method bootstraps from the initial data, referred to as the seed data.
Semi-supervised methods thus use both labeled and unlabelled data, as in the sketch below.
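A simplified sketch of the bootstrapping loop might look like the following; seed and untagged are hypothetical inputs (feature dictionaries paired with labels, as in the previous sketch), and the confidence threshold is an arbitrary choice:

# Train on the seed set, label the untagged data, keep only confident
# predictions as new training examples, and repeat.
import nltk

def bootstrap(seed, untagged, rounds=3, threshold=0.9):
    labeled = list(seed)
    for _ in range(rounds):
        classifier = nltk.NaiveBayesClassifier.train(labeled)
        still_untagged = []
        for feats in untagged:
            dist = classifier.prob_classify(feats)
            sense = dist.max()
            if dist.prob(sense) >= threshold:
                labeled.append((feats, sense))  # promote a confident guess
            else:
                still_untagged.append(feats)    # leave it for a later round
        untagged = still_untagged
    return nltk.NaiveBayesClassifier.train(labeled)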
Unsupervised Methods:
Unsupervised methods pose the greatest challenge to researchers and NLP practitioners. Their key assumption is that similar senses occur in similar contexts. Since they do not depend on manual annotation, they can overcome the knowledge acquisition bottleneck.
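A rough sketch of the unsupervised idea, using scikit-learn to cluster bag-of-words representations of contexts (the sentences are invented for illustration; real sense induction uses much richer context models):

# Contexts that end up in the same cluster are assumed to share a sense.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

contexts = [
    "cinnamon comes from the bark of the tree",
    "the rough bark of the old oak tree",
    "the dog let out a loud bark",
    "we heard the bark of a dog next door",
]

vectors = CountVectorizer().fit_transform(contexts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # contexts with the same label form one induced sense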
The Lesk algorithm is a classical Word Sense Disambiguation algorithm introduced by Michael E. Lesk in 1986.
It is based on the idea that words appearing in the same region of text tend to share a topic. In the Simplified Lesk algorithm, the correct meaning of a word in context is found by choosing the sense whose dictionary definition overlaps most with the words of the surrounding context.
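Before turning to NLTK's built-in implementation, here is a bare-bones sketch of the Simplified Lesk idea, with WordNet glosses standing in for a dictionary. It naively counts raw word overlap; real implementations also remove stopwords and lemmatize, so the sense it returns may differ from NLTK's.

import nltk
from nltk.corpus import wordnet as wn
nltk.download('wordnet')  # glosses come from WordNet

def simple_lesk(context_sentence, word):
    context = set(context_sentence.lower().split())
    best_sense, best_overlap = None, 0
    for sense in wn.synsets(word):
        gloss = set(sense.definition().lower().split())
        overlap = len(context & gloss)  # words shared by context and gloss
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simple_lesk('Cinnamon comes from the bark of the Cinnamon tree', 'bark'))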
Read More about the Lesk Algorithm here.
We can use NLTK to implement Lesk in Python.
Let us start by importing the libraries.
import nltk
nltk.download('wordnet')  # WordNet data used by lesk
nltk.download('punkt')    # tokenizer models used by word_tokenize

from nltk.wsd import lesk
from nltk.tokenize import word_tokenize
Let us now proceed with some examples.
a1 = lesk(word_tokenize('This device is used to jam the signal'), 'jam')
print(a1, a1.definition())

a2 = lesk(word_tokenize('I am stuck in a traffic jam'), 'jam')
print(a2, a2.definition())
Output:
Synset('jamming.n.01') deliberate radiation or reflection of electromagnetic energy for the purpose of disrupting enemy use of electronic devices or systems
Synset('jam.v.05') get stuck and immobilized
So, both senses are identified correctly: in the first sentence, “jam” means blocking a signal, and in the second, it refers to a traffic jam.
Let us try another example.
# testing with some data
b1 = lesk(word_tokenize('Apply spices to the chicken to season it'), 'season')
print(b1, b1.definition())
Output:
Synset('season.v.01') lend flavor to
In this case too, the output is correct: “season” is used here in its cooking sense.
Let us try another usage of the same word.
b2 = lesk(word_tokenize('India receives a lot of rain in the rainy season'), 'season')
print(b2, b2.definition())
Output:
Synset('season.n.01') a period of the year marked by special events or activities in some field
I think the sense describing a natural division of the year would be more appropriate, but this one works too.
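To see what lesk was choosing between, we can list every WordNet sense of the word:

from nltk.corpus import wordnet as wn

# Print each candidate synset of 'season' with its gloss.
for sense in wn.synsets('season'):
    print(sense.name(), '-', sense.definition())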
Moving on to the next example.
c1 = lesk(word_tokenize('Water current'), 'current')
print(c1, c1.definition())
Output:
Synset('stream.n.02') dominant course (suggestive of running water) of successive events or ideas
The match is debatable here: with only two words of context, Lesk lands on the figurative “course of events” sense rather than the literal flow of water.
Let us try a different use of the word “current”:
# testing with some data
c2 = lesk(word_tokenize('The current time is 2 AM'), 'current')
print(c2, c2.definition())
Output:
Synset('current.a.01') occurring in or belonging to the present time
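One refinement worth knowing: lesk also accepts an optional pos argument ('n', 'v', 'a', 'r') that restricts the candidate synsets to one part of speech, which can help when the intended part of speech is known. For instance, revisiting the water example with nouns only (the exact synset returned still depends on gloss overlap):

c3 = lesk(word_tokenize('Water current'), 'current', pos='n')  # consider only noun senses
print(c3, c3.definition())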
So, the NLTK library implements the Lesk method well in practice.
Do check out the code here.
Word Sense Disambiguation is closely related to part-of-speech tagging and is an important part of the overall Natural Language Processing pipeline.
WSD, if implemented well, can lead to breakthroughs in NLP. But a recurring problem is the very notion of word sense. A word sense is not a numeric quantity that can be measured, nor a true-or-false value that can be denoted as 1 or 0.
The whole idea of word sense is contested: the meaning of a word is highly contextual, depends on its usage, and is not something that can be easily treated as a discrete quantity.
Lexicography deals with generalizing over corpora and cataloguing the full, extended meanings of a word, but those meanings do not always fit the algorithms or the data at hand.
In my personal experience, working with text data can be tricky, so implementing WSD is often difficult and error-prone.
Still, WSD has immense applications and uses.
If a computer algorithm could reliably read a text and identify the different senses in which its words are used, it would mean vast improvements in the field of text analytics.
Comparing and evaluating different WSD methods is itself difficult, but research on WSD continues, and the methods keep improving.
About me:
Prateek Majumder
Data Science and Analytics | Digital Marketing Specialist | SEO | Content Creation
Connect with me on LinkedIn.
My other articles on Analytics Vidhya: Link.
Thank You.