An Introduction to Stemming in Natural Language Processing

Prashant Last Updated : 16 Oct, 2024

This article was published as a part of the Data Science Blogathon.

Introduction

In this lesson, we will learn how to perform stemming in Python using the NLTK package for an NLP project. We will start with an overview of stemming and a brief look at its history, and then discuss the different kinds of stemmers available in NLTK along with their applications.

What is Stemming?

Stemming is a natural language processing technique that reduces inflected words to their root form, helping to preprocess text, words, and documents for text normalization.

According to Wikipedia, inflection is the process through which a word is modified to communicate various grammatical categories, including tense, case, voice, aspect, person, number, gender, and mood. A word may therefore appear in several inflected forms, and having multiple inflected forms of the same word in a text adds redundancy to the NLP pipeline.

As a result, we employ stemming to reduce words to their basic form or stem, which may or may not be a legitimate word in the language.

For instance, the stem of these three words, connections, connected, connects, is “connect”. On the other hand, the root of trouble, troubled, and troubles is “troubl,” which is not a recognized word.
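To see this in code, here is a minimal sketch using NLTK's PorterStemmer (covered in detail below) on the words from the example above; the connect-forms collapse to "connect" and the trouble-forms to "troubl":

from nltk.stem import PorterStemmer

porter = PorterStemmer()

# Inflected forms collapse to a common stem, which may not be a real word
for word in ['connections', 'connected', 'connects', 'trouble', 'troubled', 'troubles']:
    print(word, "--->", porter.stem(word))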


History of Stemming

Julie Beth Lovins wrote the first published stemmer in 1968. Her paper was groundbreaking in its day and had a significant effect on subsequent work in this field. It references three earlier major attempts at stemming algorithms: one by Professor John W. Tukey of Princeton University, another by Michael Lesk of Harvard University under the direction of Professor Gerard Salton, and a third developed by James L. Dolby of R and D Consultants in Los Altos, California.

Martin Porter wrote another stemmer, published in the July 1980 issue of the journal Program. This stemmer was used extensively and eventually became the de facto standard for English stemming. In 2000, Dr. Porter was honored with the Tony Kent Strix award for his work on stemming and information retrieval.

Why is Stemming Important?

As previously stated, the English language has several variants of a single word. The presence of these variants in a text corpus results in data redundancy when developing NLP or machine learning models, and such models may perform poorly as a result.

To build a robust model, it is essential to normalize text by removing repetition and transforming words to their base form through stemming.

Application of Stemming

Stemming is used in information retrieval, text mining, SEO, web search results, indexing, tagging systems, and word analysis. For instance, a Google search for "prediction" and one for "predicted" return comparable results.
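As a purely illustrative sketch (not how any search engine actually works), stemming both the query and the document text lets inflected forms match each other:

from nltk.stem import PorterStemmer

porter = PorterStemmer()

def stem_set(text):
    # reduce every token to its stem so inflected forms compare as equal
    return {porter.stem(token) for token in text.lower().split()}

query = "prediction"
document = "the model predicted the outcome"

# 'prediction' and 'predicted' both reduce to 'predict', so the query matches
print(stem_set(query) & stem_set(document))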

Types of Stemmer in NLTK

There are several kinds of stemming algorithms, and the most common ones are available in Python's NLTK. Let us have a look at them below.

1. Porter Stemmer – PorterStemmer()

Martin Porter invented the Porter Stemmer, or Porter algorithm, in 1980. The method applies five steps of word reduction, each with its own set of mapping rules. The Porter Stemmer is the original stemmer and is known for its simplicity and speed. The resulting stem is frequently a shorter word with the same root meaning.

PorterStemmer() is a class in NLTK that implements the Porter stemming technique. Let us examine it with the aid of an example.

Example of PorterStemmer()

In the example below, we construct an instance of PorterStemmer() and use the Porter algorithm to stem the list of words.

from nltk.stem import PorterStemmer
porter = PorterStemmer()
words = ['Connects','Connecting','Connections','Connected','Connection','Connectings','Connect']
for word in words:
    print(word,"--->",porter.stem(word))

[Out] :

Connects ---> connect
Connecting ---> connect
Connections ---> connect
Connected ---> connect
Connection ---> connect
Connectings ---> connect
Connect ---> connect

2. Snowball Stemmer – SnowballStemmer()

Martin Porter also created Snowball Stemmer. The method utilized in this instance is more precise and is referred to as “English Stemmer” or “Porter2 Stemmer.” It is somewhat faster and more logical than the original Porter Stemmer.

SnowballStemmer() is a class in NLTK that implements the Snowball stemming technique. Let us examine this form of stemming using an example.

Example of SnowballStemmer()

In the example below, we first construct an instance of SnowballStemmer() to use the Snowball algorithm to stem the list of words.

from nltk.stem import SnowballStemmer
snowball = SnowballStemmer(language='english')
words = ['generous','generate','generously','generation']
for word in words:
    print(word,"--->",snowball.stem(word))

[Out] :

generous ---> generous
generate ---> generat
generously ---> generous
generation ---> generat
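Unlike the Porter Stemmer, Snowball is not limited to English. As a quick aside, here is a minimal sketch that lists the languages supported by your installed NLTK version and builds a stemmer for one of them; the French example word is purely illustrative.

from nltk.stem import SnowballStemmer

# Languages supported by the installed NLTK Snowball implementation
print(SnowballStemmer.languages)

# A stemmer for another supported language is constructed the same way
french = SnowballStemmer(language='french')
print(french.stem('manger'))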

3. Lancaster Stemmer – LancasterStemmer()

The Lancaster Stemmer is straightforward, but it often over-stems. Over-stemming renders stems non-linguistic or meaningless.

LancasterStemmer() is a class in NLTK that implements the Lancaster stemming technique. Let us illustrate it with an example.

Example of LancasterStemmer()

In the example below, we construct an instance of LancasterStemmer() and then use the Lancaster algorithm to stem the list of words.

from nltk.stem import LancasterStemmer
lancaster = LancasterStemmer()
words = ['eating','eats','eaten','puts','putting']
for word in words:
    print(word,"--->",lancaster.stem(word))

[Out] :

eating ---> eat
eats ---> eat
eaten ---> eat
puts ---> put
putting ---> put

4. Regexp Stemmer – RegexpStemmer()

The Regexp stemmer identifies morphological affixes using regular expressions; substrings that match the regular expression are removed.

RegexpStemmer() is a class in NLTK that implements regex-based stemming. Let us try to understand it with an example.

Example of RegexpStemmer()

In this example, we first construct an object of RegexpStemmer() and then use the Regex stemming method to stem the list of words.

from nltk.stem import RegexpStemmer
regexp = RegexpStemmer('ing$|s$|e$|able$', min=4)
words = ['mass','was','bee','computer','advisable']
for word in words:
    print(word,"--->",regexp.stem(word))

[Out] :

mass ---> mas
was ---> was
bee ---> bee
computer ---> computer
advisable ---> advis
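Note the min=4 argument: words shorter than this minimum length are returned unchanged, which is why "was" and "bee" survive intact even though they end in "s" and "e". A minimal sketch of the difference:

from nltk.stem import RegexpStemmer

# Without a minimum length, even very short words lose matching suffixes
no_min = RegexpStemmer('ing$|s$|e$|able$')
with_min = RegexpStemmer('ing$|s$|e$|able$', min=4)

print(no_min.stem('was'))    # 'wa'  -- the trailing 's' is stripped
print(with_min.stem('was'))  # 'was' -- shorter than min, left unchanged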

Porter vs Snowball vs Lancaster vs Regex Stemming in NLTK

Let us compare the outputs of the different stemmers in NLTK using the following example:

from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer, RegexpStemmer
porter = PorterStemmer()
lancaster = LancasterStemmer()
snowball = SnowballStemmer(language='english')
regexp = RegexpStemmer('ing$|s$|e$|able$', min=4)
word_list = ["friend", "friendship", "friends", "friendships"]
print("{0:20}{1:20}{2:20}{3:30}{4:40}".format("Word","Porter Stemmer","Snowball Stemmer","Lancaster Stemmer","Regexp Stemmer"))
for word in word_list:
    print("{0:20}{1:20}{2:20}{3:30}{4:40}".format(word,porter.stem(word),snowball.stem(word),lancaster.stem(word),regexp.stem(word)))

[Out] :

Word                Porter Stemmer      Snowball Stemmer    Lancaster Stemmer             Regexp Stemmer
friend              friend              friend              friend                        friend
friendship          friendship          friendship          friend                        friendship
friends             friend              friend              friend                        friend
friendships         friendship          friendship          friend                        friendship

Stemming a Text File with NLTK

We demonstrated stemming individual words above, but what if you have a text file and want to stem its entire contents? Let us see how to do this.

In the example below, we define a function called stemming that uses word_tokenize to tokenize the text and then applies SnowballStemmer to reduce each token to its base form.

The stems are collected in a list, whose elements are then joined and returned as a single string.

from nltk.tokenize import word_tokenize
from nltk.stem import SnowballStemmer

# word_tokenize needs NLTK's Punkt tokenizer data (download it via nltk.download() if missing)
def stemming(text):
    snowball = SnowballStemmer(language='english')
    stems = []  # avoid shadowing the built-in name 'list'
    for token in word_tokenize(text):
        stems.append(snowball.stem(token))
    return ' '.join(stems)

with open('text_file.txt') as f:
    text = f.read()
print(stemming(text))

[Out] :

analyt vidhya provid a communiti base knowledg portal for analyt and data scienc profession. the aim of the platform is to becom a complet portal serv all knowledg and career need of data scienc profession

Conclusion

In this article, we showed you how to perform stemming in Python using the NLTK package for your natural language processing project. We looked at the different kinds of stemmers available in NLTK, along with examples of each. We then compared the outputs of the Porter, Snowball, Lancaster, and Regexp stemmers. Finally, we demonstrated how to stem an entire text file using the NLTK library.

I hope you found this information useful. If you'd like to get in touch with me, you can do so via:

LinkedIn

or if you have any other questions, you can also drop me an email.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion

