Imagine you’re tasked with reading through mountains of documents and extracting the key points to make sense of it all. It feels overwhelming, right? That’s where Sumy comes in, acting like a digital assistant that can swiftly distill extensive texts into concise, digestible insights. Picture yourself cutting through the noise and focusing on what really matters, all thanks to the Sumy library. This article will take you on a journey through Sumy’s capabilities, from its diverse summarization algorithms to practical implementation tips, turning the daunting task of summarization into an efficient, almost effortless process. Get ready to dive into the world of automated summarization and discover how Sumy can revolutionize the way you handle information.
This article was published as a part of the Data Science Blogathon.
Sumy is a Python library for Natural Language Processing tasks, used mainly for the automatic summarization of paragraphs. We can choose among summarizers based on various algorithms, such as Luhn, Edmundson, LSA, LexRank, and KL. We will learn about each of these algorithms in the upcoming sections. Sumy requires minimal code to build a summary, integrates easily with other Natural Language Processing tasks, and is well suited to summarizing large documents.
Now let’s look at how to install this library on our system.
To install it via PyPI, paste the below command in your terminal.
pip install sumy
If you are working in a notebook such as Jupyter Notebook, Kaggle, or Google Colab, then add ‘!’ before the above command.
Tokenization is one of the most important tasks in text preprocessing. In tokenization, we divide a paragraph into sentences and then break those sentences down into individual words. By tokenizing the text, Sumy can better understand its structure and meaning, which improves the accuracy and quality of the generated summaries.
Now, let’s see how to build a tokenizer using the Sumy library. We will first import the Tokenizer module from Sumy and download ‘punkt’ from NLTK. Then we will create a Tokenizer instance for the English language, split a sample text into sentences, and print the tokenized words for each sentence.
from sumy.nlp.tokenizers import Tokenizer
import nltk
nltk.download('punkt')

tokenizer = Tokenizer("en")
sentences = tokenizer.to_sentences("""Hello, this is Analytics Vidhya! We offer a wide
range of articles, tutorials, and resources on various topics in AI and Data Science.
Our mission is to provide quality education and knowledge sharing to help you excel
in your career and academic pursuits. Whether you're a beginner looking to learn
the basics of coding or an experienced developer seeking advanced concepts,
Analytics Vidhya has something for everyone.""")

for sentence in sentences:
    print(tokenizer.to_words(sentence))
Output:
Stemming is the process of reducing a word to its base or root form. This helps in normalizing words so that different forms of a word are treated as the same term. By doing this, summarization algorithms can more effectively recognize and group similar words, thereby improving the summarization quality. The stemmer is particularly useful when we have large texts that have various forms of the same words.
To create a stemmer using the Sumy library, we will first import the `Stemmer` module from Sumy. Then, we will create an object of `Stemmer` for the English language. Next, we will pass a word to the stemmer to reduce it to its root form. Finally, we will print the stemmed word.
from sumy.nlp.stemmers import Stemmer
stemmer = Stemmer("en")
stem = stemmer("Blogging")
print(stem)
Output:
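To see why stemming helps normalization, here is a toy suffix-stripping stemmer in plain Python. This is only an illustration of the idea; Sumy’s `Stemmer` wraps a real Snowball stemmer with far more rules and exception handling:

```python
def simple_stem(word):
    """Very simplified stemmer: strip a common suffix, collapse doubled letters."""
    word = word.lower()
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            word = word[: -len(suffix)]
            break
    # Collapse a trailing doubled letter ("blogg" -> "blog")
    if len(word) >= 2 and word[-1] == word[-2]:
        word = word[:-1]
    return word

for w in ("Blogging", "blogged", "blogs"):
    print(w, "->", simple_stem(w))  # all three reduce to "blog"
```

Because all three inflected forms map to the same root, a frequency-based summarizer counts them as one term instead of three rare ones.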
Let us now look into the different summarization algorithms.
The Luhn Summarizer is one of the summarization algorithms provided by the Sumy library. It is based on frequency analysis: the importance of a sentence is determined by the frequency of significant words within it. The algorithm identifies the words most relevant to the topic of the text by filtering out common stop words, and then ranks sentences accordingly. The Luhn Summarizer is effective for extracting key sentences from a document. Here’s how to build the Luhn Summarizer:
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.luhn import LuhnSummarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
import nltk
nltk.download('punkt')
def summarize_paragraph(paragraph, sentences_count=2):
    parser = PlaintextParser.from_string(paragraph, Tokenizer("english"))
    summarizer = LuhnSummarizer(Stemmer("english"))
    summarizer.stop_words = get_stop_words("english")
    summary = summarizer(parser.document, sentences_count)
    return summary

if __name__ == "__main__":
    paragraph = """Artificial intelligence (AI) is intelligence demonstrated by machines, in contrast
    to the natural intelligence displayed by humans and animals. Leading AI textbooks define
    the field as the study of "intelligent agents": any device that perceives its environment
    and takes actions that maximize its chance of successfully achieving its goals. Colloquially,
    the term "artificial intelligence" is often used to describe machines (or computers) that mimic
    "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"."""

    sentences_count = 2
    summary = summarize_paragraph(paragraph, sentences_count)

    for sentence in summary:
        print(sentence)
Output:
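Conceptually, Luhn-style frequency scoring can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not Sumy’s actual implementation (the real algorithm also considers how significant words cluster within each sentence, and uses proper tokenizers and stemming):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "of", "and", "to", "in", "is", "that", "by", "a", "as", "or"}

def luhn_style_scores(text):
    # Crude sentence and word splitting (Sumy uses NLTK tokenizers)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words_per_sentence = [
        [w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOP_WORDS]
        for s in sentences
    ]
    # Significant words: non-stop words that occur at least twice
    freq = Counter(w for ws in words_per_sentence for w in ws)
    significant = {w for w, count in freq.items() if count >= 2}
    # Score each sentence by how many significant words it contains
    return [(sum(w in significant for w in ws), s)
            for ws, s in zip(words_per_sentence, sentences)]

scores = luhn_style_scores(
    "Sumy summarizes text. Sumy ranks sentences by significant words. Cats sleep."
)
for score, sentence in sorted(scores, reverse=True)[:2]:
    print(score, sentence)
```

The two sentences sharing the frequent word “Sumy” score highest, while the unrelated sentence scores zero, which is exactly why frequency analysis surfaces on-topic sentences.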
The Edmundson Summarizer is another powerful algorithm provided by the Sumy library. Unlike summarizers that rely purely on statistical and frequency-based methods, the Edmundson Summarizer allows a more tailored approach through the use of bonus words, stigma words, and null words. These word lists let the algorithm emphasize, de-emphasize, or ignore particular terms when ranking sentences. Here’s how to build the Edmundson Summarizer:
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.edmundson import EdmundsonSummarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
import nltk
nltk.download('punkt')
def summarize_paragraph(paragraph, sentences_count=2, bonus_words=None, stigma_words=None, null_words=None):
    parser = PlaintextParser.from_string(paragraph, Tokenizer("english"))
    summarizer = EdmundsonSummarizer(Stemmer("english"))
    summarizer.stop_words = get_stop_words("english")

    if bonus_words:
        summarizer.bonus_words = bonus_words
    if stigma_words:
        summarizer.stigma_words = stigma_words
    if null_words:
        summarizer.null_words = null_words

    summary = summarizer(parser.document, sentences_count)
    return summary

if __name__ == "__main__":
    paragraph = """Artificial intelligence (AI) is intelligence demonstrated by machines, in contrast
    to the natural intelligence displayed by humans and animals. Leading AI textbooks define
    the field as the study of "intelligent agents": any device that perceives its environment
    and takes actions that maximize its chance of successfully achieving its goals. Colloquially,
    the term "artificial intelligence" is often used to describe machines (or computers) that mimic
    "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"."""

    sentences_count = 2
    bonus_words = ["intelligence", "AI"]
    stigma_words = ["contrast"]
    null_words = ["the", "of", "and", "to", "in"]

    summary = summarize_paragraph(paragraph, sentences_count, bonus_words, stigma_words, null_words)

    for sentence in summary:
        print(sentence)
Output:
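The effect of bonus and stigma words can be illustrated with a toy cue-scoring function in plain Python. This is a greatly simplified sketch of the cue method, not Sumy’s implementation: bonus words raise a sentence’s score, stigma words lower it, and everything else (including Edmundson’s null words) contributes nothing:

```python
def cue_score(sentence_words, bonus_words, stigma_words):
    # +1 per bonus word, -1 per stigma word, 0 for everything else
    score = 0
    for word in sentence_words:
        if word in bonus_words:
            score += 1
        elif word in stigma_words:
            score -= 1
    return score

words = ["artificial", "intelligence", "in", "contrast",
         "to", "natural", "intelligence"]
print(cue_score(words,
                bonus_words={"intelligence", "ai"},
                stigma_words={"contrast"}))  # prints 1
```

Here the two occurrences of “intelligence” add 2, “contrast” subtracts 1, and the remaining words are neutral, so the sentence nets a score of 1.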
The LSA (Latent Semantic Analysis) summarizer is often regarded as the most sophisticated of the three, because it identifies patterns and relationships between terms and sentences rather than relying solely on frequency analysis. By capturing the underlying meaning and context of the input text, it generates more contextually accurate summaries. Here’s how to build the LSA Summarizer:
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
import nltk
nltk.download('punkt')
def summarize_paragraph(paragraph, sentences_count=2):
    parser = PlaintextParser.from_string(paragraph, Tokenizer("english"))
    summarizer = LsaSummarizer(Stemmer("english"))
    summarizer.stop_words = get_stop_words("english")
    summary = summarizer(parser.document, sentences_count)
    return summary

if __name__ == "__main__":
    paragraph = """Artificial intelligence (AI) is intelligence demonstrated by machines, in contrast
    to the natural intelligence displayed by humans and animals. Leading AI textbooks define
    the field as the study of "intelligent agents": any device that perceives its environment
    and takes actions that maximize its chance of successfully achieving its goals. Colloquially,
    the term "artificial intelligence" is often used to describe machines (or computers) that mimic
    "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"."""

    sentences_count = 2
    summary = summarize_paragraph(paragraph, sentences_count)

    for sentence in summary:
        print(sentence)
Output:
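Although the examples above focus on Luhn, Edmundson, and LSA, Sumy also ships a LexRank summarizer, which scores a sentence by how similar it is to the other sentences in the document. A simplified, pure-Python sketch of the idea follows; real LexRank runs a PageRank-style iteration over the similarity matrix, whereas this toy version just counts similar neighbors (degree centrality):

```python
import math
import re
from collections import Counter

def tf_cosine(words_a, words_b):
    # Cosine similarity between two bag-of-words frequency vectors
    ca, cb = Counter(words_a), Counter(words_b)
    dot = sum(ca[w] * cb[w] for w in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def lexrank_degree_scores(text, threshold=0.1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    # Sentences similar to many others are central, hence important
    scores = []
    for i, sentence in enumerate(sentences):
        degree = sum(
            1
            for j in range(len(sentences))
            if i != j and tf_cosine(words[i], words[j]) > threshold
        )
        scores.append((degree, sentence))
    return scores

text = "The cat sat on the mat. The cat lay on the mat. Quantum physics is hard."
for degree, sentence in lexrank_degree_scores(text):
    print(degree, sentence)
```

The two mutually similar “cat” sentences each have one similar neighbor, while the unrelated sentence has none, so the graph view singles out the central sentences without any hand-picked cue words.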
Sumy is one of the best automatic text summarizing libraries available. We can also use this library for tasks like tokenization and stemming. By using different algorithms like Luhn, Edmundson, and LSA, we can generate concise and meaningful summaries based on our specific needs. Although we have used a smaller paragraph for examples, we can summarize lengthy documents using this library in no time.
Q1. What is Sumy?
A. Sumy is a Python library for automatic text summarization using various algorithms.

Q2. Which summarization algorithms does Sumy support?
A. Sumy supports algorithms like Luhn, Edmundson, LSA, LexRank, and KL-summarizers.

Q3. What is tokenization?
A. Tokenization is dividing text into sentences and words, improving summarization accuracy.

Q4. What is stemming?
A. Stemming reduces words to their base or root forms for better summarization.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.