Textual Statistical Analysis Using pyNLPL (Pineapple) Library


Introduction

Statistical analysis of text is one of the important steps of text pre-processing. It helps us understand our text data in a deep, mathematical way, revealing hidden patterns and the weight of specific words in a sentence, and overall it helps in building good language models. The pyNLPL library, or Pineapple as we call it, is one of the best Python libraries for textual statistical analysis. It is also useful for other tasks such as cleaning and analyzing text, and it provides text pre-processing functions like tokenizers, n-gram extractors, and more. Additionally, pyNLPL can be used to build simple language models.

In this blog, you will learn how to perform text analysis using pyNLPL. We will first cover all the ways to install this library on our systems. Next, we will look at the term co-occurrence matrix and its implementation using the pyNLPL library. After that, we will learn how to create a frequency list to identify the most repeated words. Next, we will perform text distribution analysis, and then measure the similarity between two text documents or strings. Finally, we will understand and calculate the Levenshtein distance using this library. You can either follow along and code by yourself, or you can just click on the ‘Copy & Edit’ button in this link to execute all programs.

Learning Objectives

  • Understand how to install this library in detail through all available methods.
  • Learn how to create a Term Co-Occurrence Matrix to analyze word relationships.
  • Learn to perform common tasks like generating frequency lists and calculating Levenshtein distance.
  • Learn to perform advanced tasks like conducting text distribution analysis and measuring document similarity.

This article was published as a part of the Data Science Blogathon.

How to Install pyNLPL?

We can install this library in two ways: from PyPI or from GitHub.

Via PyPI

To install it using PyPI, paste the below command in your terminal.

pip install pynlpl

If you are using a notebook like Jupyter Notebook, Kaggle Notebook, or Google Colab, then add ‘!’ before the above command.
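For example, in a notebook cell the command becomes:

!pip install pynlpl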

Via GitHub

To install this library using GitHub, clone the official pyNLPL repository into your system using the below command.

git clone https://github.com/proycon/pynlpl.git

Then change your terminal’s directory to this folder using ‘cd’, and run the below command to install the library.

python3 setup.py install
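Note that setup.py-based installs are deprecated on recent versions of setuptools, so if the above command fails on your system, installing with pip from inside the cloned folder should work as well:

pip install .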

How to Use pyNLPL for Text Analysis?

Let us now explore how we can use pyNLPL for text analysis.

Term Co-Occurrence Matrix

A Term Co-Occurrence Matrix (TCM) is a statistical method to identify how often a word co-occurs with another specific word in a text. This matrix helps us understand the relationships between words and can reveal hidden patterns. It is commonly used in building text summaries, as the word relationships it captures can help generate concise summaries. Now, let’s see how to build this matrix using the pyNLPL library.

We will first import the FrequencyList class from pynlpl.statistics, which is used to count how many times a word is repeated in a text; we will explore it in more detail in a later section. We will also import defaultdict from the collections module. Next, we will create a function named create_cooccurrence_matrix, which takes a text input and a window size and returns the matrix. In this function, we will first split the text into individual words and create a co-occurrence matrix using defaultdict. For every word in the text, we will identify its context words within the specified window size and update the co-occurrence matrix. Finally, we will print the matrix and display the frequency of each term.

from pynlpl.statistics import FrequencyList
from collections import defaultdict

def create_cooccurrence_matrix(text, window_size=2):
    words = text.split()
    # Map each word to a FrequencyList of its context words
    cooccurrence_matrix = defaultdict(FrequencyList)
    
    for i, word in enumerate(words):
        # Context window: up to window_size words on either side
        start = max(i - window_size, 0)
        end = min(i + window_size + 1, len(words))
        context = words[start:i] + words[i+1:end]
        
        for context_word in context:
            # count() increments this context word's frequency
            cooccurrence_matrix[word.lower()].count(context_word.lower())
    
    return cooccurrence_matrix

text = "Hello this is Analytics Vidhya and you are doing great so far exploring data science topics. Analytics Vidhya is a great platform for learning data science and machine learning."

# Creating term co-occurrence matrix
cooccurrence_matrix = create_cooccurrence_matrix(text)

# Printing the term co-occurrence matrix
print("Term Co-occurrence Matrix:")
for term, context_freq_list in cooccurrence_matrix.items():
    print(f"{term}: {dict(context_freq_list)}")

Output:

[Image: term co-occurrence matrix output]
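If everything is wired up correctly, the first few printed entries should look something like the lines below (the order of context words inside each dictionary may differ, since FrequencyList iterates from most frequent to least frequent):

hello: {'this': 1, 'is': 1}
this: {'hello': 1, 'is': 1, 'analytics': 1}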

Frequency List

A frequency list contains the number of times each word is repeated in a document or a paragraph. It is a useful tool for understanding the main theme and context of the whole document. We usually use frequency lists in fields such as linguistics, information retrieval, and text mining. For example, search engines use frequency lists to rank web pages. We can also use them as a marketing strategy, analyzing product reviews to understand the overall public sentiment about a product.

Now, let’s see how to create this frequency list using the pyNLPL library. We will first import the FrequencyList class from pynlpl.statistics. Then, we will store a sample text in a variable and split the whole text into individual words. We will then pass this ‘words’ list to FrequencyList. Finally, we will iterate through the items in the frequency list and print each word and its corresponding frequency.

from pynlpl.statistics import FrequencyList

text = "Hello this is Analytics Vidhya and you are doing great so far exploring data science topics. Analytics Vidhya is a great platform for learning data science and machine learning."

# Splitting the text into lowercase words
words = text.lower().split()

# Counting how often each word occurs
freq_list = FrequencyList(words)

for word, freq in freq_list.items():
    print(f"{word}: {freq}")

Output:

[Image: word frequency list output]
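One caveat: since we tokenize with a plain split(), punctuation stays attached to words, so ‘topics.’ and ‘topics’ would be counted as two different types. As a minimal sketch (our own pre-processing step, not a pyNLPL function), we can use Python’s built-in re module to strip punctuation before counting, reusing the text variable from the snippet above:

import re

# \w+ matches runs of word characters, so "topics." and "topics"
# collapse into the single type "topics"
words = re.findall(r"\w+", text.lower())
freq_list = FrequencyList(words)

for word, freq in freq_list.items():
    print(f"{word}: {freq}")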

Text Distribution Analysis

In text distribution analysis, we calculate the frequency and probability distribution of words in a sentence to understand which words make up the context of the sentence. By calculating this distribution of word frequencies, we can identify the most common words and their statistical properties, like entropy, perplexity, mode, and maximum entropy. Let’s understand these properties one by one:

  • Entropy: Entropy is the measure of randomness in the distribution. In terms of textual data, higher entropy means the text has a wide vocabulary and words are repeated less often.
  • Perplexity: Perplexity is a measure of how well a language model predicts sample data. Lower perplexity means the text follows a more predictable pattern.
  • Mode: As we have all learnt since childhood, the mode is the most repeated word in the text.
  • Maximum Entropy: This property tells us the maximum entropy a text can have, i.e., the entropy if every word were equally likely. It provides a reference point against which to compare the actual entropy of the distribution.

We can also calculate the information content of a specific word, that is, the amount of information a word provides.
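To make these definitions concrete, here is a minimal hand-rolled sketch that computes the same quantities with plain Python, using the standard formulas: entropy H = -Σ p·log2(p), perplexity = 2^H, maximum entropy = log2(number of distinct words), and information content I(w) = -log2(p(w)). The sample text here is our own; pyNLPL’s Distribution class, used in the next section, computes these quantities for us.

import math
from collections import Counter

text = "data science and data analysis and data engineering"
words = text.lower().split()

# Probability of each word = count / total number of words
counts = Counter(words)
total = sum(counts.values())
probs = {word: count / total for word, count in counts.items()}

# Entropy: H = -sum(p * log2(p))
entropy = -sum(p * math.log2(p) for p in probs.values())

# Perplexity: 2 ** H
perplexity = 2 ** entropy

# Maximum entropy: log2(number of distinct words),
# reached when every word is equally likely
max_entropy = math.log2(len(probs))

# Mode and its information content: I(w) = -log2(p(w))
mode = max(counts, key=counts.get)
information = -math.log2(probs[mode])

print(f"Entropy: {entropy:.4f}")
print(f"Perplexity: {perplexity:.4f}")
print(f"Max Entropy: {max_entropy:.4f}")
print(f"Information Content of '{mode}': {information:.4f}")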

Implementation Using pyNLPL

Now let’s see how to implement all these using pyNLPL.

We will import the Distribution and FrequencyList classes from the pynlpl.statistics module. Next, we will create a sample text and count the frequency of each word within it, following the same steps as above. Then, we will create a Distribution object by passing it the frequency list. We will display the distribution of each word by looping through the items of the distribution variable. To calculate the entropy, we will call the distribution.entropy() function.

To calculate the perplexity, we will call distribution.perplexity(). For the mode, we will call distribution.mode(). To calculate the maximum entropy, we will call distribution.maxentropy(). Finally, to get the information content of a specific word, we will call distribution.information(word). In the example below, we will pass the mode word as the parameter to this function.

from pynlpl.statistics import Distribution, FrequencyList

text = "Hello this is Analytics Vidhya and you are doing great so far exploring data science topics. Analytics Vidhya is a great platform for learning data science and machine learning."

# Counting word frequencies
words = text.lower().split()

freq_list = FrequencyList(words)
word_counts = dict(freq_list.items())

# Creating a Distribution object from the word frequencies
distribution = Distribution(word_counts)

# Displaying the distribution
print("Distribution:")
for word, prob in distribution.items():
    print(f"{word}: {prob:.4f}")

# Various statistics
print("\nStatistics:")
print(f"Entropy: {distribution.entropy():.4f}")
print(f"Perplexity: {distribution.perplexity():.4f}")
print(f"Mode: {distribution.mode()}")
print(f"Max Entropy: {distribution.maxentropy():.4f}")

# Information content of the 'Mode' word
word = distribution.mode()
information_content = distribution.information(word)
print(f"Information Content of '{word}': {information_content:.4f}")

Output:

[Image: distribution statistics output]

Levenshtein Distance

Levenshtein distance measures the difference between two words. It calculates how many single-character edits (insertions, deletions, or substitutions) are needed for two words to become the same. This distance metric is commonly used for spell checking, DNA sequence analysis, and natural language processing tasks such as text similarity, which we will implement in the next section, and it can be used to build plagiarism detectors. By calculating the Levenshtein distance, we can understand the relationship between two words: if the distance is very small, the words could share the same meaning or context, and if it is very large, they are completely different words.
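To make the edit-count idea concrete, here is a minimal pure-Python sketch of the classic dynamic-programming algorithm behind this metric (pyNLPL ships its own implementation, which we will use below):

def levenshtein_distance(a, b):
    # previous[j] holds the edit distance between the processed
    # prefix of 'a' and the first j characters of 'b'
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            cost = 0 if char_a == char_b else 1
            current.append(min(
                previous[j] + 1,        # deletion
                current[j - 1] + 1,     # insertion
                previous[j - 1] + cost  # substitution
            ))
        previous = current
    return previous[-1]

print(levenshtein_distance("analytics", "analysis"))  # prints 2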

To calculate this distance, we will first import the levenshtein function from the pynlpl.statistics module. We will then define two words, ‘Analytics’ and ‘Analysis’. Next, we will pass these words into the levenshtein function, which will return the distance value. As you can see in the output, the Levenshtein distance between these two words is 2, meaning only two single-character edits are needed to convert ‘Analytics’ to ‘Analysis’: substituting the character ‘t‘ with ‘s‘, and then deleting the character ‘c‘.

from pynlpl.statistics import levenshtein

word1 = "Analytics"
word2 = "Analysis"
distance = levenshtein(word1, word2)

print(f"Levenshtein distance between '{word1}' and '{word2}': {distance}")

Output:

[Image: Levenshtein distance output]

Measuring Document Similarity

Measuring how similar two documents or sentences are is useful in many applications, as it tells us how closely related the two documents are. This technique is used in applications such as plagiarism checkers, code difference checkers, and more. By analyzing how similar two documents are, we can identify duplicates. It can also be used in recommendation systems: the search results shown to user A can be shown to user B who typed the same query.

To implement this, we will use the cosine similarity metric. First, we will import two things: the FrequencyList class from the pyNLPL library and sqrt from the math module. Now we will store two strings in two variables; instead of plain strings, we could also open two text documents. Next, we will create frequency lists of these strings by passing them to FrequencyList. We will then write a function named cosine_similarity, which takes those two frequency lists as inputs. In this function, we will first build count vectors from the frequency lists, and then calculate the cosine of the angle between these vectors, which gives a measure of their similarity. Finally, we will call the function and print the result.

from pynlpl.statistics import FrequencyList
from math import sqrt

doc1 = "Analytics Vidhya provides valuable insights and tutorials on data science and machine learning."
doc2 = "If you want tutorials on data science and machine learning, check out Analytics Vidhya."

# Creating FrequencyList objects for both documents
freq_list1 = FrequencyList(doc1.lower().split())
freq_list2 = FrequencyList(doc2.lower().split())

def cosine_similarity(freq_list1, freq_list2):
    # Build word -> count vectors from the two frequency lists
    vec1 = {word: freq_list1[word] for word, _ in freq_list1}
    vec2 = {word: freq_list2[word] for word, _ in freq_list2}

    # Dot product over the words the two documents share
    intersection = set(vec1.keys()) & set(vec2.keys())
    numerator = sum(vec1[word] * vec2[word] for word in intersection)

    # Product of the two vector magnitudes
    sum1 = sum(vec1[word] ** 2 for word in vec1.keys())
    sum2 = sum(vec2[word] ** 2 for word in vec2.keys())
    denominator = sqrt(sum1) * sqrt(sum2)

    if not denominator:
        return 0.0
    return float(numerator) / denominator

# Calculating cosine similarity
similarity = cosine_similarity(freq_list1, freq_list2)
print(f"Cosine Similarity: {similarity:.4f}")

Output:

[Image: cosine similarity output]
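Since the vectors here contain only non-negative word counts, this score always lies between 0 (no words in common) and 1 (identical word distributions); the closer it is to 1, the more similar the two documents are.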

Conclusion

pyNLPL is a powerful library for performing textual statistical analysis. Beyond text analysis, we can also use this library for text pre-processing techniques like tokenization, stemming, and n-gram extraction, and even for building simple language models. In this blog, we first covered all the ways of installing this library, and then used it to perform various tasks: implementing the term co-occurrence matrix, creating frequency lists to identify common words, performing text distribution analysis, calculating the Levenshtein distance, and measuring document similarity. Each of these techniques can be used to extract valuable insights from our textual data, making pyNLPL a valuable library. Next time you are doing text analysis, consider trying the pyNLPL (Pineapple) library.

Key Takeaways

  • PyNLPL (Pineapple) library is one of the best libraries for textual statistical analysis.
  • The Term Co-Occurrence Matrix helps us understand the relationships between words and can be useful in building summaries.
  • Frequency lists are useful to understand the main theme of the text or document.
  • Text distribution analysis and Levenshtein distance can help us understand the text similarity.
  • We can also use the PyNLPL library for text preprocessing and not just for textual statistical analysis.

Frequently Asked Questions

Q1. What is pyNLPL?

A. PyNLPL, also known as Pineapple, is a Python library used for textual statistical analysis and text pre-processing.

Q2. What is the benefit of measuring document similarity?

A. This technique allows us to measure how similar two documents or texts are and could be used in plagiarism checkers, code difference checkers, and more.

Q3. What is the Term Co-Occurrence Matrix used for?

A. The Term Co-Occurrence Matrix can be used to identify how often two words co-occur in a document.

Q4. How is Levenshtein distance useful?

A. We can use Levenshtein distance to find the difference between two words, which can be useful in building spell checkers.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

👋 Hello! I'm Adil Naib, a passionate data science enthusiast and Kaggle Notebooks Expert currently pursuing a degree in Data Science at Presidency University. Through courses and independent projects, I've acquired expertise in Exploratory Data Analysis, Data Visualization, and Predictive modelling.

🖋 I've written and published informative data science blogs on Analytics Vidhya. As I continue my journey in the data science world, I aim to create more valuable content that provides insights, tips, and solutions to complex data-related challenges.

💼 I'm excited to use my knowledge and talents in the real world and acquire first-hand experience through internships or other possibilities. I eventually want to work in data science and contribute significantly.

In my free time, I enjoy writing Data Science blogs and sharing my knowledge with the community.
