Topic Modeling Using Latent Dirichlet Allocation (LDA)

Kevin Kibe Last Updated : 06 Nov, 2024

Introduction

The internet holds a wealth of knowledge and information, so much of it that readers can spend considerable time and energy searching for accurate information about particular areas of interest. Recognizing and analyzing content in online social networks (OSNs) requires more effective techniques and tools, especially for those who employ user-generated content (UGC) as a source of data.

In NLP (Natural Language Processing), Topic Modeling identifies and extracts abstract topics from large collections of text documents. It uses algorithms such as LDA to identify latent topics in the text and represent each document as a mixture of these topics. Some uses of topic modeling include:

  • Text classification and document organization
  • Marketing and advertising to understand customer preferences
  • Recommendation systems to suggest similar content
  • News categorization and information retrieval systems
  • Customer service and support to categorize customer inquiries.

Researchers and analysts use Latent Dirichlet Allocation, a generative statistical model, to discover the connections in word distributions across the documents in a corpus. They employ the Variational Expectation Maximization (VEM) technique to obtain the maximum likelihood estimates from the entire corpus of text.

"

Learning Objectives

  • Perform topic modeling on a dataset of news headlines to surface the topics that stand out and uncover patterns and trends in the news.
  • Build a visual representation of the dominant topics, which news aggregators, journalists, and individuals can use to quickly gain a broad understanding of the current news landscape.
  • Understand the topic modeling pipeline and be able to implement it.

This article was published as a part of the Data Science Blogathon.

Important Libraries in Topic Modeling Project

In a topic modeling project, the following libraries play important roles:

  1. Gensim: It is a library for unsupervised topic modeling and document indexing. It provides efficient algorithms for modeling latent topics in large-scale text collections, such as those generated by search engines or online platforms.
  2. NLTK: The Natural Language Toolkit (NLTK) is a library for working with human language data. It provides tools for tokenizing, stemming, and lemmatizing text and for performing part-of-speech tagging, named entity recognition, and sentiment analysis.
  3. Matplotlib: It is a plotting library for Python. Researchers use it to visualize the results of topic models, such as the distribution of topics over documents or the relationships between words and topics.
  4. Scikit-learn: It is a library for machine learning in Python. It provides a wide range of algorithms for modeling topics, including LDA in NLP, Non-Negative Matrix Factorization (NMF), and others.
  5. Pandas: It is a library for data analysis in Python. It provides data structures and functions for working with structured data, such as the results of topic models, in a convenient and efficient manner.
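
The code in this article also uses the wordcloud package for the visualization in Step 5. Assuming a standard Python environment, the packages can be installed from a terminal (exact package names may vary with your setup):

pip install gensim nltk matplotlib scikit-learn pandas wordcloud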
"

What is Topic Modeling Used For?

Topic modeling is a versatile technique used in natural language processing and machine learning to uncover underlying themes or topics within a corpus of documents. It serves various purposes:

  • Document Organization: Topic modeling aids in organizing extensive document collections by naturally grouping them into clusters based on prevalent themes, facilitating efficient management and retrieval.
  • Information Retrieval: Enhancing search engines, topic modeling categorizes documents into topics, enabling users to access relevant information swiftly and accurately.
  • Content Recommendation: Online platforms leverage topic modeling to recommend personalized content to users, enhancing user experience and engagement across diverse domains such as news, e-commerce, and social media.
  • Text Summarization: By discerning primary topics within documents, topic modeling streamlines the process of generating concise summaries that encapsulate the core essence of the text, fostering comprehension and accessibility.
  • Understanding Textual Data: Researchers and analysts employ topic modeling to glean insights from vast troves of textual data, uncovering prevalent themes, trends, and patterns, thereby enriching comprehension and decision-making processes.
  • Sentiment Analysis: Topic modeling, when integrated with sentiment analysis techniques, enables the exploration of sentiment nuances associated with different topics, facilitating a deeper understanding of textual data and its emotional context.

In essence, topic modeling serves as a powerful tool in deciphering the intricate fabric of textual data, offering invaluable insights and facilitating a myriad of applications across artificial intelligence, data analysis, and information retrieval domains.

Dataset Description of the Topic Modeling Project

The dataset used is from Kaggle’s A Million News Headlines. The data contains 1.2 million rows and 2 columns, namely ‘publish_date’ and ‘headline_text’. The ‘headline_text’ column contains the news headlines, and the ‘publish_date’ column contains the date each headline was published.


Step 1: Importing Necessary Dependencies

The code below imports the libraries (listed in the previous section) needed for our project.

import pandas as pd
import matplotlib.pyplot as plt

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

import gensim
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.matutils import corpus2csc
from sklearn.feature_extraction.text import CountVectorizer

from wordcloud import WordCloud

Step 2: Importing and Reading Dataset

Next, we load our dataset, which is in CSV format, into a data frame. The code below loads the ‘abcnews-date-text.csv’ file into a data frame named ‘df’.

#loading the file from its local path into a dataframe
df=pd.read_csv(r"path\abcnews-date-text.csv\abcnews-date-text.csv")

df


Output:

The first rows of the dataset

Step 3: Data Preprocessing

The code below randomly samples 100,000 rows from the dataset, keeps only the “headline_text” column, and assigns the result to the variable ‘data.’

data = df.sample(n=100000, axis=0)  # randomly sample 100,000 rows to keep the run manageable

data = data['headline_text']  # keep only the headline_text column

Next, we perform lemmatization and removal of stop-words from the data.

Lemmatization reduces words to their base root, which lowers the dimensionality and complexity of the textual data. We assign WordNetLemmatizer() to the variable lemmatizer. This step improves the algorithm’s performance by helping it focus on the meaning of words rather than their surface forms.
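
As a quick illustration of what the lemmatizer does (a minimal sketch, assuming the WordNet data downloaded in the code block below is available):

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("rates"))         # -> 'rate'
print(lemmatizer.lemmatize("running", "v"))  # -> 'run' (part of speech given as verb)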

Stop-words are common words like “the” and “a” that often appear in text data but do not carry lots of meaning. Removing them helps reduce the data’s complexity, speeds up the algorithm, and makes it easier to find meaningful patterns.

The code below downloads dependencies for performing lemmatization and removing stop-words, then defines a function to process the data and finally applies the function to our data-frame ‘data.’

# lemmatization and removing stopwords

#downloading dependencies
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

#function to lemmatize and remove stopwords from the text data
def preprocess(text):
    text = text.lower()
    words = word_tokenize(text)
    words = [lemmatizer.lemmatize(word) for word in words if word not in stop_words]
    return words


#applying the function to the dataset
data = data.apply(preprocess)
data

Output:

The preprocessed headlines
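
As a quick sanity check, you can run the function on a single made-up headline (exact output may vary slightly with NLTK data versions):

print(preprocess("Police investigate highway crash near Sydney"))
# expected output: ['police', 'investigate', 'highway', 'crash', 'near', 'sydney']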

Step 4: Training the Model

The number of topics is set to 5 (you can set this to however many topics you want to extract from the data), the number of passes is 20, and alpha and eta are set to “auto,” which lets the model estimate appropriate values. You can experiment with different parameters to see their impact on the results.

The code below first filters the dictionary to remove words that appear in fewer than 5 documents or in more than 50% of the documents. This ensures the model ignores words that are too rare or too common to be informative; for example, news headlines from one country will mention that country constantly, which would skew the topics. We then build a bag-of-words corpus from the filtered dictionary, select the number of topics, train the LdaModel, retrieve the topics with show_topics, and print them.

# Create a dictionary from the preprocessed data
dictionary = Dictionary(data)

# Filter out words that appear in fewer than 5 documents or more than 50% of the documents
dictionary.filter_extremes(no_below=5, no_above=0.5)

bow_corpus = [dictionary.doc2bow(text) for text in data]

# Train the LDA model
num_topics = 5
ldamodel = LdaModel(bow_corpus, num_topics=num_topics, id2word=dictionary, passes=20, alpha='auto', eta='auto')

# Get the topics
topics = ldamodel.show_topics(num_topics=num_topics, num_words=10, log=False, formatted=False)

# Print the topics
for topic_id, topic in topics:
    print("Topic: {}".format(topic_id))
    print("Words: {}".format([word for word, _ in topic]))

Output:

The topics extracted
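
Beyond the per-topic word lists, you can also inspect the topic mixture of an individual headline, which is the “documents as mixtures of topics” view from the introduction. A minimal sketch using the model trained above:

# topic distribution of the first headline in the corpus
doc_topics = ldamodel.get_document_topics(bow_corpus[0])
print(doc_topics)  # e.g. [(0, 0.62), (3, 0.29), ...]; probabilities vary per run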

Step 5: Plotting a Word Cloud for the Topics

A word cloud is a simple but effective way of visualizing the most frequently occurring words in a large amount of text data. It displays the most frequent words in a graphical format, allowing the user to easily identify the key topics and themes present in the data. The size of each word represents its frequency of occurrence, so the largest words in the cloud correspond to the most commonly occurring words in the data.

This visualization tool can be a valuable asset in text data analysis, providing an easy-to-understand representation of the data’s content. For example, researchers can use a word cloud to quickly identify the dominant topics in a large corpus of news articles, customer reviews, or social media posts. This information can then guide further analysis, such as sentiment analysis or topic modeling, or inform decision-making, such as product development or marketing strategy.

The code below plots word clouds using topic words from the topic id using matplotlib.

# Plotting a word cloud for each topic
# (note: the model variable is ldamodel, as defined in Step 4)

for topic_id, topic_str in ldamodel.print_topics(num_topics=num_topics, num_words=20):
    # each entry of print_topics looks like '0.015*"police" + 0.012*"man" + ...'
    topic_words = " ".join(term.split("*")[1].replace('"', '').strip()
                           for term in topic_str.split(" + "))
    wordcloud = WordCloud(width=800, height=800, random_state=21, max_font_size=110).generate(topic_words)
    plt.figure()
    plt.imshow(wordcloud, interpolation="bilinear")
    plt.axis("off")
    plt.title("Topic: {}".format(topic_id))
    plt.show()

Output:


Topic 0 and 1


Topic 2, 3 and 4

How will LDA optimize the distributions?

Latent Dirichlet Allocation (LDA) is a generative probabilistic model used for topic modeling. Topic modeling is the process of identifying topics present in a collection of documents. LDA in NLP optimizes the distributions through an iterative process. This involves estimating the parameters of the model based on the observed data, which are the words in the documents.

In Latent Dirichlet Allocation (LDA), we perceive each document as a blend of topics. Each topic possesses a word distribution, which mathematically corresponds to the concept of “topic word distribution.” LDA then allocates a probability distribution over words for each topic. For each document, there is a ‘topic distribution’ indicating the probability of each topic being present in that document.

By analyzing these distributions, LDA determines the most likely topics associated with a given document. The process involves calculating the posterior distribution of topics given the observed words in the document. The equation governing this process encapsulates the essence of LDA.
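
For reference, this is the generative model from the original LDA paper (Blei, Ng, and Jordan, 2003), written in LaTeX below, where \theta is the document’s topic distribution, z_n the topic assigned to the n-th word, w_n the observed word, \alpha the Dirichlet prior on topic mixtures, and \beta the matrix of per-topic word probabilities:

p(\theta, z, w \mid \alpha, \beta) = p(\theta \mid \alpha) \prod_{n=1}^{N} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta)

The posterior p(\theta, z \mid w, \alpha, \beta) cannot be computed exactly, which is why approximate inference such as VEM or Gibbs sampling is used.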

Let’s simplify it:

  • Getting Started: LDA starts by guessing what topics might be in the documents and what words might belong to those topics.
  • Guess and Check: It then looks at each word in each document and makes a guess about which topic it might belong to.
  • Adjusting: Based on those guesses, LDA adjusts its ideas about which topics are in the documents and which words belong to each topic.
  • Repeating: It keeps doing this, guessing and adjusting, over and over again, until it’s not changing its ideas much anymore.
  • Figuring Out: Once it’s done adjusting, LDA figures out the final topics and which words belong to each topic based on its best guesses (a toy implementation of this loop is sketched below).
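
This guess-and-adjust loop can be made concrete with a toy collapsed Gibbs sampler, one standard inference method for LDA. This is an illustrative sketch on made-up inputs, not what gensim’s LdaModel uses internally (gensim uses variational Bayes):

import numpy as np

def gibbs_lda(docs, num_topics, vocab_size, iters=100, alpha=0.1, beta=0.01):
    # docs: list of documents, each a list of integer word ids
    rng = np.random.default_rng(0)
    ndk = np.zeros((len(docs), num_topics))   # per-document topic counts
    nkw = np.zeros((num_topics, vocab_size))  # per-topic word counts
    nk = np.zeros(num_topics)                 # total words per topic
    # Getting started: a random initial topic guess for every word
    z = [[int(rng.integers(num_topics)) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # Guess and check, adjust, repeat
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1  # forget the old guess
                # how plausible each topic is for this word, given current counts
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = int(rng.choice(num_topics, p=p / p.sum()))  # new guess
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

Normalizing the rows of the returned count matrices gives the estimated document-topic and topic-word distributions.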

Conclusion

Topic modeling is a powerful tool for analyzing and understanding large collections of text data. By discovering latent topics and the relationships between words and documents, it can uncover hidden patterns and trends and provide valuable insights into the underlying structure of text data.

The combination of powerful libraries such as Gensim, NLTK, Matplotlib, scikit-learn, and Pandas makes it easier to perform topic modeling and gain insights from text data. As individuals, organizations, and society continue to generate more text data, topic modeling and its role in data analysis and understanding become increasingly important.

Feel free to leave your comments, and I hope the article has provided insights into topic modeling with Latent Dirichlet Allocation (LDA) and the various use cases of this algorithm.

The code can be found in my GitHub repository.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Frequently Asked Questions

Q1. What is a latent Dirichlet allocation?

A. Latent Dirichlet Allocation (LDA) is a generative probabilistic model used in natural language processing. It helps to discover abstract topics within a collection of documents. LDA assumes each document is a mixture of a small number of topics. Each word in the document is attributable to one of the document’s topics.

Q2. What is LDA and how does it work?

A. LDA works by iteratively updating the topic distribution for each document and the word distribution for each topic. It assigns topics to words in a way that maximizes the likelihood of the observed words in the documents. The process involves two main steps. First, estimating the topic distribution for each document. Second, estimating the word distribution for each topic. These estimates are then updated iteratively until convergence.

Q3. What is the difference between LSA and latent Dirichlet allocation?

A. Latent Semantic Analysis (LSA) uses singular value decomposition (SVD) to reduce the dimensionality of the term-document matrix. This captures the relationships between terms and documents. LDA, on the other hand, is a probabilistic model. It assigns topics to words and documents, providing a more interpretable set of topics by modeling the generation of documents.
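
To make the contrast concrete, here is a minimal scikit-learn sketch of both approaches on a made-up toy corpus (corpus and parameters are purely illustrative):

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

docs = ["police investigate crash", "new budget cuts announced",
        "crash closes highway", "budget debate continues"]

# LSA: SVD over a tf-idf matrix yields real-valued "concept" dimensions
lsa = TruncatedSVD(n_components=2).fit(TfidfVectorizer().fit_transform(docs))

# LDA: probabilistic topics over raw counts yield per-topic word distributions
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(
    CountVectorizer().fit_transform(docs))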

Q4. What is the difference between LDA and K-means?

A. LDA is a probabilistic model for topic modeling. It assumes that each document can contain multiple topics with different proportions. K-means is a clustering algorithm. It partitions data into a fixed number of clusters, with each document belonging to only one cluster. LDA provides a soft clustering of words into topics, while K-means provides a hard clustering of documents into clusters.

