Natural Language Processing (NLP) is a hotbed of research in data science these days and one of the most common applications of NLP is sentiment analysis. From opinion polls to creating entire marketing strategies, this domain has completely reshaped the way businesses work, which is why this is an area every data scientist must be familiar with.
Thousands of text documents can be processed for sentiment (and other features including named entities, topics, themes, etc.) in seconds, compared to the hours it would take a team of people to manually complete the same task.
In this article, we will learn how to solve the Twitter Sentiment Analysis Practice Problem.
We will do so by following a sequence of steps needed to solve a general sentiment analysis problem. We will start with preprocessing and cleaning of the raw text of the tweets. Then we will explore the cleaned text and try to get some intuition about the context of the tweets. After that, we will extract numerical features from the data and finally use these feature sets to train models and identify the sentiments of the tweets.
This is one of the most interesting challenges in NLP so I’m very excited to take this journey with you!
Let’s go through the problem statement once as it is very crucial to understand the objective before working on the dataset. The problem statement is as follows:
The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets.
Formally, given a training sample of tweets and labels, where label ‘1’ denotes the tweet is racist/sexist and label ‘0’ denotes the tweet is not racist/sexist, your objective is to predict the labels on the given test dataset.
Note: The evaluation metric for this practice problem is the F1-Score.
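As a quick refresher, the F1-Score is the harmonic mean of precision and recall, F1 = 2 * (P * R) / (P + R). Here is a minimal sketch with sklearn (the labels below are made up purely for illustration):

from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1, 1, 0, 0]  # actual labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]  # hypothetical predictions

# precision = 3/4, recall = 3/4, so F1 = 2 * (0.75 * 0.75) / (0.75 + 0.75) = 0.75
print(f1_score(y_true, y_pred))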
Personally, I quite like this task because hate speech, trolling and social media bullying have become serious issues these days and a system that is able to detect such texts would surely be of great use in making the internet and social media a better and bully-free place. Let’s look at each step in detail now.
Take a look at the pictures below depicting two scenarios of an office space – one is untidy and the other is clean and organized.
You are searching for a document in this office space. In which scenario are you more likely to find the document easily? Of course, in the less cluttered one because each item is kept in its proper place. The data cleaning exercise is quite similar. If the data is arranged in a structured format then it becomes easier to find the right information.
The preprocessing of the text data is an essential step as it makes the raw text ready for mining, i.e., it becomes easier to extract information from the text and apply machine learning algorithms to it. If we skip this step, there is a higher chance that we end up working with noisy and inconsistent data. The objective of this step is to clean out the noise that is less relevant for finding the sentiment of tweets, such as punctuation, special characters, numbers, and terms which don't carry much weight in the context of the text.
In one of the later stages, we will be extracting numeric features from our Twitter text data. This feature space is created using all the unique words present in the entire data. So, if we preprocess our data well, then we would be able to get a better quality feature space.
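As a small illustration of this idea (the two tweets below are made up, and the clean() helper is only a stand-in for the steps we perform later), notice how cleaning shrinks and normalises the set of unique tokens that would become the feature space:

import re

raw_tweets = ["@user Loving the weather today!!! #sunny", "loving this WEATHER :) #sunny 123"]

def clean(tweet):
    tweet = re.sub(r"@[\w]*", "", tweet)       # drop twitter handles
    tweet = re.sub(r"[^a-zA-Z#]", " ", tweet)  # keep only letters and '#'
    return tweet.lower().split()

raw_vocab = set(w for t in raw_tweets for w in t.lower().split())
clean_vocab = set(w for t in raw_tweets for w in clean(t))

print(len(raw_vocab), len(clean_vocab))  # 9 vs 6 -- the cleaned vocabulary is smaller and more consistent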
Let’s first read our data and load the necessary libraries. You can download the datasets from here.
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import string
import nltk
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

%matplotlib inline
train = pd.read_csv('train_E6oV3lV.csv')
test = pd.read_csv('test_tweets_anuFYb8.csv')
Let’s check the first few rows of the train dataset.
train.head()
The data has 3 columns: id, label, and tweet. label is the binary target variable and tweet contains the tweets that we will clean and preprocess.
Initial data cleaning requirements that we can think of after looking at the top 5 records:
- The Twitter handles (@user) are masked and carry no useful information, so they can be removed.
- Punctuation, numbers, and special characters don't help much either, so it is better to remove them.
- Very short words add little value, so we will get rid of them as well.
As mentioned above, the tweets contain lots of Twitter handles (@user), which is how a Twitter user is mentioned on Twitter. We will remove all these handles from the data as they don't convey much information.
For our convenience, let’s first combine train and test set. This saves the trouble of performing the same steps twice on test and train.
combi = pd.concat([train, test], ignore_index=True)  # DataFrame.append has been removed in newer pandas versions
Given below is a user-defined function to remove unwanted text patterns from the tweets. It takes two arguments, one is the original string of text and the other is the pattern of text that we want to remove from the string. The function returns the same input string but without the given pattern. We will use this function to remove the pattern ‘@user’ from all the tweets in our data.
def remove_pattern(input_txt, pattern):
    r = re.findall(pattern, input_txt)
    for i in r:
        input_txt = re.sub(i, '', input_txt)
    return input_txt
Now let's create a new column, tidy_tweet, which will contain the cleaned and processed tweets. Note that we have passed "@[\w]*" as the pattern to the remove_pattern function. It is a regular expression that will match any word starting with '@'.
# remove twitter handles (@user)
combi['tidy_tweet'] = np.vectorize(remove_pattern)(combi['tweet'], r"@[\w]*")
As discussed, punctuation marks, numbers, and special characters do not help much. It is better to remove them from the text, just as we removed the Twitter handles. Here we will replace everything except letters and hashtags with spaces.
# remove special characters, numbers, punctuations
combi['tidy_tweet'] = combi['tidy_tweet'].str.replace("[^a-zA-Z#]", " ", regex=True)
We have to be a little careful here in selecting the length of the words which we want to remove. So, I have decided to remove all the words having length 3 or less. For example, terms like “hmm”, “oh” are of very little use. It is better to get rid of them.
combi['tidy_tweet'] = combi['tidy_tweet'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>3]))
Let’s take another look at the first few rows of the combined dataframe.
combi.head()
You can see the difference between the raw tweets and the cleaned tweets (tidy_tweet) quite clearly. Only the important words in the tweets have been retained and the noise (numbers, punctuations, and special characters) has been removed.
Now we will tokenize all the cleaned tweets in our dataset. Tokens are individual terms or words, and tokenization is the process of splitting a string of text into tokens.
tokenized_tweet = combi['tidy_tweet'].apply(lambda x: x.split())
tokenized_tweet.head()
Stemming is a rule-based process of stripping suffixes ("ing", "ly", "es", "s", etc.) from a word. For example, "play", "player", "played", "plays" and "playing" are different variations of the word "play".
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
tokenized_tweet = tokenized_tweet.apply(lambda x: [stemmer.stem(i) for i in x])  # stemming
tokenized_tweet.head()
Now let’s stitch these tokens back together.
for i in range(len(tokenized_tweet)):
    tokenized_tweet[i] = ' '.join(tokenized_tweet[i])

combi['tidy_tweet'] = tokenized_tweet
In this section, we will explore the cleaned tweet text. Exploring and visualizing data, no matter whether it's text or any other data, is an essential step in gaining insights. Do not limit yourself to only the methods covered in this tutorial; feel free to explore the data as much as possible.
Before we begin exploration, we must think and ask questions related to the data in hand. A few probable questions are as follows:
Now I want to see how well the given sentiments are distributed across the train dataset. One way to accomplish this is to look at the common words by plotting wordclouds.
A wordcloud is a visualization wherein the most frequent words appear in large size and the less frequent words appear in smaller sizes.
Let's visualize all the words in our data using the wordcloud plot.
all_words = ' '.join([text for text in combi['tidy_tweet']])

from wordcloud import WordCloud
wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(all_words)

plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()
We can see that most of the words are positive or neutral, with happy and love being the most frequent ones. However, this plot doesn't give us any idea about the words associated with the racist/sexist tweets. Hence, we will plot separate wordclouds for both the classes (racist/sexist or not) in our train data.
normal_words = ' '.join([text for text in combi['tidy_tweet'][combi['label'] == 0]])

wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(normal_words)

plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()
We can see that most of the words are positive or neutral, with happy, smile, and love being the most frequent ones. Hence, most of the frequent words are compatible with the sentiment, i.e., non-racist/sexist tweets. Similarly, we will plot the wordcloud for the other sentiment. Expect to see negative, racist, and sexist terms.
negative_words = ' '.join([text for text in combi['tidy_tweet'][combi['label'] == 1]])

wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(negative_words)

plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()
As we can clearly see, most of the words have negative connotations. So, it seems we have pretty good text data to work on. Next, we will look at the hashtags/trends in our Twitter data.
Hashtags on Twitter are synonymous with the ongoing trends on Twitter at any particular point in time. We should check whether these hashtags add any value to our sentiment analysis task, i.e., whether they help in distinguishing tweets into the different sentiments.
For instance, given below is a tweet from our dataset:
The tweet seems sexist in nature and the hashtags in the tweet convey the same feeling.
We will store all the trend terms in two separate lists — one for non-racist/sexist tweets and the other for racist/sexist tweets.
# function to collect hashtags
def hashtag_extract(x):
    hashtags = []
    # Loop over the words in the tweet
    for i in x:
        ht = re.findall(r"#(\w+)", i)
        hashtags.append(ht)
    return hashtags
# extracting hashtags from non racist/sexist tweets
HT_regular = hashtag_extract(combi['tidy_tweet'][combi['label'] == 0])

# extracting hashtags from racist/sexist tweets
HT_negative = hashtag_extract(combi['tidy_tweet'][combi['label'] == 1])

# unnesting list
HT_regular = sum(HT_regular, [])
HT_negative = sum(HT_negative, [])
Now that we have prepared our lists of hashtags for both the sentiments, we can plot the top n hashtags. So, first let’s check the hashtags in the non-racist/sexist tweets.
Non-Racist/Sexist Tweets
a = nltk.FreqDist(HT_regular)
d = pd.DataFrame({'Hashtag': list(a.keys()), 'Count': list(a.values())})

# selecting top 10 most frequent hashtags
d = d.nlargest(columns="Count", n=10)

plt.figure(figsize=(16, 5))
ax = sns.barplot(data=d, x="Hashtag", y="Count")
ax.set(ylabel='Count')
plt.show()
All these hashtags are positive and it makes sense. I am expecting negative terms in the plot of the second list. Let’s check the most frequent hashtags appearing in the racist/sexist tweets.
Racist/Sexist Tweets
b = nltk.FreqDist(HT_negative)
e = pd.DataFrame({'Hashtag': list(b.keys()), 'Count': list(b.values())})

# selecting top 10 most frequent hashtags
e = e.nlargest(columns="Count", n=10)

plt.figure(figsize=(16, 5))
ax = sns.barplot(data=e, x="Hashtag", y="Count")
ax.set(ylabel='Count')
plt.show()
As expected, most of the terms are negative with a few neutral terms as well. So, it’s not a bad idea to keep these hashtags in our data as they contain useful information. Next, we will try to extract features from the tokenized tweets.
To analyze preprocessed data, it needs to be converted into features. Depending upon the usage, text features can be constructed using assorted techniques such as Bag-of-Words, TF-IDF, and Word Embeddings. In this article, we will be covering only Bag-of-Words and TF-IDF.
Bag-of-Words is a method to represent text as numerical features. Consider a corpus (a collection of texts) called C of D documents {d1, d2, ..., dD} and N unique tokens extracted out of the corpus C. The N tokens (words) will form a list, and the size of the bag-of-words matrix M will be D x N. Each row in the matrix M contains the frequency of tokens in document d(i).
Let us understand this using a simple example. Suppose we have only 2 documents:
D1: He is a lazy boy. She is also lazy.
D2: Smith is a lazy person.
The list created would consist of all the unique tokens in the corpus C.
List = ['He', 'She', 'lazy', 'boy', 'Smith', 'person']
Here, D=2, N=6
The matrix M of size 2 x 6 will be represented as:

        He   She   lazy   boy   Smith   person
D1       1     1      2     1       0        0
D2       0     0      1     0       1        1
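As a quick sanity check, here is a minimal sketch that builds a similar matrix for D1 and D2 with sklearn's CountVectorizer (by default it lowercases the text and drops single-character tokens such as "a", so its vocabulary will differ slightly from the hand-built list above; get_feature_names_out requires scikit-learn 1.0+):

from sklearn.feature_extraction.text import CountVectorizer

docs = ["He is a lazy boy. She is also lazy.", "Smith is a lazy person."]

vectorizer = CountVectorizer()
M = vectorizer.fit_transform(docs)           # D x N sparse matrix of token counts

print(vectorizer.get_feature_names_out())    # the N unique tokens (the vocabulary)
print(M.toarray())                           # each row holds the token counts for one document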
Now the columns in the above matrix can be used as features to build a classification model. Bag-of-Words features can easily be created using sklearn's CountVectorizer. We will set the parameter max_features=1000 to select only the top 1000 terms ordered by term frequency across the corpus.
from sklearn.feature_extraction.text import CountVectorizer

bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')

# bag-of-words feature matrix
bow = bow_vectorizer.fit_transform(combi['tidy_tweet'])
TF-IDF is another frequency-based method, but it differs from the bag-of-words approach in that it takes into account not just the occurrence of a word in a single document (or tweet) but in the entire corpus.
TF-IDF works by penalizing the common words by assigning them lower weights while giving importance to words which are rare in the entire corpus but appear in good numbers in few documents.
Let's have a look at the important terms related to TF-IDF:
- TF = (number of times term t appears in a document) / (number of terms in the document)
- IDF = log(N/n), where N is the total number of documents and n is the number of documents the term t has appeared in
- TF-IDF = TF * IDF
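To make the formulas concrete, here is a quick hand computation, a minimal sketch using the two toy documents from the Bag-of-Words example (note that sklearn's TfidfVectorizer uses a smoothed IDF and L2 normalisation, so its values will differ slightly from this plain formula):

import math

# toy documents from the Bag-of-Words example
d1 = "he is a lazy boy she is also lazy".split()
d2 = "smith is a lazy person".split()

tf = d1.count("lazy") / len(d1)               # TF of "lazy" in D1 = 2/9
n = sum("lazy" in doc for doc in (d1, d2))    # number of documents containing "lazy"
idf = math.log(2 / n)                         # IDF = log(N/n) = log(2/2) = 0

print(tf * idf)  # 0.0 -- "lazy" appears in every document, so TF-IDF gives it no weight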
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')

# TF-IDF feature matrix
tfidf = tfidf_vectorizer.fit_transform(combi['tidy_tweet'])
We are now done with all the pre-modeling stages required to get the data into the proper form and shape. Now we will build predictive models on the dataset using the two feature sets — Bag-of-Words and TF-IDF.
We will use logistic regression to build the models. It predicts the probability of occurrence of an event by fitting data to a logit function.
The following equation is used in Logistic Regression:

p = 1 / (1 + e^-(β0 + β1x1 + ... + βnxn))

where p is the predicted probability of the positive class and the β's are the model coefficients.
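Just to build intuition for this mapping, here is a minimal sketch of the sigmoid function (scikit-learn handles this internally when we call LogisticRegression below):

import numpy as np

def sigmoid(z):
    # map a real-valued score z = b0 + b1*x1 + ... + bn*xn to a probability in (0, 1)
    return 1 / (1 + np.exp(-z))

print(sigmoid(-2), sigmoid(0), sigmoid(2))  # ~0.12, 0.5, ~0.88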
Read this article to know more about Logistic Regression.
Note: If you are interested in trying out other machine learning algorithms like RandomForest, Support Vector Machine, or XGBoost, then we have a free full-fledged course on Sentiment Analysis for you.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

train_bow = bow[:31962, :]
test_bow = bow[31962:, :]

# splitting data into training and validation set
xtrain_bow, xvalid_bow, ytrain, yvalid = train_test_split(train_bow, train['label'], random_state=42, test_size=0.3)

lreg = LogisticRegression()
lreg.fit(xtrain_bow, ytrain)  # training the model

prediction = lreg.predict_proba(xvalid_bow)  # predicting on the validation set
prediction_int = prediction[:, 1] >= 0.3  # if the predicted probability is greater than or equal to 0.3 then 1, else 0
prediction_int = prediction_int.astype(int)

f1_score(yvalid, prediction_int)  # calculating f1 score
Output: 0.53
We trained the logistic regression model on the Bag-of-Words features and it gave us an F1-score of 0.53 for the validation set. Now we will use this model to predict for the test data.
test_pred = lreg.predict_proba(test_bow)
test_pred_int = test_pred[:, 1] >= 0.3
test_pred_int = test_pred_int.astype(int)
test['label'] = test_pred_int

submission = test[['id', 'label']]
submission.to_csv('sub_lreg_bow.csv', index=False)  # writing data to a CSV file
The public leaderboard F1 score is 0.567. Now we will again train a logistic regression model but this time on the TF-IDF features. Let’s see how it performs.
train_tfidf = tfidf[:31962, :]
test_tfidf = tfidf[31962:, :]

xtrain_tfidf = train_tfidf[ytrain.index]
xvalid_tfidf = train_tfidf[yvalid.index]

lreg.fit(xtrain_tfidf, ytrain)

prediction = lreg.predict_proba(xvalid_tfidf)
prediction_int = prediction[:, 1] >= 0.3
prediction_int = prediction_int.astype(int)

f1_score(yvalid, prediction_int)
Output: 0.544
The validation score is 0.544 and the public leaderboard F1 score is 0.564. So, by using the TF-IDF features, the validation score has improved and the public leaderboard score is more or less the same.
If you are interested in learning more techniques for Sentiment Analysis, we have a well laid out video course on NLP for you. This course is designed for people who are looking to get into the field of Natural Language Processing. It provides everything you need to know to become an NLP practitioner.
In this article, we learned how to approach a sentiment analysis problem. We started with preprocessing and exploration of data. Then we extracted features from the cleaned text using Bag-of-Words and TF-IDF. Finally, we were able to build a couple of models using both the feature sets to classify the tweets.
Did you find this article useful? Do you have any useful tricks? Did you use any other method for feature extraction? Feel free to share your experiences in the comments below or on the discussion portal and we'll be more than happy to discuss.