What Are N-Grams and How to Implement Them in Python?

Nithyashree V | Last Updated: 12 Nov, 2024
12 min read

N-grams are one of the fundamental concepts every data scientist and computer science professional must know while working with text data. In this beginner-level tutorial, we will learn what n-grams are and explore them on text data in Python. The objective of the blog is to analyze different types of n-grams on the given text data and hence decide which n-gram works best for our data. Along the way, we will cover how n-grams are classified and implement them in Python.

In this article, you will learn what n-grams are in NLP, how to implement them in Python, and how unsmoothed n-gram counts can be used for effective text analysis.

Learning Objectives

  • Implement n-gram in Python from scratch and using nltk
  • Understand n-grams and their importance
  • Know the applications of n-grams in NLP

This article was published as a part of the Data Science Blogathon.


What Are N-Grams?

N-grams are contiguous sequences of words, symbols, or tokens in a document. In technical terms, they can be defined as the neighboring sequences of items in a document. They come into play when we deal with text data in NLP (Natural Language Processing) tasks, and they have a wide range of applications: language models, semantic features, spelling correction, machine translation, text mining, and so on.

How Are N-Grams Classified?

Did you notice the ‘n’ in the term “n-grams”? Can you guess what this ‘n’ possibly is?

Remember when we learned how to input an array by first inputting its size (n), or even a number from the user? Generally, we would store such values in a variable declared as ‘n’! Apart from programming, you must have extensively encountered ‘n’ in formulae such as the sum of a series. What do you think ‘n’ was over there?

Summing up, ‘n’ is just a variable that can take positive integer values: 1, 2, 3, and so on. In “n-grams”, it simply tells you how many items are grouped together at a time.

Thinking along the same lines, n-grams are classified into the following types, depending on the value that ‘n’ takes.

n    Term
1    Unigram
2    Bigram
3    Trigram
n    n-gram

As clearly depicted in the table above, when n=1, it is said to be a unigram. When n=2, it is said to be a bigram, and so on.

Now, you must be wondering why we need so many different types of n-grams. This is because different types of n-grams suit different types of applications; you should try different n-grams on your data in order to confidently conclude which one works best for your text analysis. For instance, research has substantiated that trigrams and 4-grams work best in the case of spam filtering.

Also read: Guide to deal with text data using Python for data engineers.

Example of N-Grams

Let’s understand n-grams practically with the help of the following sample sentence:

“I reside in Bengaluru”.

Sl. No.    Type of n-gram    Generated n-grams
1          Unigram           ["I", "reside", "in", "Bengaluru"]
2          Bigram            ["I reside", "reside in", "in Bengaluru"]
3          Trigram           ["I reside in", "reside in Bengaluru"]
from nltk import ngrams
sentence = 'I reside in Bengaluru.'
n = 1
unigrams = ngrams(sentence.split(), n)
for grams in unigrams:
  print(grams)  # each unigram is printed as a 1-tuple, e.g. ('I',)

For the time being, let’s not consider the removal of stop-words.

From the table above, it’s clear that a unigram takes only one word at a time, a bigram two words at a time, and a trigram three words at a time. We will implement only up to trigrams in this blog, but feel free to explore 4-grams, 5-grams, and beyond using the takeaways from the blog!
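If you would also like to generate the bigrams and trigrams from the table programmatically, the same nltk helper works; here is a minimal sketch (assuming nltk is installed):

from nltk import ngrams

sentence = 'I reside in Bengaluru'
for gram in ngrams(sentence.split(), 2):   # bigrams
  print(' '.join(gram))                    # I reside / reside in / in Bengaluru
for gram in ngrams(sentence.split(), 3):   # trigrams
  print(' '.join(gram))                    # I reside in / reside in Bengaluru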

Step-By-Step Implementation of N-Grams in Python

And here comes the most interesting section of the blog! Unless we practically implement what we learn, there is absolutely no fun in learning it! So, let’s proceed to code and generate n-grams on Google Colab in Python. You can also build a simple n-gram language model on top of this code.

Step 1: Explore the Dataset

I will be using the Sentiment Analysis for Financial News dataset, where the sentiments are labeled from the perspective of retail investors. It is an open-source Kaggle dataset. Download it from here before moving ahead.

Let’s begin, as usual, by importing the required libraries and reading and understanding the data:

Python Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use(style='seaborn')  # note: on matplotlib >= 3.6, use 'seaborn-v0_8' instead

df=pd.read_csv('all-data.csv',encoding = "ISO-8859-1")
print(df.head())
df.info()

You can see that the dataset has 4846 rows and two columns, namely ‘Sentiment’ and ‘News Headline’.

NOTE: When you download the dataset from Kaggle directly, you will notice that the columns are nameless! So, I named them later and updated them in the all-data.csv file before reading it using pandas. Ensure that you do not miss this step.

df.isna().sum()

The data is just perfect, with absolutely no missing values at all! That’s our luck, indeed!

df['Sentiment'].value_counts()

We can undoubtedly infer that the dataset includes three categories of sentiments:

  • Neutral
  • Positive
  • Negative

Out of 4846 sentiments, 2879 have been found to be neutral, 1363 positive, and the rest negative.

Step 2: Feature Extraction

Our objective is to predict the sentiment of a given news headline. Obviously, the ‘News Headline’ column is our only feature, and the ‘Sentiment’ column is our target variable.

y=df['Sentiment'].values
y.shape
x=df['News Headline'].values
x.shape

Both outputs return a shape of (4846,), i.e., 4846 rows each, since x holds our single feature and y our single target.

Step 3: Train-Test Split

In any machine learning, deep learning, or NLP(Natural Language Processing) task, splitting the data into train and test is indeed a highly crucial step. The train_test_split() method provided by sklearn is widely used for the same. So, let’s begin by importing it:

from sklearn.model_selection import train_test_split

Here’s how I’ve split the data: 60% for train and the remaining 40% for test. I had started with 20% for test, but kept playing with the test_size parameter only to realize that a 60-40 split provides more useful and meaningful insights from the trigrams generated. Don’t worry; we will be looking at trigrams in just a while.

(x_train,x_test,y_train,y_test)=train_test_split(x,y,test_size=0.4)
x_train.shape
y_train.shape
x_test.shape
y_test.shape

On executing the code above, you will observe that 2907 rows are considered train data, and the remaining 1939 rows test data.
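Note that train_test_split shuffles the rows randomly, so the exact rows chosen (and hence the n-gram counts later on) can vary between runs. If you want reproducible results, you can pass a seed; a small variant (the seed value 42 is arbitrary, any fixed integer works):

#same 60-40 split, but reproducible across runs
(x_train,x_test,y_train,y_test)=train_test_split(x,y,test_size=0.4,random_state=42)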

Our next step is to convert these NumPy arrays to Pandas data frames, creating two data frames named df_train and df_test. The former is created by concatenating the x_train and y_train arrays; the latter by concatenating x_test and y_test. This is necessary to count the number of positive, negative, and neutral sentiments in both the train and test datasets, which we will be doing in a while.

df1=pd.DataFrame(x_train)
df1=df1.rename(columns={0:'news'})
df2=pd.DataFrame(y_train)
df2=df2.rename(columns={0:'sentiment'})
df_train=pd.concat([df1,df2],axis=1)
df_train.head()
df3=pd.DataFrame(x_test)
df3=df3.rename(columns={0:'news'})
df4=pd.DataFrame(y_test)
df4=df4.rename(columns={0:'sentiment'}) #rename df4 (the test labels), not df2, so the test frame gets its own sentiment column
df_test=pd.concat([df3,df4],axis=1)
df_test.head()
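Incidentally, the rename-and-concat steps can be collapsed; a simpler equivalent sketch using the DataFrame constructor directly on the same arrays:

#build each data frame in one step instead of rename + concat
df_train = pd.DataFrame({'news': x_train, 'sentiment': y_train})
df_test = pd.DataFrame({'news': x_test, 'sentiment': y_test})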

Step 4: Basic Pre-Processing of Train and Test Data

Here, in order to pre-process our text data, we will remove punctuation from the ‘news’ column in both train and test data, using the punctuation set provided by the string library.

#removing punctuation
#the string library contains the punctuation set
import string
string.punctuation
#defining the function to remove punctuation
def remove_punctuation(text):
  if(type(text)==float):
    return text
  ans=""
  for i in text:
    if i not in string.punctuation:
      ans+=i
  return ans
#storing the punctuation-free text back in the news column
df_train['news']= df_train['news'].apply(lambda x:remove_punctuation(x))
df_test['news']= df_test['news'].apply(lambda x:remove_punctuation(x))
df_train.head()
#punctuation has now been removed from the news column of the train dataset

Compare the above output with the previous output of df_train: you can observe that punctuation has been successfully removed from the text in the feature column (the news column) of the training dataset. The same code removes punctuation from the news column of the test data frame as well; you can optionally view df_test.head() to confirm it.
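As an aside, for larger datasets the character-by-character loop can be replaced by pandas’ vectorized str.translate, driven by a translation table built with str.maketrans; a sketch under the same column names:

#map every punctuation character to None, applied column-wide
table = str.maketrans('', '', string.punctuation)
df_train['news'] = df_train['news'].str.translate(table)
df_test['news'] = df_test['news'].str.translate(table)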

As a next step, we have to remove stopwords from the news column. For this, let’s use the stopwords provided by nltk as follows:

import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')

We will be using this to generate n-grams in the very next step.

Check out this article on how to Build your own Language Model in Python.

Step 5: Code to Generate N-grams

Let’s code a custom function to generate n-grams for a given text as follows:

#method to generate n-grams:
#params:
#text-the text for which we have to generate n-grams
#ngram-number of grams to be generated from the text(1,2,3,4 etc., default value=1)
def generate_N_grams(text,ngram=1):
  words=[word for word in text.split(" ") if word not in set(stopwords.words('english'))]  
  print("Sentence after removing stopwords:",words)
  temp=zip(*[words[i:] for i in range(0,ngram)])
  ans=[' '.join(ngram) for ngram in temp]
  return ans

The above function takes two parameters: text, the data for which we want to generate n-grams, and ngram, the number of grams to be generated (default 1). First, the text is tokenized into words, stop words are ignored, and the remaining words are retained. From the example section, you should already be clear on how to generate n-grams manually for a given text; the function generate_N_grams() codes that very same logic, considering n words at a time from the text, where n is given by the value of the ngram parameter.
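One small performance note: the function rebuilds set(stopwords.words('english')) on every call, and we will be calling it once per headline. A lightly optimized variant (same behavior, just with the set computed once) might look like this:

STOPWORDS = set(stopwords.words('english'))  # computed once, reused across every call

def generate_N_grams(text, ngram=1):
  # same logic as above, but the stop-word set is no longer rebuilt per call
  words = [word for word in text.split(" ") if word not in STOPWORDS]
  print("Sentence after removing stopwords:", words)
  temp = zip(*[words[i:] for i in range(ngram)])
  return [' '.join(gram) for gram in temp]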

Let’s check the working of the function with the help of a simple example to create bigrams as follows:

#sample!
generate_N_grams("The sun rises in the east",2)
(Output: the function first prints the sentence after stop-word removal, ['The', 'sun', 'rises', 'east'], and then returns the bigrams ['The sun', 'sun rises', 'rises east']. Note that 'The' survives because the stop-word check is case-sensitive.)

Great! We are now set to proceed.

Step 6: Creating Unigrams

Let’s follow the steps below to create unigrams for the news column of the df_train data frame:

  1. Create unigrams for each of the news records belonging to each of the three categories of sentiments.
  2. Store the word and its count in the corresponding dictionaries.
  3. Convert these dictionaries to corresponding data frames.
  4. Fetch the top 10 most frequently used words.
  5. Visualize the most frequently used words for all three categories: positive, negative, and neutral.

Have a look at the codes below to understand the steps better.

from collections import defaultdict
positiveValues=defaultdict(int)
negativeValues=defaultdict(int)
neutralValues=defaultdict(int)
#count every unigram in the news column of df_train, per sentiment
#sentiment="positive"
for text in df_train[df_train.sentiment=="positive"].news:
  for word in generate_N_grams(text):
    positiveValues[word]+=1
#sentiment="negative"
for text in df_train[df_train.sentiment=="negative"].news:
  for word in generate_N_grams(text):
    negativeValues[word]+=1
#sentiment="neutral"
for text in df_train[df_train.sentiment=="neutral"].news:
  for word in generate_N_grams(text):
    neutralValues[word]+=1
#focus on the more frequently occurring words for every sentiment:
#sort each dictionary by count (the 2nd column) in descending order
df_positive=pd.DataFrame(sorted(positiveValues.items(),key=lambda x:x[1],reverse=True))
df_negative=pd.DataFrame(sorted(negativeValues.items(),key=lambda x:x[1],reverse=True))
df_neutral=pd.DataFrame(sorted(neutralValues.items(),key=lambda x:x[1],reverse=True))
pd1=df_positive[0][:10]
pd2=df_positive[1][:10]
ned1=df_negative[0][:10]
ned2=df_negative[1][:10]
nud1=df_neutral[0][:10]
nud2=df_neutral[1][:10]
plt.figure(1,figsize=(16,4))
plt.bar(pd1,pd2, color ='green',
        width = 0.4)
plt.xlabel("Words in positive dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in positive dataframe-UNIGRAM ANALYSIS")
plt.savefig("positive-unigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(ned1,ned2, color ='red',
        width = 0.4)
plt.xlabel("Words in negative dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in negative dataframe-UNIGRAM ANALYSIS")
plt.savefig("negative-unigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(nud1,nud2, color ='yellow',
        width = 0.4)
plt.xlabel("Words in neutral dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in neutral dataframe-UNIGRAM ANALYSIS")
plt.savefig("neutral-unigram.png")
plt.show()
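Before moving on to bigrams and trigrams, notice that the counting and plotting code above is repeated almost verbatim per sentiment. If you prefer less repetition, it can be folded into two small helpers; a sketch (the function names here are my own, not from the original code):

def count_ngrams(df, sentiment, n):
  #count n-grams across all headlines carrying the given sentiment label
  counts = defaultdict(int)
  for text in df[df.sentiment == sentiment].news:
    for gram in generate_N_grams(text, n):
      counts[gram] += 1
  return counts

def plot_top10(counts, title, color, filename):
  #sort by count (descending) and plot the ten most frequent n-grams
  top = sorted(counts.items(), key=lambda x: x[1], reverse=True)[:10]
  grams, freqs = zip(*top)
  plt.figure(figsize=(16, 4))
  plt.bar(grams, freqs, color=color, width=0.4)
  plt.xlabel("n-grams")
  plt.ylabel("Count")
  plt.title(title)
  plt.savefig(filename)
  plt.show()

#example: reproduces the positive-unigram chart above
plot_top10(count_ngrams(df_train, "positive", 1),
           "Top 10 words in positive dataframe-UNIGRAM ANALYSIS",
           "green", "positive-unigram.png")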

Step 7: Creating Bigrams

Repeat the same steps we followed to analyze our data using unigrams, except that you have to pass 2 as the second argument while invoking the generate_N_grams() function. You can optionally rename the data frames, which I have done.

positiveValues2=defaultdict(int)
negativeValues2=defaultdict(int)
neutralValues2=defaultdict(int)
#count every bigram in the news column of df_train, per sentiment
for text in df_train[df_train.sentiment=="positive"].news:
  for word in generate_N_grams(text,2):
    positiveValues2[word]+=1
for text in df_train[df_train.sentiment=="negative"].news:
  for word in generate_N_grams(text,2):
    negativeValues2[word]+=1
for text in df_train[df_train.sentiment=="neutral"].news:
  for word in generate_N_grams(text,2):
    neutralValues2[word]+=1
#sort each dictionary by count in descending order
df_positive2=pd.DataFrame(sorted(positiveValues2.items(),key=lambda x:x[1],reverse=True))
df_negative2=pd.DataFrame(sorted(negativeValues2.items(),key=lambda x:x[1],reverse=True))
df_neutral2=pd.DataFrame(sorted(neutralValues2.items(),key=lambda x:x[1],reverse=True))
pd1bi=df_positive2[0][:10]
pd2bi=df_positive2[1][:10]
ned1bi=df_negative2[0][:10]
ned2bi=df_negative2[1][:10]
nud1bi=df_neutral2[0][:10]
nud2bi=df_neutral2[1][:10]
plt.figure(1,figsize=(16,4))
plt.bar(pd1bi,pd2bi, color ='green',width = 0.4)
plt.xlabel("Words in positive dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in positive dataframe-BIGRAM ANALYSIS")
plt.savefig("positive-bigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(ned1bi,ned2bi, color ='red',
        width = 0.4)
plt.xlabel("Words in negative dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in negative dataframe-BIGRAM ANALYSIS")
plt.savefig("negative-bigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(nud1bi,nud2bi, color ='yellow',
        width = 0.4)
plt.xlabel("Words in neutral dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in neutral dataframe-BIGRAM ANALYSIS")
plt.savefig("neutral-bigram.png")
plt.show()

Also, you can go through this article about Machine Learning Algorithms and their types.

Step 8: Creating Trigrams

Repeat the same steps we followed to analyze our data using unigrams, except that you have to pass 3 as the second argument while invoking the generate_N_grams() function. You can optionally rename the data frames, which I have done.

positiveValues3=defaultdict(int)
negativeValues3=defaultdict(int)
neutralValues3=defaultdict(int)
#count every trigram in the news column of df_train, per sentiment
for text in df_train[df_train.sentiment=="positive"].news:
  for word in generate_N_grams(text,3):
    positiveValues3[word]+=1
for text in df_train[df_train.sentiment=="negative"].news:
  for word in generate_N_grams(text,3):
    negativeValues3[word]+=1
for text in df_train[df_train.sentiment=="neutral"].news:
  for word in generate_N_grams(text,3):
    neutralValues3[word]+=1
#sort each dictionary by count in descending order
df_positive3=pd.DataFrame(sorted(positiveValues3.items(),key=lambda x:x[1],reverse=True))
df_negative3=pd.DataFrame(sorted(negativeValues3.items(),key=lambda x:x[1],reverse=True))
df_neutral3=pd.DataFrame(sorted(neutralValues3.items(),key=lambda x:x[1],reverse=True))
pd1tri=df_positive3[0][:10]
pd2tri=df_positive3[1][:10]
ned1tri=df_negative3[0][:10]
ned2tri=df_negative3[1][:10]
nud1tri=df_neutral3[0][:10]
nud2tri=df_neutral3[1][:10]
plt.figure(1,figsize=(16,4))
plt.bar(pd1tri,pd2tri, color ='green',
        width = 0.4)
plt.xlabel("Words in positive dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in positive dataframe-TRIGRAM ANALYSIS")
plt.savefig("positive-trigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(ned1tri,ned2tri, color ='red',
        width = 0.4) 
plt.xlabel("Words in negative dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in negative dataframe-TRIGRAM ANALYSIS")
plt.savefig("negative-trigram.png")
plt.show()
plt.figure(1,figsize=(16,4))
plt.bar(nud1tri,nud2tri, color ='yellow',
        width = 0.4) 
plt.xlabel("Words in neutral dataframe")
plt.ylabel("Count")
plt.title("Top 10 words in neutral dataframe-TRIGRAM ANALYSIS")
plt.savefig("neutral-trigram.png")
plt.show()

Advantages of Using N-grams in Language Modeling

  1. Understanding Context: N-grams help capture the meaning of words based on the words around them, so they can better predict what comes next in a sentence.
  2. Easy to Use: N-gram models are simple to create and understand. They work by counting how often different word sequences appear in a text, making it clear how predictions are made.
  3. Adjustable Size: You can change the size of ‘n’ to fit your needs. A smaller ‘n’ is faster but may miss some context, while a larger ‘n’ captures more information but is more complex.
  4. Wide Range of Uses: N-grams can be used in many applications, like speech recognition, machine translation, and suggesting the next word as you type.
  5. Statistical Approach: N-gram models use counts to estimate the likelihood of word sequences (see the toy sketch after this list). This helps in tasks like filtering spam emails or improving text suggestions.
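Point 5 is what an unsmoothed n-gram language model boils down to: the probability of a word given its predecessor is simply a ratio of counts, P(w2 | w1) = count(w1 w2) / count(w1). A toy sketch (the tiny corpus here is made up purely for illustration):

from collections import Counter

corpus = "I reside in Bengaluru . I reside in India .".split()
unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

#unsmoothed maximum-likelihood estimate of P(w2 | w1)
def bigram_prob(w1, w2):
  return bigram_counts[(w1, w2)] / unigram_counts[w1]

print(bigram_prob("reside", "in"))     # 1.0: "reside" is always followed by "in"
print(bigram_prob("in", "Bengaluru"))  # 0.5: "in" is followed by "Bengaluru" half the time
print(bigram_prob("in", "Paris"))      # 0.0: unseen pairs get zero probability, which is why real models add smoothing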

Results of the Model

From the above graphs, we can conclude that trigrams perform best on our train data, because they surface more useful phrases frequently: ‘profit rose EUR’ and ‘a year earlier’ in the positive data frame; ‘corresponding period’, ‘period 2007’, and company names such as HEL in the negative data frame; and ‘Finland’, ‘the company said’, and again names such as HEL and OMX Helsinki in the neutral data frame.

Conclusion

Therefore, n-grams are one of the most powerful techniques for extracting features from text while working on a text problem. You can find the entire code here. In this blog, we learned what n-grams are and how to generate them easily for a given text dataset in Python. We also looked at the applications of n-grams in NLP and generated n-grams in a sentiment analysis case study.

Hope you liked the article! N-grams, contiguous sequences of n words, are essential for analyzing text and for predicting word sequences in language models. Note that the raw, unsmoothed counts we used here assign zero probability to any sequence not seen in training, which is why practical n-gram language models add smoothing to cope with data sparsity.

Key Takeaways

  • N-grams are among the most powerful techniques for extracting features from text.
  • N-grams have a wide range of applications in language models, spelling correctors, text classification problems, and more.

Frequently Asked Questions

Q1. How do you implement n-gram in Python?

A. Below is an n-gram implementation in Python using nltk:
from nltk import ngrams
sentence = 'Hi! How are you doing today?'
n = 2
bigrams = ngrams(sentence.split(), n)
for grams in bigrams:
  print(grams)

Q2. What does n-gram do in Python?

A. N-grams split the sentence into multiple sequences of tokens depending upon the value of n. For example, given n=3, n-grams for the sentence “I am doing well today” look like [“I am doing”, “am doing well”, “doing well today”].

Q3. What are n-grams used for in NLP?

A. N-grams are used in the various use cases of NLP, such as spelling correction, machine translation, language models, semantic feature extraction, etc.

Q4. What is the difference between n-grams and bigrams?

A. The ‘n’ in n-grams refers to the number of tokens in each sequence. Hence, when the value of n=2, the n-grams are known as bigrams.

Q5. What are the advantages and disadvantages of using n-grams in NLP?

A. Here are the advantages and disadvantages of n-grams in NLP.
Pros:
The concept of n-grams is simple and easy to use, yet powerful. Hence, it can be used to build a variety of applications in NLP, like language models, spelling correctors, etc.
Cons:
N-grams cannot deal with Out Of Vocabulary (OOV) words: they work well for words present in the training set but fail to handle words never seen before.
Another serious concern about n-grams is data sparsity: most possible n-grams never occur, even in a large corpus, and the problem worsens as n grows.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

I am Nithyashree V, a final-year BTech Computer Science and Engineering student at Dayananda Sagar University, Bangalore. I love learning technologies and putting them into practice, especially to observe how they help us solve society’s challenging problems. My areas of interest include Artificial Intelligence, Data Science, and Natural Language Processing.

Responses From Readers

Sriansh Raj Pradhan

I am getting so many 'Key errors' while running the given code. Can you help?
