Fine-tune BERT Model for Sentiment Analysis in Google Colab

Deepak Last Updated : 29 May, 2024
6 min read

Objective

In this blog, we will learn how to fine-tune a pre-trained BERT model for the sentiment analysis task.

Prerequisites

Working knowledge of Python and of training neural networks with TensorFlow

Introduction to BERT Model for Sentiment Analysis

Sentiment analysis is a major task in Natural Language Processing (NLP). It is used to understand whether people feel positive, negative, or neutral about products, movies, and other such things. It helps companies and other entities learn how their products and services are perceived and act on the feedback to improve them.

Let’s first look briefly at the BERT architecture and its preprocessing steps. I won’t go into much detail, as there are many blogs covering it; check the links in the Reference section.


What is BERT?

Bidirectional Encoder Representations from Transformers (BERT) is an NLP model developed by Google Research in 2018. Since its release, it has achieved state-of-the-art accuracy on several NLP tasks.

The Transformer architecture consists of an encoder stack and a decoder stack, hence it is called an encoder-decoder architecture, whereas BERT is just the encoder stack of the Transformer. There are two variants, BERT-base and BERT-large, which differ in size: the base model has 12 layers in the encoder whereas the large model has 24.
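As a quick check of this size difference, once the transformers library is installed (we install it later in this tutorial) you can read the encoder depth straight from the model configs. This is just an optional sketch:

from transformers import BertConfig

# Compare the encoder depth of the two standard BERT variants
print(BertConfig.from_pretrained('bert-base-uncased').num_hidden_layers)   # 12
print(BertConfig.from_pretrained('bert-large-uncased').num_hidden_layers)  # 24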


BERT was pre-trained on a large text corpus, which gives the model the ability to better understand language, learn variability in data patterns, and generalize well across several NLP tasks. Being bidirectional means that BERT learns information from both the left and the right context of a token during training.


Now, let’s dive into the hands-on part.

First, enable the GPU in Google Colab: Edit -> Notebook Settings -> Hardware accelerator -> GPU.
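To confirm that the runtime actually picked up a GPU, you can run a quick check like this in the first cell (optional sketch):

import tensorflow as tf

# Should list at least one GPU device if the accelerator is set correctly
print(tf.config.list_physical_devices('GPU'))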

Dataset for Sentiment Analysis

We will be using the IMDB dataset, a movie reviews dataset containing 50,000 labeled reviews split evenly between two classes, positive and negative.

We will load the dataset using the TensorFlow Datasets (tfds) API:

import tensorflow as tf
import tensorflow_datasets as tfds

(ds_train, ds_test), ds_info = tfds.load('imdb_reviews',
                                         split=(tfds.Split.TRAIN, tfds.Split.TEST),
                                         as_supervised=True,
                                         with_info=True)

The tfds.load function loads the dataset and splits it into train and test sets.

Let’s check a few examples of our dataset

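Here is a small sketch that prints a couple of raw (review, label) pairs; label 0 is negative and 1 is positive:

# Print the label and the first 200 characters of two training reviews
for review, label in tfds.as_numpy(ds_train.take(2)):
  print(label, review.decode()[:200])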

The BERT model we will use comes from the Transformers library; we need to install it using the Python package manager (pip):

!pip install -q transformers

Which model is best for sentiment analysis?

When working with deep learning models, particularly in natural language processing, several pre-trained models offer different advantages and trade-offs. Using frameworks like TensorFlow or PyTorch, we can implement and fine-tune these models efficiently. Here’s a comparison of some notable models:

  1. BERT (Bidirectional Encoder Representations from Transformers):
    • Pros: High accuracy, understands context well by considering both previous and next words. This makes it excellent for creating high-quality embeddings.
    • Cons: Computationally intensive, requiring significant resources for training and inference.
  2. RoBERTa (Robustly optimized BERT approach):
    • Pros: An optimized version of BERT, generally performs better due to more training data and longer training times.
    • Cons: Similar computational demands as BERT.
  3. DistilBERT:
    • Pros: A smaller, faster, and less resource-intensive version of BERT with nearly comparable performance. This makes it a popular choice for applications needing quick responses without sacrificing too much accuracy (see the loading sketch after this list).
    • Cons: Slightly less accurate than the full BERT model.
  4. XLNet:
    • Pros: Handles context better by considering permutations of word orders, often outperforming BERT on several benchmarks.
    • Cons: More complex and resource-intensive.
  5. GPT-3:
    • Pros: Very powerful and flexible, capable of handling various NLP tasks including sentiment analysis and text generation. Its pre-trained model can be easily adapted to many use cases.
    • Cons: High computational and resource demands, making it expensive to use at scale.

Using these pre-trained models, developers can leverage state-of-the-art performance in their applications without starting from scratch, significantly accelerating the development process in deep learning projects.
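If Colab resources are tight, the fine-tuning recipe below can be adapted to DistilBERT by swapping the tokenizer and model classes. Here is a minimal loading sketch (not used in the rest of this tutorial; note that DistilBERT does not produce token_type_ids, so the input mapping below would need a small change):

from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

# Lighter alternative to bert-base-uncased
distil_tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
distil_model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')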

What is BERT Tokenizer?

Now we need a BERT tokenizer. The pre-trained tokenizer must match the pre-trained model we intend to use, e.g. the cased vs. uncased variant. For more details, refer to the HuggingFace Tokenizers documentation.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

Let’s prepare the data according to the format needed for the BERT model

Input IDs – The input IDs are often the only required input to the model. They are token indices: numerical representations of the tokens that build up the input sequence.

Attention mask – The attention mask is used to avoid performing attention on padding token indices. Its values are 0 or 1: 1 for real tokens (not masked) and 0 for padding tokens (masked).

Token type IDs – These are used in tasks such as sequence-pair classification or question answering, where two different sequences are encoded into the same input IDs. Special tokens, such as the classifier [CLS] and separator [SEP] tokens, mark the boundaries, and the token type IDs indicate which sequence each token belongs to.
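To see these three fields concretely, you can run encode_plus on a short sentence; the max_length of 16 here is only for illustration:

sample = tokenizer.encode_plus("The movie was great",
                               add_special_tokens=True,
                               max_length=16,
                               truncation=True,
                               padding='max_length',
                               return_attention_mask=True)
print(sample['input_ids'])       # token indices, padded with 0s up to max_length
print(sample['attention_mask'])  # 1 for real tokens, 0 for padding
print(sample['token_type_ids'])  # all 0s here, since there is only one segment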


def convert_example_to_feature(review):
  return tokenizer.encode_plus(review,
                add_special_tokens = True,    # add [CLS], [SEP]
                max_length = max_length,      # max length of the text that can go to BERT
                truncation = True,            # truncate reviews longer than max_length
                padding = 'max_length',       # add [PAD] tokens up to max_length
                return_attention_mask = True, # add attention mask to not focus on pad tokens
              )

The encode_plus function of the tokenizer class tokenizes the raw input, adds the special tokens, truncates the sequence if necessary, and pads it to the max length that we set.


# can be up to 512 for BERT
max_length = 512
batch_size = 6

The following helper functions will transform our raw data into the format the BERT model expects:

def map_example_to_dict(input_ids, attention_masks, token_type_ids, label):
  return {
      "input_ids": input_ids,
      "token_type_ids": token_type_ids,
      "attention_mask": attention_masks,
  }, label

def encode_examples(ds, limit=-1):
  # prepare list, so that we can build up final TensorFlow dataset from slices.
  input_ids_list = []
  token_type_ids_list = []
  attention_mask_list = []
  label_list = []
  if (limit > 0):
      ds = ds.take(limit)
  for review, label in tfds.as_numpy(ds):
    bert_input = convert_example_to_feature(review.decode())
    input_ids_list.append(bert_input['input_ids'])
    token_type_ids_list.append(bert_input['token_type_ids'])
    attention_mask_list.append(bert_input['attention_mask'])
    label_list.append([label])
  return tf.data.Dataset.from_tensor_slices((input_ids_list, attention_mask_list, token_type_ids_list, label_list)).map(map_example_to_dict)

Now, let’s form our train and test datasets:

# train dataset
ds_train_encoded = encode_examples(ds_train).shuffle(10000).batch(batch_size)
# test dataset
ds_test_encoded = encode_examples(ds_test).batch(batch_size)
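Optionally, you can pull one batch to verify that each feature tensor has the expected shape of (batch_size, max_length):

# Quick sanity check on the encoded dataset
features, batch_labels = next(iter(ds_train_encoded))
print(features['input_ids'].shape, features['attention_mask'].shape, batch_labels.shape)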

BERT Model Initialization for Sentiment Analysis

from transformers import TFBertForSequenceClassification
import tensorflow as tf
# recommended learning rate for Adam 5e-5, 3e-5, 2e-5
learning_rate = 2e-5
# we will do just 1 epoch, though multiple epochs might be better as long as we will not overfit the model
number_of_epochs = 1
# model initialization
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')

The number of epochs is set to 1, as more epochs can lead to overfitting and also take more time for the model to train.

# choosing Adam optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=1e-08)
# we do not have one-hot vectors, so we use sparse categorical cross entropy and accuracy
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])

Training the BERT model for Sentiment Analysis

Now we can start the fine-tuning process. We will use the Keras model.fit API and simply pass the model configuration we have already defined.

bert_history = model.fit(ds_train_encoded, epochs=number_of_epochs, validation_data=ds_test_encoded)

The model takes around two hours to train on a GPU. With just 1 epoch we can achieve over 93% accuracy on the validation set; you can increase the epochs and tune other parameters to try to improve the accuracy further.
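If you want to reuse the fine-tuned model later (for example after the Colab session ends), you can save the weights and tokenizer; the directory name below is arbitrary:

# Save the fine-tuned model and tokenizer for later reuse
model.save_pretrained('./bert_sentiment')
tokenizer.save_pretrained('./bert_sentiment')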

Test on random sample


test_sentence = "This is a really good movie. I loved it and will watch again"

predict_input = tokenizer.encode(test_sentence,
                                 truncation=True,
                                 padding=True,
                                 return_tensors="tf")

tf_output = model.predict(predict_input)[0]
tf_prediction = tf.nn.softmax(tf_output, axis=1)
labels = ['Negative','Positive'] #(0:negative, 1:positive)
label = tf.argmax(tf_prediction, axis=1)
label = label.numpy()
print(labels[label[0]])

tokenizer.encode encodes our test example into integer token IDs using the BERT tokenizer, then we call the predict method on the encoded input to get our prediction. model.predict returns logits, on which we apply the softmax function to get the probability of each class; then, using TensorFlow's argmax function, we get the class with the highest probability and map it to a text label (Positive or Negative).

Output:

Positive
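The same steps can be wrapped into a small helper to score several reviews at once. This is only a convenience sketch that reuses the tokenizer, model, and labels defined above (in recent transformers versions the model output exposes .logits; older versions return a tuple instead):

def predict_sentiment(sentences):
  # Tokenize a batch of reviews and map the model's logits to text labels
  encoded = tokenizer(sentences, truncation=True, padding=True, return_tensors="tf")
  logits = model(encoded).logits
  probs = tf.nn.softmax(logits, axis=1)
  return [labels[i] for i in tf.argmax(probs, axis=1).numpy()]

print(predict_sentiment(["Absolutely loved it", "Waste of two hours"]))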

End Notes

BERT models achieve state-of-the-art accuracy on several tasks compared to other architectures such as RNNs. However, they require high computational power, and training them takes a long time.

References

Bidirectional Encoder Representations from Transformers (BERT) (humboldt-wi.github.io)

BERT (huggingface.co)

Transformer Architecture

