News Classification by Fine-tuning Small Language Model

Nibedita Dutta Last Updated : 17 Dec, 2024
11 min read

Small Language Models (SLMs) are compact, efficient versions of large language models (LLMs) with fewer than 10 billion parameters. They are designed to reduce computational costs, energy usage, and latency while maintaining targeted performance. SLMs are ideal for resource-constrained environments like edge computing and real-time applications. By focusing on specific tasks and utilizing smaller datasets, they offer a balance between efficiency and performance. These models provide a practical solution for applications such as lightweight chatbots and on-device AI, making advanced AI more accessible and scalable.

Learning Objectives

  • Understand the difference between SLMs and LLMs in terms of size, training data, and computational requirements.
  • Recognize the benefits of fine-tuning SLMs for domain-specific tasks, including efficiency, precision, and faster training.
  • Identify when fine-tuning is necessary and when alternatives like prompt engineering or Retrieval Augmented Generation (RAG) should be used.
  • Explore parameter-efficient fine-tuning techniques like LoRA and how they reduce computational costs while enhancing model adaptation.
  • Understand the practical application of fine-tuning SLMs using examples, such as classifying news categories with Microsoft’s Phi-3.5-mini-instruct model.

This article was published as a part of the Data Science Blogathon.

Understanding SLMs vs LLMs

Below, we look at the key differences between Small Language Models and Large Language Models to understand their respective strengths:

  • Size: SLMs are smaller, with fewer than 10 billion parameters, while LLMs are much larger, often with hundreds of billions of parameters.
  • Training Data & Time: SLMs use smaller, focused datasets and take weeks to train; LLMs use large, varied datasets and take months to train.
  • Computing Resources: SLMs require fewer resources, making them more sustainable; LLMs need extensive resources for training and operation.
  • Proficiency: SLMs excel at simpler, specific tasks; LLMs are best for complex, generic tasks.
  • Inference & Control: SLMs can run locally on devices with faster response times and more control; LLMs require specialized hardware and are less flexible for user control.
  • Cost: SLMs are more cost-effective due to lower computing resource requirements, while LLMs are more expensive to run and train.

Need for Fine-tuning SLMs

Fine-tuning small language models (SLMs) is increasingly recognized as a valuable approach in various applications. Here are the key reasons for this need:

  • Specialization for Domain-Specific Tasks: SLMs can be fine-tuned on domain-specific datasets, enabling them to understand specialized vocabulary and contexts better than larger, generalized models. For instance, a small model trained on legal documents can provide accurate legal interpretations, while a larger model may misinterpret terminology due to its generic training.
  • Efficiency and Cost-Effectiveness: Fine-tuning smaller models typically requires fewer computational resources and less time compared to larger models.
  • Faster Training and Iteration: The fine-tuning process for SLMs is generally simpler and quicker, enabling rapid iterations and faster deployment.
  • Reduced Risk of Overfitting: Smaller models tend to generalize better when trained on limited datasets, reducing the risk of overfitting.
  • Enhanced Security and Privacy: SLMs can be deployed in more secure environments (e.g., on-premises), which helps protect sensitive data from potential leaks.
  • Lower Latency for Real-Time Applications: Due to their smaller size, SLMs can process requests more quickly, making them ideal for applications that require low latency, such as customer service chatbots or real-time data analysis.

When to Fine-tune?

Before diving into fine-tuning, it is important to consider whether fine-tuning the model is really needed, or whether the problem at hand can be handled with techniques like prompt engineering, Retrieval Augmented Generation (RAG), or the addition of intermediate reasoning steps.

Fine-tuning is best suited for high-stakes applications requiring precision and context awareness with adequate resources, while prompt engineering offers a flexible and cost-effective alternative for rapid adaptation and experimentation in diverse scenarios.

Fine-tuning is ideal when a model needs to specialize in a specific domain. It works best for static knowledge and tasks requiring high accuracy. On the other hand, RAG is suited for applications needing dynamic knowledge integration. It excels in broader contextual understanding, reducing hallucinations, and offering cost-effective solutions.

Parameter-efficient fine-tuning

Parameter-efficient fine-tuning (PEFT) enhances the performance of pre-trained language models for specific tasks while minimizing computational costs. Instead of retraining an entire model, PEFT reuses the existing parameters and adjusts only a few layers, typically those related to the task at hand. This approach significantly reduces the need for extensive datasets and computational resources. By freezing the majority of the pre-trained model’s layers and fine-tuning only the final ones, PEFT ensures efficient adaptation to new tasks.

How is PEFT Different from Fine-tuning?

PEFT provides an efficient alternative to traditional fine-tuning by focusing on a small subset of parameters while maintaining most of the pre-trained model’s structure. This approach allows organizations to adapt LLMs effectively without incurring high computational costs or requiring extensive datasets. Each method has its advantages and is suited for different scenarios depending on resource availability and task requirements.
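As a quick illustration (a minimal sketch, not part of the original walkthrough), the peft library makes this contrast easy to see: full fine-tuning leaves every parameter trainable, while wrapping the same model with a LoRA configuration trains only a small set of adapter weights.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Note: this downloads the full model; shown only to illustrate the parameter counts
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# Full fine-tuning: every parameter receives gradient updates
full_trainable = sum(p.numel() for p in base.parameters() if p.requires_grad)
print(f"Full fine-tuning trainable parameters: {full_trainable:,}")

# PEFT: freeze the base model and attach low-rank adapters to its linear layers
# (target_modules="all-linear" requires a recent version of peft)
peft_model = get_peft_model(
    base,
    LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM", target_modules="all-linear"),
)
peft_model.print_trainable_parameters()  # prints trainable vs. total parameter counts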


LoRA – A Parameter-Efficient Fine-tuning Technique

Updating all the parameters of large language models can be costly, particularly due to the constraints of GPU memory.

LoRA, or Low-Rank Adaptation, is an innovative technique for fine-tuning large language models (LLMs) that enhances efficiency and reduces computational costs. Instead of updating all parameters of a pre-trained model, LoRA freezes the original weights and introduces smaller, trainable low-rank matrices that approximate the necessary adjustments. This approach significantly decreases the number of parameters that need to be trained, allowing for faster training times and lower resource requirements.

Formula Explanation

Consider a model with 10 billion parameters stored in a weight matrix W. During backpropagation, a matrix ΔW is calculated, which indicates the adjustments needed to the original weights in order to reduce the loss function during the training process.

The weight update is then as follows:

W’ = W + ΔW

When the weight matrix W has 10 billion parameters, the update matrix ΔW will also contain 10 billion parameters, making the computation of ΔW highly resource-intensive in terms of both memory and processing power.

LoRA introduces a method to express ΔW as the product of two smaller matrices, A and B, which have a lower rank. This results in the updated weight matrix W’ being:

W′ = W + BA

In this formulation, W remains fixed and is not updated during training. The matrices B and A are of reduced dimensions, and their product BA provides a low-rank approximation of ΔW.

By setting A and B to have a lower rank r, the number of parameters to train is greatly minimized. For instance, if W is a d x d matrix, updating it traditionally would involve d² parameters. However, when B is d x r and A is r x d, the total number of parameters needed is reduced to 2dr, which is much smaller when r << d.
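To make the savings concrete, here is a quick back-of-the-envelope calculation with illustrative numbers (not taken from the article): for a 4096 x 4096 weight matrix and rank r = 8, the full update ΔW would need about 16.8 million parameters, while the LoRA factors need only about 65 thousand.

d, r = 4096, 8
full_update_params = d * d       # parameters in a full ΔW update
lora_params = 2 * d * r          # parameters in B (d x r) plus A (r x d)
print(full_update_params)        # 16777216
print(lora_params)               # 65536
print(full_update_params / lora_params)  # 256.0, i.e. a 256x reduction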


LoRA reduces memory usage and computational requirements by lowering the number of parameters to update, enabling faster training and fine-tuning of large models. This makes it feasible to adapt large models on less powerful hardware and scale them efficiently without increasing resource demands.
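For readers who prefer code to equations, below is a minimal toy sketch of the idea in PyTorch (an illustration of the technique, not how the peft library actually implements it): the pre-trained weight W stays frozen, and only the low-rank factors B and A receive gradients.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pre-trained weight W, shape (out_features, in_features)
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Trainable low-rank factors: B starts at zero so training begins from W' = W
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.scaling = alpha / r

    def forward(self, x):
        # W'x = Wx + (BA)x, where only B and A are updated during training
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65536 trainable parameters, versus 16777216 if W itself were updated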

Equation for full parameter Fine-tuning

Consider the following objective, which is maximized in full-parameter fine-tuning [1]:

max_Φ  Σ_{(x,y)∈Z}  Σ_{t=1}^{|y|}  log P_Φ(y_t | x, y_{<t})

Here Z = {(x, y)} is a set of context-target pairs for a given NLP task.

During fine-tuning, Φ is initialized with the pre-trained model’s weights and then updated to Φ + ΔΦ over training iterations, with the objective of maximizing the equation above. In LoRA, this update ΔΦ is approximated by ΔΦ(θ), where θ is a much smaller set of parameters with |θ| << |Φ|.

While LoRA can be applied to any dense-layer weight matrix, it is usually applied to the self-attention weights (in the original paper, the query and value projection matrices).
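In code, this simply means restricting target_modules in the LoRA configuration to the attention projections. A hedged example is shown below; the exact module names (for instance "q_proj" and "v_proj", or a fused "qkv_proj") vary between architectures, so check model.named_modules() for your model.

from peft import LoraConfig

# Hypothetical config that adapts only the attention projection layers
attention_only_config = LoraConfig(
    r=8,
    lora_alpha=16,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # names depend on the model architecture
)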

Fine-tuning Small Language Model using LoRA

We will fine-tune Microsoft’s Phi-3.5-mini-instruct model to classify BBC News articles based on their descriptions. We will be using this dataset, which is available on Kaggle. There are 5 different categories of news in the training dataset:

“Entertainment”, “Business”, “Sport”, “Politics”, “Tech”

We will implement this fine-tuning on Google Colab using the free-tier T4 GPU. We will first check the metrics obtained when classifying with the base Phi-3.5-mini-instruct model, then fine-tune the model and finally check whether the fine-tuned model gives better performance metrics than the base model.

Step 1: Installing and Importing the Libraries

First we will install and import all necessary libraries.

%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U accelerate
%pip install -U peft
%pip install -U trl

import numpy as np
import pandas as pd
import os
from tqdm import tqdm
import bitsandbytes as bnb
import torch
import torch.nn as nn
import transformers
from datasets import Dataset
from peft import LoraConfig, PeftConfig
from trl import SFTTrainer
from trl import setup_chat_format
from transformers import (AutoModelForCausalLM, 
                          AutoTokenizer, 
                          BitsAndBytesConfig, 
                          TrainingArguments, 
                          pipeline, 
                          logging)
from sklearn.metrics import (accuracy_score, 
                             classification_report, 
                             confusion_matrix)
from sklearn.model_selection import train_test_split

Step 2: Loading the Data and Splitting into Train, Eval and Test

Our next step will be to load the data and split it into training, evaluation and test datasets.

df = pd.read_csv("bbc_data.csv")
df.columns = ["text","label"]
df['label'].unique()

# Shuffle the DataFrame and select only 2000 rows
df = df.sample(frac=1, random_state=85).reset_index(drop=True).head(2000)

# Split the DataFrame
train_size = 0.8
eval_size = 0.1

# Calculate sizes
train_end = int(train_size * len(df))
eval_end = train_end + int(eval_size * len(df))

# Split the data
X_train = df[:train_end]
X_eval = df[train_end:eval_end]
X_test = df[eval_end:]
test_label = X_test['label'].values.tolist()

Step 3: Creating the Prompt Columns for X_train, X_eval and X_test

Now we will create a prompt column for our SLM.

# Define the prompt generation functions
def prompt_generation(data_point):
    # Training/evaluation prompt: includes the ground-truth label
    return f"""
            Classify the News Data Text into Entertainment, Business, Sport, Politics, Tech.
text: {data_point["text"]}
label: {data_point["label"]}""".strip()

def generate_test_prompt(data_point):
    # Test prompt: leaves the label blank for the model to predict
    return f"""
            Classify the News Data Text into Entertainment, Business, Sport, Politics, Tech.
text: {data_point["text"]}
label: """.strip()

# Generate prompts for training and evaluation data
X_train.loc[:,'text'] = X_train.apply(prompt_generation, axis=1)
X_eval.loc[:,'text'] = X_eval.apply(prompt_generation, axis=1)

# Generate test prompts and extract true labels
y_true = X_test.loc[:,'label']
X_test = pd.DataFrame(X_test.apply(generate_test_prompt, axis=1), columns=["text"])

# Convert to datasets
train_data = Dataset.from_pandas(X_train[["text"]])
eval_data = Dataset.from_pandas(X_eval[["text"]])

In the above code, we create a prompt column that will be fed to the small language model to perform the news classification. Training and evaluation prompts include the ground-truth label, whereas test prompts stop after “label:” so that the model has to generate the category itself.
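For illustration, a training prompt produced by prompt_generation would look roughly like this (both the news text and the label value below are made up, not taken from the dataset):

Classify the News Data Text into Entertainment, Business, Sport, Politics, Tech.
text: Shares in the retail group rose sharply after it reported better-than-expected quarterly profits.
label: Business

The corresponding test prompt produced by generate_test_prompt is identical except that it ends after “label:”, leaving the category for the model to generate.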

Step 4: Loading the Model

base_model_name = "microsoft/Phi-3.5-mini-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype="float16",
    quantization_config=bnb_config, 
)

model.config.use_cache = False
model.config.pretraining_tp = 1

tokenizer = AutoTokenizer.from_pretrained(base_model_name)

tokenizer.pad_token_id = tokenizer.eos_token_id

The above code starts with the creation of a configuration for 4-bit quantization using the bitsandbytes library, which optimizes model loading with reduced precision.

Then the pre-trained causal language model (microsoft/Phi-3.5-mini-instruct) is loaded from the Hugging Face model hub, followed by its tokenizer. The padding token ID is set to the same value as the end-of-sequence token ID.
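As a rough back-of-the-envelope illustration of why this matters (Phi-3.5-mini-instruct has roughly 3.8 billion parameters), 4-bit weights shrink the memory needed just to hold the model from about 7.6 GB in float16 to about 1.9 GB, which is what allows it to fit on a free-tier T4 GPU.

params = 3.8e9                    # approximate parameter count of Phi-3.5-mini-instruct
fp16_gb = params * 2 / 1e9        # 2 bytes per parameter in float16
int4_gb = params * 0.5 / 1e9      # 0.5 bytes per parameter with 4-bit quantization
print(round(fp16_gb, 1), round(int4_gb, 1))  # ~7.6 GB vs ~1.9 GB (weights only)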

Step 5: Defining a Function for Prediction from the Model

def predict(test, model, tokenizer):
    categories = ["Entertainment", "Business", "Sport", "Politics", "Tech"]
    y_pred = []
    
    # Create the pipeline once, outside the loop
    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4, temperature=0.1)

    # Iterate over the test data and predict categories
    for prompt in tqdm(test["text"]):
        result = pipe(prompt)
        answer = result[0]['generated_text'].split("label:")[-1].strip()

        # Determine the predicted category
        predicted_category = next((category for category in categories if category.lower() in answer.lower()), "none")
        y_pred.append(predicted_category)
    
    return y_pred

y_pred = predict(X_test, model, tokenizer)

The above code creates a function for predicting the category of the news data for the test rows. The output is one of the categories from the list ["Entertainment", "Business", "Sport", "Politics", "Tech"], or "none" if the generated text does not contain any of them.

Step 6: Generating Metrics for the Base Model

from sklearn.metrics import classification_report
test_label1 =[i.capitalize() for i in test_label]
print(classification_report(test_label1, y_pred))

Output

The output shows that the metrics for the “Business” and “Sport” categories are relatively good, while the other categories have weaker metrics. In the next steps, we will fine-tune the model and check whether these metrics improve.

Step 7: Finding Specific Modules for Fine-tuning

def find_all_linear_names(model):
    cls = bnb.nn.Linear4bit
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if 'lm_head' in lora_module_names:  # needed for 16 bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)
modules = find_all_linear_names(model)
modules

The above function scans through all the modules in the provided model and looks for instances of bnb.nn.Linear4bit, which is a 4-bit optimized linear layer. The output is a list of unique module names that correspond to the 4-bit linear layers in the model. LoRA is applied only to these modules.

Step 8: Defining Configuration for LoRA

output_dir="Phi-3.5-mini-instruct"

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=modules,
)

In the above code, the LoRA technique is configured with:

  • lora_alpha=16 (to control the scaling of the low-rank updates),
  • lora_dropout=0 (no dropout applied to the LoRA layers; dropout can help prevent overfitting, but it is disabled here),
  • r=64 (a low-rank factor of 64 for the decomposed matrices),
  • bias=”none” (no bias terms are added or modified in the low-rank adaptation),
  • task_type=”CAUSAL_LM” (for causal language modeling),
  • target_modules=modules (only applies to modules specified in the previously generated modules list)

Step 9: Defining Fine-tuning Training Arguments

training_arguments = TrainingArguments(
    output_dir=output_dir,                    # directory to save and repository id
    num_train_epochs=1,                       # number of training epochs
    per_device_train_batch_size=1,            # batch size per device during training
    gradient_accumulation_steps=4,            # number of steps before performing a backward/update pass
    gradient_checkpointing=True,              # use gradient checkpointing to save memory
    optim="paged_adamw_8bit",
    logging_steps=1,                         
    learning_rate=2e-3,                       # learning rate, based on QLoRA paper
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,                        # max gradient norm based on QLoRA paper
    max_steps=-1,
    warmup_ratio=0.03,                        # warmup ratio based on QLoRA paper
    group_by_length=False,
    lr_scheduler_type="cosine",              # use cosine learning rate scheduler
            
    eval_strategy="steps",                    # evaluate at regular step intervals during training
    eval_steps = 0.2                          # evaluate after every 20% of the total training steps
)

In the above code, all the arguments for fine-tuning are defined.

Step 10: Defining the Fine-tuning Trainer

trainer = SFTTrainer(
    model=model,
    args=training_arguments,
    train_dataset=train_data,
    eval_dataset=eval_data,
    peft_config=peft_config,
    dataset_text_field="text",
    tokenizer=tokenizer,
    max_seq_length=512,
    packing=False,
    dataset_kwargs={
    "add_special_tokens": False,
    "append_concat_token": False,
    }
)
trainer.train()

You will be asked to input the wandb API key here so that you can track the experiments on wandb.

The above code sets up and starts the fine-tuning of a pre-trained model using Supervised Fine-Tuning (SFT) with a number of custom configurations:

  • The model is fine-tuned on a dataset (train_data) using the provided settings (training_arguments).
  • LoRA or PEFT is used for efficient fine-tuning (peft_config), which helps reduce the number of parameters to be updated.
  • The data is tokenized using the tokenizer, and the model is trained for a specified number of steps or epochs on the training dataset, while periodically evaluating performance on the evaluation dataset.

Step 11: Saving the Model and Tokenizer Locally

trainer.save_model(output_dir)
tokenizer.save_pretrained(output_dir)

The above code saves both the trained model and the tokenizer:

  • trainer.save_model(output_dir) saves the model weights and configuration.
  • tokenizer.save_pretrained(output_dir) saves the tokenizer’s configuration and vocabulary.
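If you later want to reload the fine-tuned model for inference in a fresh session, a sketch along the following lines should work (assuming the LoRA adapter was saved to the same output_dir used above):

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the 4-bit base model, then attach the saved LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    quantization_config=bnb_config,
)
fine_tuned_model = PeftModel.from_pretrained(base, output_dir)
tokenizer = AutoTokenizer.from_pretrained(output_dir)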

Step 12: Evaluation of the Fine-tuned Model

y_pred = predict(X_test, model, tokenizer)
print(classification_report(test_label1, y_pred))

Output From Fine-tuned Model


As we can see, the output of the fine-tuned model is far better than what we got from the base model, with drastically improved predictions across all categories.

Out of the 200 rows in the test dataset, there are only 5 rows where the fine-tuned model predicted the category wrongly. In one of these rows, the actual label was “Business” while the fine-tuned model predicted “Politics”.

Conclusion

SLMs represent a significant advancement in the field of artificial intelligence. They offer a practical and efficient alternative to larger models. Their compact size allows for reduced computational costs and faster processing times, making them particularly suitable for real-time applications and resource-constrained environments. The ability to fine-tune SLMs for specific tasks enhances their performance while maintaining a balance between efficiency and accuracy. As AI technology continues to evolve, SLMs and techniques like parameter-efficient fine-tuning will play a crucial role in democratizing access to advanced AI solutions, paving the way for innovative applications across various industries.

Key Takeaways

  • SLMs require fewer resources, making them more sustainable; LLMs need extensive resources for training and operation.
  • SLMs can be fine-tuned on domain-specific datasets, enabling them to understand specialized vocabulary and contexts better than larger, generalized models. For instance, a small model trained on legal documents can provide accurate legal interpretations, while a larger model may misinterpret terminology due to its generic training.
  • Fine-tuning is best suited for high-stakes applications requiring precision and context awareness with adequate resources, while prompt engineering offers a flexible and cost-effective alternative for rapid adaptation and experimentation in diverse scenarios.
  • PEFT provides an efficient alternative to traditional fine-tuning by focusing on a small subset of parameters while maintaining most of the pre-trained model’s structure.

Frequently Asked Questions

Q1. What are Small Language Models (SLMs)?

A. SLMs are compact, efficient versions of large language models (LLMs) with fewer than 10 billion parameters, designed to be resource-efficient and faster to deploy.

Q2. How does fine-tuning improve the performance of Small Language Models?

A. Fine-tuning allows SLMs to specialize in certain domains by training them on relevant datasets, improving their ability to accurately interpret context and terminology specific to that domain.

Q3. What is PEFT, and how is it different from traditional fine-tuning?

A. PEFT (Parameter-Efficient Fine-Tuning) is an efficient alternative to traditional fine-tuning that focuses on adjusting a small subset of parameters, while retaining most of the original model’s structure. This method requires fewer resources and is faster than full model retraining.

Q4. What is LoRA, and how does it improve fine-tuning efficiency?

A. LoRA (Low-Rank Adaptation) freezes the original model weights and introduces smaller, trainable low-rank matrices. This allows for efficient fine-tuning by reducing the number of parameters that need to be trained, leading to faster training times and lower resource consumption.

Q5. What is the difference between fine-tuning and prompt engineering?

A. Fine-tuning is ideal for high-stakes applications requiring precision and context-awareness with enough resources, while prompt engineering is a flexible, cost-effective approach for quick adaptation and experimentation in various scenarios.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Nibedita completed her master’s in Chemical Engineering from IIT Kharagpur in 2014 and is currently working as a Senior Data Scientist. In her current capacity, she works on building intelligent ML-based solutions to improve business processes.
