Small Language Models (SLMs) are compact, efficient versions of large language models (LLMs) with fewer than 10 billion parameters. They are designed to reduce computational costs, energy usage, and latency while maintaining targeted performance. SLMs are ideal for resource-constrained environments like edge computing and real-time applications. By focusing on specific tasks and utilizing smaller datasets, they offer a balance between efficiency and performance. These models provide a practical solution for applications such as lightweight chatbots and on-device AI, making advanced AI more accessible and scalable.
Below are the key differences between Small Language Models and Large Language Models, which highlight their unique strengths:
Fine-tuning small language models (SLMs) is increasingly recognized as a valuable approach in various applications. Here are the key reasons for this need:
Before diving into fine-tuning, it is important to consider whether fine-tuning the model is really needed, or whether the problem at hand can be handled with techniques like prompt engineering, retrieval-augmented generation (RAG), or the addition of intermediate reasoning steps.
Fine-tuning is best suited for high-stakes applications requiring precision and context awareness with adequate resources, while prompt engineering offers a flexible and cost-effective alternative for rapid adaptation and experimentation in diverse scenarios.
Fine-tuning is ideal when a model needs to specialize in a specific domain. It works best for static knowledge and tasks requiring high accuracy. On the other hand, RAG is suited for applications needing dynamic knowledge integration. It excels in broader contextual understanding, reducing hallucinations, and offering cost-effective solutions.
Parameter-efficient fine-tuning (PEFT) enhances the performance of pre-trained language models on specific tasks while minimizing computational costs. Instead of retraining an entire model, PEFT reuses the existing parameters and adjusts only a few layers, typically those related to the task at hand. This approach significantly reduces the need for extensive datasets and computational resources. By freezing the majority of the pre-trained model’s layers and fine-tuning only the final ones, PEFT ensures efficient adaptation to new tasks.
PEFT provides an efficient alternative to traditional fine-tuning by focusing on a small subset of parameters while maintaining most of the pre-trained model’s structure. This approach allows organizations to adapt LLMs effectively without incurring high computational costs or requiring extensive datasets. Each method has its advantages and is suited for different scenarios depending on resource availability and task requirements.
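To make the idea of freezing most layers concrete, here is a minimal PyTorch sketch (not the approach used later in this article, which relies on the peft library); the head name "classifier" is a hypothetical example and real model names differ.

import torch
import torch.nn as nn

# Minimal sketch of the "freeze most layers" idea in plain PyTorch.
def freeze_all_but_head(model: nn.Module, head_name: str = "classifier") -> nn.Module:
    for param in model.parameters():
        param.requires_grad = False          # freeze every pre-trained weight
    for name, param in model.named_parameters():
        if name.startswith(head_name):
            param.requires_grad = True       # train only the task-specific head
    return model

# Usage: pass only the trainable parameters to the optimizer
# model = freeze_all_but_head(model)
# optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)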
Updating all the parameters of large language models can be costly, particularly due to the constraints of GPU memory.
LoRA, or Low-Rank Adaptation, is an innovative technique for fine-tuning large language models (LLMs) that enhances efficiency and reduces computational costs. Instead of updating all parameters of a pre-trained model, LoRA freezes the original weights and introduces smaller, trainable low-rank matrices that approximate the necessary adjustments. This approach significantly decreases the number of parameters that need to be trained, allowing for faster training times and lower resource requirements.
Consider a model with 10 billion parameters stored in a weight matrix W. During backpropagation, a matrix ΔW is calculated, which indicates the adjustments needed to the original weights in order to reduce the loss function during the training process.
The weight update is then as follows:
W’ = W + ΔW
When the weight matrix W has 10 billion parameters, the update matrix ΔW will also contain 10 billion parameters, making the computation of ΔW highly resource-intensive in terms of both memory and processing power.
LoRA introduces a method to express ΔW as the product of two smaller matrices, A and B, which have a lower rank. This results in the updated weight matrix W’ being:
W′ = W + BA
In this formulation, W remains fixed and is not updated during training. The matrices B and A are of reduced dimensions, and their product BA provides a low-rank approximation of ΔW.
By setting A and B to have a lower rank r, the number of parameters to train is greatly minimized. For instance, if W is a d x d matrix, updating it traditionally would involve d² parameters. However, when B is d x r and A is r x d, the total number of parameters needed is reduced to 2dr, which is much smaller when r << d.
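As a quick illustration of the savings (the numbers below are illustrative and not Phi-3.5’s actual dimensions), take d = 4096 and rank r = 8:

d, r = 4096, 8                     # illustrative hidden size and LoRA rank
full_update = d * d                # full ΔW update: 16,777,216 parameters
lora_update = 2 * d * r            # B (d x r) plus A (r x d): 65,536 parameters
print(full_update / lora_update)   # -> 256.0, i.e. 256x fewer trainable parameters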
LoRA reduces memory usage and computational requirements by lowering the number of parameters to update, enabling faster training and fine-tuning of large models. This makes it feasible to adapt large models on less powerful hardware and scale them efficiently without increasing resource demands.
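The following PyTorch module is a minimal sketch of a LoRA-style linear layer: the pre-trained weight stays frozen and only the low-rank matrices A and B are learned. It is meant to illustrate the mechanics, not to replicate the peft library implementation used later in this article.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: y = W x + (alpha / r) * B A x, with W frozen."""
    def __init__(self, base_layer: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base_layer
        for param in self.base.parameters():
            param.requires_grad = False                      # freeze pre-trained W (and bias)
        d_out, d_in = base_layer.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # r x d_in, small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))          # d_out x r, zero init so BA starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path + trainable low-rank update: (x A^T) B^T is equivalent to (BA) x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)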
Consider the following objective, which is maximized in full-parameter fine-tuning [1]:
max_Φ Σ_{(x,y)∈Z} Σ_{t=1…|y|} log P_Φ(y_t | x, y_{<t})
Here Z = {(x, y)} is a set of context-target pairs for a given NLP task. During fine-tuning, Φ is initialized with the pre-trained model’s weights and then updated to Φ + ΔΦ through training iterations, with the objective of maximizing the expression above. In LoRA, this ΔΦ is approximated by ΔΦ(θ), where θ is a much smaller set of parameters (|θ| << |Φ|).
While LoRA can be applied to any dense layer weight matrix, it is usually applied to the self-attention weight matrices (typically the query and value projections).
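As an illustration of targeting only the attention projections with the peft library (module names such as "q_proj" and "v_proj" are typical of many decoder models but vary by architecture; Phi-3.5, used below, fuses its projections, which is why we later discover its module names programmatically):

from peft import LoraConfig

# Illustrative only: target the attention query and value projections
attention_lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # assumed module names; they depend on the model
    task_type="CAUSAL_LM",
)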
We will fine-tune Microsoft’s Phi-3.5-mini-instruct model to classify BBC News articles based on their descriptions, using this dataset, which is available on Kaggle. The training dataset contains five news categories:
“Entertainment”, “Business”, “Sport”, “Politics”, “Tech”
We will run this fine-tuning on Google Colab using the free-tier T4 GPU. First, we will check the classification metrics obtained with the base microsoft/Phi-3.5-mini-instruct model. We will then fine-tune the model and check whether the fine-tuned version delivers better metrics than the base model.
First we will install and import all necessary libraries.
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U accelerate
%pip install -U peft
%pip install -U trl
import numpy as np
import pandas as pd
import os
from tqdm import tqdm
import bitsandbytes as bnb
import torch
import torch.nn as nn
import transformers
from datasets import Dataset
from peft import LoraConfig, PeftConfig
from trl import SFTTrainer
from trl import setup_chat_format
from transformers import (AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
TrainingArguments,
pipeline,
logging)
from sklearn.metrics import (accuracy_score,
classification_report,
confusion_matrix)
from sklearn.model_selection import train_test_split
Our next step will be to load the data and split that data into training and testing datasets.
df = pd.read_csv("bbc_data.csv")
df.columns = ["text","label"]
df['label'].unique()
# Shuffle the DataFrame and select only 2000 rows
df = df.sample(frac=1, random_state=85).reset_index(drop=True).head(2000)
# Split the DataFrame
train_size = 0.8
eval_size = 0.1
# Calculate sizes
train_end = int(train_size * len(df))
eval_end = train_end + int(eval_size * len(df))
# Split the data
X_train = df[:train_end]
X_eval = df[train_end:eval_end]
X_test = df[eval_end:]
test_label = X_test['label'].values.tolist()
Now we will create a prompt column for our SLM.
# Define the prompt generation functions:
# training prompts include the label, test prompts leave it blank for the model to fill in.
def prompt_generation(data_point):
    return f"""
Classify the News Data Text into Entertainment, Business, Sport, Politics, Tech.
text: {data_point["text"]}
label: {data_point["label"]}""".strip()

def generate_test_prompt(data_point):
    return f"""
Classify the News Data Text into Entertainment, Business, Sport, Politics, Tech.
text: {data_point["text"]}
label: """.strip()
# Generate prompts for training and evaluation data
X_train.loc[:,'text'] = X_train.apply(prompt_generation, axis=1)
X_eval.loc[:,'text'] = X_eval.apply(prompt_generation, axis=1)
# Generate test prompts and extract true labels
y_true = X_test.loc[:,'label']
X_test = pd.DataFrame(X_test.apply(generate_test_prompt, axis=1), columns=["text"])
# Convert to datasets
train_data = Dataset.from_pandas(X_train[["text"]])
eval_data = Dataset.from_pandas(X_eval[["text"]])
In the above piece of code, we create the prompt column that is fed to the small language model for classifying the news data; training prompts include the true label, while test prompts leave it blank for the model to predict.
base_model_name = "microsoft/Phi-3.5-mini-instruct"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=False,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16",
)
model = AutoModelForCausalLM.from_pretrained(
base_model_name,
device_map="auto",
torch_dtype="float16",
quantization_config=bnb_config,
)
model.config.use_cache = False
model.config.pretraining_tp = 1
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token_id = tokenizer.eos_token_id
The above code starts by creating a configuration for 4-bit quantization using the bitsandbytes library, which optimizes model loading with reduced precision.
The pre-trained causal language model (microsoft/Phi-3.5-mini-instruct) is then loaded from the Hugging Face model hub, followed by its tokenizer. The padding token ID is set to the same value as the end-of-sequence token ID.
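To confirm that the 4-bit model fits comfortably in the T4’s roughly 16 GB of VRAM, you can optionally print its memory footprint:

# Rough sanity check of the quantized model's size in GB
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")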
def predict(test, model, tokenizer):
    categories = ["Entertainment", "Business", "Sport", "Politics", "Tech"]
    y_pred = []
    # Create the pipeline once, outside the loop
    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4, temperature=0.1)
    # Iterate over the test data and predict categories
    for prompt in tqdm(test["text"]):
        result = pipe(prompt)
        answer = result[0]['generated_text'].split("label:")[-1].strip()
        # Determine the predicted category
        predicted_category = next((category for category in categories if category.lower() in answer.lower()), "none")
        y_pred.append(predicted_category)
    return y_pred
y_pred = predict(X_test, model, tokenizer)
The above code creates a function for predicting the category of the news data for the test rows. The output is one of the categories from the list – [“Entertainment”, “Business”, “Sport”, “Politics”, “Tech”].
from sklearn.metrics import classification_report
test_label1 =[i.capitalize() for i in test_label]
print(classification_report(test_label1, y_pred))
Output
The output shows that the metrics for the “Business” and “Sports” categories are relatively good. However, the other categories have weaker metrics. In the next steps, we will explore ways to improve these metrics. This will involve using a fine-tuned model for better performance.
def find_all_linear_names(model):
    cls = bnb.nn.Linear4bit
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if 'lm_head' in lora_module_names:  # needed for 16 bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)
modules = find_all_linear_names(model)
modules
The above function scans all the modules in the model and looks for instances of bnb.nn.Linear4bit, the 4-bit optimized linear layer. The output is a list of the unique module names corresponding to these 4-bit linear layers; LoRA is applied only to these modules.
output_dir="Phi-3.5-mini-instruct"
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0,
r=64,
bias="none",
task_type="CAUSAL_LM",
target_modules=modules,
)
In the above code, LoRA is configured with a rank r of 64, a scaling factor lora_alpha of 16, no dropout, no bias adaptation, the CAUSAL_LM task type, and the 4-bit linear layers discovered earlier as target modules.
training_arguments = TrainingArguments(
output_dir=output_dir, # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=1, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="paged_adamw_8bit",
logging_steps=1,
learning_rate=2e-3,                       # learning rate
weight_decay=0.001,
fp16=False,
bf16=False,
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
max_steps=-1,
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
group_by_length=False,
lr_scheduler_type="cosine", # use cosine learning rate scheduler
eval_strategy="steps", # save checkpoint every epoch
eval_steps = 0.2
)
In the above code, all the arguments for fine-tuning are defined.
trainer = SFTTrainer(
model=model,
args=training_arguments,
train_dataset=train_data,
eval_dataset=eval_data,
peft_config=peft_config,
dataset_text_field="text",
tokenizer=tokenizer,
max_seq_length=512,
packing=False,
dataset_kwargs={
"add_special_tokens": False,
"append_concat_token": False,
}
)
trainer.train()
You will be asked to enter your wandb API key here so that you can track the experiments on Weights & Biases; a snippet for disabling this is shown below.
The above code sets up and starts supervised fine-tuning (SFT) of the pre-trained model, combining the LoRA configuration (peft_config), the training and evaluation datasets, a maximum sequence length of 512 tokens, and the training arguments defined earlier.
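If you would rather not use wandb, reporting can be turned off by adding report_to="none" when defining the training arguments, for example:

# Optional: disable experiment-tracking integrations such as wandb
training_arguments = TrainingArguments(
    output_dir=output_dir,
    report_to="none",
    # ...keep the remaining arguments from the configuration above
)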
trainer.save_model(output_dir)
tokenizer.save_pretrained(output_dir)
The above code saves both the fine-tuned model (the LoRA adapter weights) and the tokenizer to the output directory.
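In this notebook the trained adapter is already attached to model, so we can run predictions directly below. In a fresh session, a common follow-up (sketched here under the assumption that the base model is loaded the same way as above) is to reload the saved adapter on top of the base model:

from peft import PeftModel

# Attach the saved LoRA adapter to the base model for inference
finetuned_model = PeftModel.from_pretrained(model, output_dir)

# Optionally fold the low-rank updates into the base weights for faster inference;
# merging on top of a 4-bit quantized base may not be supported in every peft version.
# finetuned_model = finetuned_model.merge_and_unload()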
y_pred = predict(X_test, model, tokenizer)
print(classification_report(test_label1, y_pred))
Output From Fine-tuned Model
As we can see, the fine-tuned model performs far better than the base model across all categories, drastically improving the predictions.
Out of the 200 rows in the test dataset, there are only 5 rows where this fine-tuned model has predicted the category wrongly. One of the wrongly predicted rows had the following text:
The actual label for this row was “Business” while the fine-tuned model predicted the category for this as “Politics”.
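To reproduce this kind of error analysis, a small pandas comparison of the true and predicted labels is enough (a sketch using the variables already defined above):

# Put test prompts, true labels and predictions side by side
results = pd.DataFrame({
    "text": X_test["text"].values,
    "true_label": test_label1,
    "predicted_label": y_pred,
})

# Keep only the rows the fine-tuned model got wrong
misclassified = results[results["true_label"] != results["predicted_label"]]
print(misclassified[["true_label", "predicted_label"]])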
SLMs represent a significant advancement in the field of artificial intelligence. They offer a practical and efficient alternative to larger models. Their compact size allows for reduced computational costs and faster processing times, making them particularly suitable for real-time applications and resource-constrained environments. The ability to fine-tune SLMs for specific tasks enhances their performance while maintaining a balance between efficiency and accuracy. As AI technology continues to evolve, SLMs and techniques like parameter-efficient fine-tuning will play a crucial role in democratizing access to advanced AI solutions, paving the way for innovative applications across various industries.
A. SLMs are compact, efficient versions of large language models (LLMs) with fewer than 10 billion parameters, designed to be resource-efficient and faster to deploy.
A. Fine-tuning allows SLMs to specialize in certain domains by training them on relevant datasets, improving their ability to accurately interpret context and terminology specific to that domain.
A. PEFT (Parameter-Efficient Fine-Tuning) is an efficient alternative to traditional fine-tuning that focuses on adjusting a small subset of parameters, while retaining most of the original model’s structure. This method requires fewer resources and is faster than full model retraining.
A. LoRA (Low-Rank Adaptation) freezes the original model weights and introduces smaller, trainable low-rank matrices. This allows for efficient fine-tuning by reducing the number of parameters that need to be trained, leading to faster training times and lower resource consumption.
A. Fine-tuning is ideal for high-stakes applications requiring precision and context-awareness with enough resources, while prompt engineering is a flexible, cost-effective approach for quick adaptation and experimentation in various scenarios.