How to Fine-tune Llama 2 with Unsloth?

Guest Blog | Last Updated: 28 May, 2024

Introduction

Training and fine-tuning language models can be complex, especially when aiming for efficiency and effectiveness. One effective approach combines parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) with instruction fine-tuning. This article outlines the key steps and considerations for fine-tuning the Llama 2 large language model using this methodology, and explores how the Unsloth AI framework makes the fine-tuning process faster and more efficient.

We will go step by step to understand the topic better!

What is Unsloth?

Unsloth AI is a platform designed to streamline the fine-tuning and training of language models such as Llama 2, making the process faster and more efficient. This article is based on a hands-on session by Daniel Han, the co-founder of Unsloth AI. Daniel is passionate about pushing innovation to its limits, and with extensive experience at Nvidia he has significantly impacted the AI and machine learning industry. Let’s start by setting up the Alpaca dataset we will use to fine-tune Llama 2 with Unsloth.

Setting Up the Dataset

The Alpaca dataset is popular for training language models due to its simplicity and effectiveness. It comprises 52,000 rows, each containing three columns: instruction, input, and output. The dataset is available on Hugging Face and comes pre-cleaned, saving time and effort in data preparation.

The instruction provides the task, the input gives the context or question, and the output is the expected answer. For instance, an instruction might be, “Give three tips for staying healthy,” with the output being three relevant health tips. Next, we will format the dataset to ensure it is compatible with our training code.

Formatting the Dataset

To match our training code, the dataset must be formatted correctly. The formatting function adds an extra column, text, which combines the instruction, input, and output into a single prompt; this prompt is what is fed into the language model during training. A sketch of such a function follows the example below.

Here’s an example of how a formatted dataset entry might look:

  • Instruction: “Give three tips for staying healthy.”
  • Input: “”
  • Output: “1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep.”
  • Text: “Below is an instruction that describes a task. Write a response that appropriately completes the request. \n\n Instruction: Give three tips for staying healthy. \n\n Response: 1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep. <EOS>”
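To make this concrete, here is a minimal sketch of what such a formatting function might look like. It assumes the tokenizer has already been loaded (as in the training sketch later in the article), uses the cleaned Alpaca dataset commonly published as yahma/alpaca-cleaned on Hugging Face, and follows the standard Alpaca template wording with an Input field; treat all of these as assumptions rather than the only valid choices.

from datasets import load_dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # e.g. </s> for Llama 2; marks the end of each example

def formatting_prompts_func(examples):
    # Combine instruction, input, and output into one training prompt per row
    texts = []
    for instruction, inp, out in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        texts.append(alpaca_prompt.format(instruction, inp, out) + EOS_TOKEN)
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)  # adds the "text" column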

The <EOS> token is crucial: it marks the end of the sequence and prevents the model from generating never-ending text. With the dataset formatted, we can move on to training the model.

Training the Model

Once the dataset is properly formatted, we proceed to the training phase. We use the Unsloth framework, which enhances the efficiency of the training process.

Key Parameters for Training the Model

  • Batch Size: Determines how many samples are processed before the model parameters are updated. A typical value is 2.
  • Gradient Accumulation: Specifies how many batches’ gradients to accumulate before performing an optimizer update. Commonly set to 4, giving an effective batch size of 2 × 4 = 8.
  • Warm-Up Steps: Gradually ramps the learning rate up at the beginning of training. A value of 5 is often used.
  • Max Steps: Limits the number of training steps. For demonstration purposes this might be set to as few as 3, but you would normally use a higher number, such as 60.
  • Learning Rate: Controls the step size during optimization. A value of 2e-4 is standard.
  • Optimizer: AdamW 8-bit is recommended to reduce memory usage.

Running the Training

The training script uses the formatted dataset and specified parameters to fine-tune Llama 2. The script includes functionality for handling the EOS token and ensuring proper sequence termination during training and inference.
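As a rough guide, the setup typically looks something like the following sketch, based on the common Unsloth + TRL notebook pattern. The base model name, the max_seq_length value, and the fp16 choice are our assumptions, and argument names can shift between library releases, so treat this as a sketch rather than a drop-in script.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load a 4-bit quantized Llama 2 base model (the model name is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-2-7b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,          # auto-detect the best dtype for the GPU
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small set of weights is trained
# (see the LoRA section below for what each option means)
model = FastLanguageModel.get_peft_model(
    model, r = 16, lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,          # the formatted Alpaca dataset with the "text" column
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,   # batch size
        gradient_accumulation_steps = 4,   # effective batch size = 2 * 4 = 8
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = True,                       # or bf16 = True on Ampere and newer GPUs
        logging_steps = 1,
        optim = "adamw_8bit",              # 8-bit AdamW to save memory
        output_dir = "outputs",
    ),
)

trainer.train()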

Inference to Check the Model’s Ability

After training, we test the model’s ability to generate appropriate responses to new prompts. For example, if we prompt the model to continue the Fibonacci sequence “1, 1, 2, 3, 5, 8,” it should generate “13, 21, …” and so on.

# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
   alpaca_prompt.format(
       "Continue the Fibonacci sequence.", # instruction
       "1, 1, 2, 3, 5, 8", # input
       "", # output - leave this blank for generation!
   )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)

You can also use a TextStreamer for continuous inference – so you can see the generation token by token instead of waiting the whole time!

# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
   alpaca_prompt.format(
       "Continue the fibonnaci sequence.", # instruction
       "1, 1, 2, 3, 5, 8", # input
       "", # output - leave this blank for generation!
   )
], return_tensors = "pt").to("cuda")


from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)

The streamed output will look something like this:

<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the Fibonacci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
13, 21, 34, 55, 89, 144<eos>

LoRA Adapter Integration

The fine-tuning above relies on LoRA (Low-Rank Adaptation), a parameter-efficient technique that makes language model training far cheaper. Instead of updating every weight in the model, LoRA freezes the pretrained weights and injects small trainable low-rank matrices into selected layers (typically the attention and MLP projections), so only a tiny fraction of the parameters is trained. A code sketch follows the list below.

Key Advantages of LoRA:

  1. Far Fewer Trainable Parameters: Only the low-rank adapter matrices are updated, usually well under 1% of the model’s weights, which drastically cuts gradient and optimizer memory.
  2. Faster, Cheaper Training: Because the frozen base weights need no gradients or optimizer states, fine-tuning fits on a single consumer GPU and completes much sooner than full fine-tuning.
  3. Small, Shareable Adapters: The trained adapters are tiny files (the adapter_config.json and adapter_model.bin saved in the next section) that load on top of the base model, while quality typically stays close to full fine-tuning.
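In practice, Unsloth attaches LoRA adapters through its FastLanguageModel.get_peft_model helper. The values below (rank 16, alpha 16, the listed projection layers, the fixed random seed) are common defaults from the public Unsloth notebooks rather than mandated settings; treat this as a sketch and tune them for your task.

# Attach LoRA adapters to the frozen base model (expands on the call in the training sketch)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                              # LoRA rank: higher captures more, uses more memory
    lora_alpha = 16,                     # scaling factor applied to the adapter updates
    lora_dropout = 0,                    # 0 is fastest; small values can help regularize
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    bias = "none",
    use_gradient_checkpointing = True,   # trades a little compute for a large memory saving
    random_state = 3407,
)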

Saving and Loading the Model

After training, the model can be saved locally or uploaded to Hugging Face for easy sharing and deployment. The saved model includes:

  • adapter_config.json
  • adapter_model.bin

These files are essential for reloading the model and continuing inference or further training.

To save the final model as LoRA adapters, use Hugging Face’s push_to_hub for an online save or save_pretrained for a local save.

model.save_pretrained("lora_model") # Local saving
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...") # Online saving
# tokenizer.push_to_hub("your_name/lora_model", token = "...") # Online saving

Now, if you want to load the LoRA adapters we just saved for inference, change False to True in the snippet below:

if False:
   from unsloth import FastLanguageModel
   model, tokenizer = FastLanguageModel.from_pretrained(
       model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
       max_seq_length = max_seq_length,
       dtype = dtype,
       load_in_4bit = load_in_4bit,
   )
   FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!

inputs = tokenizer(
[
   alpaca_prompt.format(
       "What is a famous tall tower in Paris?", # instruction
       "", # input
       "", # output - leave this blank for generation!
   )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)

Fine-Tuning on Unstructured Logs

Yes, fine-tuning can be used for unstructured logs stored in blob files. The key is preparing the dataset correctly, which can take some time but is feasible (a hypothetical sketch of this step follows). It is also worth noting that moving the model to lower-bit precision typically reduces accuracy, although often by only about 1%.
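As a purely hypothetical illustration of that preparation step, you could wrap each raw log line in the same instruction/input/output structure used by the Alpaca dataset and then reuse the formatting function from the dataset section. The instruction wording and the sample log lines below are invented for the example.

from datasets import Dataset

# Invented sample log lines -- in practice you would read these from your blob files
raw_logs = [
    "2024-05-28 10:15:01 ERROR payment-service timeout after 30s",
    "2024-05-28 10:15:05 WARN auth-service token close to expiry",
]

records = [
    {
        "instruction": "Classify the severity of this log line and explain it briefly.",
        "input": line,
        "output": "",   # fill in with your own labels or explanations before training
    }
    for line in raw_logs
]

log_dataset = Dataset.from_list(records)
log_dataset = log_dataset.map(formatting_prompts_func, batched = True)  # same function as before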

Evaluating Model Performance

If a model’s performance deteriorates after fine-tuning, overfitting is often the culprit. To assess this, look at the evaluation loss. For guidance on evaluating loss, refer to our Wiki page on GitHub. To avoid running out of memory during evaluation, use float16 precision and reduce the batch size. The default batch size is usually around 8, but you might need to lower it further for evaluation.

Evaluation and Overfitting

Monitor the evaluation loss to check whether your model is overfitting: if the loss on the held-out set starts to increase, the model is likely overfitting and you should consider stopping the training run. One way to set this up is sketched below.
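This sketch assumes the model, tokenizer, formatted dataset, and max_seq_length from the sections above are still in scope; the 10% split size and the evaluation batch size of 2 are arbitrary choices for illustration.

from trl import SFTTrainer
from transformers import TrainingArguments

split = dataset.train_test_split(test_size = 0.1, seed = 42)   # hold out 10% for evaluation

eval_trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = split["train"],
    eval_dataset = split["test"],
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        per_device_eval_batch_size = 2,   # lower this further if evaluation runs out of memory
        fp16 = True,                      # 16-bit precision keeps evaluation memory down
        output_dir = "outputs",
    ),
)

metrics = eval_trainer.evaluate()                 # forward passes over the held-out split
print(f"eval loss: {metrics['eval_loss']:.4f}")   # a value that keeps rising suggests overfitting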

Fine-Tuning Tips and Techniques

Here are some tips and techniques worth knowing:

Memory Management

  • Use float16 precision during evaluation to prevent memory issues.
  • Fine-tuning often requires less memory than other operations like saving the model, especially with optimized workflows.

Library Support for Batch Inference

  • Libraries such as Unsloth allow for batch inference, making it easier to handle multiple prompts simultaneously, as in the sketch below.
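For instance, a batched call might look like the following sketch. The two prompts reuse earlier examples, and setting a pad token plus left padding are assumptions that are generally needed for batched generation with Llama 2.

# Batch inference: tokenize several prompts together and generate for all of them at once
prompts = [
    alpaca_prompt.format("Give three tips for staying healthy.", "", ""),
    alpaca_prompt.format("Continue the Fibonacci sequence.", "1, 1, 2, 3, 5, 8", ""),
]

tokenizer.pad_token = tokenizer.eos_token   # Llama 2 has no pad token by default
tokenizer.padding_side = "left"             # left padding works better for decoder-only generation

inputs = tokenizer(prompts, return_tensors = "pt", padding = True).to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True))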

Future Directions

  • As models like GPT-5 and beyond evolve, fine-tuning will remain relevant, especially for those who prefer not to upload data to services like OpenAI. Fine-tuning remains crucial for injecting specific knowledge and skills into models.

Advanced Topics

  • Automatic Optimization of Arbitrary Models: We are working on optimizing any model architecture using an automatic compiler, aiming to mimic PyTorch’s compilation capabilities.
  • Handling Large Language Models: More data and increased rank in fine-tuning can improve outcomes for large-scale language models. Additionally, adjusting learning rates and training epochs can enhance model performance.
  • Addressing Fear and Uncertainty: Concerns about the future of fine-tuning amid advancements in models like GPT-4 and beyond are common. However, fine-tuning remains vital, especially for open-source models, which are crucial for democratizing AI and resisting the monopolization of AI capabilities by big tech companies.

Conclusion

Fine-tuning and optimizing language models are crucial tasks in AI that involve meticulous dataset preparation, memory management, and evaluation techniques. Utilizing datasets like the Alpaca dataset and tools such as Unsloth, together with parameter-efficient techniques like LoRA, can significantly enhance model performance.

Staying updated with the latest advancements is essential for effectively leveraging AI tools. Fine-tuning Llama 2 allows for model customization, improving its applicability across various domains. Key techniques, including gradient accumulation, warm-up steps, and optimized learning rates, refine the training process for better efficiency and performance. Parameter-efficient methods like LoRA, combined with memory management strategies such as using float16 precision during evaluation, contribute to optimal resource utilization. Monitoring tools like NVIDIA SMI help catch memory issues, while tracking the evaluation loss guards against overfitting.

As AI evolves with models like GPT-5, fine-tuning remains vital for injecting specific knowledge into models, especially for open-source models that democratize AI.

Frequently Asked Questions

Q1: How do I know if my dataset is big enough?

A: More data typically enhances model performance. To improve results, consider combining your dataset with one from Hugging Face.

Q2: What resources are recommended for debugging and optimization?

A: NVIDIA SMI is a useful tool for monitoring GPU memory usage. If you’re using Colab, it also offers built-in tools to check VRAM usage.
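If you prefer checking from Python instead of the nvidia-smi command line, a small sketch like the one below reports peak GPU memory using standard PyTorch calls.

import torch

# Report peak reserved GPU memory against the card's total capacity
gpu = torch.cuda.get_device_properties(0)
peak_gb = torch.cuda.max_memory_reserved() / 1024**3
total_gb = gpu.total_memory / 1024**3
print(f"{gpu.name}: peak {peak_gb:.2f} GB reserved of {total_gb:.2f} GB total")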

Q3: Tell me about quantization and its impact on model saving.

A: Quantization helps reduce model size and memory usage but can be time-consuming. Always choose the appropriate quantization method and avoid enabling all options simultaneously.
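For example, recent Unsloth releases document GGUF and merged 16-bit export helpers along the lines of the sketch below; the method names and the quantization_method value come from the public Unsloth notebooks, so double-check them against the version you have installed, and flip only the one branch you actually need.

# Pick ONE export path rather than enabling everything at once (each can be slow and large)
if False:   # merged 16-bit weights (large, but simple to load anywhere)
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")

if False:   # 4-bit GGUF for llama.cpp-style runtimes (smaller, slightly less accurate)
    model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")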

Q4: When should I choose fine-tuning over Retrieval-Augmented Generation (RAG)?

A: Due to its higher accuracy, fine-tuning is often the preferred choice for production environments. RAG can be useful for general questions with large datasets, but it may not provide the same level of precision.

Q5: What’s the recommended number of epochs for fine-tuning, and how does it relate to dataset size?

A: Typically, 1 to 3 epochs are recommended. Some research suggests up to 100 epochs for small datasets, but combining your dataset with a Hugging Face dataset is generally more beneficial.

Q6: Are there any math resources you’d recommend for model training?

A: Yes, Andrew Ng’s CS229 lectures, MIT’s OpenCourseWare on linear algebra, and various YouTube channels focused on AI and machine learning are excellent resources to enhance your understanding of the math behind model training.

Q7: How can I optimize memory usage during model training?

A: Recent advancements have achieved a 30% reduction in memory usage with a slight increase in time. When saving models, opt for a single method, such as saving to 16-bit or uploading to Hugging Face, to manage disk space efficiently.

For more in-depth guidance on fine-tuning Llama 2 and other large language models, join our DataHour session on LLM Fine-Tuning for Beginners with Unsloth.
