Training and fine-tuning language models can be complex, especially when aiming for efficiency and effectiveness. One effective approach combines parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) with instruction fine-tuning. This article outlines the key steps and considerations for fine-tuning the Llama 2 large language model using this methodology, and it explores using the Unsloth AI framework to make the fine-tuning process even faster and more efficient.
We will go step by step to understand the topic better!
Unsloth AI is a pioneering platform designed to streamline fine-tuning and training language models such as Llama 2, making the process faster and more efficient. This article is based on a hands-on session by Daniel Han, the co-founder of Unsloth AI. Daniel is passionate about pushing innovation to its limits, and with extensive experience at Nvidia, he has significantly impacted the AI and machine learning industry. Let's start by setting up the Alpaca dataset so we can fine-tune Llama 2 with Unsloth.
The Alpaca dataset is popular for training language models due to its simplicity and effectiveness. It comprises 52,000 rows, each containing three columns: instruction, input, and output. The dataset is available on Hugging Face and comes pre-cleaned, saving time and effort in data preparation.
In each row, the instruction describes the task, the input gives the context or question, and the output is the expected answer. For instance, an instruction might be, "Give three tips for staying healthy," with the output being three relevant health tips. Next, we will format the dataset to ensure it is compatible with our training code.
We must format the dataset so that it matches what our training code expects. The formatting function adds an extra column, text, which combines the instruction, input, and output into a single prompt that is fed to the language model during training. The <EOS> token is crucial: it marks the end of a sequence and prevents the model from generating never-ending text. A minimal sketch of this formatting step is shown below.
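The sketch uses the standard Alpaca prompt template and assumes a tokenizer has already been created (for example, via Unsloth's FastLanguageModel.from_pretrained, shown in the next section); the dataset name yahma/alpaca-cleaned is an assumption for illustration, so substitute whichever copy of the Alpaca dataset you use.

from datasets import load_dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended so generation eventually stops

def formatting_prompts_func(examples):
    texts = []
    for instruction, inp, out in zip(examples["instruction"], examples["input"], examples["output"]):
        # Combine the three columns into one prompt and close it with the EOS token.
        texts.append(alpaca_prompt.format(instruction, inp, out) + EOS_TOKEN)
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)

With the dataset formatted and the <EOS> token appended to every example, let's train the model for better performance.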
Once the dataset is properly formatted, we proceed to the training phase. We use the Unsloth framework, which enhances the efficiency of the training process. The first step is to load the base model and attach LoRA adapters, as sketched below.
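This sketch follows Unsloth's publicly documented API; the checkpoint name and LoRA hyperparameters (rank 16, alpha 16) are illustrative assumptions rather than values prescribed in the session.

from unsloth import FastLanguageModel

max_seq_length = 2048
dtype = None          # auto-detects; bfloat16 on newer GPUs, float16 otherwise
load_in_4bit = True   # 4-bit quantization so the base model fits in modest VRAM

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-2-7b-bnb-4bit",  # assumed checkpoint for illustration
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

# Attach LoRA adapters: only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
)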
For the optimizer, AdamW 8-bit is recommended to reduce memory usage. The training script takes the formatted dataset and the specified hyperparameters to fine-tune Llama 2, and it handles the EOS token so that sequences terminate properly during both training and inference. A sketch of the trainer setup is shown below.
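This is a minimal sketch assuming the model, tokenizer, and formatted dataset from above. It uses the older trl SFTTrainer signature that Unsloth's notebooks rely on (newer trl releases move some of these arguments into SFTConfig), and the step count and learning rate are placeholders, not recommendations from the session.

import torch
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",   # the column produced by the formatting function
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        optim = "adamw_8bit",      # 8-bit AdamW keeps optimizer state small
        logging_steps = 1,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()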
After training, we test the model's ability to generate appropriate responses to new prompts. For example, if we ask the model to continue the Fibonacci sequence "1, 1, 2, 3, 5, 8," it should generate "13, 21, …".
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the Fibonacci sequence.", # instruction
            "1, 1, 2, 3, 5, 8", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
You can also use a TextStreamer for continuous inference – so you can see the generation token by token instead of waiting the whole time!
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the Fibonacci sequence.", # instruction
            "1, 1, 2, 3, 5, 8", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the Fibonacci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
13, 21, 34, 55, 89, 144<eos>
In addition to traditional fine-tuning techniques, incorporating LoRA (Low-Rank Adaptation) can further enhance the efficiency and effectiveness of language model training. Rather than updating every weight, LoRA freezes the pretrained weights and injects small trainable low-rank matrices into selected layers (typically the attention projections), drastically reducing the number of trainable parameters and the memory needed for fine-tuning.
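As a rough back-of-the-envelope illustration of why this saves memory, compare the size of a full weight update with a rank-16 LoRA update for a hypothetical 4096 x 4096 projection (both numbers are assumptions chosen only for the arithmetic):

d, r = 4096, 16                 # hypothetical hidden size and LoRA rank
full_update = d * d             # parameters in a full d x d weight update
lora_update = 2 * d * r         # parameters in the low-rank factors B (d x r) and A (r x d)
print(full_update, lora_update, f"{100 * lora_update / full_update:.2f}%")
# 16777216 131072 0.78% -> the LoRA update trains under 1% of the parameters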
Post-training, the model can be saved locally or uploaded to Hugging Face for easy sharing and deployment. The saved model includes the LoRA adapter weights and their configuration file, along with the tokenizer files. These files are essential for reloading the model and continuing inference or further training.
To save the final model as LoRA adapters, use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.
model.save_pretrained("lora_model") # Local saving
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...") # Online saving
# tokenizer.push_to_hub("your_name/lora_model", token = "...") # Online saving
Now, if you want to load the LoRA adapters we just saved for inference, change the if False below to if True:
if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference
# alpaca_prompt = You MUST copy from above!
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "What is a famous tall tower in Paris?", # instruction
            "", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
A common question is whether fine-tuning can be used for unstructured logs stored in blob files. It can: the key is preparing the dataset correctly, which can take some time but is feasible. It is also worth noting that moving to lower-bit quantization typically reduces accuracy, although often by only about 1%.
Overfitting is often the culprit if a model's performance deteriorates after fine-tuning. To assess this, look at the evaluation loss; for guidance on evaluating loss, refer to our Wiki page on GitHub. To avoid running out of memory during evaluation, use float16 precision and reduce the batch size. The default batch size is usually around 8, but you might need to lower it further for evaluation.
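A rough sketch of such a memory-friendly evaluation setup, assuming the trainer from earlier plus a held-out eval_dataset you have prepared (argument names follow the transformers TrainingArguments API; the batch size and step counts are placeholders):

from transformers import TrainingArguments

eval_args = TrainingArguments(
    output_dir = "outputs",
    per_device_train_batch_size = 2,
    per_device_eval_batch_size = 2,   # lower this further if evaluation runs out of memory
    fp16_full_eval = True,            # evaluate in float16 to cut memory usage
    evaluation_strategy = "steps",    # renamed eval_strategy in newer transformers releases
    eval_steps = 20,
    save_strategy = "steps",
    save_steps = 20,
    load_best_model_at_end = True,    # needed for early stopping (next sketch)
    metric_for_best_model = "eval_loss",
    greater_is_better = False,
    learning_rate = 2e-4,
    logging_steps = 1,
)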
Monitor the evaluation loss to check whether your model is overfitting. If the loss starts increasing, overfitting is likely, and you should consider stopping the training run.
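One way to automate this, assuming the trainer and evaluation arguments sketched above with an eval_dataset attached, is transformers' EarlyStoppingCallback, which halts training once the evaluation metric stops improving:

from transformers import EarlyStoppingCallback

# Stop if eval_loss fails to improve for 3 consecutive evaluations.
trainer.add_callback(EarlyStoppingCallback(early_stopping_patience = 3))
trainer_stats = trainer.train()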
To wrap up, here are the key tips and techniques to keep in mind:
Fine-tuning and optimizing language models are crucial tasks in AI that involve meticulous dataset preparation, memory management, and evaluation. Utilizing datasets like the Alpaca dataset and tools such as Unsloth and LoRA can significantly enhance model performance.
Staying updated with the latest advancements is essential for effectively leveraging AI tools. Fine-tuning Llama 2 allows for model customization, improving its applicability across various domains. Key techniques, including gradient accumulation, warm-up steps, and well-chosen learning rates, refine the training process for better efficiency and performance. Parameter-efficient methods like LoRA, together with memory management strategies such as using float16 precision during evaluation, contribute to optimal resource utilization. Monitoring tools like NVIDIA SMI help prevent issues like overfitting and memory overflow.
As AI evolves with models like GPT-5, fine-tuning remains vital for injecting specific knowledge into models, especially for open-source models that democratize AI.
A: More data typically enhances model performance. To improve results, consider combining your dataset with one from Hugging Face.
A: NVIDIA SMI is a useful tool for monitoring GPU memory usage. If you’re using Colab, it also offers built-in tools to check VRAM usage.
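If you prefer checking from inside the notebook, here is a quick sketch using PyTorch's built-in memory counters as an alternative to running nvidia-smi in a terminal:

import torch

# Quick VRAM check from Python.
gpu = torch.cuda.get_device_properties(0)
reserved_gb = round(torch.cuda.max_memory_reserved() / 1024 ** 3, 3)
total_gb = round(gpu.total_memory / 1024 ** 3, 3)
print(f"{gpu.name}: {reserved_gb} GB reserved out of {total_gb} GB total")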
A: Quantization helps reduce model size and memory usage but can be time-consuming. Always choose the appropriate quantization method and avoid enabling all options simultaneously.
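Unsloth's notebooks expose helpers for quantized GGUF exports; the sketch below assumes a trained model and tokenizer and that your Unsloth version provides save_pretrained_gguf with a quantization_method argument, so confirm the names against your installed release:

# Flip to True when you actually want to export; pick one quantization method
# rather than enabling every option at once.
if False:
    model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")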
A: Due to its higher accuracy, fine-tuning is often the preferred choice for production environments. RAG can be useful for general questions with large datasets, but it may not provide the same level of precision.
A: Typically, 1 to 3 epochs are recommended. Some research suggests up to 100 epochs for small datasets, but combining your dataset with a Hugging Face dataset is generally more beneficial.
A: Yes, Andrew Ng’s CS229 lectures, MIT’s OpenCourseWare on linear algebra, and various YouTube channels focused on AI and machine learning are excellent resources to enhance your understanding of the math behind model training.
A: Recent advancements have achieved a 30% reduction in memory usage with a slight increase in time. When saving models, opt for a single method, such as saving to 16-bit or uploading to Hugging Face, to manage disk space efficiently.
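For example, here is a sketch of committing to a single save path using Unsloth's merge helpers; the method names follow Unsloth's public notebooks, so verify them against your installed version:

if False:  # local merged 16-bit save
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False:  # or push the merged 16-bit model to Hugging Face instead
    model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_16bit", token = "...")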
For more in-depth guidance on fine-tuning Llama 2 and other large language models, join our DataHour session on LLM Fine-Tuning for Beginners with Unsloth.