The ever-growing field of large language models (LLMs) unlocks incredible potential for various applications. However, fine-tuning these powerful models for specific tasks can be a complex and resource-intensive endeavor. TorchTune, a new PyTorch library, tackles this challenge head-on with an intuitive and extensible solution. PyTorch has released the alpha of torchtune, a PyTorch-native library for easily fine-tuning large language models. In keeping with PyTorch design principles, it provides composable and modular building blocks along with easy-to-extend training recipes for fine-tuning popular LLMs using techniques such as LoRA and QLoRA on a range of consumer-grade and professional GPUs.
In the past year, there has been a surge in interest in open large language models (LLMs). Fine-tuning these cutting-edge models for specific applications has become a crucial technique. However, this adaptation process can be complex, requiring extensive customization across various stages, including data and model selection, quantization, evaluation, and inference. Additionally, the sheer size of these models presents a significant challenge when fine-tuning them on resource-constrained consumer-grade GPUs.
Current solutions often hinder customization and optimization by obfuscating critical components behind layers of abstraction. This lack of transparency makes it difficult to understand how different elements interact and which ones need modification to achieve the desired functionality. TorchTune addresses this challenge by giving developers fine-grained control and visibility over the entire fine-tuning process, enabling them to tailor LLMs to their specific requirements and constraints.
TorchTune supports the following models and fine-tuning workflows.
Supported models:
| Model | Sizes |
|---|---|
| Llama2 | 7B, 13B |
| Mistral | 7B |
| Gemma | 2B |
Moreover, the team plans to add new models in the coming weeks, including 70B variants and mixture-of-experts (MoE) architectures.
TorchTune provides the following fine-tuning recipes:

| Training | Fine-tuning Method |
|---|---|
| Distributed Training [1 to 8 GPUs] | Full [code, example], LoRA [code, example] |
| Single Device / Low Memory [1 GPU] | Full [code, example], LoRA + QLoRA [code, example] |
| Single Device [1 GPU] | DPO [code, example] |
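Each recipe ships with reference configs. To see which recipes and configs are available in your installed version, you can list them from the CLI:

tune ls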
Memory efficiency is important to us. All of our recipes are tested on a variety of setups including commodity GPUs with 24GB of VRAM as well as beefier options found in data centers.
Single-GPU recipes expose a number of memory optimizations that aren’t available in the distributed versions. These include support for low-precision optimizers from bitsandbytes and fusing the optimizer step with the backward pass to reduce the memory footprint from the gradients (see example config). For memory-constrained setups, we recommend using the single-device configs as a starting point. For example, our default QLoRA config has a peak memory usage of ~9.3GB. Similarly, LoRA on a single device with batch_size=2 has a peak memory usage of ~17.1GB. Both of these are with dtype=bf16 and AdamW as the optimizer.
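As a hedged illustration of these knobs, the override below swaps the single-device LoRA optimizer for a paged 8-bit AdamW from bitsandbytes; the optimizer._component_ key follows the structure of recent torchtune configs and may differ in your version, so treat this as a sketch rather than a canonical command:

tune run lora_finetune_single_device \
--config llama2/7B_lora_single_device \
batch_size=2 \
optimizer._component_=bitsandbytes.optim.PagedAdamW8bit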
This table captures the minimum memory requirements for our different recipes using the associated configs.
TorchTune adheres to the PyTorch philosophy of promoting ease of use by offering native integrations with several prominent tools in the LLM ecosystem, including the Hugging Face Hub and Hugging Face Datasets for models and data, EleutherAI’s LM Evaluation Harness for evaluation, torchao for quantization, Weights & Biases for logging, and ExecuTorch for on-device inference.
To get started with fine-tuning your first LLM with TorchTune, see our tutorial on fine-tuning Llama2 7B. Our end-to-end workflow tutorial will show you how to evaluate, quantize and run inference with this model. The rest of this section will provide a quick overview of these steps with Llama2.
Follow the instructions on the official meta-llama repository to ensure you have access to the Llama2 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.
tune download meta-llama/Llama-2-7b-hf \
--output-dir /tmp/Llama-2-7b-hf \
--hf-token <HF_TOKEN>
Set the environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens.
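For example, you could export the token once in your shell instead of passing --hf-token on every command (the value shown is a placeholder):

export HF_TOKEN=<HF_TOKEN>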
Llama2 7B + LoRA on a single GPU:
tune run lora_finetune_single_device --config llama2/7B_lora_single_device
For distributed training, the tune CLI integrates with torchrun. Llama2 7B full fine-tune on two GPUs:
tune run --nproc_per_node 2 full_finetune_distributed --config llama2/7B_full
Make sure to place any torchrun args before the recipe specification. Any other CLI args after this point will override the config and will not impact distributed training.
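To make the ordering concrete, here is a sketch combining both: --nproc_per_node is consumed by torchrun because it comes before the recipe name, while batch_size=4 comes after the config and only overrides that config value (config overrides are covered in the next section):

tune run --nproc_per_node 2 full_finetune_distributed \
--config llama2/7B_full \
batch_size=4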
There are two ways in which you can modify configs:
You can easily override config properties from the command line:
tune run lora_finetune_single_device \
--config llama2/7B_lora_single_device \
batch_size=8 \
enable_activation_checkpointing=True \
max_steps_per_epoch=128
You can also copy the config to your local directory and modify the contents directly:
tune cp llama2/7B_full ./my_custom_config.yaml
Copied to ./my_custom_config.yaml
Then, you can run your custom recipe by directing the tune run command to your local files:
tune run full_finetune_distributed --config ./my_custom_config.yaml
Check out tune --help for all possible CLI commands and options. For more information on using and updating configs, take a look at our config deep-dive.
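For the evaluation, quantization, and inference steps mentioned in the end-to-end workflow above, the tune CLI exposes dedicated recipes as well. The recipe and config names below match recent torchtune releases and may differ in your version, so treat them as a sketch and confirm with tune ls first:

tune run eleuther_eval --config eleuther_evaluation
tune run quantize --config quantization
tune run generate --config generation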
TorchTune empowers developers to harness the power of large language models (LLMs) through a user-friendly and extensible PyTorch library. Its focus on composable building blocks, memory-efficient recipes, and seamless integration with the LLM ecosystem simplifies the fine-tuning process for a wide range of users. Whether you’re a seasoned researcher or just starting out, TorchTune provides the tools and flexibility to tailor LLMs to your specific needs and constraints.