Everyone wants faster, more reliable inference from large language models. vLLM is a cutting-edge open-source framework designed to simplify the deployment and management of large language models while delivering high throughput. vLLM makes your job easier by offering efficient and scalable tools for working with LLMs: it covers everything from model loading and inference to fine-tuning and serving, all with a focus on performance and simplicity. In this article, we will implement vLLM using the Gemma-7b-it model from Hugging Face. Let's dive in.
vLLM, short for "Virtual large language model," is an open-source framework designed to streamline and optimize the use of large language models (LLMs) across a variety of applications. Its focus on performance and scalability makes it an essential tool for developers who need to deploy and manage language models effectively.
The buzz around vLLM stems from its ability to handle the complexities of large-scale language models. Traditional serving methods often struggle with efficient memory management and fast inference, two critical challenges when working with massive datasets and complex models. vLLM addresses these issues head-on, integrating seamlessly with existing AI workflows and significantly reducing the technical burden on developers.
To understand how, let's look at two key concepts: the KV cache and PagedAttention.
KV Cache (Key-Value Cache) is a technique used in transformer models, specifically in the context of Attention mechanisms, to store and reuse the intermediate results of key and value computations during the inference phase. This caching significantly reduces the computational overhead by avoiding the need to recompute these values for each new token in a sequence, thus speeding up the processing time.
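To make this concrete, here is a minimal, illustrative sketch of a single-head decode step that appends each new token's key and value to a cache instead of recomputing them for the whole sequence. The dimensions, random projection weights, and cache layout are assumptions for the sketch only, not vLLM internals.

import torch

# Illustrative single-head attention decode step with a KV cache.
# d_model and the random projection weights are assumptions for this sketch.
d_model = 64
W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
W_v = torch.randn(d_model, d_model)

k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x_t):
    # x_t: embedding of the newest token, shape (d_model,)
    # Only the new token's key and value are computed; past entries are reused.
    k_cache.append(x_t @ W_k)
    v_cache.append(x_t @ W_v)
    K = torch.stack(k_cache)   # (seq_len, d_model)
    V = torch.stack(v_cache)   # (seq_len, d_model)
    q = x_t @ W_q
    attn = torch.softmax(q @ K.T / d_model ** 0.5, dim=-1)
    return attn @ V            # attention output for the new token

for _ in range(5):             # each step reuses every previously cached K/V pair
    decode_step(torch.randn(d_model))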
Despite this efficiency, the KV cache can grow very large; in the LLaMA-13B model, for instance, a single sequence can take up to 1.7 GB. Its size also depends on the sequence length, which is variable and unpredictable, leading to inefficient memory usage.
Traditional methods often waste 60%–80% of memory due to fragmentation and over-reservation. To mitigate this, vLLM introduces PagedAttention.
PagedAttention addresses the challenge of managing memory efficiently when handling very long input sequences, which can be a significant issue in transformer models. While the KV cache avoids recomputation by reusing previously computed key-value pairs, PagedAttention goes a step further: it breaks the cache into smaller, manageable pages and performs the attention calculations within those pages.
Unlike traditional attention algorithms, PagedAttention allows contiguous keys and values to be stored in non-contiguous memory space. Specifically, it divides the KV cache of each sequence into distinct, fixed-size KV blocks.
This, in turn, enables flexible memory management: KV blocks are allocated on demand and do not need to be adjacent in physical memory, which curbs the fragmentation and over-reservation described above (see the sketch below).
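Here is a toy sketch of that block-table idea; the block size, pool size, and allocation policy are simplified assumptions for illustration, not vLLM's actual implementation.

# Toy block-based KV cache management in the spirit of PagedAttention.
# BLOCK_SIZE and the free-list policy are illustrative assumptions.
BLOCK_SIZE = 16                    # tokens whose K/V fit in one block
free_blocks = list(range(100))     # pool of physical block ids
block_tables = {}                  # sequence id -> list of physical block ids

def append_token(seq_id, num_tokens_so_far):
    # Allocate a new physical block only when the current one is full.
    table = block_tables.setdefault(seq_id, [])
    if num_tokens_so_far % BLOCK_SIZE == 0:
        table.append(free_blocks.pop())   # any free block will do; no contiguity required
    return table[-1]                      # physical block that stores this token's K/V

def free_sequence(seq_id):
    # When a sequence finishes, its blocks return to the pool for reuse.
    free_blocks.extend(block_tables.pop(seq_id, []))

# Two sequences grow independently; their blocks need not be contiguous in memory.
for t in range(40):
    append_token("seq-A", t)
for t in range(10):
    append_token("seq-B", t)
free_sequence("seq-A")

Because blocks are allocated only when needed and returned to the pool as soon as a sequence completes, memory waste from fragmentation and over-reservation stays small, which is exactly the problem PagedAttention targets.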
Let's implement the vLLM framework using the Gemma-7b-it model from the Hugging Face Hub.
To get started, let's install the library:
!pip install vllm
First, we import the necessary libraries and set our Hugging Face API token; the token is only required for gated models, such as Gemma, that need you to request access. Then we initialize the google/gemma-7b-it model with a maximum context length of 2048 tokens and call torch.cuda.empty_cache() to free unused GPU memory.
import torch, os
from vllm import LLM

# Hugging Face token is needed for gated models such as Gemma
os.environ['HF_TOKEN'] = "<replace-with-your-hf-token>"

# Load the model with a 2048-token context window
model_name = "google/gemma-7b-it"
llm = LLM(model=model_name, max_model_len=2048)

# Free any cached GPU memory that is no longer needed
torch.cuda.empty_cache()
SamplingParams plays a role similar to the model keyword arguments in a Transformers pipeline. Tuning these sampling parameters is essential for achieving the desired output quality and behavior.
from vllm import SamplingParams

sampling_params = SamplingParams(
    temperature=0.1,
    top_p=0.95,
    repetition_penalty=1.2,
    max_tokens=1000,
)
Each open-source model has its own prompt template with specific special tokens. Gemma, for instance, uses <start_of_turn> and <end_of_turn> as markers for the beginning and end of a turn in the chat template, for both the user and model roles.
def get_prompt(user_question):
    template = f"""
<start_of_turn>user
{user_question}
<end_of_turn>
<start_of_turn>model
"""
    return template
prompt1 = get_prompt("best time to eat your 3 meals")
prompt2 = get_prompt("generate a python list with 5 football players")
prompts = [prompt1,prompt2]
Now that everything is set up, let's have the LLM generate responses to the user prompts.
from IPython.display import Markdown
outputs = llm.generate(prompts, sampling_params)
display(Markdown(outputs[0].outputs[0].text))
display(Markdown(outputs[1].outputs[0].text))
Once generation completes, vLLM prints a processed-prompts log that reports speed in tokens per second for both input and output. This built-in benchmarking is handy for comparing vLLM's inference speed against other approaches. As the log below shows, the two prompts were processed at about 6.69 seconds per prompt (roughly 13 seconds in total), with an output speed of 20.70 tokens per second.
Processed prompts: 100%|██████████| 2/2 [00:13<00:00, 6.69s/it, est. speed input: 3.66 toks/s, output: 20.70 toks/s]
Output: Prompt-1
Output: Prompt-2
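If you want to verify these numbers yourself instead of relying on the progress bar, a rough timing sketch (reusing the llm, prompts, and sampling_params objects defined above) could look like this:

import time

start = time.time()
outputs = llm.generate(prompts, sampling_params)
elapsed = time.time() - start

# Count the generated tokens across all prompts and derive output tokens per second.
generated = sum(len(out.outputs[0].token_ids) for out in outputs)
print(f"Generated {generated} tokens in {elapsed:.2f}s "
      f"({generated / elapsed:.2f} output toks/s)")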
We successfully ran the LLM with reduced latency and efficient memory utilization. vLLM is a game-changing open-source framework in AI, providing not only fast and cost-effective LLM serving but also seamless deployment of LLMs on various endpoints. In this article, we walked through a hands-on guide to vLLM using the Gemma-7b-it model.
For more details, refer to the official vLLM documentation.
Q1. Does vLLM work with models from the Hugging Face Hub?
A. The Hugging Face Hub is the platform where most open-source large language models are hosted. vLLM is compatible with it and can run inference on the open-source LLMs hosted there (provided the architecture is supported). Beyond inference, vLLM also helps with serving and deploying these models on endpoints.
Q2. How is vLLM different from Groq?
A. Groq is a service built on high-performance hardware, specifically its Language Processing Units (LPUs), designed for fast AI inference. LPUs offer ultra-low latency and high throughput, optimized for handling the sequential workloads of LLMs. vLLM, in contrast, is an open-source software framework aimed at simplifying the deployment and memory management of LLMs for faster inference and serving.
Q3. Can I deploy LLMs using vLLM?
A. Yes, you can deploy LLMs using vLLM, which offers efficient inference through advanced techniques like PagedAttention and KV caching. Additionally, vLLM integrates smoothly with existing AI workflows, making it easy to configure and deploy models from popular hubs like Hugging Face.
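As a minimal serving sketch, vLLM ships an OpenAI-compatible HTTP server that can be launched from the command line and queried with the standard OpenAI client. The port (8000, vLLM's default) and the placeholder API key below are assumptions; adjust them to your setup.

# Launch the server in a terminal (or prefix with "!" in a notebook cell):
#   python -m vllm.entrypoints.openai.api_server --model google/gemma-7b-it --max-model-len 2048
# Then query it with the OpenAI client (pip install openai).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.completions.create(
    model="google/gemma-7b-it",
    prompt=get_prompt("generate a python list with 5 football players"),
    max_tokens=200,
    temperature=0.1,
)
print(response.choices[0].text)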