Decoding vLLM: Strategies for Supercharging Your Language Model Inferences

Ajay Last Updated : 10 Jul, 2024
9 min read

Introduction

Large Language Models (LLMs) have revolutionized how we interact with computers. However, deploying these models in production can be challenging due to their high memory consumption and computational cost. vLLM, an open-source library for fast LLM inference and serving, addresses these challenges with a novel attention algorithm called PagedAttention. This algorithm manages attention keys and values efficiently, allowing vLLM to achieve higher throughput and lower memory usage than traditional LLM serving methods.

Learning Objectives

In this article, you will:

  • Understand the challenges of LLM inference and the limitations of traditional serving approaches.
  • Learn what vLLM is and how it works.
  • Explore the benefits of using vLLM for LLM inference.
  • Discover how vLLM's PagedAttention algorithm overcomes these challenges.
  • Integrate vLLM into your existing workflow.

This article was published as a part of the Data Science Blogathon.

Challenges of LLM Inference

LLMs have proven their worth in tasks like text generation, summarization, language translation, and many more. However, deploying them with traditional inference approaches suffers from several limitations:

  • High Memory Footprint: LLMs need large amounts of memory to store their parameters and intermediate activations (mainly the key and value tensors from the attention layers), making them challenging to deploy in resource-constrained environments. A rough estimate of how quickly this memory adds up is sketched after this list.
  • Limited Throughput: Traditional implementations struggle to handle high volumes of concurrent inference requests, which hurts scalability and responsiveness and leaves GPUs underutilized when the model runs on a production server.
  • Computational Cost: The heavy matrix computations involved in LLM inference are expensive, especially for large models. Combined with the high memory usage and low throughput, this drives costs up even further.
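
To get a feel for the memory problem, here is a rough, back-of-the-envelope estimate of the KV cache size for a single request. The layer count, hidden size, and sequence length below are illustrative values for a LLaMA-13B-class model in fp16, not measurements from any particular deployment; plug in your own model's configuration to adapt it.

# Back-of-the-envelope KV-cache estimate (illustrative numbers only)
num_layers = 40        # transformer layers in a LLaMA-13B-class model
hidden_size = 5120     # hidden dimension (num_heads * head_dim)
bytes_per_elem = 2     # fp16
seq_len = 2048         # tokens kept in the cache for one request

# Each token stores one key vector and one value vector per layer
kv_per_token = 2 * num_layers * hidden_size * bytes_per_elem
kv_per_request = kv_per_token * seq_len

print(f"KV cache per token:   {kv_per_token / 1024:.0f} KiB")       # ~800 KiB
print(f"KV cache per request: {kv_per_request / 1024**3:.2f} GiB")  # ~1.56 GiB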


What is vLLM?

vLLM is a high-throughput, memory-efficient LLM serving engine. It is built around a novel attention algorithm called PagedAttention, which manages attention keys and values by dividing them into smaller, more manageable blocks. This approach reduces vLLM's memory footprint and lets it achieve greater throughput than traditional LLM serving methods. In its developers' benchmarks, vLLM delivered up to 24x higher throughput than conventional HuggingFace Transformers serving and 2-5x higher than HuggingFace Text Generation Inference (TGI). It further refines the inference process with continuous batching and optimized CUDA kernels.
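
To make the paging idea concrete, here is a heavily simplified toy sketch: instead of reserving one contiguous, maximum-length buffer per request, the KV cache lives in fixed-size physical blocks, and each sequence keeps a small block table mapping its token positions to blocks. This is only an illustration of the concept, not vLLM's actual implementation, which does this on the GPU with custom CUDA kernels.

import numpy as np

BLOCK_SIZE = 16   # tokens per KV block (vLLM uses a similar fixed block size)
NUM_BLOCKS = 8    # physical blocks available in this toy pool
HEAD_DIM = 4      # tiny head dimension, just for illustration

# Physical pool of KV blocks: [num_blocks, block_size, head_dim]
kv_pool = np.zeros((NUM_BLOCKS, BLOCK_SIZE, HEAD_DIM), dtype=np.float32)
free_blocks = list(range(NUM_BLOCKS))

# Per-sequence block table: logical block index -> physical block id
block_tables: dict[int, list[int]] = {}

def append_token_kv(seq_id: int, token_pos: int, kv_vector: np.ndarray) -> None:
    """Store one token's KV vector, grabbing a new physical block only when
    the sequence's current block fills up."""
    table = block_tables.setdefault(seq_id, [])
    logical_block, offset = divmod(token_pos, BLOCK_SIZE)
    if logical_block == len(table):        # first token or current block full
        table.append(free_blocks.pop())    # allocate a block on demand
    kv_pool[table[logical_block], offset] = kv_vector

# Two requests grow independently; neither reserves a max-length buffer upfront
for pos in range(20):
    append_token_kv(seq_id=0, token_pos=pos, kv_vector=np.full(HEAD_DIM, pos))
for pos in range(5):
    append_token_kv(seq_id=1, token_pos=pos, kv_vector=np.ones(HEAD_DIM))

print(block_tables)  # e.g. {0: [7, 6], 1: [5]}; blocks handed out as needed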

Benefits of vLLM

vLLM offers several benefits over traditional LLM serving methods:

  • Higher Throughput: vLLM can achieve up to 24x higher throughput than HuggingFace Transformers, the most popular LLM library. This allows you to serve more users with fewer resources.
  • Lower Memory Usage: vLLM needs far less memory than traditional LLM serving methods, making it possible to deploy on platforms with modest hardware.
  • OpenAI-compatible API: vLLM provides an OpenAI-compatible API, making it easy to integrate with existing LLM applications.
  • Seamless Integration with Hugging Face Models: vLLM works with a wide range of models from the Hugging Face Hub, making it a go-to tool for LLM serving.

Getting Started with vLLM

Getting started with vLLM is simple. The first step is installing the vllm library, which can be done as shown below.

pip install vllm

The above command installs the vllm library. In the next step, we choose our model and set up the sampling parameters for inference.

from vllm import LLM, SamplingParams

# choosing the large language model
llm = LLM(model="gpt2-xl")

# setting the parameters
sampling_params = SamplingParams(temperature=0.8, top_p=0.90, max_tokens=50)


Here, we first import two classes from the vllm library:

  • LLM: This class downloads and runs the models. Currently, vllm supports many model families, including GPT, Llama, Vicuna, Bloom, and many more.
  • SamplingParams: This class defines the generation parameters, such as temperature (how creative the model should be), top_p (keep only the most probable tokens whose combined probability is 0.9), max_tokens (the maximum number of tokens the model can generate), and other sampling options. A short sketch of a few common configurations follows this list.
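
As a quick illustration of how these parameters shape the output, here are two example configurations. This is only a sketch; the parameter names are the ones commonly available in recent vLLM releases, so double-check them against the version you have installed.

from vllm import SamplingParams

# Near-greedy decoding: temperature 0 makes the model always pick the most
# likely next token, useful for factual or deterministic outputs
greedy_params = SamplingParams(temperature=0.0, max_tokens=50)

# More diverse sampling: higher temperature plus nucleus (top_p) sampling,
# with a stop string so generation ends at the first blank line
creative_params = SamplingParams(
    temperature=1.0,
    top_p=0.95,
    max_tokens=100,
    stop=["\n\n"],
)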

Next, we instantiate the LLM object. Here, we choose the gpt2-xl model, a 1.5-billion-parameter model. Then, we set the sampling configuration for the model, such as the temperature, max_tokens, and top_p.

Running this code downloads the pretrained gpt2-xl model from the HuggingFace Hub and loads it onto the GPU. Now, we will create a prompt and run inference with the model. At the same time, let's measure the time taken to generate the response.

%%time

# defining our prompt
prompt = "Machine Learning is"

# generating the answer
answer = llm.generate(prompt, sampling_params)

# getting the generated text out from the answer variable
answer[0].outputs[0].text


Here, we give the prompt "Machine Learning is" and pass it along with the SamplingParams to the .generate() method of the LLM object. This generates the answer. The answer is a list of RequestOutput objects; each element contains the generated text along with other information. Since we are only interested in the generated text, we access it through outputs[0].text. The response generated can be seen below.

[Output: response generated by gpt2-xl]
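
If you want more than just the generated text, each element of the returned list carries a few more fields worth inspecting. The attribute names below match recent vLLM releases; if your version differs, a quick dir(answer[0]) will show what is available.

# Inspecting the object returned by llm.generate() for our single prompt
result = answer[0]

print("Prompt:          ", result.prompt)                     # the input prompt
print("Generated text:  ", result.outputs[0].text)            # the completion
print("Finish reason:   ", result.outputs[0].finish_reason)   # e.g. "length"
print("Tokens generated:", len(result.outputs[0].token_ids))  # completion length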

We can see that the time taken to generate is in the order of milliseconds, which is quite fast. We also get a reasonable output (gpt2-xl is only an okay model and doesn't generate great responses; we use it for demo purposes because it fits in the GPU provided by free Colab). Now, let's try giving the model a list of prompts and check the time taken to generate the responses.

%%time

# defining our prompts
prompt = [
    "What is Quantum Computing?",
    "How are electrons and protons different?",
    "What is Machine Learning?",
]

# generating the answers
answers = llm.generate(prompt, sampling_params)

# printing the generated text from each answer
for i in range(3):
    print("\nPrompt:", prompt[i], "\nGeneration:", answers[i].outputs[0].text)
    print()

The above code is self-explanatory. We create a list of three prompts and pass it to the .generate() method of the LLM class, which returns a list of answers. We then traverse the list and print the text from each generated response, giving the following output:

[Output: responses generated for the three prompts]

We can review the responses generated by the gpt2-xl model above. The responses are not that good and get cut off mid-sentence, which is expected because gpt2-xl is not the best-performing model out there. More interesting is the generation time: the model created answers for all three prompts combined in about one second. This is excellent inference speed and can be improved further with more computing resources.

Serving LLMs Through vLLM

This section looks at how to serve LLMs through the vLLM library. The process is straightforward. With vLLM, it is possible to create a server that follows a protocol very similar to the OpenAI API. We will host the server in a way that makes it reachable through the internet. Let's dive in by running the following command.

curl ipv4.icanhazip.com

In the above, we run the curl command against the website ipv4.icanhazip.com, which returns the IPv4 public address of our machine. This public address will be used later to make the LLM available online.

Code Implementation

Next, we run the following command to serve the large language model.

python -m vllm.entrypoints.openai.api_server \
    --host 127.0.0.1 \
    --port 8888 \
    --model bigscience/bloomz-560m \
    & \
    npx localtunnel --port 8888

In the above, we run the OpenAI-compatible api_server from the vllm library, providing the following options:

  • Host: The host address for our API. Here, we use the localhost, 127.0.0.1, so the served LLM can initially be reached only from the machine it runs on. Later, we will expose it to the outside internet.
  • Port: The port where we want our application to run and where the large language model will be served. We can assign any free port; here, we choose port 8888.
  • Model: The model we want to serve with vLLM. As discussed, vLLM supports many model families, such as GPT, Llama, and Mistral (see the supported-models list in the vLLM documentation). For this example, we go with bloomz-560m, a 560-million-parameter model that fits comfortably in the GPU. A quick local check that the server is up is sketched after this list.
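
Before exposing the server, it is worth checking locally that it is actually running. Below is a minimal sketch, assuming the host and port used above; the OpenAI-compatible server exposes a /v1/models endpoint that lists the model being served.

import requests

# Quick sanity check from the same machine the server runs on
resp = requests.get("http://127.0.0.1:8888/v1/models")
resp.raise_for_status()
print(resp.json())  # should list "bigscience/bloomz-560m"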

Working with Local Tunnel

Up to this point, vLLM downloads the model from the HuggingFace Hub, loads it onto the GPU, and runs it. Because we assigned localhost as the host value, external calls to the served large language model are restricted; it can only be accessed internally. So, to expose it to the internet, we work with localtunnel.

[Output: localtunnel prints the public URL for the running server]

Click the link to be redirected to a new site, as shown below.

[Screenshot: the loca.lt reminder page asking for the public IP]

On this new site, we must provide the public IP we extracted earlier and click the submit button. After this step, the Bloom large language model finally becomes available online, and we can access it with HTTP requests just like the curl command we worked with before.

OpenAI API

Now, we can access the LLM we are serving online in the OpenAI API style. Let's try it with the example below.


Here, I'm using Hoppscotch, a website that provides a free online tool for API testing. We paste the URL provided by localtunnel, explicitly appending the completions endpoint (similar to the OpenAI Completions URL). In the body, we provide the following key-value pairs:

  • Model: The model we want to query; here, it is the bloomz-560m model we are serving.
  • Prompt: This will be the user Prompt.
  • Max_tokens: The maximum number of tokens generated by the large language model.
  • Temperature: This sets how creative the model will be, with 1 being the highest and 0 being the lowest.

The request is a POST because we are sending data to the API, and the content type is application/json because we are sending JSON data. An equivalent request from Python is sketched below, followed by the output for the prompt.
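
The sketch below assumes a placeholder tunnel URL (replace it with the one localtunnel printed for your session) and an example prompt of my own; it is not the exact request shown in the screenshot.

import requests

TUNNEL_URL = "https://your-subdomain.loca.lt"  # placeholder: use your own URL

payload = {
    "model": "bigscience/bloomz-560m",
    "prompt": "The largest country in the world is",  # example prompt
    "max_tokens": 20,
    "temperature": 1,
}

# requests sets Content-Type: application/json automatically when json= is used
resp = requests.post(f"{TUNNEL_URL}/v1/completions", json=payload)
print(resp.json()["choices"][0]["text"])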

[Screenshot: the API response returned for the prompt]

The output format is very similar to the OpenAI API output format, with the generated text present inside the choices object. The response generated, "is the world's largest producer of bananas," is not true, which is expected because bloomz-560m is not a well-performing LLM. The primary purpose here is to see how vLLM makes serving large language models simple. Because the API format is so similar, we can switch between OpenAI and vLLM code with ease.
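
To see how small that switch is in practice, here is a hedged sketch using the official openai Python client (version 1.x) pointed at the vLLM server instead of api.openai.com. The base URL is a placeholder for your server or tunnel address, and since vLLM does not enforce an API key by default, any non-empty string works there.

from openai import OpenAI

# Point the standard OpenAI client at the vLLM server (placeholder URL)
client = OpenAI(
    base_url="https://your-subdomain.loca.lt/v1",  # or http://127.0.0.1:8888/v1
    api_key="not-needed",                          # vLLM ignores this by default
)

completion = client.completions.create(
    model="bigscience/bloomz-560m",
    prompt="Machine Learning is",
    max_tokens=30,
    temperature=0.8,
)
print(completion.choices[0].text)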

Conclusion

vLLM is a powerful tool that makes LLM inference more memory-efficient and delivers much higher throughput. By working with vLLM, we can deploy large language models in production without worrying about resource limitations, letting us leverage the power of LLMs to improve our applications. And because the API is very similar to OpenAI's, developers already working with OpenAI can switch to other models quickly.

Some of the key takeaways from this article include:

  • vLLM is a high-throughput and memory-efficient LLM serving engine that addresses the memory and throughput challenges of LLM inference
  • It leverages the concept of a novel attention algorithm called PagedAttention, which effectively manages attention keys and values
  • With vLLM, we can get higher throughput than traditional LLM serving methods
  • It is compatible with Hugging Face models and provides an OpenAI-compatible API
  • vLLM can be used to serve LLMs through an OpenAI-style API


Frequently Asked Questions

Q1. What is LLM inference?

A. LLM inference uses a trained LLM to generate text, translate languages, or perform other tasks.

Q2. What are the limitations of traditional LLM serving methods?

A. Traditional LLM serving methods can be slow, memory-intensive, and expensive to scale.

Q3. How does vLLM overcome these limitations?

A. vLLM uses a novel attention algorithm called PagedAttention to manage attention keys and values efficiently. This allows vLLM to get higher throughput and lower memory usage than traditional LLM serving methods.

Q4. What large language models are supported by vLLM?

A. vLLM supports models from the HuggingFace Hub, including LLM families like GPT, Llama, Mistral, and MPT.

Q5. How hard is migrating from OpenAI API to serving LLMs through vLLM?

A. Switching from the OpenAI API to LLMs served through vLLM is straightforward. If your existing code uses the OpenAI API, it can be pointed at a vLLM server almost directly, because vLLM serves large language models through APIs very similar to the OpenAI API, and even the response format is nearly identical.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

I work as a Developer in the field of Data Science. I constantly spend time learning new things, be it related to AI, Data Science, or Cyber Security. Deep learning and machine learning are two topics that I find particularly fascinating, and Python is my preferred language for programming. Cyber Security is another field that I'm touching upon recently. I have experience with large-scale data analysis, and I have a solid grasp of a variety of deep learning and machine learning approaches, including neural networks, regression models, and natural language processing. I'm eager to take on new challenges and make a meaningful contribution to the industry, so I'm constantly seeking ways to broaden and deepen my knowledge and skills in the subject.
