Large Language Models (LLMs) have revolutionized how we interact with computers. However, deploying these models in production can be challenging due to their high memory consumption and computational cost. vLLM, an open-source library for fast LLM inference and serving, addresses these challenges with a novel attention algorithm called PagedAttention. The algorithm manages attention keys and values efficiently, allowing vLLM to achieve higher throughput and lower memory usage than traditional LLM serving methods.
In this article, you will learn what vLLM is, how to run inference with it, and how to serve a Large Language Model through its OpenAI-compatible API server.
LLMs have proven their worth in tasks like text generation, summarization, language translation, and many more. However, deploying these LLMs with traditional inference approaches suffers from several limitations, most notably slow generation, heavy memory consumption from the attention key-value cache, and high cost when scaling to many users.
vLLM is a high-throughput, memory-efficient LLM serving engine. It is built around a novel attention algorithm called PagedAttention, which manages attention keys and values by dividing them into smaller, more manageable blocks. This approach reduces vLLM's memory footprint and allows it to achieve greater throughput than traditional LLM serving methods. In benchmarks, vLLM has performed up to 24x better than conventional HuggingFace serving and up to 2-5x better than the HuggingFace Text Generation Inference (TGI). It further refines the inference process with continuous batching and optimized CUDA kernels.
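To build intuition for the idea, here is a toy sketch (not vLLM's actual implementation): the KV cache is split into fixed-size blocks, and each sequence keeps a block table mapping its logical token positions to physical blocks, so memory is allocated on demand instead of reserving the whole context window up front.

BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative value)

class ToyKVCache:
    def __init__(self):
        self.blocks = []        # physical blocks; here just lists of tokens
        self.block_tables = {}  # sequence id -> list of block indices

    def append_token(self, seq_id, kv):
        table = self.block_tables.setdefault(seq_id, [])
        # allocate a new block only when the last one is full
        if not table or len(self.blocks[table[-1]]) == BLOCK_SIZE:
            self.blocks.append([])
            table.append(len(self.blocks) - 1)
        self.blocks[table[-1]].append(kv)

cache = ToyKVCache()
for t in range(40):  # 40 tokens fit in just 3 blocks of 16
    cache.append_token("seq-0", f"kv-{t}")
print(cache.block_tables["seq-0"])  # [0, 1, 2]

The real PagedAttention kernel stores key and value tensors in GPU memory and computes attention directly over these non-contiguous blocks, but the bookkeeping idea is the same.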
vLLM offers several benefits over traditional LLM serving methods: higher throughput, lower memory usage, an OpenAI-compatible API server for easy drop-in serving, and support for popular models from the HuggingFace Hub.
Getting started with vLLM is very simple. The first step is installing the vLLM library, which can be done as shown below.
pip install vllm
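If the install succeeds, a quick optional check is to print the installed version (assuming the package exposes __version__, which recent releases do):

import vllm
print(vllm.__version__)  # confirms the library imports correctly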
With the vllm library installed, the next step is choosing our model (gpt2-xl) and setting up the sampling configuration for inference, as shown below.
from vllm import LLM, SamplingParams
# choosing the large language model
llm = LLM(model="gpt2-xl")
# setting the parameters
sampling_params = SamplingParams(temperature=0.8, top_p=0.90, max_tokens=50)
Here, we first import two classes from the vllm library: LLM, which loads the model and runs generation, and SamplingParams, which holds the decoding configuration.
Next, we instantiate the LLM object. Here, we are choosing the gpt2-xl model, a 1.5-billion-parameter model. Then, we set the sampling configuration for the model: the temperature, max_tokens, and top_p.
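SamplingParams accepts several other optional knobs beyond these three; the following is a hedged sketch with illustrative values (exact field availability can vary slightly across vLLM versions):

from vllm import SamplingParams

sampling_params_alt = SamplingParams(
    temperature=0.8,  # randomness of sampling
    top_p=0.90,       # nucleus sampling cutoff
    top_k=50,         # consider only the 50 most likely tokens
    max_tokens=50,    # cap on the number of generated tokens
    n=1,              # number of completions per prompt
    stop=["\n\n"],    # stop generating at a blank line
)

We will stick with the simpler configuration from earlier for the rest of the article.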
Running the LLM(...) instantiation above downloads the pretrained gpt2-xl model from the HuggingFace Hub and loads it into the GPU. Now, we will create a prompt and run inference. At the same time, let's measure the time taken to generate the response.
%%time
# defining our prompt
prompt = "Machine Learning is"
# generating the answer
answer = llm.generate(prompt, sampling_params)
# getting the generated text out from the answer variable
answer[0].outputs[0].text
Here, we give the prompt "Machine Learning is" and pass it, along with the SamplingParams, to the .generate() method of the LLM object. This generates the answer. The answer is a list of elements of type RequestOutput. Each element of the list includes the generated text along with other information. Here, we are only interested in the generated text, which we can access from the answer through outputs[0].text. The generated response can be seen below.
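Beyond the text, each RequestOutput also carries useful metadata; a brief sketch of inspecting it (field names as found in recent vLLM releases):

for out in answer:
    completion = out.outputs[0]
    print("prompt:", out.prompt)                       # the input prompt
    print("text:", completion.text)                    # the generated text
    print("finish reason:", completion.finish_reason)  # e.g. "length" or "stop"
    print("generated tokens:", len(completion.token_ids))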
We can see that the time taken to generate is in milliseconds, which is pretty quick. We also get a reasonable output (gpt2-xl is only an okay model and doesn't generate great responses; we are using it for demo purposes because it fits in the free GPU provided by Google Colab). Now, let's try giving the model a list of prompts and check the time taken to generate the responses.
%%time
# defining our prompts
prompt = [
    "What is Quantum Computing?",
    "How are electrons and protons different?",
    "What is Machine Learning?",
]
# generating the answers
answers = llm.generate(prompt, sampling_params)
# printing the generated text from each answer
for i in range(3):
    print("\nPrompt:", prompt[i], "\nGeneration:", answers[i].outputs[0].text)
    print()
The above code is largely self-explanatory. We create a list of three prompts and pass the list to the .generate() method of the LLM class, which returns a list of answers. We then traverse the list and print the generated text from each response, giving the following output.
We can check the responses generated by the gpt2-xl large language model above. The responses are not that good and are cut off mid-sentence, which is expected because gpt2-xl is not the best-performing model out there. More interesting is the time the model took: about 1 second to create responses for all 3 questions combined. This is an excellent inference speed and can be improved further with more computing resources.
This section will look into how to serve LLMs through the vLLM library. The process is straightforward. With vLLM, it is possible to create a server that closely mirrors the OpenAI API protocol. We will host the server so that it is reachable through the Internet. Let's dive in by running the following command.
curl ipv4.icanhazip.com
Above, we run the curl command against ipv4.icanhazip.com, which returns the public IPv4 address of our machine. This public address will be used later to make the LLM available online.
Next, we run the following Python command to serve the Large Language Model.
python -m vllm.entrypoints.openai.api_server \
--host 127.0.0.1 \
--port 8888 \
--model bigscience/bloomz-560m \
& \
npx localtunnel --port 8888
Above, we run the api_server module (vllm.entrypoints.openai.api_server) from the vllm library, which starts an OpenAI-compatible server. We provide the following options: --host and --port set the address the server listens on (127.0.0.1:8888), and --model specifies the HuggingFace model to serve, here bigscience/bloomz-560m. The trailing npx localtunnel --port 8888 runs localtunnel against the same port so the server can be exposed later. Once the server is up, we can optionally sanity-check it locally, as sketched below.
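As an optional local sanity check (assuming the server above started successfully on 127.0.0.1:8888), we can query its OpenAI-style endpoints with a tiny Python snippet:

import requests

# list the models the server is hosting; should include bigscience/bloomz-560m
resp = requests.get("http://127.0.0.1:8888/v1/models")
print(resp.json())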
Up to this point, vLLM downloads the model from the HuggingFace Hub, loads it into the GPU, and runs it. Assigning the host value 127.0.0.1 (localhost) restricts external calls to the served Large Language Model; it can only be accessed internally. So, to expose it to the internet, we use localtunnel.
Click the link printed by localtunnel to be redirected to a new site, as shown below.
On this new site, we must provide the public IP we extracted earlier and then click the submit button. After this step, the bloomz-560m large language model finally becomes available online. To access it, we can send requests much like the curl command we used before.
Now, we can leverage the OpenAI API style of accessing the LLM we serve online. Let's try it with the example below.
Here, I'm using Hoppscotch, a website that provides a free online tool for API testing. We put in the URL provided by localtunnel, specifically the /v1/completions endpoint (similar to the OpenAI Completions URL). In the body, we provide key-value pairs such as the model name, the prompt, and sampling settings; an equivalent request is sketched below.
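The same request can also be sent from Python; the tunnel URL below is a placeholder for the one localtunnel prints, and the prompt and parameter values are illustrative:

import requests

payload = {
    "model": "bigscience/bloomz-560m",  # the model we are serving
    "prompt": "<your prompt here>",     # placeholder; use any prompt
    "max_tokens": 20,                   # illustrative value
    "temperature": 0.8,                 # illustrative value
}
resp = requests.post(
    "https://<your-localtunnel-subdomain>.loca.lt/v1/completions",
    json=payload,  # sent with Content-Type: application/json
)
print(resp.json()["choices"][0]["text"])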
The request uses the POST method because we send data to the API, and the content type is application/json because we send JSON data. The output for the prompt is as follows:
The output format is very similar to the OpenAI API output format, with the generated text present within the choices object. The response generated is "is the world's largest producer of bananas," which is not true and is expected because bloomz-560m is not a well-performing LLM. The primary purpose here is to see how vLLM makes serving large language models simple. Because the API format is so similar, we can swap between OpenAI and vLLM code with a breeze.
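As an illustration of that switch, the official openai Python client can be pointed at the vLLM server simply by overriding its base URL; the sketch below uses the current (v1) client interface, with the tunnel URL and prompt as placeholders (vLLM does not validate the API key, so any non-empty string works):

from openai import OpenAI

client = OpenAI(
    base_url="https://<your-localtunnel-subdomain>.loca.lt/v1",  # vLLM server
    api_key="EMPTY",  # placeholder; the key is not checked by vLLM
)
completion = client.completions.create(
    model="bigscience/bloomz-560m",
    prompt="<your prompt here>",  # placeholder prompt
    max_tokens=20,
)
print(completion.choices[0].text)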
vLLM is a powerful tool that makes LLM inference more memory-efficient and higher-throughput. By working with vLLM, we can deploy Large Language Models in production without worrying as much about resource limitations, enabling you to leverage the power of LLMs to improve your applications. The API, being very similar to OpenAI's, also allows developers already working with OpenAI to switch to other models quickly.
Some of the key takeaways from this article include: vLLM is a high-throughput, memory-efficient serving engine built around PagedAttention; running inference is as simple as creating an LLM object, defining SamplingParams, and calling .generate(); and vLLM can serve models behind an OpenAI-compatible API server, which makes switching from OpenAI to open-source models straightforward.
Q. What is LLM inference?
A. LLM inference uses a trained LLM to generate text, translate languages, or perform other tasks.
Q. What are the limitations of traditional LLM serving methods?
A. Traditional LLM serving methods can be slow, memory-intensive, and expensive to scale.
Q. How does vLLM achieve high throughput and low memory usage?
A. vLLM uses a novel attention algorithm called PagedAttention to manage attention keys and values efficiently. This allows vLLM to achieve higher throughput and lower memory usage than traditional LLM serving methods.
Q. Which models does vLLM support?
A. vLLM supports models from the HuggingFace Hub, including LLM families like GPT, Llama, Mistral, and MPT.
Q. How easy is it to switch from OpenAI to other LLMs through vLLM?
A. Switching from OpenAI to other LLMs through vLLM is a straightforward process. If your existing code uses the OpenAI API, it can be pointed at vLLM directly, because vLLM serves large language models with APIs very similar to the OpenAI API, and even the response format matches the OpenAI API.