In the fast-paced world of AI, crafting a smart, multilingual chatbot is now within reach. Picture a tool that understands and chats in various languages, helps with coding, and generates high-quality data effortlessly. Enter Meta’s Llama 3.1, a powerful language model that’s transforming AI and making it accessible to everyone. By combining Llama 3.1, Ollama, and LangChain, along with the user-friendly Streamlit, we’re set to create an intelligent and responsive chatbot that makes complex tasks feel simple.
This article was published as a part of the Data Science Blogathon.
Llama 3.1 is the most recent update to Meta’s Llama family of language models. Released on July 23, 2024, it comes in 8 billion, 70 billion, and a massive 405 billion parameter variants. The models were trained on a corpus of over 15 trillion tokens, larger than all the preceding versions put together, which translates into improved performance and capabilities.
Meta maintains its commitment to open-source AI by making Llama 3.1 freely available to the community. This approach promotes innovation by allowing developers to build on and fine-tune the models for a variety of applications. Llama 3.1’s open-source nature provides access to powerful AI, allowing more individuals to harness its capabilities without incurring large fees.
The Llama ecosystem includes over 25 partners, among them AWS, NVIDIA, Databricks, Groq, Dell, Azure, Google Cloud, and Snowflake, who made their services available on day one. Such collaborations enhance the accessibility and utility of Llama 3.1, easing integration into a range of platforms and workflows.
Meta has introduced a number of new safety and security tools, including Llama Guard 3 and Prompt Guard, to ensure it builds AI responsibly. These tools help make Llama 3.1 safe to run, mitigating the potential risks that come with deploying generative AI.
Meta evaluated Llama 3.1 on over 150 benchmark datasets across multiple languages. The results show the model holds its own against the best in the field, currently GPT-4 and Claude 3.5 Sonnet, on a variety of tasks, placing Llama 3.1 in the top tier of AI models.
Let us now set up the environment.
python -m venv env
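The virtual environment created above also needs to be activated before installing packages; a minimal sketch for Linux/macOS (the Windows equivalent is noted in a comment):

```shell
# Create the virtual environment (falls back to python3 if `python` is not on PATH)
python -m venv env || python3 -m venv env

# Activate it (Linux/macOS); on Windows cmd run: env\Scripts\activate
. env/bin/activate
```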
Create a requirements.txt file listing the dependencies:
langchain
langchain-ollama
streamlit
langchain_experimental
pip install -r requirements.txt
Download and install Ollama from its official website, then pull the Llama 3.1 model:
ollama pull llama3.1
You can also chat with the model locally from the command line:
ollama run llama3.1
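Before wiring Ollama into the app, it can help to confirm the `ollama` binary is actually on your PATH; a small sketch (the helper name `ollama_available` is my own, not part of any library):

```python
import shutil

def ollama_available() -> bool:
    """Return True if the `ollama` CLI is installed and on the PATH."""
    return shutil.which("ollama") is not None

if __name__ == "__main__":
    if ollama_available():
        print("ollama found; `ollama run llama3.1` should work")
    else:
        print("ollama not found; install it from the official website first")
```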
We’ll now walk through running a Streamlit app that leverages the powerful Llama 3.1 model for interactive Q&A. The app turns user questions into thoughtful responses using the latest natural language processing technology. With a clean interface and straightforward functionality, you can quickly see how to integrate and deploy a chatbot application.
Import Libraries and Initialize Streamlit
We set up the environment for our Streamlit app by importing the necessary libraries and initializing the app’s title.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM
import streamlit as st
st.title("LLama 3.1 ChatBot")
Style the Streamlit App
We customize the appearance of the Streamlit app to match our desired aesthetic by applying custom CSS styling.
# Styling (note: hex colors need 3 or 6 digits; #00000 would be invalid CSS)
st.markdown("""
<style>
.main {
    background-color: #000000;
}
</style>
""", unsafe_allow_html=True)
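Streamlit passes this CSS through to the browser unchecked, so a malformed hex color (for instance, one with five digits) silently does nothing. A quick validity check, using a regex of my own rather than any Streamlit API, can catch such typos:

```python
import re

# 3- or 6-digit CSS hex colors, e.g. #000 or #000000
HEX_COLOR = re.compile(r"^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$")

def is_valid_hex_color(value: str) -> bool:
    """Return True for syntactically valid CSS hex color literals."""
    return bool(HEX_COLOR.match(value))

print(is_valid_hex_color("#000000"))  # six digits: valid
print(is_valid_hex_color("#00000"))   # five digits: invalid CSS
```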
Create the Sidebar
Now we will add a sidebar to provide additional information about the app and its functionalities.
# Sidebar for additional options or information
with st.sidebar:
    st.info("This app uses the Llama 3.1 model to answer your questions.")
Define the Chatbot Prompt Template and Model
Define the structure of the chatbot’s responses and initialize the language model that will generate the answers.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = ChatPromptTemplate.from_template(template)
model = OllamaLLM(model="llama3.1")
chain = prompt | model
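To see what the chain actually sends to the model, it helps to look at the filled-in prompt. `ChatPromptTemplate.from_template` uses the same `{placeholder}` syntax as Python’s `str.format`, so the substitution step can be sketched with plain string formatting, without LangChain or a model call:

```python
# The same template string used in the app
template = """Question: {question}
Answer: Let's think step by step."""

# Substituting a question, mimicking what the prompt template does
# before the text is handed to the model
filled = template.format(question="What is 2 + 2?")
print(filled)
```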
Create the Main Content Area
This section sets up the main interface of the app where users can input their questions and interact with the chatbot.
# Main content
col1, col2 = st.columns(2)
with col1:
    question = st.text_input("Enter your question here")
Process the User Input and Display the Answer
Finally, we handle the user’s input, process it with the chatbot model, and display the generated answer, or a warning if no question has been entered.
if question:
    with st.spinner('Thinking...'):
        answer = chain.invoke({"question": question})
        st.success("Done!")
        st.markdown(f"**Answer:** {answer}")
else:
    st.warning("Please enter a question to get an answer.")
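The question-to-answer flow above can be exercised without Streamlit or a running Ollama server by swapping in a stub for the model. A minimal sketch, where `fake_model` is a placeholder of my own standing in for OllamaLLM:

```python
template = """Question: {question}
Answer: Let's think step by step."""

def fake_model(prompt: str) -> str:
    """Stand-in for OllamaLLM: returns a canned answer instead of calling Llama 3.1."""
    return "This is a placeholder answer for: " + prompt.splitlines()[0]

def answer_question(question: str) -> str:
    # Mirrors `chain.invoke({"question": question})`: fill the template, then call the model
    prompt = template.format(question=question)
    return fake_model(prompt)

print(answer_question("What is Llama 3.1?"))
```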
Save the code as app.py and launch the app:
streamlit run app.py
or
python -m streamlit run app.py
Meta’s Llama 3.1 stands out as a groundbreaking model in the field of artificial intelligence. Its combination of scale, performance, and accessibility makes it a versatile tool for a wide range of applications. By maintaining an open-source approach, Meta not only promotes transparency and innovation but also empowers developers and organizations to harness the full potential of advanced AI. As the Llama 3.1 ecosystem continues to evolve, it is poised to drive significant advancements in how AI is applied across industries and disciplines. In this article, we learned how to build our own chatbot with Llama 3.1, Ollama, and LangChain.
A. Llama 3.1 significantly improves upon its predecessors with a larger parameter count, better performance in benchmarks, extended context length, and enhanced multilingual and multimodal capabilities.
A. You can access Llama 3.1 via the Hugging Face platform and integrate it into your applications using APIs provided by partners like AWS, NVIDIA, Databricks, Groq, Dell, Azure, Google Cloud, and Snowflake.
A. Yes, especially the 8B variant, which provides fast response times suitable for real-time applications.
A. Yes, Llama 3.1 is open-source, with its model weights and code available on platforms like Hugging Face, promoting accessibility and fostering innovation within the AI community.
A. Practical applications include developing AI agents and virtual assistants, multilingual translation and summarization, coding assistance, information extraction, and content creation.
A. Meta has introduced new security and safety tools, including Llama Guard 3 and Prompt Guard, to ensure responsible AI deployment and mitigate potential risks.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.