DeepSeek R1 is here, and it’s proving to be remarkably useful for building AI applications. Its architecture, which combines reinforcement learning with a Mixture of Experts (MoE) framework, delivers high efficiency and accuracy. In this article, I’m going to build a Retrieval-based Question Answering (RQA) system using DeepSeek R1, LangChain, and Streamlit. This step-by-step guide will show you how to integrate DeepSeek R1 into a practical application, demonstrating its capabilities on real-world reasoning tasks.
Open-source foundation models have become a game-changer in the rapidly evolving field of Artificial Intelligence, enabling enterprises to develop and fine-tune AI applications. The AI community thrives on these open-source models because they benefit both developers and end users, and DeepSeek-R1 is a prime example of this.
DeepSeek-R1 is an open-source reasoning model released by DeepSeek, a Chinese AI company. Its purpose is to solve tasks that require logical reasoning, mathematical problem-solving, and real-time decision-making. The DeepSeek-R1 models deliver excellent performance and efficiency across a wide range of activities, from general reasoning to code generation.
Usually, Large language models (LLMs) undergo a three-stage training process. Firstly, during pre-training, they are exposed to vast amounts of text and code to learn general-purpose knowledge, enabling them to predict the next word in a sequence. Although proficient at this, they initially struggle to follow human instructions. Supervised fine-tuning is the next step, where the model is trained on a dataset of instruction-response pairs, significantly improving its ability to follow directions. Lastly, reinforcement learning further refines the model using feedback. This can be done through Reinforcement Learning from Human Feedback (RLHF), where human input guides the training, or Reinforcement Learning from AI Feedback (RLAIF), where another AI model provides feedback.
The DeepSeek-R1-Zero model starts from the pre-trained DeepSeek-V3-Base model, which has 671 billion parameters. However, it skips the supervised fine-tuning stage and instead relies on a large-scale reinforcement learning technique called Group Relative Policy Optimization (GRPO).
Group Relative Policy Optimization (GRPO) is based on the Proximal Policy Optimization (PPO) framework but discards the need for a separate value-function model, which simplifies training and reduces memory consumption. For each input question, the policy generates a group of outputs, and a reward model scores each one. The average reward of the group then serves as the baseline for computing advantages, while a KL-divergence term keeps the policy close to a reference model. However, DeepSeek-R1-Zero struggles with readability: its output is difficult to follow and it often mixes languages. DeepSeek-R1 was created to address these issues.
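To make the group-relative baseline concrete, here is a minimal NumPy sketch of the advantage computation described above. The reward values and group size are made up for illustration, the KL-divergence term is applied separately in the loss and is not shown, and this is not DeepSeek’s actual training code.

import numpy as np

# Hypothetical rewards assigned by a reward model to a group of sampled answers for one prompt
group_rewards = np.array([0.2, 0.9, 0.5, 0.7])

# GRPO uses the group's own statistics as the baseline,
# so no separate value (critic) network is needed
baseline = group_rewards.mean()
scale = group_rewards.std() + 1e-8  # guard against division by zero

advantages = (group_rewards - baseline) / scale
print(advantages)  # answers scoring above the group average get a positive advantage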
DeepSeek-R1 builds upon DeepSeek-R1-Zero and fixes its issues. It is trained in four stages: a cold-start supervised fine-tuning step on a small set of curated long chain-of-thought examples, reasoning-oriented reinforcement learning, another round of supervised fine-tuning on data collected through rejection sampling, and a final reinforcement learning stage covering all scenarios, including helpfulness and harmlessness.
Open Source: It is distributed under an MIT license, allowing free inspection, modification, and integration into various projects. DeepSeek-R1 is available on platforms like GitHub and Azure AI Foundry, offering accessibility to developers and researchers.
Distilled Models: DeepSeek-R1 provides many distilled models, including DeepSeek-R1-Distill-Qwen-32B and smaller Qwen variants at 1.5B, 7B, and 14B parameters. Distilled models are smaller models created by transferring knowledge from larger ones. This allows developers to build and deploy AI-powered applications that run efficiently on-device.
Accessing DeepSeek-R1 locally through Ollama is quite simple!
# Enter the command in terminal
ollama run deepseek-r1 # To use the default 7B model
# To use a specific model
ollama run deepseek-r1:1.5b
Output:
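Once the model is running, you can quickly check from Python that it responds before wiring it into a full app. The snippet below is a minimal sketch that assumes Ollama is serving on its default local endpoint (http://localhost:11434) and that deepseek-r1:1.5b has already been pulled.

import requests

# Quick sanity check against Ollama's local REST API (it serves on port 11434 by default)
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:1.5b",
        "prompt": "In one sentence, what is retrieval-augmented generation?",
        "stream": False,  # return the full answer as a single JSON payload
    },
)
print(response.json()["response"])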
Let’s build a Retrieval Question Answering System with LangChain, powered by DeepSeek-R1 for reasoning!
Import the necessary libraries, including streamlit and langchain_community.
import streamlit as st
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chains.combine_documents.stuff import create_stuff_documents_chain
from langchain.chains import RetrievalQA
Create a Streamlit file uploader to allow CSV files to be uploaded.
# Streamlit file uploader for CSV files
uploaded_file = st.file_uploader("Upload a CSV file", type="csv")

if uploaded_file:
    # Save CSV temporarily
    temp_file_path = "temp.csv"
    with open(temp_file_path, "wb") as f:
        f.write(uploaded_file.getvalue())
Once CSV files are uploaded, load them to create embeddings. Embeddings are created using HuggingFaceEmbeddings to convert the CSV data into vector representations.
loader = CSVLoader(file_path=temp_file_path)
docs = loader.load()
embeddings = HuggingFaceEmbeddings()
Create a FAISS vector store from the documents and embeddings to enable efficient similarity search.
vector_store = FAISS.from_documents(docs, embeddings)
Initialize a retriever with the vector store, and specify the number of top documents to fetch (I have set it as 3).
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
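As a quick sanity check, you can query the retriever directly and inspect the chunks it returns. The sample question below is just a hypothetical one for an automobile-style CSV; recent LangChain versions expose retriever.invoke, while older versions use get_relevant_documents instead.

# Optional: inspect which chunks the retriever returns for a sample question
sample_docs = retriever.invoke("Which car has the highest horsepower?")
for doc in sample_docs:
    print(doc.page_content[:200])  # preview the first 200 characters of each chunk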
Define the LLM using Ollama, passing the DeepSeek-R1 variant as the model parameter.
llm = Ollama(model="deepseek-r1:1.5b") # Our 1.5B parameter model
Here I am using a default, basic template but you can modify it according to your needs.
prompt = """
1. Use ONLY the context below.
2. If unsure, say "I don’t know".
3. Keep answers under 4 sentences.
Context: {context}
Question: {question}
Answer:
"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(prompt)
Use a stuff documents chain (via create_stuff_documents_chain) to combine the LLM and the prompt template into a single chain for document-based question answering.
llm_chain = LLMChain(llm=llm, prompt=QA_CHAIN_PROMPT)

# Combine document chunks
document_chain = create_stuff_documents_chain(
    llm=llm,
    prompt=QA_CHAIN_PROMPT
)
Initialize the RetrievalQA chain, which integrates the retriever and the LLM to answer user queries based on the relevant document chunks. Passing the custom prompt through chain_type_kwargs ensures the underlying stuff chain actually uses it.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type="stuff",
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},  # apply our custom prompt inside the stuff chain
)
Set up a Streamlit text input field where users can enter queries, process the input using the RetrievalQA chain, and display the generated response.
user_input = st.text_input("Ask your CSV a question:")

if user_input:
    with st.spinner("Thinking..."):
        try:
            response = qa.run(user_input)
            st.write(response)
        except Exception as e:
            st.error(f"Error: {str(e)}")
Save the Python file (.py) and run it locally using the following command to view the UI.
#In terminal
streamlit run filename.py
Note: Ensure the necessary libraries are installed on your system. You can do so with the following command (sentence-transformers is required by HuggingFaceEmbeddings).
pip install streamlit langchain langchain_community transformers sentence-transformers faiss-cpu
Here I have uploaded an automobile dataset and asked a question related to my CSV file.
Advantage: Here’s what I liked about DeepSeek-R1’s reasoning: you can follow its logic! It displays its thinking process and explains why it reached a conclusion. This makes DeepSeek-R1 more explainable than typical LLMs.
DeepSeek-R1 shows the way forward for high-quality AI models with sophisticated reasoning and nuanced understanding. Combining powerful reinforcement learning techniques with an efficient Mixture of Experts architecture, DeepSeek-R1 provides solutions for a variety of complex tasks, from code generation to deep reasoning challenges. Its open-source nature and accessibility further empower developers and researchers. As AI continues to develop, open-source models such as DeepSeek-R1 open up the prospect of more intelligent and resource-efficient systems across domains. With its strong performance, efficient architecture, and impressive results, DeepSeek-R1 is well placed to drive future innovations in AI.
Q. What is a Mixture of Experts (MoE) model?
A. It is a neural network design that uses multiple specialized sub-models (experts). A gating system selects the most relevant experts for each input, ensuring only a few are active at a time. This makes the model more efficient than traditional dense models, which use all parameters for every input.
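For intuition, here is a toy NumPy sketch of top-k expert routing. It only illustrates the gating idea and is not DeepSeek’s actual MoE implementation; the expert count, dimensions, and random weights are made up.

import numpy as np

def moe_forward(x, experts, gate_weights, k=2):
    """Route the input through only the top-k experts chosen by the gating scores."""
    scores = x @ gate_weights                       # one routing score per expert
    top_k = np.argsort(scores)[-k:]                 # indices of the k highest-scoring experts
    probs = np.exp(scores[top_k]) / np.exp(scores[top_k]).sum()  # softmax over the chosen experts
    # Only the selected experts run; the others stay inactive for this input
    return sum(p * experts[i](x) for p, i in zip(probs, top_k))

# Toy setup: 4 "experts", each just a random linear map over an 8-dimensional input
rng = np.random.default_rng(0)
experts = [lambda v, W=rng.normal(size=(8, 8)): v @ W for _ in range(4)]
gate_weights = rng.normal(size=(8, 4))

output = moe_forward(rng.normal(size=8), experts, gate_weights)
print(output.shape)  # (8,)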
Q. Where can I access DeepSeek’s chatbot and models?
A. DeepSeek’s chatbot is available on the company’s website and can be downloaded from the Apple App Store and Google Play Store. The models are also available on Hugging Face and through DeepSeek’s API.
Q. What is a Retrieval-based QA system?
A. A Retrieval-based QA system fetches information from a dataset or documents and generates answers based on the retrieved content, rather than relying solely on pre-trained knowledge.
Q. What is FAISS and why is it used here?
A. FAISS stands for Facebook AI Similarity Search. It enables fast and efficient similarity searches, allowing the system to retrieve the most relevant chunks of information from the CSV data.
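A minimal standalone FAISS example (independent of LangChain) illustrates the index-and-search flow; the vectors here are random placeholders rather than real embeddings.

import numpy as np
import faiss

dim = 384                                                    # e.g. the output size of a small sentence-transformer
doc_vectors = np.random.rand(1000, dim).astype("float32")    # stand-ins for document embeddings
query_vector = np.random.rand(1, dim).astype("float32")      # a single query embedding

index = faiss.IndexFlatL2(dim)                # exact (brute-force) L2 index
index.add(doc_vectors)                        # store the document vectors
distances, ids = index.search(query_vector, 3)  # fetch the 3 nearest documents
print(ids)                                    # row indices of the closest vectors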
Q. What are the hardware requirements for running DeepSeek-R1 locally?
A. The requirements vary based on the model size. For example, the 7B model needs at least 8GB of RAM, while the 32B model requires a minimum of 32GB of RAM.