Building an RQA System with DeepSeek R1 and Streamlit

Aditi V | Last Updated: 31 Jan, 2025
7 min read

DeepSeek R1 is here, and it’s proving to be incredibly helpful for building AI applications. Its advanced architecture, combining reinforcement learning with a Mixture of Experts (MoE) framework, ensures high efficiency and accuracy. In this article, I’m going to build a Retrieval-based Question Answering (RQA) system using DeepSeek R1, LangChain and Streamlit. This step-by-step guide will show you how to integrate DeepSeek R1 into a practical application, demonstrating its capabilities in handling real-world reasoning tasks. 

Learning Objectives

  • Understand how the RQA System with DeepSeek R1 enhances reasoning and problem-solving.
  • Explore the architecture and key features of DeepSeek R1 for AI-driven Q&A.
  • Learn how to integrate DeepSeek R1 into retrieval-based question-answering systems.
  • Discover how reinforcement learning improves the accuracy of DeepSeek R1 responses.
  • Analyze real-world applications of DeepSeek R1 in coding, math, and logical reasoning.

This article was published as a part of the Data Science Blogathon.

What is DeepSeek-R1?

Open-source foundation models have become a game-changer in the rapidly evolving field of Artificial Intelligence, enabling enterprises to develop and fine-tune AI applications. The AI community thrives on these open-source models because they benefit both developers and end users, and DeepSeek-R1 is a prime example.

DeepSeek-R1 is an open-source reasoning model released by DeepSeek, a Chinese AI company. Its purpose is to solve tasks that require logical reasoning, mathematical problem-solving, and real-time decision-making. The DeepSeek-R1 models deliver excellent performance and efficiency across a wide range of activities, from general reasoning to code generation.

Training Process of DeepSeek-R1-Zero and DeepSeek-R1

Usually, large language models (LLMs) undergo a three-stage training process. Firstly, during pre-training, they are exposed to vast amounts of text and code to learn general-purpose knowledge, enabling them to predict the next word in a sequence. Although proficient at this, they initially struggle to follow human instructions. Supervised fine-tuning is the next step, where the model is trained on a dataset of instruction-response pairs, significantly improving its ability to follow directions. Lastly, reinforcement learning further refines the model using feedback. This can be done through Reinforcement Learning from Human Feedback (RLHF), where human input guides the training, or Reinforcement Learning from AI Feedback (RLAIF), where another AI model provides feedback.

The DeepSeek-R1-Zero model starts from the pre-trained DeepSeek-V3-Base model, which has 671 billion parameters. However, it omits the supervised fine-tuning stage and instead uses a large-scale reinforcement learning technique called Group Relative Policy Optimization (GRPO).

GRPO

Group Relative Policy Optimization (GRPO) builds upon the Proximal Policy Optimization (PPO) framework but discards the value function model, simplifying the training process and reducing memory consumption. For each input question, it generates multiple outputs, and a reward model scores each one. The average of these rewards then serves as the baseline for calculating the advantages, while a KL divergence term keeps the policy close to a reference model. However, the resulting DeepSeek-R1-Zero struggles with readability: its output is often difficult to understand, and it frequently mixes languages. DeepSeek-R1 was created to address these issues.
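To make the group-relative baseline concrete, here is a minimal sketch of the advantage calculation in Python. The reward values are invented for illustration; in practice a reward model scores full model outputs.

import numpy as np

# Rewards assigned by the reward model to G sampled outputs for one question
# (hypothetical scores, for illustration only)
rewards = np.array([0.2, 0.9, 0.4, 0.7])

# GRPO normalizes each reward against the group's own mean and spread,
# so no separate value network is needed to estimate a baseline
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
print(advantages)  # outputs scored above the group mean get a positive advantage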

Four Stages of DeepSeek-R1

DeepSeek-R1 builds upon DeepSeek-R1-Zero and fixes its issues. It is trained in four stages, described as follows:

  • Stage 1 (Cold Start): Starts from the pre-trained DeepSeek-V3-Base model and fine-tunes it on a small, high-quality dataset from DeepSeek-R1-Zero to improve readability.
  • Stage 2 (Reasoning Reinforcement Learning): Enhances reasoning abilities through large-scale reinforcement learning, focusing on tasks like coding, math, science, and logic.
  • Stage 3 (Rejection Sampling and Supervised Fine-Tuning): Generates multiple samples, retains only the correct and readable ones via rejection sampling, and further fine-tunes the model with a generative reward model. This phase incorporates data beyond reasoning questions, broadening the model’s capabilities.
  • Stage 4 (Diverse Reinforcement Learning): Applies rule-based rewards for tasks like math and uses feedback from a language model to align the model with human preferences.

Features of DeepSeek-R1

  • Open Source: Distributed under an MIT license, allowing free inspection, modification, and integration into various projects. DeepSeek-R1 is available on platforms like GitHub and Azure AI Foundry, making it accessible to developers and researchers.
  • Performance: DeepSeek-R1 performs comparably to OpenAI’s o1 on various benchmarks, including tasks related to math, code generation, and complex reasoning.
  • Mixture of Experts (MoE) Architecture: The model is built on a Mixture of Experts framework with 671 billion parameters, but activates only about 37 billion during each forward pass.
  • Distilled Models: DeepSeek-R1 comes with several distilled models, including DeepSeek-R1-Distill-Qwen-32B and smaller Qwen-based variants (1.5B, 7B, and 14B). Distilled models are smaller models created by transferring knowledge from larger ones, allowing developers to build and deploy AI-powered applications that run efficiently on-device.
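To illustrate how an MoE layer activates only a fraction of its parameters, here is a toy sketch of top-k expert routing in NumPy. The sizes, weights, and expert computations are invented for illustration and are not DeepSeek-R1's actual routing code.

import numpy as np

rng = np.random.default_rng(0)
num_experts, top_k, hidden = 8, 2, 16

x = rng.normal(size=hidden)                                 # one token's hidden state
gate_w = rng.normal(size=(hidden, num_experts))             # router (gating) weights
expert_w = rng.normal(size=(num_experts, hidden, hidden))   # one toy weight matrix per expert

scores = x @ gate_w                                         # router score for each expert
top = np.argsort(scores)[-top_k:]                           # indices of the top-k experts
weights = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over the chosen experts

# Only the selected experts compute; every other expert's parameters stay idle.
# This is how a 671B-parameter MoE can activate only a small fraction per forward pass.
output = sum(w * (expert_w[e] @ x) for w, e in zip(weights, top))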

How to use DeepSeek-R1 Locally?

It’s quite simple! 

  • Install Ollama for your local system.
  • Run the following command in your terminal. (DeepSeek-R1 ranges from 1.5B to 671B parameters)
# Enter the command in terminal 
ollama run deepseek-r1   # To use the default 7B model

# To use a specific model
ollama run deepseek-r1:1.5b 

Output: Ollama pulls and starts the chosen model, either the default 7B model or the 1.5B variant if you pass the explicit tag.

Steps to Build an RQA System with DeepSeek R1

Let’s build a Retrieval Question Answering System with LangChain, powered by DeepSeek-R1 for reasoning! 

Step 1: Import Necessary Libraries

Import the necessary libraries, including streamlit and the langchain_community modules.

import streamlit as st
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chains.combine_documents.stuff import create_stuff_documents_chain
from langchain.chains import RetrievalQA

Step 2: Streamlit File Uploader

Create a streamlit file uploader to allow CSV files to be uploaded.

# Streamlit file uploader for CSV files
uploaded_file = st.file_uploader("Upload a CSV file", type="csv")

if uploaded_file:
    # Save CSV temporarily
    temp_file_path = "temp.csv"
    with open(temp_file_path, "wb") as f:
        f.write(uploaded_file.getvalue())

Step 3: Load CSV and Create Embeddings

Once CSV files are uploaded, load them to create embeddings. Embeddings are created using HuggingFaceEmbeddings to convert the CSV data into vector representations.

loader = CSVLoader(file_path=temp_file_path)
docs = loader.load()
embeddings = HuggingFaceEmbeddings()
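As an optional sanity check (not part of the original flow), you can embed a single string to confirm the embedding model loaded correctly. By default, HuggingFaceEmbeddings uses the sentence-transformers/all-mpnet-base-v2 model, which produces 768-dimensional vectors.

# Optional: embed one string and inspect the vector size
sample_vector = embeddings.embed_query("test sentence")
print(len(sample_vector))  # 768 for the default all-mpnet-base-v2 model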

Step 4: Create Vector Store

Create a FAISS vector store from the documents and embeddings to enable efficient similarity search.

vector_store = FAISS.from_documents(docs, embeddings)

Step 5: Connect a Retriever

Initialize a retriever with the vector store, and specify the number of top documents to fetch (I have set it to 3).

retriever = vector_store.as_retriever(search_kwargs={"k": 3})
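Optionally, you can sanity-check the retriever before wiring up the chain. In recent LangChain versions, invoke is the standard retriever call; the query string below is just an example.

# Optional: fetch the top-3 chunks for a sample query and preview them
docs = retriever.invoke("What is the average horsepower?")
for d in docs:
    print(d.page_content[:100])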

Step 6: Define the LLM

Using Ollama, we can define the LLM. Pass the DeepSeek-R1 version as the model parameter.

llm = Ollama(model="deepseek-r1:1.5b")  # Our 1.5B parameter model
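Before building the chain, it can help to confirm that Ollama is running and the model responds. This quick smoke test (the question is just an example) should print a short answer.

# Optional smoke test: should return a short answer if Ollama is running
print(llm.invoke("In one sentence, what is retrieval-augmented generation?"))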

Step 7: Create a Prompt Template

Here I am using a default, basic template but you can modify it according to your needs.

prompt = """
    1. Use ONLY the context below.
    2. If unsure, say "I don’t know".
    3. Keep answers under 4 sentences.

    Context: {context}

    Question: {question}

    Answer:
    """
    QA_CHAIN_PROMPT = PromptTemplate.from_template(prompt)
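To see exactly what the LLM will receive, you can render the template with sample values (the strings below are invented for illustration).

# Optional: preview the fully rendered prompt with sample values
print(QA_CHAIN_PROMPT.format(
    context="toyota corolla, 65 hp, 4 cylinders...",
    question="Which car has the lowest horsepower?",
))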

Step 8: Define the QA Chain

Use the LLMChain and create_stuff_documents_chain helpers to combine the LLM and the prompt template into a single chain for document-based question answering. This step illustrates the "stuff" combining strategy; the RetrievalQA chain in the next step builds an equivalent chain internally.

llm_chain = LLMChain(llm=llm, prompt=QA_CHAIN_PROMPT)

# Combine retrieved document chunks into a single prompt ("stuff" strategy)
document_chain = create_stuff_documents_chain(
    llm=llm,
    prompt=QA_CHAIN_PROMPT
)

Step 9: Create the RetrievalQA Chain

Initialize the RetrievalQA chain, which integrates the retriever and the LLM to answer user queries based on relevant document chunks. Pass the custom prompt through chain_type_kwargs so the chain actually uses it.

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type="stuff",
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},  # wire in the custom prompt
)

Step 10: Create Streamlit UI for the application

Set up a Streamlit text input field where users can enter queries, process the input using the RetrievalQA chain, and display the generated response.

user_input = st.text_input("Ask your CSV a question:")

if user_input:
    with st.spinner("Thinking..."):
        try:
            response = qa.run(user_input)
            st.write(response)
        except Exception as e:
            st.error(f"Error: {str(e)}")

Save the Python file (.py) and run it locally using the following command to view the UI. When assembling the snippets into a single script, keep Steps 3 to 10 indented inside the if uploaded_file: block from Step 2, so they only run after a CSV has been uploaded.

# In terminal
streamlit run filename.py

Note: Ensure the necessary libraries are installed on your system. You can install them with the following command.

pip install streamlit langchain_community sentence-transformers faiss-cpu langchain

Output

Here I have uploaded an Automobile dataset and asked a question related to my CSV file.


Advantage: Here’s what I liked about DeepSeek-R1’s reasoning: you can follow its logic! It displays its thinking process and shows why it has come to a conclusion. Thus, DeepSeek-R1 improves the explainability of LLMs!

Conclusion

DeepSeek-R1 shows the way forward for high-quality AI models with sophisticated reasoning and nuanced understanding. By combining powerful reinforcement learning techniques with an efficient Mixture of Experts architecture, DeepSeek-R1 provides solutions for a variety of complex tasks, from code generation to deep reasoning challenges. Its open-source nature and accessibility further empower developers and researchers. As AI continues to develop, open-source models such as DeepSeek-R1 are opening up the prospect of more intelligent and resource-efficient systems across various domains. With its strong performance, efficient architecture, and impressive results, DeepSeek-R1 is poised to drive future innovations in AI.

Key Takeaways

  • DeepSeek-R1 is an advanced open-source reasoning model designed for logical problem-solving, math, and real-time decision-making.
  • The RQA System with DeepSeek R1 enables efficient document-based question-answering by leveraging retrieval-augmented generation techniques.
  • DeepSeek-R1’s training process includes reinforcement learning, rejection sampling, and fine-tuning, making it highly optimized for reasoning tasks.
  • The RQA System with DeepSeek R1 enhances AI explainability by displaying its step-by-step thought process in responses.
  • DeepSeek-R1’s Mixture of Experts (MoE) architecture activates only relevant parameters per task, improving efficiency while handling complex queries.

Frequently Asked Questions

Q1. What is the Mixture-of-Experts architecture?

A. It is a smart neural network design that uses multiple specialized sub-models (experts). A gating system selects the most relevant experts for each input, ensuring only a few are active at a time. This makes the model more efficient than traditional dense models, which use all parameters.

Q2. What are the other ways to access DeepSeek-R1?

A. DeepSeek’s chatbot is available on the company’s website and can be downloaded from the Apple App Store and Google Play Store. The model is also available on Hugging Face and through DeepSeek’s API.

Q3. What is a Retrieval-based Question Answering (RQA) system?

A. A Retrieval-based QA system fetches information from a dataset or documents and generates answers based on the retrieved content, rather than relying solely on pre-trained knowledge.

Q4. What is FAISS and why is it used?

A. FAISS stands for Facebook AI Similarity Search. It enables fast and efficient similarity searches, allowing the system to retrieve the most relevant chunks of information from the CSV data.
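For intuition, here is a minimal standalone FAISS sketch using random vectors, independent of the LangChain wrapper used in this article (the dimensions and data are invented for illustration).

import numpy as np
import faiss  # from the faiss-cpu package

dim = 768                                                    # embedding dimensionality
db_vectors = np.random.random((100, dim)).astype("float32")  # stand-in document vectors
index = faiss.IndexFlatL2(dim)                               # exact (brute-force) L2 index
index.add(db_vectors)

query = np.random.random((1, dim)).astype("float32")         # stand-in query vector
distances, ids = index.search(query, 3)                      # retrieve the 3 nearest vectors
print(ids)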

Q5. What are the system requirements for running DeepSeek-R1?

A. The requirements vary based on the model size. For example, the 7B model needs at least 8GB of RAM, while the 32B model requires a minimum of 32GB of RAM.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Hello data enthusiasts! I am V Aditi, a rising and dedicated data science and artificial intelligence student embarking on a journey of exploration and learning in the world of data and machines. Join me as I navigate through the fascinating world of data science and artificial intelligence, unraveling mysteries and sharing insights along the way! 📊✨
