Gemma 2B vs Llama 3.2 vs Qwen 7B: Which Model Extracts Better?

Nibedita Dutta | Last Updated: 18 Jan, 2025

Entity extraction, also known as Named Entity Recognition, is a crucial task in natural language processing that focuses on identifying and classifying key information from unstructured text. This process involves detecting specific entities such as names of people, organizations, locations, dates, and various other categories of information within a body of text. The primary goal of entity extraction is to convert unstructured data into structured formats that can be easily analyzed and interpreted by computers. By transforming raw text into structured data, entity extraction facilitates better information retrieval, content organization, and insights generation from large volumes of textual data.

Entity extraction using Language Models has emerged as a powerful method for identifying and categorizing entities from unstructured text. Language Models excel at understanding the context surrounding words, which allows them to accurately identify entities based on their usage within sentences. This capability significantly reduces the errors associated with ambiguous terms that traditional NER systems might misclassify due to a lack of contextual awareness.

Learning Objectives

  • Understand the concept of entity extraction and its role in transforming unstructured text into structured data for better analysis and insights.
  • Explore how small language models enhance entity extraction by leveraging contextual understanding for accurate entity identification.
  • Compare the features, architecture, and performance of small language models like Gemma 2B, Llama 3.2, and Qwen 7B in entity extraction tasks.
  • Learn the process of implementing and evaluating small language models for entity extraction using practical tools like Google Colab and Ollama.
  • Analyze the comparative assessment results to identify the most effective small language models for specific entity extraction scenarios.

This article was published as a part of the Data Science Blogathon.

How Do Language Models Transform Entity Extraction?

Entity extraction has come a long way from traditional rule-based systems to machine learning models, and now to advanced language models. Unlike older methods, which often struggled with ambiguous terms or lacked the flexibility to adapt to new contexts, language models bring a contextual understanding of text. They analyze not just individual words but the relationships between them, allowing for a more accurate identification and classification of entities like names, organizations, locations, and dates.

Why Can Language Models Improve Entity Extraction?

What sets language models apart is their ability to leverage vast amounts of training data and sophisticated architectures, like transformer-based designs, to recognize patterns in text. This makes them exceptionally effective in handling complex sentences and detecting subtle variations in how entities are expressed. Whether it’s disambiguating terms like “Apple” (the company vs. the fruit) or recognizing new, domain-specific entities without retraining, language models have revolutionized the way unstructured data is transformed into actionable insights. Their adaptability and precision have made them indispensable tools in modern natural language processing.

Gemma 2B vs Llama 3.2 vs Qwen 7B: Overview

Small Language Models have fewer parameters (typically under 10 billion), which dramatically reduces computational costs and energy usage. They focus on specific tasks and are trained on smaller datasets, maintaining a balance between performance and resource efficiency.

Popular Small Language Models

Gemma 2B

Gemma 2B is a lightweight, state-of-the-art language model developed by Google, designed to perform effectively across various natural language processing tasks.

Key Features of Model

  • Number of Parameters: 2 Billion
  • Context Length: 8192 tokens
  • It has been trained on approximately 2 trillion tokens, primarily sourced from web documents, code, and mathematics, predominantly in English.
  • The model is open-source with publicly available weights.
  • Model Architecture: Gemma 2B utilizes a decoder-only transformer architecture.

Some other optimizations in the architecture of Gemma 2B are the following:

  • Multi-Query Attention (MQA), sketched below
  • Rotary Positional Embeddings (RoPE)
  • GeGLU Activations and RMSNorm.
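
To make Multi-Query Attention concrete, here is a minimal PyTorch sketch (illustrative only, not Gemma's actual implementation): every head gets its own queries, while all heads share a single key/value projection, which shrinks the KV cache at inference time.

import torch
import torch.nn as nn

class MultiQueryAttention(nn.Module):
    """Illustrative MQA: per-head queries, one shared key/value head."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q = nn.Linear(d_model, d_model)      # separate query projection per head
        self.k = nn.Linear(d_model, self.d_head)  # one shared key head
        self.v = nn.Linear(d_model, self.d_head)  # one shared value head
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                         # x: (batch, seq_len, d_model)
        B, T, _ = x.shape
        q = self.q(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)  # (B, H, T, d)
        k = self.k(x).unsqueeze(1)                # (B, 1, T, d), broadcast over all heads
        v = self.v(x).unsqueeze(1)
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        return self.out((att @ v).transpose(1, 2).reshape(B, T, -1))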

Llama 3.2 1B and 3B

Llama 3.2 is a collection of multilingual large language models developed by Meta. It offers various parameter sizes, including the 1 billion (1B) and 3 billion (3B) versions.

Key Features of Model

  • The Llama 3.2 1B model consists of 1.23 billion parameters, while the Llama 3.2 3B model contains approximately 3.2 billion. These lightweight options are suitable for deployment on edge devices and mobile platforms.
  • Context length for both models: 128,000 tokens
  • The Llama 3.2 1B and 3B models were trained on a substantial dataset of up to 9 trillion tokens drawn from publicly available sources.
  • The Llama 3.2 models are decoder-only transformers. They are designed as auto-regressive language models, meaning they generate text by predicting the next token based on the previous tokens in the sequence (see the toy sketch below).
  • The models are optimized for multilingual dialogue use cases, making them suitable for tasks such as retrieval and summarization across various languages.
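
As a toy illustration of that auto-regressive loop (a hypothetical model interface, not the actual Llama API):

def generate(model, tokens: list, max_new: int) -> list:
    # Each step conditions on ALL tokens produced so far, then appends the prediction
    for _ in range(max_new):
        next_token = model.predict_next(tokens)  # hypothetical interface
        tokens.append(next_token)
    return tokens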

Qwen 7B

Alibaba Cloud developed Qwen 7B, a language model designed for a variety of natural language processing tasks.

Key Features of Model

  • Qwen 7B has 7 billion parameters, which allows it to capture complex patterns in language and perform a wide range of tasks effectively.
  • The Qwen 7B model has a context length of 8,192 tokens
  • The model was pretrained on over 2.4 trillion tokens from diverse sources, including web texts, books, and code.
  • Qwen 7B model is a decoder-only transformer. It is designed similarly to the LLaMA series of models, focusing on generating text by predicting the next token based on previous tokens in the sequence. It consists of 32 layers and 32 attention heads, with a hidden size of 4096, supporting efficient processing of input data.
Some other optimizations in the architecture of Qwen 7B are the following:

  • Rotary Positional Embeddings (RoPE), sketched below
  • SwiGLU activation function
  • RMSNorm
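
Below is a minimal sketch of the RoPE idea (illustrative only, assuming the common rotate-halves formulation; not Qwen's exact code): positions are encoded by rotating pairs of feature channels through position-dependent angles, so attention scores end up depending on relative positions.

import torch

def apply_rope(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, seq_len, dim) with even dim; rotates channel pairs by position-dependent angles."""
    B, T, D = x.shape
    half = D // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=torch.float32) / half))  # per-pair frequencies
    angles = torch.arange(T, dtype=torch.float32)[:, None] * freqs[None, :]    # (T, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)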

Running Models on Google Colab Using Ollama For Entity Extraction

Running models on Google Colab using Ollama provides a seamless way to implement and evaluate small language models for entity extraction tasks. With minimal setup, users can leverage powerful models to process text and extract key entities efficiently.

Step 1: Installing the Required Libraries

Below we will install all the required libraries:

!sudo apt update
!sudo apt install -y pciutils
!pip install langchain-ollama
!pip install ollama==0.4.2
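
Note that these commands only set up the Python-side dependencies; the Ollama binary itself also needs to be installed. On Colab this is typically done with the official install script (an assumption here, since the snippet above omits this step):

!curl -fsSL https://ollama.com/install.sh | sh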

Step 2: Importing the Required Libraries

Once the installation is done, it is time to import the libraries.

import threading
import subprocess
import time
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM
from IPython.display import Markdown

Step 3: Running Ollama in Background on Colab

Start the Ollama server in the background on Colab to enable seamless interaction with the language models.

def run_ollama_serve():
    # Launch the Ollama server as a background process
    subprocess.Popen(["ollama", "serve"])

# Run the server on a separate thread so the notebook stays responsive
thread = threading.Thread(target=run_ollama_serve)
thread.start()
time.sleep(5)  # give the server a few seconds to start up
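
Instead of relying on a fixed five-second sleep, you can poll until the server actually responds; a small sketch, assuming Ollama's default endpoint at http://localhost:11434:

import requests

# Wait (up to ~30s) for the Ollama server to start accepting connections
for _ in range(30):
    try:
        if requests.get("http://localhost:11434").ok:
            break
    except requests.exceptions.ConnectionError:
        time.sleep(1)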

Step 4: Fetching the CSV Data

We use the first 10 rows of this dataset from GitHub to compare the entities extracted by the different small language models.

import pandas as pd

# Load the sample data; the second column holds the ground-truth entities
df1 = pd.read_csv("generated_highlight_samples.csv", encoding='latin-1', header=None)
df1.columns = ['text', 'entities_org']
df1.shape

Step 5: Pulling the Model from Ollama

Pull the desired language model from Ollama to begin processing text for entity extraction. We use gemma:2b below; the other models are pulled and swapped in the same way.

!ollama pull gemma:2b

template = """Question: {question}"""

prompt = ChatPromptTemplate.from_template(template)

# Swap in "llama3.2:1b", "llama3.2:3b" or "qwen:7b" (pulled the same way) to compare models
model = OllamaLLM(model="gemma:2b")

chain = prompt | model

from tqdm import tqdm

resp = []
for texts in tqdm(df1['text'].values.tolist()[:10]):
    input_data = {
        "question": """ONLY EXTRACT "Project", "Companies" and "People" from the following text in the format WITHOUT ANY ADDITIONAL TEXT ["Project": " " , "Companies" : " ", "People" : " "] - %s""" % (texts)}

    # Invoke the chain with the input data and collect the response
    response = chain.invoke(input_data)
    resp.append([texts, response])

# Create a DataFrame of extracted entities alongside the ground truth
resp1 = pd.DataFrame(resp)
resp1.columns = ['Text', 'Entities']
df2 = df1.iloc[:10, :]
resp1['entities_org'] = df2['entities_org'].values.tolist()
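
The raw responses are free-form strings. Before scoring, it helps to normalize them; below is a hypothetical parsing helper (not from the original workflow) that assumes the model followed the bracketed format requested in the prompt:

import re

def parse_entities(response: str) -> dict:
    """Pull the "Project", "Companies" and "People" values out of the bracketed reply."""
    parsed = {}
    for key in ("Project", "Companies", "People"):
        match = re.search(r'"%s"\s*:\s*"([^"]*)"' % key, response)
        parsed[key] = match.group(1).strip() if match else ""
    return parsed

resp1['parsed'] = resp1['Entities'].apply(parse_entities)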

Output from Gemma 2B

[Image: entity extraction output from Gemma 2B]

Output from Qwen 7B

[Image: entity extraction output from Qwen 7B]

Output from Llama 3.2 1B

[Image: entity extraction output from Llama 3.2 1B]

Output from Llama 3.2 3B

[Image: entity extraction output from Llama 3.2 3B]

Evaluation Framework Used For Assessment of Entity Extraction

The evaluation framework for assessing entity extraction focuses on measuring the accuracy of identified entities like projects, companies, and people. Each model’s output is scored based on its ability to extract entities correctly, partially, or not at all, with scores aggregated across multiple test cases. This approach ensures a fair comparison of model performance in diverse scenarios.

Let us take a sample row from the dataset.

"In a groundbreaking collaboration, Vertex brings together Allianz and Google,
leveraging their expertise to drive innovation, with David at the forefront,
overseeing a team that has achieved a 35% increase in operational efficiency and a
25% reduction in costs, ultimately enhancing customer experience for over 500,000
users, and paving the way for a potential 40% market expansion within the next two
years."

As given in the second column of the dataset, these are the valid Project, Company, and People entities mentioned in the text:

{"projects": ["Vertex"], "companies": ["Allianz", "Google"], "people": ["David"]}

In order to evaluate an LLM for entity extraction, we apply the following procedure:

  • If the LLM extracts all of these entities accurately, we give it a score of 1 for each category.
  • If the LLM fails to extract any of these entities accurately, we give it a score of 0 for each category.
  • If the LLM partially extracts some entities accurately, we assign a score based on the percentage of correctly extracted entities (e.g., 0.5 if it extracts 1 out of 2 original entities correctly) for each category.

Example:

Output_Scenario_1: {"projects": [""], "companies": ["Allianz", "Google"], "people": [""]}

For the above output from the LLM, the scores are as follows:
Number of Correctly Extracted Project Entities - 0
Number of Correctly Extracted Company Entities - 1
Number of Correctly Extracted People Entities - 0

Output_Scenario_2: {"projects": ["Vertex"], "companies": ["Google"], "people": [""]}

For the above output from the LLM, the scores are as follows:
Number of Correctly Extracted Project Entities - 1
Number of Correctly Extracted Company Entities - 0.5
Number of Correctly Extracted People Entities - 0
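
Here is a minimal sketch of this scoring logic (a hypothetical helper, assuming gold and predicted entities are given as dicts of lists keyed by category):

def score_row(gold: dict, pred: dict) -> dict:
    """Per-category fraction of gold entities found in the prediction."""
    scores = {}
    for category, gold_entities in gold.items():
        predicted = {e.lower() for e in pred.get(category, [])}
        hits = sum(1 for e in gold_entities if e.lower() in predicted)
        scores[category] = hits / len(gold_entities) if gold_entities else 0.0
    return scores

gold = {"projects": ["Vertex"], "companies": ["Allianz", "Google"], "people": ["David"]}
pred = {"projects": ["Vertex"], "companies": ["Google"], "people": []}   # Output_Scenario_2
print(score_row(gold, pred))  # {'projects': 1.0, 'companies': 0.5, 'people': 0.0}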

Finally, we sum these scores for all the rows in the dataset to calculate the total number of correctly extracted entities across each category, as the table below shows.

Comparative Assessment of Scores From Different Models

Model | Correctly Extracted Project Entities | Correctly Extracted Company Entities | Correctly Extracted People Entities | Average Score
----- | ----- | ----- | ----- | -----
Gemma 2B | 9 | 10 | 10 | 9.7
Llama 3.2 1B | 5 | 6.5 | 6.5 | 6
Llama 3.2 3B | 6 | 6.5 | 10 | 7.5
Qwen 7B | 5 | 3 | 10 | 6

As the table above shows:

  • Gemma 2B achieves the highest entity extraction accuracy overall.
  • Llama 3.2 3B ranks second overall, with a perfect score on People entities.
  • Qwen 7B performs the worst at extracting Project and Company entities, although it too scores a perfect 10 out of 10 on People entities.
  • Llama 3.2 1B does not perform well on any entity category.

According to the sample test results, Gemma 2B emerged as the top-performing model. Nevertheless, we highly recommend that users conduct their own testing with their specific datasets to confirm the findings.

Conclusion

The comparative assessment of models such as Gemma 2B, Llama 3.2 (both 1B and 3B versions), and Qwen 7B highlights the strengths of these architectures in entity extraction tasks. Gemma 2B stands out with the highest accuracy overall, performing strongly across all three entity types. Llama 3.2 3B also performs well, especially in identifying people entities, while Qwen 7B shows strong performance in that category despite lower accuracy in extracting project and company entities.


In summary, the incorporation of language models into entity extraction processes not only enhances accuracy but also provides the flexibility needed to adapt to evolving data landscapes. As these models continue to advance, they will play an increasingly critical role in transforming unstructured text into actionable insights across various industries.

Key Takeaways

  • Language Models significantly improve entity extraction by leveraging their ability to understand context, leading to more accurate identification and classification of entities compared to traditional NER systems.
  • Language Models can surpass traditional machine learning and deep learning models in NER accuracy, handle entity extraction in multiple languages simultaneously (aiding global operations), and, unlike traditional NER systems, recognize new entities without extensive retraining.
  • Small Language Models have fewer parameters (typically under 10 billion), which dramatically reduces the computational costs and energy usage. They focus on specific tasks and are trained on smaller datasets.
  • Some of the latest Small Language Models include Meta's Llama 3.2 (1 billion and 3 billion), Qwen 2 (0.5 billion and 7 billion), and Gemma 2 (2 billion and 9 billion).
  • In our comparative assessment of small language models for entity extraction, Gemma 2B leads in accuracy, particularly for a wide range of entity types, while Llama 3.2 3B excels in extracting “People” entities. Qwen 7B’s performance is notable for “People” entities but weak for “Project” and “Company” entities.

Frequently Asked Questions

Q1. How do Language Models help in entity extraction?

A. Language Models improve entity extraction by understanding the context around words, which allows for accurate identification of entities, reducing errors that traditional NER systems might make due to lack of context.

Q2. What are Small Language Models (SLMs)?

A. Small Language Models (SLMs) are language models with fewer parameters, typically under 10 billion, making them more resource-efficient. They are optimized for specific tasks and trained on smaller datasets, balancing performance and computational efficiency. These models are ideal for applications that require fast responses and minimal resource consumption.

Q3. What is the Llama 3.2 model and what makes it unique?

A. Llama 3.2 is a multilingual language model with versions of 1B and 3B parameters, designed for tasks such as retrieval and summarization in various languages. It supports up to 128,000 tokens of context and is optimized for dialogue use cases.

Q4. What is the Gemma 2B model and what are its features?

A. Gemma 2B is a lightweight, state-of-the-art language model developed by Google, featuring 2 billion parameters and a context length of 8,192 tokens, optimized for various NLP tasks. It utilizes a decoder-only transformer architecture and is open-source, trained on approximately 2 trillion tokens from diverse sources.

Q5. What are some key features of Qwen 7B model?

A. Alibaba Cloud developed Qwen 7B, a language model with 7 billion parameters and a context length of 8,192 tokens, designed for various NLP tasks. It uses a decoder-only transformer architecture, pre-trained on 2.4 trillion tokens, and includes optimizations like Rotary Positional Embeddings (RoPE) and SwiGLU activation.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Nibedita completed her master’s in Chemical Engineering from IIT Kharagpur in 2014 and is currently working as a Senior Data Scientist. In her current capacity, she works on building intelligent ML-based solutions to improve business processes.
