Current text embedding models, such as BERT, are limited to processing 512 tokens at a time, which hinders their effectiveness with long documents and often results in loss of context and nuanced understanding. Jina Embeddings v2 addresses this issue by supporting sequences of up to 8192 tokens, preserving context and improving the accuracy and relevance of the processed information in long documents. This advancement marks a substantial improvement in handling complex text data.
Long documents pose unique challenges in NLP. Traditional models process text in chunks, truncating context or producing fragmented embeddings that misrepresent the original document. The result is lost global context, fragmented semantics, and weaker downstream performance on tasks such as retrieval and clustering.
Jina Embeddings v2 directly addresses these issues by expanding the token limit to 8192, eliminating the need for excessive segmentation and preserving the document’s semantic integrity.
Jina Embeddings v2 builds on a BERT-style backbone and extends it with innovations such as Attention with Linear Biases (ALiBi), Gated Linear Units (GLU), and a three-stage training paradigm. Here is how the key mechanism, ALiBi, works:
With ALiBi attention, a linear bias is added to each attention score before the softmax operation. Each attention head uses a distinct constant slope, m, so different heads penalize distance at different rates. Jina Embeddings v2 adopts the encoder variant, in which all tokens attend to one another, in contrast to the causal variant originally designed for language modeling, where a causal mask restricts tokens to attending only to preceding positions in the sequence.
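To make the idea concrete, here is a minimal, simplified sketch of encoder-style ALiBi for a single attention head (an illustration of the mechanism, not the model's actual implementation): a head-specific slope m scales the distance between token positions, and the resulting bias is applied to the attention scores before the softmax.
import torch
def alibi_attention(q, k, m):
    """Simplified encoder-style ALiBi for one attention head (illustrative sketch).
    q, k: (seq_len, head_dim) query and key matrices for a single head
    m:    the head-specific slope, a constant scalar
    """
    seq_len, head_dim = q.shape
    scores = (q @ k.T) / head_dim ** 0.5  # scaled dot-product attention scores
    positions = torch.arange(seq_len)
    distance = (positions[None, :] - positions[:, None]).abs()  # |i - j|; symmetric, so every token attends to every other token
    scores = scores - m * distance  # linear bias applied before softmax
    return torch.softmax(scores, dim=-1)  # attention weights
# Example: 6 tokens, 16-dimensional head, slope 0.5 for this head
weights = alibi_attention(torch.randn(6, 16), torch.randn(6, 16), m=0.5)
print(weights.shape)  # torch.Size([6, 6])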
Jina Embeddings v2 delivers state-of-the-art performance across multiple benchmarks, including the Massive Text Embedding Benchmark (MTEB) and newly designed long-document datasets.
The chart compares embedding models’ performance across retrieval and clustering tasks with varying sequence lengths. Text-embedding-ada-002 excels, especially at its 8191-token cap, showing significant gains in long-context tasks. Other models, like e5-base-v2, show consistent but less dramatic improvements with longer sequences, possibly affected by the lack of prefixes like query: in its setup. Overall, longer sequence handling proves critical for maximizing performance in these tasks.
Jina Embeddings v2 stands out not only for its ability to handle extended sequences but also for its competitive performance against proprietary models such as OpenAI’s text-embedding-ada-002. While many open-source models cap their sequence lengths at 512 tokens, Jina Embeddings v2’s 16x longer context window enables entirely new use cases in NLP.
Moreover, its open-source availability ensures accessibility for diverse organizations and projects. The model can be fine-tuned for specific applications using resources from its Hugging Face repository.
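As a rough illustration of what fine-tuning could look like (a minimal sketch using the sentence-transformers training API, not Jina’s official recipe), the snippet below trains with a contrastive in-batch-negatives loss; the query/passage pairs are purely hypothetical placeholders for your own data.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
# Load the base model to fine-tune (illustrative sketch only)
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
# Hypothetical (query, relevant passage) pairs -- replace with your own data
train_examples = [
    InputExample(texts=['What is the refund policy?', 'Refunds are issued within 30 days of purchase.']),
    InputExample(texts=['How do I reset my password?', 'Use the "Forgot password" link on the login page.']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
# Contrastive objective that treats other in-batch passages as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)
# A single short epoch, purely for illustration
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)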
!pip install transformers
!pip install -U sentence-transformers
You can use Jina embeddings directly through the transformers library:
import torch
from transformers import AutoModel
from numpy.linalg import norm
# Define cosine similarity function
cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
# Load the Jina embedding model
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
# Encode sentences
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
# Calculate cosine similarity between the two sentence embeddings
print(cos_sim(embeddings[0], embeddings[1]))
Output: a single cosine-similarity score close to 1, since the two sentences are near-paraphrases.
To process longer sequences, specify the max_length parameter:
embeddings = model.encode(['Very long ... document'], max_length=2048)
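If you want to know how many tokens a document actually occupies before choosing max_length, one option (not required by the API) is to load the matching tokenizer and count the input IDs:
from transformers import AutoTokenizer
# Optional: inspect a document's token count before deciding on max_length
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-en')
num_tokens = len(tokenizer('Very long ... document')['input_ids'])
print(num_tokens)  # anything up to 8192 tokens fits in a single embedding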
Alternatively, utilize Jina embeddings with the sentence-transformers library:
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load the Jina embedding model
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
# Encode sentences
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
# Calculate cosine similarity between the two sentence embeddings
print(cos_sim(embeddings[0], embeddings[1]))
Control input sequence length as needed:
model.max_seq_length = 1024 # Set maximum sequence length to 1024 tokens
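After raising the limit, a longer passage is still encoded as one fixed-size vector; assuming the base English model loaded above, the embedding has 768 dimensions.
# Encode a longer passage after raising the limit; the result is still a single vector
long_embedding = model.encode('Very long ... document')
print(long_embedding.shape)  # (768,) for jina-embeddings-v2-base-en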
Jina Embeddings v2 marks an important advancement in NLP, addressing the challenges of long-document embeddings. By supporting sequences of up to 8192 tokens and delivering strong performance, it enables a variety of applications, including academic research, enterprise search, and generative AI. As NLP tasks increasingly involve processing lengthy and complex texts, innovations like Jina Embeddings v2 will become essential. Its capabilities not only improve current workflows but also open new possibilities for working with long-form textual data in the future.
For more details or to integrate Jina Embeddings v2 into your projects, visit its Hugging Face page.
Q. How does Jina Embeddings v2 differ from traditional models like BERT?
A. Jina Embeddings v2 supports sequences of up to 8192 tokens, overcoming the 512-token limit of traditional models like BERT. This allows it to handle long documents without segmenting them, preserving global context and improving semantic representation.
Q. Which architectural innovations enable it to handle long texts?
A. The model incorporates cutting-edge innovations such as Attention with Linear Biases (ALiBi), Gated Linear Units (GLU), and a three-stage training paradigm. These optimizations enable effective handling of lengthy texts while maintaining high performance and efficiency.
Q. How can I integrate Jina Embeddings v2 into my project?
A. You can integrate it using either the transformers or sentence-transformers libraries. Both provide easy-to-use APIs for encoding text, handling long sequences, and computing similarity. Detailed setup steps and example code are provided in this guide.
Q. What should I check if I have trouble accessing the model on Hugging Face?
A. Ensure you’re logged in to Hugging Face to access gated models, and provide an access token if needed. Also confirm that the model matches your language requirements by selecting the appropriate identifier (e.g., for the Chinese or German variants).