Exploring Embedding Models with Vertex AI

Sarvagya Agrawal Last Updated : 09 Jan, 2025
10 min read

Vectors are the basis for many of the most powerful artificial intelligence applications, including semantic search and anomaly detection. In this article, we start with the basics of embeddings, then move on to sentence embeddings and vector representations. We'll discuss practical approaches including mean pooling, cosine similarity, and the architecture of dual encoders built on BERT. You will also learn how to train a dual encoder model and how to use embeddings for anomaly detection with Vertex AI, with applications such as fraud detection and content moderation.

Learning Objectives

  • Comprehend the role of vector embeddings in representing words, sentences, and other data types in a continuous vector space.
  • Understand the process of tokenization and how token embeddings contribute to sentence embeddings.
  • Understand the key concepts and best practices for deploying embedding models in Applications with Vertex AI to solve real-world AI challenges.
  • Learn how to optimize and scale Applications with Vertex AI by integrating embedding models for advanced analytics and intelligent decision-making.
  • Gain hands-on experience in training a dual encoder model by defining the encoder architecture and setting up the training process.
  • Implement anomaly detection using techniques such as Isolation Forest to identify outliers based on embedding similarities.

This article was published as a part of the Data Science Blogathon.

Understanding Vector Embeddings

Vector embeddings are a general method for representing a word or a sentence in a continuous vector space. The closeness of these embeddings is what matters most: the smaller the distance between two items in that vector space, the greater their similarity. While embeddings were originally used only in NLP, they now appear in other domains such as images, videos, audio, and graphs. CLIP is one of the most representative multimodal models, producing both image and text embeddings.

The vector embeddings have the following applications:

  • LLMs use them as token embeddings after converting the input text into tokens.
  • Semantic search uses them to find the most relevant answer to a query in search engines.
  • In RAG, sentence embeddings enable the retrieval of relevant chunks.
  • Recommendation systems use them to represent products in the embedding space and find related products.

Let’s understand why sentence embeddings are important for RAG pipelines.

Figure 1: A RAG pipeline in which a retrieval engine fetches relevant information from the database for the user query

In the above figure, the retrieval engine plays a crucial role in determining which information in the database is relevant to the user query. But how does it search the database? One way is to use a transformer-based cross-encoder to compare the query with every piece of information and classify it as relevant or not. This approach is accurate but very slow. A better way to handle such tasks is to use a vector database, which stores the embeddings of all the information and then applies similarity search to fetch the most relevant pieces. This approach is faster, though somewhat less accurate than the cross-encoder.
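To make the similarity-search idea concrete, here is a minimal sketch that ranks pre-computed chunk embeddings against a query embedding using cosine similarity. The function name retrieve_top_k and its inputs are illustrative placeholders, not part of any particular vector database API.

import numpy as np

def retrieve_top_k(query_embedding, chunk_embeddings, k=3):
    # Normalize so that inner products become cosine similarities
    query = query_embedding / np.linalg.norm(query_embedding)
    chunks = chunk_embeddings / np.linalg.norm(chunk_embeddings, axis=1, keepdims=True)
    scores = chunks @ query
    # Return the indices of the k most similar chunks, best first
    return np.argsort(scores)[::-1][:k]

A real vector database performs the same ranking at scale with approximate nearest-neighbor indexes rather than a brute-force scan.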

Understanding Sentence Embeddings

Applying mathematical operations to the token embeddings generates sentence embeddings. Pre-trained models like BERT or GPT produce these token embeddings.

For instance, consider how the BERT model tokenizes a sentence and produces embeddings for its word tokens. Once the token embeddings are computed, a sentence embedding is generated by applying a mean pooling operation. Here's a walkthrough of the code:

import torch
from transformers import BertTokenizer, BertModel

model_name = "./models/bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

def get_sentence_embedding(sentence):
    # Tokenize the sentence and keep the attention mask to ignore padding tokens
    encoded_input = tokenizer(sentence, padding=True, truncation=True, return_tensors='pt')
    attention_mask = encoded_input['attention_mask']

    with torch.no_grad():
        output = model(**encoded_input)

    # Token embeddings: (batch, seq_len, hidden_size)
    token_embeddings = output.last_hidden_state
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()

    # Mean pooling: sum the token embeddings and divide by the number of real (non-padding) tokens
    sentence_embedding = torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    return sentence_embedding.flatten().tolist()

The above code loads the bert-base-uncased model from Hugging Face and defines the get_sentence_embedding function. This function computes the sentence embedding by applying the mean pooling operation on the token embeddings generated by the BERT model.
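For example, calling the helper on a short sentence returns a flat list of 768 values, since bert-base-uncased has a hidden size of 768:

embedding = get_sentence_embedding("Embeddings map text into a vector space.")
print(len(embedding))   # 768 for bert-base-uncased
print(embedding[:5])    # first few components of the sentence vector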

Cosine Similarity of Sentence Embeddings

Cosine similarity is a widely used metric to measure the similarity between two vectors, making it ideal for comparing sentence embeddings. By computing the cosine similarity, we can determine how closely two sentences are related in the embedding space. Below is the implementation of this approach:

import numpy as np
import seaborn as sns

def cosine_similarity_matrix(features):
    # Normalize each embedding to unit length so inner products become cosine similarities
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized_features = features / norms
    similarity_matrix = np.inner(normalized_features, normalized_features)
    rounded_similarity_matrix = np.round(similarity_matrix, 4)
    return rounded_similarity_matrix

def plot_similarity(labels, features, rotation):
    # Plot the pairwise similarities as a heat map
    sim = cosine_similarity_matrix(features)
    sns.set_theme(font_scale=1.2)
    g = sns.heatmap(sim, xticklabels=labels, yticklabels=labels, vmin=0, vmax=1, cmap="YlOrRd")
    g.set_xticklabels(labels, rotation=rotation)
    g.set_title("Semantic Textual Similarity")
    return g

The cosine_similarity_matrix function computes the cosine similarity between embeddings. The following code defines sentences across various topics, and the plot_similarity function analyzes their similarities by plotting a heat map.

messages = [
    # Technology
    "I prefer using a MacBook for work.",
    "Is AI taking over human jobs?",
    "My laptop battery drains too quickly.",

    # Sports
    "Did you watch the World Cup finals last night?",
    "LeBron James is an incredible basketball player.",
    "I enjoy running marathons on weekends.",

    # Travel
    "Paris is a beautiful city to visit.",
    "What are the best places to travel in summer?",
    "I love hiking in the Swiss Alps.",

    # Entertainment
    "The latest Marvel movie was fantastic!",
    "Do you listen to Taylor Swift's songs?",
    "I binge-watched an entire season of my favorite series.",

]
embeddings = []
for t in messages:
    emb = get_sentence_embedding(t)
    embeddings.append(emb)

plot_similarity(messages, embeddings, 90)

Figure 2: Semantic textual similarity heat map of the sentence embeddings

The output shown in Fig. 2 illustrates the similarity between various sentences. Most of the map appears predominantly red, suggesting high similarity across sentences, which is inconsistent with their actual content.  

Is there a better way to get more accurate results? The next section discusses the dual encoder, one approach that yields better results.

How to Train the Dual Encoder?

A dual encoder architecture uses two independent BERT encoders: one processes questions, and the other processes answers. Each input sequence passes through its respective encoder layers, and the model extracts the [CLS] token embedding as a compact representation of the entire sequence. After obtaining the [CLS] token embeddings for both the question and answer, the model calculates their cosine similarity. This similarity score serves as input to the loss function during training, allowing the model to learn how to align relevant questions and answers effectively.

Figure 3: Dual encoder architecture with separate BERT encoders for questions and answers

Why is the [CLS] token embedding important? The [CLS] token is designed to pool information from all other tokens in the sequence, making it a compact summary of the sequence’s meaning. Its effectiveness comes from the self-attention mechanism in BERT, which allows the [CLS] token to attend to all other tokens and aggregate their contextualized information.
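To see this concretely, the [CLS] embedding is simply the hidden state at position 0 of the sequence. Using the BERT tokenizer and model loaded earlier, it can be extracted as follows:

encoded = tokenizer("How do embeddings work?", return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

# The [CLS] token always sits at position 0 of the sequence
cls_embedding = output.last_hidden_state[:, 0, :]   # shape: (1, 768)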

Dual Encoder for Question-Answer Tasks

Dual encoders are commonly used in question-answer tasks to compute the relevance between questions and potential answers. This approach involves encoding both the question and the answer into a shared embedding space. Here’s how it can be implemented:

import torch

class Encoder(torch.nn.Module):
    def __init__(self, vocab_size, embed_dim, output_embed_dim):
        super().__init__()
        self.embedding_layer = torch.nn.Embedding(vocab_size, embed_dim)
        self.encoder = torch.nn.TransformerEncoder(
            torch.nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=3,
            norm=torch.nn.LayerNorm([embed_dim]),
            enable_nested_tensor=False
        )
        self.projection = torch.nn.Linear(embed_dim, output_embed_dim)

    def forward(self, tokenizer_output):
        # Look up token embeddings for the input ids
        x = self.embedding_layer(tokenizer_output['input_ids'])
        # Mask out padding positions via the key padding mask
        x = self.encoder(x, src_key_padding_mask=tokenizer_output['attention_mask'].logical_not())
        # Take the [CLS] token (position 0) as the sequence representation
        cls_embed = x[:, 0, :]
        return self.projection(cls_embed)

Once the encoder module is defined, it can be trained like any other deep learning model.

Training the Dual Encoder

Training the dual encoder involves preparing and optimizing two separate networks for questions and answers to learn a shared embedding space. Let’s go through the steps:

Define the Hyperparameters

Hyperparameters like embedding size, sequence length, and batch size play a key role in configuring the training process. These parameters are defined as follows:

embed_size = 512          # dimension of the token embeddings inside each encoder
output_embed_size = 128   # dimension of the final projected embedding
max_seq_len = 64          # maximum number of tokens per input
batch_size = 32
n_iters = len(dataset) // batch_size + 1   # batches per epoch

Initialize the tokenizer, question encoder and answer encoder

Before training, initialize the tokenizer and the dual encoders. These components map text inputs into embedding vectors for further processing.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
question_encoder = Encoder(tokenizer.vocab_size, embed_size, output_embed_size)
answer_encoder = Encoder(tokenizer.vocab_size, embed_size, output_embed_size)

Define the dataloader, optimizer and loss function

To train the model efficiently, set up a data loader for batching, an optimizer for parameter updates, and a loss function to guide learning.

dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
optimizer = torch.optim.Adam(list(question_encoder.parameters()) + list(answer_encoder.parameters()), lr=1e-5)
loss_fn = torch.nn.CrossEntropyLoss()

Train the model for the specified number of epochs and batch size while minimizing the loss. After completing the training, use the encoder models for both the question and answer components independently to generate embeddings. Compare these embeddings to compute a similarity score and evaluate their relevance.
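Below is a minimal sketch of such a training loop, assuming each batch from the dataloader yields a pair of question and answer text lists. It uses in-batch negatives: answer i is the positive match for question i and a negative for every other question in the batch, which is exactly what the cross-entropy loss over the similarity matrix encodes. For simplicity, the sketch scores pairs with dot products of the projected embeddings; normalizing them first would give the cosine similarities described above.

num_epochs = 3  # illustrative value

for epoch in range(num_epochs):
    for questions, answers in dataloader:  # assumes the dataset yields (question, answer) text pairs
        # Tokenize both sides of the pair
        q_tok = tokenizer(list(questions), padding=True, truncation=True,
                          max_length=max_seq_len, return_tensors='pt')
        a_tok = tokenizer(list(answers), padding=True, truncation=True,
                          max_length=max_seq_len, return_tensors='pt')

        # Encode questions and answers into the shared embedding space
        q_emb = question_encoder(q_tok)   # (batch_size, output_embed_size)
        a_emb = answer_encoder(a_tok)     # (batch_size, output_embed_size)

        # Similarity matrix: entry (i, j) scores question i against answer j
        sim = q_emb @ a_emb.T

        # The correct answer for question i is answer i (in-batch negatives)
        targets = torch.arange(sim.shape[0])
        loss = loss_fn(sim, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()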

Application of Embeddings using Vertex AI

This section provides a step-by-step guide to applying embeddings using Vertex AI. The focus is on identifying whether a piece of text is an outlier within a given corpus by generating its embeddings with Vertex AI. This approach has significant industrial applications, such as:

  • Anomaly Detection
  • Fraud Detection
  • Content Moderation
  • Search and Recommendation Systems

Dataset Creation from Stack Overflow 

We will leverage BigQuery, Google Cloud’s serverless data warehouse, to query Stack Overflow data. Specifically, we’ll retrieve the first 500 posts (questions and answers) for each programming language: Python, HTML, R, and CSS. This will allow us to gather structured insights and analyze posts related to these popular programming languages efficiently.

from google.cloud import bigquery
import pandas as pd

def run_bq_query(sql):
    # Create the BigQuery client (PROJECT_ID and credentials are assumed to be set up beforehand)
    bq_client = bigquery.Client(project=PROJECT_ID,
                                credentials=credentials)

    # Dry run first to validate the query without executing it
    job_config = bigquery.QueryJobConfig(dry_run=True,
                                         use_query_cache=False)
    bq_client.query(sql, job_config=job_config)

    # Run the actual query
    job_config = bigquery.QueryJobConfig()
    client_result = bq_client.query(sql,
                                    job_config=job_config)

    job_id = client_result.job_id

    # Wait for the job to finish and convert the result to a pandas DataFrame
    df = client_result.result().to_arrow().to_pandas()
    print(f"Finished job_id: {job_id}")
    return df


languageList = ["python", "html", "r", "css"]

stackoverflowDf = pd.DataFrame()

for language in languageList:

    print(f"generating {language} dataframe")

    query = f"""
    SELECT
        CONCAT(q.title, q.body) as input_text,
        a.body AS output_text
    FROM
        `bigquery-public-data.stackoverflow.posts_questions` q
    JOIN
        `bigquery-public-data.stackoverflow.posts_answers` a
    ON
        q.accepted_answer_id = a.id
    WHERE
        q.accepted_answer_id IS NOT NULL AND
        REGEXP_CONTAINS(q.tags, "{language}") AND
        a.creation_date >= "2020-01-01"
    LIMIT
        500
    """
    languageDf = run_bq_query(query)
    languageDf["category"] = language
    stackoverflowDf = pd.concat([stackoverflowDf, languageDf],
                                ignore_index=True)

On running the above code, the output will be as shown below:

generating python dataframe
Finished job_id: 4ca80448-0adb-4dce-9b3a-4a8b84f34609
generating html dataframe
Finished job_id: e2df23cd-ce8d-4e03-8a23-398950c3cc67
generating r dataframe
Finished job_id: 37826d30-213d-4a9b-ae5d-f25b5ce8d7eb
generating css dataframe
Finished job_id: 04e7f798-eed6-4362-9814-8eaa4af01722

Generate Text Embeddings

To generate embeddings for a dataset of texts, we need to process the data in batches to optimize performance and adhere to API limitations. Below are the key steps for achieving this:

  • Batching the Dataset
  • Sending Batches to the Model

from vertexai.language_models import TextEmbeddingModel

# Load the Vertex AI text embedding model
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

def generate_batches(sentences, batch_size=5):
    # Yield successive batches of sentences
    for i in range(0, len(sentences), batch_size):
        yield sentences[i : i + batch_size]

# Take the first 200 questions and split them into batches
so_questions = stackoverflowDf[0:200].input_text.tolist()
batches = generate_batches(sentences=so_questions)

Get Embeddings on a Batch of Data

This helper function utilizes model.get_embeddings() to process a batch of input texts, efficiently generating and returning a list of embeddings, where each embedding corresponds to a specific text within the batch.

def encode_texts_to_embeddings(sentences):
    try:
        embeddings = model.get_embeddings(sentences)
        return [embedding.values for embedding in embeddings]
    except Exception:
        # If the API call fails, return None for every sentence in the batch
        return [None for _ in range(len(sentences))]

Now, we will get the question embeddings:

question_embeddings = encode_text_to_embedding_batched(
                            sentences=so_questions,
                            api_calls_per_second = 20/60, 
                            batch_size = 5)
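The encode_text_to_embedding_batched helper used above is not defined in the snippets shown here. Below is a minimal sketch of what it might look like, assuming it simply combines generate_batches and encode_texts_to_embeddings with a crude rate limit; the details are an assumption, not the exact implementation.

import time
import numpy as np

def encode_text_to_embedding_batched(sentences, api_calls_per_second=20/60, batch_size=5):
    # Hypothetical helper: send one batch per API call, sleeping between calls
    # so that the requested rate limit is respected.
    seconds_per_call = 1.0 / api_calls_per_second
    all_embeddings = []
    for batch in generate_batches(sentences, batch_size=batch_size):
        all_embeddings.extend(encode_texts_to_embeddings(batch))
        time.sleep(seconds_per_call)
    # Drop failed calls (None) and return a NumPy array, as expected by the later code
    return np.array([emb for emb in all_embeddings if emb is not None])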

Identifying the Anomaly 

We can introduce an anomalous piece of text into the dataset and evaluate whether the outlier detection algorithm, such as Isolation Forest, can successfully identify it as an anomaly based on its embedding. This approach leverages the embedding’s ability to capture the semantic meaning of the text, enabling the detection of text that deviates significantly from the rest of the corpus.

from sklearn.ensemble import IsolationForest

# An off-topic piece of text to inject as an outlier
input_text = """
I am working on my car but can't
remember the correct tire pressure.
I've checked a few manuals but couldn't
find any relevant details online
"""
emb = model.get_embeddings([input_text])[0].values

# Append the outlier embedding to the question embeddings
embeddings_l = question_embeddings.tolist()
embeddings_l.append(emb)

embeddings_array = np.array(embeddings_l)

# Append the outlier row to the dataframe as well
new_row = pd.Series([input_text, None, "baking"],
                    index=stackoverflowDf.columns)
stackoverflowDf.loc[len(stackoverflowDf)] = new_row
stackoverflowDf.tail()

An additional row, which is an outlier, has been appended to the data frame stackoverflowDf. Figures 4 and 5 show the output of embeddings_array and stackoverflowDf, respectively.

Figure 4: Output of embeddings_array
Figure 5: stackoverflowDf output with the appended outlier row

Using Isolation Forest to Identify Potential Outliers

Use the Isolation Forest algorithm to identify potential outliers within the dataset. The Isolation Forest classifier will predict -1 for potential outliers and 1 for non-outliers. By inspecting the rows that are classified as outliers, you can verify whether the “car” question is correctly identified as an anomaly. This approach allows for the detection of texts that deviate significantly from the main distribution, enabling insights into atypical data points that might warrant further investigation or specialized handling.

clf = IsolationForest(contamination=0.005,
                      random_state=2)
preds = clf.fit_predict(embeddings_array)

print(f"{len(preds)} predictions. Set of possible values: {set(preds)}")
print(stackoverflowDf.loc[preds == -1])

The output of the above program, the rows detected as anomalous, is shown in Figure 6.

Figure 6: Rows flagged as anomalous by the Isolation Forest

Conclusion

Vector embeddings play a crucial role in modern machine learning applications, enabling efficient representation and retrieval of semantic information. By leveraging pre-trained models like BERT and techniques such as dual encoders and anomaly detection, we can enhance the accuracy and efficiency of tasks like question-answering, similarity analysis, and outlier detection. Understanding these concepts and their practical implementation, particularly through tools like Vertex AI, provides a strong foundation for tackling real-world challenges in NLP and beyond.

Key Takeaways

  • Dual encoders enable effective question-answer mapping by learning a shared embedding space for both inputs.
  • Hyperparameter tuning is essential to optimize the model’s performance and training efficiency.
  • Tokenization and encoder initialization transform raw text into embeddings ready for training.
  • Data loaders, optimizers, and loss functions are foundational components for efficient model training.
  • Clear modular steps ensure a structured approach to implementing and training dual encoders.

Frequently Asked Questions

Q1. What are vector embeddings?

A. Vector embeddings are numerical representations of data (like text) in a vector space, where proximity indicates similarity.

Q2. Why is the [CLS] token important in BERT?

A. The [CLS] token aggregates information from the entire sequence, serving as a compact representation for tasks like classification.

Q3. How does the dual encoder architecture work?

A. It uses two separate encoders for questions and answers, with their [CLS] token embeddings compared to determine relevance.

Q4. What is the purpose of anomaly detection in embeddings?

A. Anomaly detection identifies outliers by analyzing the embeddings of data points and detecting deviations from the norm.

Q5. How are embeddings generated with Vertex AI?

A. Vertex AI generates text embeddings by processing batches of text, allowing for efficient similarity analysis and outlier detection.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Hi, I'm Sarvagya Agrawal, Software Engineer, with a strong passion for utilizing technology to drive positive change in society. I believe that technology is not just a skill, but an art form that can be leveraged to transform the world.
My primary focus lies in machine learning and web development, with strong programming skills in Python. I have worked on innovative projects, including developing an AI model to calculate cardiovascular risk factors from OCTA scans using computer vision algorithms and creating an AI-based web application for calculating financial risk based on an individual's spending trends.
