Building Serverless Intelligent Chatbots with Amazon Bedrock and Knowledge Base

Abhishek Kumar Last Updated : 11 Apr, 2024
7 min read

Introduction

In today’s tech world, serverless architecture has transformed app development, removing the hassle of server management and enabling seamless scalability. AI-driven chatbots, especially when connected to a Knowledge Base, provide personalized, real-time responses that enhance the user experience. Amazon Bedrock, an AWS service, lets you build knowledge-driven chatbots on top of advanced language models for accurate, relevant interactions. This article shows you how to create a serverless chatbot application using Amazon Bedrock’s Knowledge Base, highlighting the streamlined process and the impact it can have on customer engagement.

Build Serverless Chatbots with Amazon Bedrock Knowledge Base

Setting Up the Data Source

Creating an Amazon S3 bucket is a foundational step in many AWS projects, serving as a secure and scalable storage option for data of all types. Here’s a detailed guide on how to create an S3 bucket via the AWS Management Console, along with best practices for setting permissions to ensure the security of your stored data.

  1. Locate the “Services” menu at the top of the console.
  2. Click on “Services” and find “S3” under the “Storage” category or use the search bar to find S3. Then click on “S3” to open the S3 dashboard.
  3. Click on the “Create bucket” button. Enter a unique name for your bucket, select your preferred AWS Region, and leave the other options at their default settings for simplicity, then click “Create bucket”.
  4. Once your bucket is created, open it by clicking on its name in the S3 dashboard, then click the “Upload” button.
  5. You can drag and drop files into the upload area or select “Add files” to choose files from your computer, followed by “Upload” to complete the process.

Remember, all uploaded files will inherit the bucket’s permissions, ensuring that your data remains secure under the default settings, which block all public access unless you configure otherwise for specific needs.
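If you prefer to script this step, here is a minimal boto3 sketch that creates the bucket and uploads a document; the bucket name, region, and file name are placeholders to replace with your own.

import boto3

s3 = boto3.client("s3")
bucket_name = "my-bedrock-kb-documents"  # placeholder: bucket names must be globally unique

# Create the bucket (omit CreateBucketConfiguration if your region is us-east-1)
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Upload a source document for the Knowledge Base to index later
s3.upload_file("product-faq.pdf", bucket_name, "product-faq.pdf")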


Creating Amazon Bedrock Knowledge Base

Establishing an Amazon Bedrock Knowledge Base begins with a crucial caveat: the service is currently available only in specific regions. The first step is to create an IAM (Identity and Access Management) user, because a knowledge base cannot be created while signed in as the root user. The following steps outline how to create an IAM user:

  1. Navigate to the IAM console within the AWS Management Console.
  2. Select ‘Users’ from the dashboard menu.
  3. Click on ‘Add User’ to initiate the creation process.
  4. Specify a username for the new IAM user.

After creating the user, proceed by selecting the user from the list and clicking on ‘Manage Console Access’.


After clicking ‘Manage Console Access,’ proceed by clicking ‘Apply.’ This action prompts the system to generate a CSV file containing the necessary credentials. Download this file.

Next, utilize the provided ‘Console-sign-in-URL’ to access the AWS Management Console. This URL will direct you to the login page, where you can input the credentials from the downloaded CSV file to gain access.
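The same IAM setup can also be scripted; here is a rough boto3 sketch assuming a hypothetical user name. You would still grant the user whatever permissions your setup requires, just as you would in the console.

import boto3

iam = boto3.client("iam")

# Create the IAM user that will own the knowledge base setup
iam.create_user(UserName="bedrock-kb-admin")  # placeholder user name

# Enable console sign-in for that user
iam.create_login_profile(
    UserName="bedrock-kb-admin",
    Password="ChangeMe-Str0ng-Pass!",  # placeholder: change on first sign-in
    PasswordResetRequired=True,
)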

Creating the Knowledge Base

To create the Knowledge Base, open Amazon Bedrock in the AWS Management Console, go to the Knowledge Base section, and follow the prompts. Keep track of the configurations you select so they align with your requirements and budget: the default vector store is an Amazon OpenSearch Serverless collection, which is billed for as long as it exists.

By staying aware that this is a paid service, you can manage costs and make the most of Amazon Bedrock for your specific needs.


We will keep most of the options at their default settings.


We’ll start by providing the S3 URI of the bucket we’ve created. Then, we’ll proceed to select embeddings and configure the vector store. For this setup, we’ll opt for Amazon’s default embeddings and vector store.


Having successfully created the Knowledge Base, note its Knowledge Base ID from the console; you will need it in the Lambda code below.
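If you did not note the ID, you can also list it programmatically with the bedrock-agent client; a minimal sketch, assuming your credentials and region are already configured.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Print the name and ID of every knowledge base in the current region
for kb in bedrock_agent.list_knowledge_bases()["knowledgeBaseSummaries"]:
    print(kb["name"], kb["knowledgeBaseId"])

With the Knowledge Base in place, our next step is to create a Lambda function.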

Creating an AWS Lambda Function

  1. Navigate to the AWS Lambda console within the AWS Management Console.
  2. Click on ‘Create function’ to initiate the creation process.
  3. Choose the runtime environment for your function. Lambda supports several programming languages, including Python, Node.js, and Java; for this article we will select Python as the runtime (the code below is written in Python).

After creating the Lambda function with the default settings, adjust the timeout: the default of 3 seconds is usually too short for a Bedrock retrieval-and-generation call. Increasing the timeout gives the function enough time to complete, preventing premature termination and ensuring uninterrupted processing.
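You can change the timeout in the function’s general configuration in the console, or script it; a minimal boto3 sketch, assuming a hypothetical function name 'bedrock-kb-chatbot'.

import boto3

lambda_client = boto3.client("lambda")

# Raise the timeout (in seconds) so the Bedrock call has time to finish
lambda_client.update_function_configuration(
    FunctionName="bedrock-kb-chatbot",  # placeholder: use your function's name
    Timeout=60,
)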


In the Lambda function’s configuration, open the Permissions section and click the execution role name. This opens the role in the IAM console, where you attach the ‘AmazonBedrockFullAccess’ policy to grant the necessary permissions.
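The same policy can be attached with boto3; a sketch assuming a hypothetical execution role name 'bedrock-kb-chatbot-role'. In production you would likely scope the permissions down to only the Bedrock actions the function calls.

import boto3

iam = boto3.client("iam")

# Attach the AWS-managed Bedrock policy to the Lambda execution role
iam.attach_role_policy(
    RoleName="bedrock-kb-chatbot-role",  # placeholder: use your function's execution role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonBedrockFullAccess",
)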


With the granted permissions, the Lambda function is now capable of accessing our Knowledge Base within Bedrock.

Calling the RetrieveAndGenerate API to Access Data from the Knowledge Base (Lambda Function)

import boto3

# Create a client for the Bedrock Agent Runtime, which exposes the RetrieveAndGenerate API
client_bedrock_knowledgebase = boto3.client('bedrock-agent-runtime')

def lambda_handler(event, context):
    # 1. Read the user prompt from the incoming event
    user_prompt = event['prompt']
    print(user_prompt)

    # 2. Call the RetrieveAndGenerate API against the Knowledge Base
    response = client_bedrock_knowledgebase.retrieve_and_generate(
        input={
            'text': user_prompt
        },
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': 'Your-ID',
                'modelArn': 'arn:aws:bedrock:Your-Region::foundation-model/anthropic.claude-instant-v1'
            }
        }
    )

    # 3. Return the generated answer; citations are also available under response['citations']
    response_kbase_final = response['output']['text']
    return {
        'statusCode': 200,
        'body': response_kbase_final
    }

We referenced the Boto3 documentation for retrieve_and_generate while writing this Lambda function; you can consult it for further details:

https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent-runtime/client/retrieve_and_generate.html

As you can see, a foundation model is referenced in the code. To enable access to it, open the Bedrock console, locate ‘Model access’ in the left-hand menu, choose ‘Manage model access’, select the models you need, and confirm by clicking ‘Save changes’.
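To see which foundation models are available in your region (and to copy the model ID used in the modelArn above), you can list them with the bedrock client; note that this lists the models offered in the region and does not by itself grant access.

import boto3

bedrock = boto3.client("bedrock")

# Print the ID of every foundation model offered in the current region
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])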


Now we will test our Lambda function. For that, we create a test event that passes the prompt in its JSON body, as shown below.
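A minimal test event for the Lambda console’s ‘Test’ dialog might look like this; the prompt text is just an example.

{
    "prompt": "How to train LLM from scratch"
}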


Finally, click “Deploy” and then run the test event to confirm that the Lambda function returns an answer from the Knowledge Base.


Creating a REST API

  1. Navigate to Amazon API Gateway: Go to the AWS Management Console and select Amazon API Gateway.
  2. Create a New API: Click on “Create API” to start building your new API.

3. After the API is created, you land on its Resources page; we will keep the default root resource as it is. Click on “Create method”, choose GET as the method type, select “Lambda function” as the integration, and pick the Lambda function we created.


4. After selecting the Lambda function, configure the URL query string parameters: specify ‘prompt’ as the parameter name, then click “Create method” to finish.


5. Once the method is created, edit the Integration request: click “Edit”, navigate to the mapping templates section, and define how the incoming GET request is mapped to the event passed to Lambda, as in the sketch below.
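As an illustration, a minimal mapping template for the application/json content type could look like the snippet below; it copies the ‘prompt’ query string parameter into the event field the Lambda code reads. The exact template is an assumption based on the Lambda code above, and it applies only when the method uses a non-proxy Lambda integration.

{
    "prompt": "$input.params('prompt')"
}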


6. With the REST API configured, deploy it by selecting “Deploy API”. Choose the “New Stage” option and assign a name to your stage. You can then call the deployed endpoint with the prompt parameter set, for instance, to ‘How to train LLM from scratch’.
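Once deployed, you can also call the endpoint from Python; the invoke URL below is a placeholder, so use the one API Gateway shows for your stage.

import urllib.parse
import urllib.request

# Placeholder invoke URL; copy yours from the API Gateway stage page
invoke_url = "https://abc123.execute-api.ap-south-1.amazonaws.com/dev"
prompt = "How to train LLM from scratch"

# Send the prompt as a URL query string parameter and print the response body
query = urllib.parse.urlencode({"prompt": prompt})
with urllib.request.urlopen(f"{invoke_url}?{query}") as response:
    print(response.read().decode("utf-8"))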


Now it is time to see the result:


As evident, we have obtained the outcome from the knowledge base regarding the training of Large Language Models (LLMs) from scratch.
NOTE: Please don’t forget to delete the knowledge base, and also delete the vector store collection in Amazon OpenSearch Serverless, so that you don’t keep getting charged for its use.
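If you prefer to script the cleanup, here is a minimal sketch; the knowledge base ID and collection ID are placeholders for your own values.

import boto3

# Delete the Bedrock knowledge base
boto3.client("bedrock-agent").delete_knowledge_base(knowledgeBaseId="Your-ID")

# Delete the OpenSearch Serverless collection that served as the vector store
boto3.client("opensearchserverless").delete_collection(id="your-collection-id")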

Conclusion

In the journey through the digital transformation of customer engagement, we’ve explored the creation of a serverless chatbot leveraging Amazon Bedrock and AWS technologies. From setting up a secure and scalable S3 bucket for data storage to navigating the intricacies of Amazon Bedrock Knowledge Base for deep learning insights, this guide has walked you through each step with precision. The deployment of an AWS Lambda function marked a significant milestone, enabling the seamless execution of the RetrieveAndGenerate API, which is the core of our chatbot’s intelligence.

By integrating these components with a REST API, we’ve laid down a robust foundation for building chatbots that are not only responsive but also deeply knowledgeable, capable of drawing from vast databases to provide accurate, context-aware information. The practical steps outlined, accompanied by insights on permissions, security, and efficient API usage, serve as a beacon for developers looking to harness the capabilities of AI in enhancing customer interactions.

As we conclude, it’s clear that the integration of Amazon Bedrock with AWS services opens up a new realm of possibilities for developing chatbots that go beyond mere question-answering entities. These advanced bots are poised to revolutionize customer service, offering personalized, insightful interactions that can significantly enhance the user experience. This exploration is just the beginning, and the future of AI-powered communication looks brighter than ever.

Hello, I'm Abhishek, a Data Engineer Trainee at Analytics Vidhya. I'm passionate about data engineering and video games. I have experience with Apache Hadoop, AWS, and SQL, and I keep exploring their intricacies and optimizing data workflows.

:)
