In today’s tech world, serverless architecture has transformed app development, eliminating server management hassle and enabling seamless scalability. AI-driven chatbots, especially when linked to knowledge bases, provide personalized, real-time responses, enhancing user experience. Enter Amazon Bedrock, an AWS service for building knowledge-driven chatbots on advanced language models, delivering accurate, relevant interactions that are transforming customer support. This article will teach you how to create a serverless chatbot application leveraging Amazon Bedrock’s Knowledge Base, highlighting the streamlined process and the transformative impact it can have on customer engagement.
Creating an Amazon S3 bucket is a foundational step in many AWS projects, serving as a secure and scalable storage option for data of all types. Here’s a detailed guide on how to create an S3 bucket via the AWS Management Console, along with best practices for setting permissions to ensure the security of your stored data.
Remember, all uploaded files will inherit the bucket’s permissions, ensuring that your data remains secure under the default settings, which block all public access unless you configure otherwise for specific needs.
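The console steps above can also be sketched in code. Here is a minimal boto3 sketch, assuming a hypothetical bucket name (`my-bedrock-kb-docs`) and region; the Block Public Access settings mirror the secure defaults described above. Actually creating the bucket requires configured AWS credentials, so that call is left commented out.

```python
def public_access_block():
    """The default Block Public Access settings applied to a new bucket:
    all four flags enabled, so nothing is publicly readable."""
    return {
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    }

def create_private_bucket(bucket_name, region='us-east-1'):
    """Create a bucket and explicitly block all public access.
    Requires AWS credentials; bucket names must be globally unique."""
    import boto3
    s3 = boto3.client('s3', region_name=region)
    # Note: regions other than us-east-1 need a CreateBucketConfiguration
    # with a LocationConstraint.
    s3.create_bucket(Bucket=bucket_name)
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=public_access_block(),
    )

# create_private_bucket('my-bedrock-kb-docs')  # uncomment with credentials configured
```

The files you upload for the Knowledge Base then go into this bucket (via the console or `s3.upload_file`).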
Establishing an Amazon Bedrock Knowledge Base begins with a crucial understanding: it is currently available only in specific regions. The initial step is to create an IAM (Identity and Access Management) user, because a knowledge base cannot be created while signed in as the root user — you must use an IAM user instead. The following steps outline how to create one:
After creating the user, proceed by selecting the user from the list and clicking on ‘Manage Console Access’.
After clicking ‘Manage Console Access,’ proceed by clicking ‘Apply.’ This action prompts the system to generate a CSV file containing the necessary credentials. Download this file.
Next, utilize the provided ‘Console-sign-in-URL’ to access the AWS Management Console. This URL will direct you to the login page, where you can input the credentials from the downloaded CSV file to gain access.
To initiate the creation of the Knowledge Base, navigate to the appropriate section within the AWS Management Console and follow the provided prompts. Throughout the process, keep track of the selected configurations and settings to ensure they align with your requirements and budget considerations.
By maintaining awareness of the paid nature of the service, you can effectively manage costs and optimize the utilization of Amazon Bedrock for your specific needs.
We will keep most of the options at their defaults.
We’ll start by providing the S3 URI of the bucket we’ve created. Then, we’ll proceed to select embeddings and configure the vector store. For this setup, we’ll opt for Amazon’s default embeddings and vector store.
Having successfully created the Knowledge Base, our next step is to proceed with the creation of a Lambda function.
After creating the Lambda function with default settings, our next step is to increase the timeout to accommodate potentially longer execution times. A larger timeout gives the function enough time to finish retrieving and generating a response, preventing premature termination mid-request.
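The same timeout change can be made programmatically. Below is a small sketch, assuming a hypothetical function name (`bedrock-kb-query`); the actual AWS call is commented out since it requires credentials. Lambda accepts timeouts from 1 to 900 seconds, and the 3-second default is usually too short for a retrieval round trip.

```python
def timeout_config(function_name, seconds=60):
    """Build the kwargs for update_function_configuration,
    validating Lambda's allowed timeout range (1-900 seconds)."""
    if not 1 <= seconds <= 900:
        raise ValueError('Lambda timeout must be between 1 and 900 seconds')
    return {'FunctionName': function_name, 'Timeout': seconds}

def set_timeout(function_name, seconds=60):
    """Apply the new timeout. Requires AWS credentials."""
    import boto3
    boto3.client('lambda').update_function_configuration(
        **timeout_config(function_name, seconds))

# set_timeout('bedrock-kb-query', 60)  # uncomment with credentials configured
```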
In the Lambda function’s configuration section, navigate to the ‘Role name’ and select it. Then, proceed to add the ‘AmazonBedrockFullAccess’ policy to grant necessary permissions.
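The policy attachment can also be done with boto3. A sketch follows, assuming a hypothetical role name (`my-lambda-role`); use the role name shown in your function’s configuration tab. The ARN of the AWS-managed `AmazonBedrockFullAccess` policy is fixed, but the attach call itself needs credentials and is commented out.

```python
# ARN of the AWS-managed policy granting full Bedrock access.
BEDROCK_POLICY_ARN = 'arn:aws:iam::aws:policy/AmazonBedrockFullAccess'

def grant_bedrock_access(role_name):
    """Attach the managed Bedrock policy to the Lambda execution role.
    Requires AWS credentials with IAM permissions."""
    import boto3
    boto3.client('iam').attach_role_policy(
        RoleName=role_name,
        PolicyArn=BEDROCK_POLICY_ARN,
    )

# grant_bedrock_access('my-lambda-role')  # uncomment with credentials configured
```

For production, a narrower inline policy scoped to `bedrock:RetrieveAndGenerate` on your specific knowledge base would be a tighter choice than full access.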
With the granted permissions, the Lambda function is now capable of accessing our Knowledge Base within Bedrock.
Writing the RetrieveAndGenerate API to access data from the Knowledge Base (Lambda Function)
import json
# 1. Import boto3
import boto3

# 2. Create a client connection with the Bedrock Agent runtime
client_bedrock_knowledgebase = boto3.client('bedrock-agent-runtime')

def lambda_handler(event, context):
    # 3. Store the user prompt
    print(event['prompt'])
    user_prompt = event['prompt']

    # 4. Use the RetrieveAndGenerate API, pointing it at our knowledge base
    response = client_bedrock_knowledgebase.retrieve_and_generate(
        input={
            'text': user_prompt
        },
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': 'Your-ID',
                'modelArn': 'arn:aws:bedrock:Your-Region::foundation-model/anthropic.claude-instant-v1'
            }
        })
    # print(response)
    # print(response['output']['text'])
    # print(response['citations'][0]['generatedResponsePart']['textResponsePart'])

    # 5. Return the generated answer
    response_kbase_final = response['output']['text']
    return {
        'statusCode': 200,
        'body': response_kbase_final
    }
We referenced the Bedrock RetrieveAndGenerate API documentation while writing this Lambda function; consult it for further details.
As observed, foundational models have been incorporated into the code. To enable access to these models, navigate to Bedrock’s interface. On the left-hand side, locate ‘Model Access,’ then proceed to ‘Manage Model Access.’ Here, select the models you require access to and confirm your selections by clicking ‘Save Changes.’
Now we’ll test our Lambda function. To do so, we’ll create a test event, click “Deploy”, and then run the test.
3. Now click on “Create API”. Since a default resource already exists, we will leave it undisturbed. Click “Create Method” and choose the Lambda function we created.
4. After selecting the appropriate Lambda function, configure the URL query string parameters: specify ‘prompt’ as the parameter name, then click on ‘Create Method’.
5. Once the method is created, proceed to edit the Integration request. Click on the ‘Edit’ option, then navigate to the mapping template section. Here, specify the desired format for the GET request.
6. With the configuration of the REST API complete, you can now proceed to deploy it by selecting the “Deploy API” option. Choose the “New Stage” option and assign a name to your stage. As depicted in the screenshot below, for instance, you can set the prompt parameter to ‘How to train LLM from scratch’.
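Once the stage is deployed, the chatbot can be queried over plain HTTP. A minimal client sketch using only the standard library, assuming a placeholder invoke URL:

```python
import json
import urllib.parse
import urllib.request

def build_url(base_url, prompt):
    """Append the prompt as a URL-encoded query string parameter."""
    return base_url + '?' + urllib.parse.urlencode({'prompt': prompt})

def ask(base_url, prompt):
    """GET the deployed stage and decode the Lambda's JSON response."""
    with urllib.request.urlopen(build_url(base_url, prompt)) as resp:
        return json.loads(resp.read())

# print(ask('https://<api-id>.execute-api.<region>.amazonaws.com/dev',
#           'How to train LLM from scratch'))
```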
Now it is time to see the result —
As evident, we have obtained the outcome from the knowledge base regarding the training of Large Language Models (LLMs) from scratch.
NOTE — Please don’t forget to delete the knowledge base, and also delete the vector collections in Amazon OpenSearch Serverless, so that you don’t keep getting charged for the service.
In the journey through the digital transformation of customer engagement, we’ve explored the creation of a serverless chatbot leveraging Amazon Bedrock and AWS technologies. From setting up a secure and scalable S3 bucket for data storage to navigating the intricacies of Amazon Bedrock Knowledge Base for deep learning insights, this guide has walked you through each step with precision. The deployment of an AWS Lambda function marked a significant milestone, enabling the seamless execution of the RetrieveAndGenerate API, which is the core of our chatbot’s intelligence.
By integrating these components with a REST API, we’ve laid down a robust foundation for building chatbots that are not only responsive but also deeply knowledgeable, capable of drawing from vast databases to provide accurate, context-aware information. The practical steps outlined, accompanied by insights on permissions, security, and efficient API usage, serve as a beacon for developers looking to harness the capabilities of AI in enhancing customer interactions.
As we conclude, it’s clear that the integration of Amazon Bedrock with AWS services opens up a new realm of possibilities for developing chatbots that go beyond mere question-answering entities. These advanced bots are poised to revolutionize customer service, offering personalized, insightful interactions that can significantly enhance the user experience. This exploration is just the beginning, and the future of AI-powered communication looks brighter than ever.