As AI takes over the world, large language models (LLMs) are in huge demand. They generate text much the way a human does and can be used to build natural language processing (NLP) applications ranging from chatbots and text summarizers to translation apps, virtual assistants, and more.
Google released its next-generation model, PaLM 2. The model excels at advanced reasoning tasks, including scientific and mathematical operations and language translation. It was trained on more than 100 spoken languages and over 20 programming languages.
Because it is trained on many programming languages, it can translate code from one language to another. For example, if you want to translate Python code to R, or JavaScript code to TypeScript, you can easily use PaLM 2 to do it for you. It can also generate idioms and phrases and split a complex task into simpler subtasks, making it a clear improvement over previous large language models.
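For instance, once the API is set up (the installation and configuration steps are covered later in this article), translating code is just a matter of prompting the text model. Here is a minimal sketch, assuming the same text-bison-001 model and API key used in the rest of this article:
import google.generativeai as palm

palm.configure(api_key=API_KEY)  # API_KEY is the key generated in MakerSuite (see below)

python_snippet = "def area(base, height):\n    return 0.5 * base * height"

# Ask the text model to translate the Python function into R.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt=f"Translate the following Python code to R:\n\n{python_snippet}",
    temperature=0.1,  # a low temperature keeps the translation close to the source
)
print(completion.result)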
Using the PaLM API, you can access the capabilities of Google’s generative AI models and develop interesting AI-powered applications. If you just want to interact with the PaLM 2 model from the browser, you can use the browser-based IDE MakerSuite. To integrate the model into your own applications and build AI-driven products on your company’s data, however, you use the PaLM API.
The PaLM API is designed around three prompt interfaces, and you can get started with any one of them: a text service, a chat service, and an embedding service.
Navigate to https://developers.generativeai.google/ and join MakerSuite. You will be added to the waitlist and will usually be given access within 24 hours.
Generate an API key:
Save the API key, as we will use it later.
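Rather than hard-coding the key into your scripts, you can store it in an environment variable and read it in Python. A minimal sketch; PALM_API_KEY is just a hypothetical variable name that you set yourself in your shell:
import os

# Read the key from an environment variable instead of pasting it into the code.
API_KEY = os.getenv("PALM_API_KEY")  # hypothetical variable name; set it in your shell first
if API_KEY is None:
    raise RuntimeError("Set the PALM_API_KEY environment variable before running this script.")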
To use the API with Python, install it using the command:
pip install google-generativeai
Next, we configure it using the API key that we generated earlier.
import google.generativeai as palm
palm.configure(api_key=API_KEY)
To list the available models, we run the code below:
models = [model for model in palm.list_models()]
for model in models:
    print(model.name)
Output:
models/chat-bison-001
models/text-bison-001
models/embedding-gecko-001
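Each entry returned by palm.list_models() also reports which generation methods it supports, so you can select a model programmatically instead of hard-coding its name. A minimal sketch, assuming the supported_generation_methods attribute on the returned model objects:
# Pick the models that support plain text generation.
text_models = [
    m for m in palm.list_models()
    if "generateText" in m.supported_generation_methods
]
print(text_models[0].name)  # e.g. models/text-bison-001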
We use the “text-bison-001” model to generate text by sending it a text generation request. The generate_text() function takes the model and the prompt, along with optional parameters such as temperature and max_output_tokens. We pass “models/text-bison-001” as the model, and the prompt contains the input string.
In the code below, temperature controls how random the generated output is (values close to 1 give more creative text), and max_output_tokens caps the length of the response.
model_id = "models/text-bison-001"
prompt = '''Write a cover letter for a data science job application.
Summarize it to two paragraphs of 50 words each.'''

completion = palm.generate_text(
    model=model_id,
    prompt=prompt,
    temperature=0.99,       # high temperature for more creative text
    max_output_tokens=800,  # upper limit on the length of the response
)
print(completion.result)
Output:
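One practical note: completion.result can come back empty (for example, when the prompt is blocked by safety filters), so it helps to guard for that. A minimal sketch using only the calls shown above; generate_or_default is a hypothetical helper name:
def generate_or_default(prompt, default="No response generated."):
    # Call the text model and fall back to a default string if nothing is returned.
    completion = palm.generate_text(model="models/text-bison-001", prompt=prompt)
    return completion.result if completion.result else default

print(generate_or_default("Write a one-line tagline for a data science blog."))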
For the chat service, we use the “chat-bison-001” model and call palm.chat() with a prompt, a context string, and a few example exchanges. We then define a while loop that asks for user input and generates a reply; response.last holds the model’s latest response.
model_id = "models/chat-bison-001"
prompt = 'I need help with a job interview for a data analyst job. Can you help me?'

# Example (input, output) pairs showing the model how it should respond.
examples = [
    ('Hello', 'Hi there! How can I assist you?'),
    ('I want to get a high-paying job', 'I can help you prepare for that.')
]

response = palm.chat(model=model_id, messages=prompt, temperature=0.2, context="Speak like a CEO", examples=examples)
for message in response.messages:
    print(message['author'], message['content'])

# Keep chatting until the user types "quit".
while True:
    s = input()
    if s.strip().lower() == "quit":
        break
    response = response.reply(s)
    print(response.last)
Output:
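If you prefer a scripted exchange over an interactive loop, you can chain reply() calls directly and read response.last after each turn. A minimal sketch using the same chat model:
# A scripted two-turn conversation instead of an interactive loop.
response = palm.chat(
    model="models/chat-bison-001",
    messages="What skills should a data analyst highlight in an interview?",
    temperature=0.2,
)
print(response.last)

response = response.reply("Can you summarize that in one sentence?")
print(response.last)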
LangChain is an open-source framework that lets you connect large language models to your applications. To use the PaLM API with LangChain, we import GooglePalm from langchain.llms, and GooglePalmEmbeddings from langchain.embeddings when we want text embeddings. LangChain’s embeddings class provides a standard interface to various text embedding models, such as those from OpenAI and Hugging Face.
We pass the prompts as a list, as shown in the example below, and call the llm._generate() function with that list as a parameter.
from langchain.embeddings import GooglePalmEmbeddings
from langchain.llms import GooglePalm

llm = GooglePalm(google_api_key=API_KEY)
llm.temperature = 0.2  # low temperature for more deterministic answers

prompts = ["How to Calculate the area of a triangle?", "How many sides are there for a polygon?"]
llm_result = llm._generate(prompts)

# generations holds one list of candidates per prompt.
res = llm_result.generations
print(res[0][0].text)
print(res[1][0].text)
Output:
Prompt 1
1. **Find the base and height of the triangle.** The base is the length of the side of the triangle that is parallel to the ground, and the height is the length of the line segment that is perpendicular to the base and intersects the opposite vertex.
2. **Multiply the base and height and divide by 2.** The formula for the area of a triangle is A = 1/2 * b * h. For example, if a triangle has a base of 5 cm and a height of 4 cm, its area would be 1/2 * 5 * 4 = 10 cm².
Prompt 2
3
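The GooglePalmEmbeddings class imported above can also be used on its own to turn text into vectors through LangChain’s standard embed_query and embed_documents methods. A minimal sketch, assuming the same API key:
from langchain.embeddings import GooglePalmEmbeddings

embeddings = GooglePalmEmbeddings(google_api_key=API_KEY)

# Embed a single query and a small batch of documents.
query_vector = embeddings.embed_query("How to calculate the area of a triangle?")
doc_vectors = embeddings.embed_documents(["Triangles have three sides.", "Polygons can have many sides."])

print(len(query_vector))  # dimensionality of the embedding vector
print(len(doc_vectors))   # one vector per input document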
In this article, we introduced Google’s latest PaLM 2 model and how it improves on previous models. We then learned how to use the PaLM API with Python, developed simple applications that generate text and chat responses, and finally covered how to use it through the LangChain framework.
A. To quickly get started with the PaLM API in Python, install the client library using the pip command: pip install google-generativeai.
A. Yes, you can access Google’s large language models and develop applications using the PaLM API.
A. Yes, Google’s PaLM API and MakerSuite are available in public preview.
A. Google’s PaLM 2 model was trained on more than 20 programming languages and can generate code in various programming languages.
A. The PaLM API comes with both text and chat services and provides multiple text generation capabilities.