How to Get Started with Gemini 1.5 Flash's Code Execution Feature?

Ajay Kumar Reddy 10 Jul, 2024
6 min read

Introduction

Large Language Models, built on the Transformer architecture, have largely worked within the space of Natural Language Processing and Natural Language Understanding. Since their introduction, they have been replacing traditional rule-based chatbots, because they understand text better and can hold natural conversations. But LLMs now do much more than converse: they can convert natural language into SQL queries, browse the internet to fetch the latest information, and now even execute code. In this article, we will look at the newly released feature of Gemini, i.e. Code Execution.

Learning Objectives

  • Learn about Code Execution with LLMs.
  • Get introduced to Gemini 1.5 Flash.
  • Learn how to get the API Key for Gemini.
  • Understand how LLMs fail at mathematical tasks.
  • Leverage LLMs with Code Execution for precise and accurate answers.

This article was published as a part of the Data Science Blogathon.

Gemini – Google’s Large Language Model

Gemini models are a family of large language models introduced by Google to rival popular closed-source large language models like GPT-4 from OpenAI and Claude from Anthropic. Gemini is a multimodal large language model capable of understanding text, images, audio, and even videos.

GPT-4 can do much of what Gemini does, but one thing that now differentiates Gemini is the ability to run the code it generates. Google has recently updated the Gemini models to support this. Code Execution builds on Gemini's function calling capabilities: the model writes code, that code is run, and the results are fed back to the model to generate the final output for the user.

The code that Gemini generates runs in an isolated, sandboxed environment. Right now, only the numpy and sympy libraries are present in the sandbox, and the generated code cannot download or install new Python libraries.

Getting Started with Code Execution

Before we begin coding, we need to get the free API key that Google provides for testing the Gemini models. The free API key also supports Code Execution. To get it, you can click on the link here. Now, we will start by installing the library.

!pip install -q -U google-generativeai

It is important to keep the -U flag while installing the google-generativeai library, because Code Execution is a new feature and requires the latest version of the library to work. Next, we authenticate ourselves.

import google.generativeai as genai

GOOGLE_API_KEY = "YOUR API KEY"

genai.configure(api_key=GOOGLE_API_KEY)

Here we import the google.generativeai library and call the .configure() method, passing it the API key that we obtained by signing in to Google AI Studio. Now we can start working with the Gemini models.
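Hard-coding the key in the script works for a quick test, but it is safer to read it from an environment variable. A minimal sketch, assuming you export the key under the name GOOGLE_API_KEY (the variable name is our own choice, not something the SDK requires):

```python
import os

# Read the API key from the environment instead of hard-coding it.
api_key = os.environ.get("GOOGLE_API_KEY", "")
if not api_key:
    print("GOOGLE_API_KEY is not set; grab a free key from Google AI Studio")

# Once the key is available, hand it to the SDK exactly as shown above:
# genai.configure(api_key=api_key)
```

This keeps the secret out of notebooks and version control, which matters once you share the code.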

model = genai.GenerativeModel(model_name='gemini-1.5-flash')

response = model.generate_content("How are you?")

print(response.text)

[Output image: the model's response to the greeting]

Explanation

  • Here we start by creating an instance of the GenerativeModel class.
  • While instantiating this object, we give the name of the model we are working with, which here is gemini-1.5-flash, the latest model from Google.
  • To test the model, we call the .generate_content() method, give it the query, and store the generated text in the response variable.
  • Finally, we print the response. We can observe the response in the image above.

Not everything can be answered correctly by a Large Language Model. To test this, let us ask the Gemini Flash model a simple question: display the first 5 letters of the word Mississippi.

response = model.generate_content("Trim this word to first 5 letters, Mississippi")

print(response.text)

[Output image: the model's incorrect answer to the trimming question]

Here, running the code and seeing the output above, we see that Google's Gemini model, the latest LLM advancement from the Google team, has failed to answer such an easy question. This is not limited to the Gemini models; even GPT-4 from OpenAI and Claude from Anthropic fail to answer it.

This is because LLMs do not keep count of the characters they generate. After generating the letter "i", the model has no idea that it has just output the second letter; it simply predicts each token given the previous ones, with no running count of how many letters it has produced so far.
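A few lines of ordinary Python solve the same task deterministically, which is exactly the kind of code a code-execution tool lets the model fall back on:

```python
# String slicing counts characters exactly, unlike token-by-token generation.
word = "Mississippi"
first_five = word[:5]  # characters at indices 0 through 4
print(first_five)  # Missi
```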

Another Example

Let us take a look at another question that the large language model fails to answer.

response = model.generate_content("What is the sum of first 100 Fibonacci numbers?")

print(response.text)

[Output image: the model's response, giving steps instead of the sum]

Here, we ask the Gemini Flash model to give us the sum of the first 100 Fibonacci numbers. Running the code and seeing the output image, we can say that the model has failed to answer our question. Instead of returning the sum, it has given us the steps to get the sum of the first 100 Fibonacci numbers. The model failed because large language models are text-generation models; they have no ability to perform mathematical operations themselves.
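For reference, the sum itself takes only a short loop in plain Python. A minimal sketch, assuming the sequence starts 1, 1 (a model that starts it at 0 would report a different total):

```python
def fib_sum(n):
    """Return the sum of the first n Fibonacci numbers (1, 1, 2, 3, 5, ...)."""
    a, b = 1, 1
    total = 0
    for _ in range(n):
        total += a        # add the current Fibonacci number
        a, b = b, a + b   # advance the sequence
    return total

print(fib_sum(100))  # 927372692193078999175
```

This is precisely the kind of function the model could write and run for us if it had a code-execution tool.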

So in both cases, the model has failed. Now, what if the Gemini model had the ability to execute Python code? It could write code that leads us to the answer we expect. For the first question, the model could perform a string operation and run the code, and for the second question, it could write a function to calculate the sum.

Gemini – Code Execution

So now, let us try to ask the model the same two questions but this time, providing it access to the Code Execution tool.

model2 = genai.GenerativeModel(model_name='gemini-1.5-flash',
                               tools='code_execution')

response = model2.generate_content("Trim this word to first 5 letters, \
Mississippi. Use code execution tool")

print(response.text)

[Output image: the model's answer produced via the code_execution tool]

Here again, we create an instance of the GenerativeModel class and give it the gemini-1.5-flash model name, but this time we also pass in the tools that the model can work with, here the code_execution tool. Then we ask the same question, telling the model to work with the code_execution tool.

Running the code and seeing the output image above, we can notice that the Gemini Flash model has written Python code to do a string operation, i.e. slicing: it has sliced the first 5 letters of the word Mississippi and finally given us the answer we wanted. Now let us try the same with the second question, where we ask the LLM for the sum of the first 100 Fibonacci numbers.

response = model2.generate_content("What is the sum of first 100 Fibonacci numbers?")

print(response.text)

[Output image: the computed sum produced via the code_execution tool]

Here, running the code and seeing the output, we see that Gemini Flash has generated a function to calculate the Fibonacci numbers, called that function with 100 for the n value, and finally printed the output. With the code_execution tool, the Gemini LLM was able to give us the correct answer to the question. In this way, it can solve mathematical problems by turning them into code and running that code to get the answer.
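Beyond response.text, the SDK also returns the generated code and its execution output as separate response parts (the Gemini API documents these as executable_code and code_execution_result parts). Below is a hedged sketch of a small helper that collects them; the attribute names follow those documented part types, so adjust them if your SDK version differs:

```python
def collect_parts(parts):
    """Group response parts into (kind, payload) tuples.

    Works on any objects exposing optional .text, .executable_code.code,
    or .code_execution_result.output attributes; missing ones are skipped.
    """
    collected = []
    for part in parts:
        if getattr(part, "text", None):
            collected.append(("text", part.text))
        if getattr(part, "executable_code", None):
            collected.append(("code", part.executable_code.code))
        if getattr(part, "code_execution_result", None):
            collected.append(("result", part.code_execution_result.output))
    return collected

# With a real response, this would be called as:
# collect_parts(response.candidates[0].content.parts)
```

Inspecting the parts this way lets you show the user not just the final answer, but also the exact code the model ran to reach it.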

Conclusion

The introduction of code execution in Google’s Gemini model represents a significant advancement in the capabilities of large language models. By integrating this feature, Gemini can now not only understand and generate text but also execute code to solve complex problems. This development enhances its utility in a variety of applications, from natural language processing to performing specific computational tasks. The ability to run code allows Gemini to overcome some of the inherent limitations of language models, particularly in handling precise calculations and procedural tasks. 

Key Takeaways

  • Gemini can understand and process text, images, audio, and video, making it a truly multimodal model.
  • Large Language Models often fail to answer mathematical questions with precision, because they cannot perform calculations.
  • Code Execution allows an LLM to run code in a sandboxed environment.
  • Large Language Models can run Python Code by performing a tool call and giving the tool the relevant Python code to execute.
  • Google’s free API allows users to access the Gemini Flash API that can Execute Code.

Frequently Asked Questions

Q1. What is Gemini?

A. Gemini is a family of large language models introduced by Google, capable of understanding text, images, audio, and videos.

Q2. Does Gemini have the functionality to execute code?

A. Recently, Google has announced the feature of Code Execution for the Gemini Model. It is open to the public through the free Google Gemini API Key.

Q3. What libraries are available in Gemini’s sandboxed environment?

A. Currently, only the numpy and sympy libraries are available in Gemini’s sandboxed environment.

Q4. How does code execution improve Gemini’s capabilities?

A. With code execution, Gemini can generate and run Python code to perform tasks such as string operations and mathematical calculations accurately.

Q5. How do you enable code execution for Gemini?

A. To enable code execution, create an instance of the GenerativeModel class with the code_execution tool and provide the appropriate model name.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
