Today, we will discuss the first pattern in the series of agentic AI design patterns: The Reflection Pattern.
The Reflection Pattern is a powerful approach in AI, particularly for large language models (LLMs), where an iterative process of generation and self-assessment improves the output quality.
We can picture it as a course developer who creates content for an online course. The course developer first drafts a lesson plan and then reviews it to see what could be improved. They might notice that some sections are too complicated or that certain examples aren't clear. After this self-assessment, they revise the content, making adjustments so it's more understandable and engaging for students. This cycle of creating, reviewing, and refining continues until the lesson plan reaches a high-quality standard. In a nutshell, the Reflection Pattern involves repeating cycles of output generation, self-reflection, critique, and refinement, ultimately leading to more accurate and polished results.
Let’s understand the Agentic AI Reflection Pattern better with code and architecture.
Overview
The Agentic AI Reflection Pattern is a method where the model generates, critiques, and refines its outputs through an iterative self-assessment process.
This pattern enhances the accuracy and quality of AI-generated content by mimicking human-like feedback and revision loops.
It is especially effective for large language models (LLMs), allowing them to catch mistakes, clarify ambiguities, and improve over multiple iterations.
The Reflection Pattern consists of three key steps: generation, self-reflection, and iterative refinement.
Practical applications include text generation, code development, and solving complex problems requiring continuous improvement.
Defined stopping criteria, like a fixed number of iterations or quality thresholds, prevent endless loops in the reflection process.
The Reflection Pattern is an agentic AI design pattern applied to AI models, where the model generates an initial response to a prompt, evaluates this output for quality and correctness, and then refines the content based on its own feedback. The model essentially plays the dual roles of creator and critic. The process involves several iterations where the AI alternates between these two roles until the output meets a certain level of quality or a predefined stopping criterion.
It evaluates its own work, checks for errors, inconsistencies, or areas where the output could be enhanced, and then makes revisions. This cycle of generation and self-assessment allows the AI to refine its responses iteratively, leading to much more accurate and useful results over time.
This pattern is especially valuable for large language models (LLMs) because language can be complex and nuanced. By reflecting on its own outputs, the AI can catch mistakes, clarify ambiguous phrases, and ensure that its responses better align with the intended meaning or task requirements. Just like our course developer refining lessons to improve learning outcomes, the Reflection Pattern enables AI systems to improve the quality of their generated content continuously.
Why Use the Reflection Pattern?
The reflection pattern is effective because it allows for incremental improvement through iterative feedback. By repeatedly reflecting on the output, identifying areas for improvement, and refining the text, you can achieve a higher-quality result than would be possible with a single generation step.
Imagine using this pattern when writing a research summary.
Prompt: “Summarize the key points of this research paper on climate change.”
Generate: The AI provides a brief summary.
Reflect: You notice that some important aspects of the paper, such as the implications of the findings, are missing.
Reflected Text: You update the summary to include these details and refine the language for clarity.
Iterate: You repeat the process until the summary accurately captures all the critical points.
This approach encourages continuous refinement and is particularly useful in complex tasks such as content creation, editing, or debugging code.
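The cycle described above can be sketched as a simple loop. This is a minimal illustration, not a full implementation: `generate`, `reflect`, and `refine` are hypothetical stand-ins for the model calls a real system would make.

```python
# Minimal sketch of the generate -> reflect -> refine loop.
# `generate`, `reflect`, and `refine` are hypothetical stand-ins for LLM calls.

def generate(prompt: str) -> str:
    # In practice: call an LLM with the prompt.
    return f"Draft summary for: {prompt}"

def reflect(output: str) -> str:
    # In practice: ask the model (or a reviewer) to critique the output,
    # returning "<OK>" when no further changes are needed.
    return "Add the implications of the findings."

def refine(output: str, critique: str) -> str:
    # In practice: regenerate the output, conditioning on the critique.
    return f"{output} [revised: {critique}]"

def reflection_loop(prompt: str, max_iterations: int = 3) -> str:
    output = generate(prompt)
    for _ in range(max_iterations):
        critique = reflect(output)
        if critique == "<OK>":  # the critic has no further suggestions
            break
        output = refine(output, critique)
    return output

result = reflection_loop("Summarize the key points of this paper on climate change.")
print(result)
```

Swapping the stubs for real LLM calls turns this into a working agent; the loop structure itself stays the same.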
Key Components of the Reflection Pattern
The Reflection Pattern consists of three main components:
1. Generation Step
The process begins when a user provides an initial prompt, which could be a request to generate text, write code, or solve a complex problem. For example, a prompt might ask the AI to generate an essay on a historical figure or to implement an algorithm in a specific programming language.
Zero-Shot Prompting: The first generation is often done in a zero-shot style, where the AI generates a response without previous examples or iterations.
Initial Output: The output produced is considered a first draft. While it may be relevant and coherent, it may still contain errors or lack the necessary detail.
The goal of the generation step is to produce a candidate output that can be further evaluated and refined in subsequent steps.
2. Reflection Step
The reflection step is a critical phase where the AI model reviews its own generated content. This step involves:
Self-Critique: The model critiques its own work, identifying areas for improvement, such as factual errors, stylistic issues, or logical inconsistencies.
Feedback Generation: The AI generates specific feedback, which can include suggestions for restructuring content, adding details, or correcting mistakes.
Evaluation Criteria: The critique may be based on predefined criteria such as grammatical accuracy, coherence, relevance to the prompt, or adherence to specific formatting guidelines.
The reflection process can involve mimicking the style of a subject matter expert to provide more in-depth feedback. For instance, the AI might adopt the persona of a software engineer to review a piece of code or act as a historian critiquing an essay.
3. Iteration and Refinement
In this phase, the feedback generated during the reflection step is used to guide the next generation of output. The AI incorporates the suggested changes and improvements into a new version of the content. This cycle repeats multiple times, with each iteration bringing the output closer to the desired quality.
Adaptive Learning: Through this iterative process, the AI learns to recognize patterns in its own mistakes and refines its understanding of the task requirements.
Multiple Iterations: The process can be repeated for a fixed number of steps (e.g., 10 iterations) or until a specific stopping condition is met, such as achieving a certain level of content quality or encountering a “stop” keyword.
How the Reflection Pattern Works: Step-by-Step Flow
Components
Prompt (Input): The initial input given to the model, which serves as the starting point for the text generation process.
Generate: The process where the AI model creates a response based on the prompt.
Output Text: The generated response from the model.
Reflect: A step where the generated output is analyzed, reviewed, or modified for quality improvements.
Reflected Text: The modified or adjusted output after reflecting on the initial generation.
Iterate: The process repeats, using the reflected text to generate a new output, further refining the result.
Flow Explained
Step 1 – Generate: The user starts by providing a Prompt to the AI model. For example, the prompt might be: “Write a short story about a cat that travels to space.”
Step 2 – Output Text: The model generates a response based on the prompt, such as: “Once upon a time, there was a cat named Whiskers who found a magical rocket ship in his backyard. Whiskers hopped inside and launched into space, where he met alien cats from the planet Meowtar.”
Step 3 – Reflect: You review the generated output for quality at this stage. You might notice that the story lacks detail about Whiskers’ emotions or the challenges faced during the journey.
Step 4 – Reflected Text: You revise the text or make suggestions for improvement based on the reflection. The reflected version might include additional details: “Whiskers, feeling both excited and scared, stepped into the rocket ship. As the engines roared to life, he gripped the seat tightly, wondering if he would ever see home again. The journey through space was filled with strange sights and dangers, like meteor showers and cosmic storms, which tested Whiskers’ bravery.”
Step 5 – Iterate: This refined text can now be fed back into the generation process, potentially serving as a new prompt or an improved foundation for further text generation. Based on the reflected text, the model can generate a more polished version of the story.
Practical Implementation of Agentic AI Reflection Pattern
Here’s the implementation of the agentic AI reflection pattern:
!pip install groq
import os
from groq import Groq
from IPython.display import display_markdown
os.environ["GROQ_API_KEY"] = "your_groq_api_key_here"
client = Groq()
!pip install groq: Installs the groq library, which provides the API interface to interact with the Groq platform.
import os and from groq import Groq: These lines import the necessary libraries. os is used for environment management, and Groq is for interacting with the Groq API.
from IPython.display import display_markdown: This is for displaying Markdown-formatted text in Jupyter notebooks.
os.environ["GROQ_API_KEY"] = "your_groq_api_key_here": Sets the environment variable GROQ_API_KEY to the provided API key. This is required to authenticate with the Groq API.
client = Groq(): Initializes a Groq client to communicate with the API.
generation_chat_history = [
    {
        "role": "system",
        "content": "You are an experienced Python programmer who generates high-quality Python code for users, with explanations. "
        "Here's your task: you will generate the best content for the user's request and explain the code line by line. If the user provides critique, "
        "respond with a revised version of your previous attempt. "
        "Also, at the end, always ask: 'Do you have any feedback, or would you like me to revise anything?' "
        "In each output, you will tell me what you have added for the user in comparison to the earlier output."
    }
]
The code creates an initial generation_chat_history list with one entry. The “role”: “system” message establishes the context for the LLM, instructing it to generate Python code with detailed explanations.
generation_chat_history.append(
    {
        "role": "user",
        "content": "Generate a Python implementation of the Fibonacci series for beginner students"
    }
)
fibonacci_code = client.chat.completions.create(
    messages=generation_chat_history,
    model="llama3-70b-8192"
).choices[0].message.content

generation_chat_history.append({"role": "assistant", "content": fibonacci_code})
display_markdown(fibonacci_code, raw=True)
The next step adds a “user” entry to the chat history, asking for a Python implementation of the Fibonacci series.
fibonacci_code = client.chat.completions.create(…) sends a request to the LLM to generate the code based on the conversation history, using the specified model (llama3-70b-8192). The output is stored in the fibonacci_code variable.
The code generated by the model is added to the chat history with the “role”: “assistant”, indicating the model’s response.
display_markdown displays the generated code in Markdown format.
Output
Reflection Step
reflection_chat_history = [
    {
        "role": "system",
        "content": "You are Nitika Sharma, an experienced Python coder. With this experience in Python, generate a critique and recommendations for the user's output on the given prompt.",
    }
]

reflection_chat_history.append(
    {
        "role": "user",
        "content": fibonacci_code
    }
)

critique = client.chat.completions.create(
    messages=reflection_chat_history,
    model="llama3-70b-8192"
).choices[0].message.content

display_markdown(critique, raw=True)
The reflection_chat_history list is initialized with a system prompt telling the model to act as a Python expert named Nitika Sharma and provide critique and recommendations.
The generated code (fibonacci_code) is added to the reflection_chat_history with the “role”: “user”, indicating that this is the input to be critiqued.
The model generates a critique of the code using client.chat.completions.create. The critique is then displayed using display_markdown.
Fed this critique, the model can then generate a revised version of the code (e.g., Generation_3), aiming to improve upon the previous iterations based on the feedback provided.
Output
Here’s the consolidated output:

# Assumes `results` is a list of the alternating generation and reflection
# outputs collected above, and length = len(results)
for i in range(length):
    if i % 2 == 0:
        print("Generation")
    else:
        print("Reflection")
    display_markdown(results[i], raw=True)
    print()
You will find the improved code version for each step above, including generation, reflection, and iteration. Currently, we perform reflection manually, and the process often extends beyond 3-4 iterations. During each iteration, the critique agent provides recommendations for improvement. Once the critique agent is satisfied and no further recommendations are necessary, it returns an “<OK>” signal, indicating that the generation process should stop.
However, there is a risk that the critique agent may continue to find new recommendations indefinitely, leading to an infinite loop of reflections. To prevent this, it is a good practice to set a limit on the number of iterations.
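The manual process above can be automated with an iteration cap. The sketch below keeps the loop logic separate from the API calls: `generate_fn` and `critique_fn` are injected callables, so the stubs shown here are placeholders that, in the notebook, would each wrap `client.chat.completions.create(...)` as demonstrated earlier.

```python
# Sketch: automated generate/reflect loop with an iteration cap.
# `generate_fn` and `critique_fn` are injected so the loop can wrap any
# backend; with Groq they would each call client.chat.completions.create(...).

def run_reflection(prompt, generate_fn, critique_fn, max_iterations=5):
    history = [{"role": "user", "content": prompt}]
    output = generate_fn(history)
    for _ in range(max_iterations):
        critique = critique_fn(output)
        if "<OK>" in critique:  # the critic is satisfied: stop iterating
            break
        history.append({"role": "assistant", "content": output})
        history.append({"role": "user", "content": critique})
        output = generate_fn(history)
    return output

# Stub backends for illustration; replace with real LLM calls.
def fake_generate(history):
    # Version number grows with each round of user input (prompt + critiques).
    return f"code v{sum(1 for m in history if m['role'] == 'user')}"

def fake_critique(output):
    # Approve once the second revision arrives.
    return "<OK>" if output.endswith("v3") else "Please add error handling."

final = run_reflection("Fibonacci for beginners", fake_generate, fake_critique)
print(final)
```

Because the cap bounds the loop, a critic that never says “<OK>” cannot cause an infinite reflection cycle.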
Stopping Conditions
The Reflection Pattern relies on well-defined stopping conditions to prevent endless iterations. Common stopping criteria include:
Fixed Number of Steps: The process can be set to run for a specific number of iterations, after which the refinement process stops. For example, the content can be refined over 10 iterations, and then the loop can be ended.
Quality Threshold: A stopping criterion can be based on the quality of the output. If the AI reaches a level of refinement where further changes are minimal or the model generates a predefined stop keyword (e.g., “satisfactory”), the iteration stops.
Custom Criteria: Users can define custom stopping rules, such as a time limit or detecting a specific phrase that indicates completion.
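These criteria can be combined into a single guard. The sketch below is illustrative only, assuming a critic that returns text (with “<OK>” as the stop keyword) and using wall-clock time as the custom criterion:

```python
# Sketch: combining the three stopping criteria into one check.
import time

def should_stop(iteration, critique, start_time,
                max_iterations=10, stop_keyword="<OK>", time_limit_s=60.0):
    if iteration >= max_iterations:  # fixed number of steps
        return True
    if stop_keyword in critique:  # quality signal from the critic
        return True
    if time.monotonic() - start_time > time_limit_s:  # custom criterion: time limit
        return True
    return False

start = time.monotonic()
print(should_stop(3, "Add docstrings.", start))   # keep iterating
print(should_stop(3, "<OK>", start))              # critic satisfied
print(should_stop(10, "Add docstrings.", start))  # iteration cap hit
```

In practice the quality check is often another LLM call that rates the output; the keyword test above is the simplest stand-in for that.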
I hope this clarifies how the Reflection Pattern operates. If you’re interested in building the agent, you can start by wrapping the generate-reflect loop in a reusable class.
Moreover, Agentic AI Reflection Patterns are increasingly shaping industries by enabling systems to improve autonomously through self-assessment. One prominent example of this is Self-Retrieval-Augmented Generation (Self-RAG), a method where AI retrieves, generates, and critiques its outputs through self-reflection.
Real-World Applications of Agentic AI Reflection Pattern
The Agentic AI Reflection Pattern leverages iterative self-improvement, allowing AI systems to become more autonomous and efficient in decision-making. By reflecting on its own processes, the AI can identify gaps, refine its responses, and enhance its overall performance. This pattern embodies a continual loop of self-evaluation, aligning the model’s outputs with desired outcomes through active reflection and learning. Here’s how Self-RAG uses the Agentic AI Reflection Pattern in its work:
Self-RAG: Retrieve, Generate, and Critique Through Self-Reflection
Self-Reflective Retrieval-Augmented Generation (Self-RAG) enhances the factuality and overall quality of text generated by language models (LMs) by incorporating a multi-step self-reflection process. Traditional Retrieval-Augmented Generation (RAG) methods augment a model’s input with retrieved passages, which can help mitigate factual errors but often lack flexibility and may introduce irrelevant or contradictory information. Self-RAG addresses these limitations by embedding retrieval and critique directly into the generation process.
The Self-RAG method works in three key stages:
On-demand retrieval: Unlike standard RAG, where retrieval occurs automatically, Self-RAG retrieves information only when necessary. The model begins by evaluating whether additional factual content is required for the given task. If it determines that retrieval is helpful, a retrieval token is generated, triggering the retrieval process. This step ensures contextual and demand-driven retrieval, minimizing irrelevant or unnecessary information.
Parallel generation: After retrieving passages, Self-RAG generates multiple possible parallel responses using the retrieved information. Each response incorporates different degrees of reliance on the retrieved content. This diversity enables the model to handle complex prompts by exploring multiple approaches simultaneously, allowing for more accurate and flexible outputs.
Self-critique and selection: The model then critiques its own outputs by generating critique tokens. These critiques assess the quality of each generated response based on relevance, factual accuracy, and overall coherence. The model selects the most appropriate output by comparing these critiques, discarding irrelevant or contradictory information, and ensuring that the final response is both accurate and well-supported by the retrieved data.
This self-reflective mechanism is what distinguishes Self-RAG from conventional RAG methods. It enables the language model to dynamically retrieve information when needed, generate multiple responses in parallel, and self-evaluate the quality of its outputs, leading to better accuracy and consistency without sacrificing versatility.
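The critique-and-select stage can be illustrated with a toy scorer. This is a simplified sketch under loose assumptions: real Self-RAG uses learned critique tokens, while here a substring-based grounding heuristic stands in for the critique.

```python
# Toy sketch of Self-RAG's self-critique and selection stage.
# A heuristic grounding score stands in for the learned critique tokens.

def critique(candidate: str, retrieved: list[str]) -> float:
    # Score by how many retrieved passages the candidate is grounded in.
    support = sum(1 for passage in retrieved if passage in candidate)
    return support / max(len(retrieved), 1)

def select_best(candidates: list[str], retrieved: list[str]) -> str:
    # Pick the candidate with the highest critique score.
    return max(candidates, key=lambda c: critique(c, retrieved))

retrieved = ["named after rivers", "named after monarchs"]
candidates = [
    "Many states are named after rivers and monarchs: named after rivers, named after monarchs.",
    "States are named after famous cats.",  # unsupported claim, scores zero
    "Some states are named after rivers: named after rivers.",
]
best = select_best(candidates, retrieved)
print(best)
```

The fully grounded candidate wins, while the unsupported one is discarded, mirroring how Self-RAG filters out irrelevant or contradictory generations.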
Self-RAG vs. Traditional RAG
Here’s the comparison:
RAG:
A prompt such as “How did US states get their names?” is processed.
Step 1 involves the retrieval of several documents related to the prompt (shown as bubbles labeled 1, 2, 3). The retrieved passages are added to the input prompt.
Step 2 shows the language model generating a response based on the prompt plus the retrieved passages. However, it can produce outputs that are inconsistent (e.g., contradicting passages or introducing unsupported claims).
The model lacks a self-reflection mechanism, leading to potential errors or irrelevant content being included in the final generation.
Self-RAG:
The same prompt is processed using Self-RAG. The system retrieves on-demand, meaning retrieval only happens if needed, and the system dynamically decides when retrieval will be beneficial.
Step 1 retrieves multiple relevant passages, but it allows the model to selectively engage with this information rather than forcing all retrieved content into the response.
Step 2: Several outputs are generated in parallel. Each version varies in how it uses the retrieved passages, ensuring that irrelevant or contradictory information can be critiqued. For example, some outputs are marked as irrelevant or partially relevant.
Step 3: Self-RAG critiques the generated outputs and selects the best one. This involves rating each output for relevance, factual accuracy, and overall quality. In this case, Output 1 is selected as the most relevant, leading to a cleaner, more accurate final response.

In summary, the figure contrasts how conventional RAG tends to incorporate retrieved passages without reflection, while Self-RAG selectively retrieves, generates, and critiques to achieve higher factuality and coherence.
How are Agentic AI and the Reflection Pattern Related?
The relationship between agentic AI and the reflection pattern is synergistic, as they enhance each other’s capabilities:
Improving Goal Achievement: Agentic AI benefits from the reflection pattern because it can more effectively pursue goals by learning from past actions. When the AI encounters obstacles, the reflection process allows it to revise its strategies and make better decisions in the future.
Adaptive Behavior: The reflection pattern is crucial for agentic AI to exhibit high adaptability. By constantly monitoring its own performance and learning from experiences, the AI can adjust its behaviour to changing circumstances. This is essential for autonomous systems operating in dynamic environments where rigid, pre-defined behaviours would fail.
Meta-Agency Development: Reflection allows agentic AI to pursue goals and improve its ability to pursue them. It might, for example, refine its task prioritization, change its problem-solving approach, or even update its own objectives based on new information. This ability to “reason about reasoning” adds an extra intelligence layer.
Avoiding Repetitive Mistakes: Through reflection, agentic AI can avoid making the same errors repeatedly by identifying patterns in past mistakes. This is especially important in agentic systems where autonomous decision-making may involve significant risk or consequences.
Ethical and Safety Considerations: As agentic AI becomes more autonomous, there are concerns about ensuring it behaves in a way that aligns with human values and safety guidelines. Reflection mechanisms can be designed to check if the AI’s actions remain within ethical boundaries, allowing for ongoing monitoring and adjustment of its behaviour.
The Reflection Pattern can be applied in various scenarios where iterative improvement of AI-generated content is beneficial. Here are some practical examples:
1. Text Generation
Essay Writing: The AI can generate a draft of an essay and then refine it by adding more information, improving sentence structure, or correcting factual errors based on its own critique.
Creative Writing: When used in creative writing tasks, such as generating stories or poems, the AI can reflect on elements like plot consistency, character development, and tone, refining these aspects iteratively.
2. Code Generation
Algorithm Implementation: The Reflection Pattern is highly beneficial in code generation tasks. For instance, if a user prompts the AI to write a Python implementation of a sorting algorithm like “merge sort,” the initial code might be functional but not optimal.
During the reflection step, the AI reviews the code for efficiency, readability, and edge case handling.
It then incorporates the feedback in the next iteration, refining the code to be more efficient, adding comments, or handling more edge cases.
Code Review: The AI can simulate a code review process by providing feedback on its own generated code, suggesting improvements such as better variable naming, adding error handling, or optimizing algorithms.
3. Problem Solving and Reasoning
Mathematical Proofs: AI can iteratively refine mathematical solutions or proofs, correct logical errors, or simplify steps based on self-assessment.
Complex Multi-Step Problems: In multi-step problems where the solution requires a sequence of decisions, the Reflection Pattern helps refine the approach by evaluating each step for potential improvements.
Conclusion
The Reflection Pattern offers a structured approach to enhancing AI-generated content by embedding a generation-reflection loop. This iterative process mimics human revision strategies, allowing the AI to self-assess and refine its outputs progressively. While it may require more computational resources, the benefits in terms of quality improvement make the Reflection Pattern a valuable tool for applications that demand high accuracy and sophistication.
By leveraging this pattern, AI models can tackle complex tasks, deliver polished outputs, and better understand task requirements, leading to better results across various domains.
In the next article, we will talk about the next Agentic Design Pattern: Tool Use!
Frequently Asked Questions

Q1. What is the Reflection Pattern?
Ans. The Reflection Pattern is an iterative design process in AI where the model generates content, critiques its output, and refines the response based on its self-assessment. This pattern is especially useful for improving the quality of text generated by large language models (LLMs) through continuous feedback loops.
Q2. How does the Reflection Pattern improve AI outputs?
Ans. By evaluating its own work, the AI identifies errors, ambiguities, or areas for improvement and makes revisions. This iterative cycle leads to increasingly accurate and polished results, much like how a writer or developer refines their work through drafts.
Q3. Why is the Reflection Pattern valuable for large language models (LLMs)?
Ans. LLMs handle complex and nuanced language, so the Reflection Pattern helps them catch mistakes, clarify ambiguous phrases, and better align their outputs with the prompt’s intent. This approach improves content quality and ensures coherence.
Q4. What are the key steps in the Reflection Pattern?
Ans. The three main steps are: 1. Generation – The model creates an initial output based on a prompt. 2. Reflection – The AI critiques its own work, identifying areas for improvement. 3. Iteration – The AI refines its output based on feedback and continues this cycle until the desired quality is achieved.
Hi, I am Pankaj Singh Negi - Senior Content Editor | Passionate about storytelling and crafting compelling narratives that transform ideas into impactful content. I love reading about technology revolutionizing our lifestyle.