To truly become an expert in GenAI Ops, the key is not just knowing what to learn, but how to learn it and apply it effectively. The journey begins with gaining a broad understanding of foundational concepts such as prompt engineering, Retrieval-Augmented Generation (RAG), and AI agents. However, your focus should gradually shift to mastering the intersection of Large Language Models (LLMs) and AI agents with operational frameworks – LLMOps and AgentOps. These fields will enable you to build, deploy, and maintain intelligent systems at scale.
Here’s a structured, week-by-week GenAI Ops Roadmap to mastering these domains, emphasizing how you will move from learning concepts to applying them practically.
Week 1-2 of GenAI Ops Roadmap: Prompt Engineering Fundamentals
Establish a comprehensive understanding of how language models process prompts, interpret language, and generate precise, meaningful responses. These weeks lay the foundation for communicating effectively with LLMs and harnessing their potential across a variety of tasks.
Week 1: Learn the Basics of Prompting
Understanding LLMs
Explore how LLMs, like GPT models, process input text to generate contextually relevant outputs.
Learn the mechanics of:
Tokenization: Breaking down input into manageable units (tokens).
Contextual Embeddings: Representing language in a model’s context.
Probabilistic Responses: How LLMs predict the next token based on probability (a short sketch follows this list).
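To make these mechanics concrete, here is a minimal sketch that tokenizes a sentence and inspects next-token probabilities. It assumes the Hugging Face transformers library and uses GPT-2 purely as an illustrative model; any causal LM could be substituted.

```python
# Tokenization and next-token probabilities with a small causal LM (illustrative model).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Large language models generate"
inputs = tokenizer(text, return_tensors="pt")                      # tokenization
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))     # the token units

with torch.no_grad():
    logits = model(**inputs).logits                                # contextual representations -> logits

probs = torch.softmax(logits[0, -1], dim=-1)                       # distribution over the next token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")     # most likely continuations
```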
Prompting Techniques
Zero-Shot Prompting: Directly ask the model a question or task without providing examples, relying entirely on the model’s pretraining knowledge.
Few-Shot Prompting: Include examples within the prompt to guide the model toward a specific pattern or task.
Chain-of-Thought Prompting: Use structured, step-by-step guidance in the prompt to encourage logical or multi-step outputs. Short examples of all three techniques appear below.
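Here is a short sketch of the three techniques, assuming the OpenAI Python SDK; the model name is an assumption, and any chat-completion model can be substituted.

```python
# Zero-shot, few-shot, and chain-of-thought prompts sent through the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

zero_shot = "Classify the sentiment of this review as positive or negative: 'The battery dies in an hour.'"

few_shot = (
    "Review: 'Arrived broken.' Sentiment: negative\n"
    "Review: 'Works perfectly, great value.' Sentiment: positive\n"
    "Review: 'The battery dies in an hour.' Sentiment:"
)

chain_of_thought = (
    "A store sold 14 laptops on Monday and twice as many on Tuesday. "
    "How many laptops were sold in total? Think step by step before giving the final answer."
)

for prompt in (zero_shot, few_shot, chain_of_thought):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```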
Week 3-4 of GenAI Ops Roadmap: Exploring Retrieval-Augmented Generation (RAG)
Develop a deep understanding of how integrating retrieval mechanisms with generative models enhances accuracy and contextual relevance. These weeks focus on bridging generative AI capabilities with external knowledge bases, empowering models to provide informed and enriched responses.
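As a concrete starting point, here is a minimal RAG sketch. It assumes sentence-transformers for embeddings, FAISS for similarity search, and an OpenAI chat model for generation; the document snippets and model names are illustrative. The same pattern scales to real knowledge bases by swapping in chunked documents and a production vector store.

```python
# Minimal RAG: embed documents, retrieve the most relevant ones, then generate an answer.
import faiss
from sentence_transformers import SentenceTransformer
from openai import OpenAI

documents = [
    "Our return window is 30 days from the delivery date.",
    "Standard shipping takes 3-5 business days.",
    "Premium members get free express shipping.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")            # assumption: any embedding model works
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])               # inner product == cosine on normalized vectors
index.add(doc_vectors)

query = "How long do I have to return an item?"
query_vector = embedder.encode([query], normalize_embeddings=True)
_, top_ids = index.search(query_vector, 2)                    # retrieve the 2 most relevant snippets
context = "\n".join(documents[i] for i in top_ids[0])

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)
```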
Week 5-6 of GenAI Ops Roadmap: Deep Dive into AI Agents
Leverage foundational skills from prompt engineering and retrieval-augmented generation (RAG) to design and build AI agents capable of performing tasks autonomously. These weeks focus on integrating multiple capabilities to create intelligent, action-driven systems.
Week 5: Understanding AI Agents
What are AI Agents?
AI agents are systems that autonomously combine language comprehension, reasoning, and action execution to perform tasks. They rely on:
Language Understanding: Accurately interpreting user inputs or commands.
Knowledge Integration: Using retrieval systems (RAG) for domain-specific or real-time data.
Decision-Making: Determining the best course of action through logic, multi-step reasoning, or rule-based frameworks.
Task Automation: Executing actions like responding to queries, summarizing content, or triggering workflows.
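The toy loop below shows how these four pieces fit together in code. The intent check and tools are deliberately simplistic stand-ins: in a real agent, an LLM would interpret the input and the tools would be retrieval pipelines or APIs.

```python
# A toy agent loop: understand the input, pick a tool, execute, return the result.
def retrieve_order_status(order_id: str) -> str:
    # stand-in for a real retrieval call (database, RAG pipeline, API)
    return f"Order {order_id} shipped yesterday."

def summarize(text: str) -> str:
    return text[:80] + "..."

TOOLS = {"order_status": retrieve_order_status, "summarize": summarize}

def agent(user_input: str) -> str:
    # 1. Language understanding: a crude intent check (an LLM would do this in practice)
    if "order" in user_input.lower():
        # 2. Knowledge integration + 3. decision-making: choose and parameterize the right tool
        order_id = "".join(ch for ch in user_input if ch.isdigit()) or "unknown"
        result = TOOLS["order_status"](order_id)
    else:
        result = TOOLS["summarize"](user_input)
    # 4. Task automation: return (or trigger) the action's outcome
    return result

print(agent("Where is order 1042?"))
```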
Use Cases of AI Agents
Customer Support Chatbots: Retrieve and present product details.
Virtual Assistants: Handle scheduling, task management, or data analysis.
Research Assistants: Query databases and summarize findings.
Integration with Prompts and RAG
Combining Prompt Engineering with RAG:
Use refined prompts to guide query interpretation.
Enhance responses with retrieval from external sources.
Maintain consistency using structured templates and stop sequences.
Multi-Step Decision-Making:
Apply chain-of-thought prompting to simulate logical reasoning (e.g., breaking a query into subtasks).
Use iterative prompting for refining responses through feedback cycles.
Dynamic Interactions:
Enable agents to ask clarifying questions to resolve ambiguity.
Incorporate retrieval pipelines to improve contextual understanding during multi-step exchanges.
Week 6: Building and Refining AI Agents
Practical Step: Building a Basic AI Agent Prototype
1. Define the Scope
Domain Examples: Choose a focus area like customer support, academic research, or financial analysis.
Tasks: Identify core activities such as data retrieval, summarization, query answering, or decision support.
Agent Relevance:
Use planning agents for multi-step workflows.
Employ tool-using agents for integration with external resources or APIs.
2. Make Use of Specialized Agent Types
Planning Agents:
Role: Break tasks into smaller, actionable steps and sequence them logically.
Use Case: Automating workflows in a task-heavy domain like project management.
Tool-Using Agents:
Role: Interact with external tools (e.g., databases, APIs, or calculators) to complete tasks beyond text generation.
Use Case: Financial analysis using APIs for real-time market data.
Reflection Agents:
Role: Evaluate past responses and refine future outputs based on user feedback or internal performance metrics.
Use Case: Continuous learning systems in customer support applications.
Multi-Agent Systems:
Role: Collaborate with other agents, each specializing in a particular task or domain.
Use Case: One agent handles reasoning, while another performs data retrieval or validation.
3. Integrate Agent Patterns in the Framework
Frameworks:
Use tools like LangChain, Haystack, or the OpenAI API for creating modular agent systems.
Implementation of Patterns:
Embed reflection loops for iterative improvement.
Develop planning capabilities for dynamic task sequencing.
4. Advanced Prompt Design
Align prompts with agent specialization:
For Planning: “Generate a step-by-step plan to achieve the following goal…”
For Tool Use: “Retrieve the required data from [API] and process it for user queries.”
For Reflection: “Analyze the previous response and improve accuracy or clarity.”
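One lightweight way to manage this alignment is a template per agent role, as in the sketch below; the wording is illustrative and should be tuned to your domain and framework.

```python
# Illustrative prompt templates keyed by agent specialization.
AGENT_PROMPTS = {
    "planning": (
        "Generate a step-by-step plan to achieve the following goal: {goal}\n"
        "Return the steps as a numbered list."
    ),
    "tool_use": (
        "You can call the tool `{tool_name}`. Retrieve the required data from it "
        "and use the result to answer: {question}"
    ),
    "reflection": (
        "Here is your previous response:\n{previous_response}\n"
        "Analyze it and rewrite it to improve accuracy and clarity."
    ),
}

print(AGENT_PROMPTS["planning"].format(goal="Prepare a weekly sales report"))
```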
5. Enable Retrieval and Multi-Step Reasoning
Combine knowledge retrieval with chain-of-thought reasoning:
Enable embedding-based retrieval for relevant data access.
Use reasoning to guide agents through iterative problem-solving.
6. Multi-Agent Collaboration for Complex Scenarios
Deploy multiple agents with defined roles:
Planner Agent: Breaks the query into sub-tasks.
Retriever Agent: Fetches external data.
Reasoner Agent: Synthesizes data and generates an answer.
Validator Agent: Cross-checks the final response for accuracy.
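Here is a minimal sketch of this planner/retriever/reasoner/validator pipeline, with each agent stubbed as a plain Python function; in practice each role would be a separate LLM call or service.

```python
# A stubbed multi-agent pipeline: plan -> retrieve -> reason -> validate.
def planner(query: str) -> list[str]:
    # break the query into sub-tasks
    return [f"find data for: {query}", f"answer: {query}"]

def retriever(subtask: str) -> str:
    # stand-in for a retrieval call (vector store, API, database)
    return f"[retrieved context for '{subtask}']"

def reasoner(query: str, context: str) -> str:
    # synthesize the retrieved context into a draft answer
    return f"Answer to '{query}' based on {context}"

def validator(answer: str) -> str:
    # cross-check the draft answer; here we just reject empty answers
    return answer if answer else "Could not validate an answer."

def multi_agent_answer(query: str) -> str:
    subtasks = planner(query)
    context = " ".join(retriever(t) for t in subtasks[:-1])
    draft = reasoner(query, context)
    return validator(draft)

print(multi_agent_answer("What was last quarter's revenue trend?"))
```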
7. Develop a Scalable Interface
Build interfaces that support multi-agent outputs dynamically:
Chatbots for user interaction.
Dashboards for visualizing multi-agent workflows and outcomes.
Testing and Refinement
Evaluate Performance: Test the agent across scenarios and compare query interpretation, data retrieval, and response generation.
Iterate: Improve response accuracy, retrieval relevance, and interaction flow by updating prompt designs and retrieval pipelines.
Example Use Cases
Customer Query Assistant:
Retrieves details about orders, product specifications, or FAQs.
Week 7 of GenAI Ops Roadmap: Introduction to LLMOps
Concepts to Learn
LLMOps (Large Language Model Operations) is a critical discipline for managing the lifecycle of large language models (LLMs), ensuring their effectiveness, reliability, and scalability in real-world applications. This week focuses on key concepts, challenges, and evaluation metrics, laying the groundwork for implementing robust LLMOps practices.
Importance of LLMOps
Ensures that deployed LLMs remain effective and reliable over time.
Provides mechanisms to monitor, fine-tune, and adapt models in response to changing data and user needs.
Key Challenges in LLMOps
Model Drift:
Occurs when the model’s predictions become less accurate over time due to shifts in data distribution.
Requires constant monitoring and retraining to maintain performance.
Data Privacy:
Ensures sensitive information is handled securely, especially when dealing with user-generated content or proprietary datasets.
Involves techniques like differential privacy and federated learning.
Performance Monitoring:
Involves tracking latency, throughput, and accuracy metrics to ensure the system meets user expectations.
Cost Management:
Balancing computational costs with performance optimization, especially for inference at scale.
Tools & Technologies
Monitoring and Evaluation
Arize AI: Tracks LLM performance, including model drift, bias, and predictions in production.
DeepEval: A framework for evaluating the quality of responses from LLMs based on human and automated scoring.
RAGAS: Evaluates RAG pipelines using metrics like retrieval accuracy, generative quality, and response coherence.
Retrieval and Optimization
FAISS: A library for efficient similarity search and clustering of dense vectors, critical for embedding-based retrieval.
OPIK: Helps optimize prompt engineering and improve response quality for specific use cases.
Experimentation and Deployment
Weights & Biases: Enables tracking of experiments, data, and model metrics with detailed dashboards.
LangChain: Simplifies the integration of LLMs with RAG workflows, chaining prompts, and external tool usage.
Advanced LLMOps Platforms
MLOps Suites: Comprehensive platforms like Seldon and MLflow for managing LLM lifecycles.
ModelOps Tools: Tools like Cortex and BentoML for scalable model deployment across diverse environments.
Evaluation Metrics for LLMs and Retrieval-Augmented Generation (RAG) Systems
To measure the effectiveness of LLMs and RAG systems, you need to focus on both language generation metrics and retrieval-specific metrics:
Language Generation Metrics
Perplexity: Measures the uncertainty in the model’s predictions. Lower perplexity indicates better language modeling.
BLEU (Bilingual Evaluation Understudy): Evaluates how closely generated text matches reference text. Commonly used for translation tasks.
ROUGE (Recall-Oriented Understudy for Gisting Evaluation): Compares overlap between generated and reference text, widely used for summarization.
METEOR: Focuses on semantic alignment between generated and reference text, with higher sensitivity to synonyms and word order.
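As a quick reference, perplexity is the exponential of the average negative log-likelihood of the generated tokens; the snippet below computes it from made-up per-token log-probabilities.

```python
# Perplexity from per-token log-probabilities (values here are illustrative).
import math

token_log_probs = [-0.9, -1.2, -0.4, -2.1, -0.7]            # log P(token_i | previous tokens)
avg_neg_log_likelihood = -sum(token_log_probs) / len(token_log_probs)
perplexity = math.exp(avg_neg_log_likelihood)
print(f"Perplexity: {perplexity:.2f}")                       # lower is better
```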
Retrieval-Specific Metrics
Precision@k: Measures the proportion of relevant documents retrieved in the top-k results.
Recall@k: Determines how many of the relevant documents were retrieved out of all possible relevant documents.
Mean Reciprocal Rank (MRR): Evaluates the rank of the first relevant document in a list of retrieved documents.
Normalized Discounted Cumulative Gain (NDCG): Accounts for the relevance and ranking position of retrieved documents.
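These retrieval metrics are straightforward to implement. The sketch below computes Precision@k, Recall@k, and MRR over illustrative document IDs.

```python
# Simple implementations of Precision@k, Recall@k, and MRR.
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / len(relevant)

def mrr(retrieved: list[str], relevant: set[str]) -> float:
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

retrieved_docs = ["d3", "d7", "d1", "d9"]   # ranked results from the retriever
relevant_docs = {"d1", "d9"}                # ground-truth relevant documents
print(precision_at_k(retrieved_docs, relevant_docs, k=3))   # 0.33
print(recall_at_k(retrieved_docs, relevant_docs, k=3))      # 0.50
print(mrr(retrieved_docs, relevant_docs))                   # 0.33 (first hit at rank 3)
```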
Human Evaluation Metrics
Relevance: How well the generated response aligns with the query or context.
Fluency: Measures grammatical and linguistic correctness.
Helpfulness: Determines whether the response adds value or resolves the user’s query effectively.
Safety: Ensures generated content avoids harmful, biased, or inappropriate language.
Week 8 of GenAI Ops Roadmap: Deployment and Versioning
Concepts to Learn:
Focus on how to deploy LLMs in production environments.
Understand version control and model governance practices.
Tools & Technologies:
vLLM: A framework designed for efficient serving and deployment of large language models such as Llama. vLLM supports techniques like FP8 quantization and pipeline parallelism, allowing extremely large models to be deployed while managing GPU memory efficiently.
SageMaker: AWS SageMaker offers a fully managed environment for training, fine-tuning, and deploying machine learning models, including LLMs. It provides scalability, versioning, and integration with a range of AWS services, making it a popular choice for deploying models in production environments.
Llama.cpp: A high-performance library for running Llama models on CPUs and GPUs. It is known for its efficiency and is increasingly used to run large models on modest hardware.
MLflow: A tool for managing the lifecycle of machine learning models, MLflow helps with versioning, deployment, and monitoring of LLMs in production. It integrates well with frameworks like Hugging Face Transformers and LangChain, making it a robust solution for model governance.
Kubeflow: Kubeflow orchestrates machine learning workflows, including the deployment and monitoring of models in Kubernetes environments. It is especially useful for scaling and managing models that are part of a larger ML pipeline.
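As a taste of deployment tooling, here is a minimal offline-inference sketch with vLLM; the model name is an assumption and requires appropriate access and GPU resources.

```python
# Minimal offline inference with vLLM (model name is an illustrative assumption).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")   # assumption: requires model access and a GPU
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize what LLMOps covers in two sentences."], params)
for output in outputs:
    print(output.outputs[0].text)
```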
Week 9 of GenAI Ops Roadmap: Monitoring and Observability
Concepts to Learn:
LLM Response Monitoring: Understanding how LLMs perform in real-world applications is essential. Monitoring LLM responses involves tracking:
Response Quality: Using metrics like accuracy, relevance, and latency.
Model Drift: Evaluating if the model’s predictions change over time or diverge from expected outputs.
User Feedback: Collecting feedback from users to continuously improve model performance.
Retrieval Monitoring: Since many LLM systems rely on retrieval-augmented generation (RAG) techniques, it’s crucial to:
Track Retrieval Effectiveness: Measure the relevance and accuracy of retrieved information.
Evaluate Latency: Ensure that the retrieval systems (e.g., FAISS, Elasticsearch) are optimized for fast responses.
Monitor Data Consistency: Ensure that the knowledge base is up-to-date and relevant to the queries being asked.
Agent Monitoring: For systems with agents (whether they’re planning agents, tool-using agents, or multi-agent systems), monitoring is especially important:
Task Completion Rate: Track how often agents successfully complete their tasks.
Agent Coordination: Monitor how well agents work together, especially in multi-agent systems.
Reflection and Feedback Loops: Ensure agents can learn from previous tasks and improve future performance.
Real-Time Inference Monitoring: Real-time inference is critical in production environments. Monitoring these systems can help prevent issues before they impact users. This involves observing inference speed, model response time, and ensuring high availability.
Experiment Tracking and A/B Testing: A/B testing allows you to compare different versions of your model to see which performs better in real-world scenarios. Monitoring helps in tracking:
Conversion Rates: For example, which model version has a higher user engagement.
Statistical Significance: Ensuring that your tests are meaningful and reliable.
Tools & Technologies:
Prometheus & Datadog: These are widely used for infrastructure monitoring. Prometheus tracks system metrics, while Datadog can offer end-to-end observability across the application, including response times, error rates, and service health.
Arize AI: This tool specializes in AI observability, focusing on tracking performance metrics for machine learning models, including LLMs. It helps detect model drift, monitor relevance of generated outputs, and ensure models are producing accurate results over time.
MLflow: MLflow offers model tracking, versioning, and performance monitoring. It integrates with models deployed in production, offering a centralized location for logging experiments, performance, and metadata, making it useful for continuous monitoring in the deployment pipeline.
vLLM: Beyond serving, vLLM exposes runtime metrics such as throughput, latency, and resource usage, which help monitor how well large models scale in low-latency production environments.
SageMaker Model Monitor: AWS SageMaker offers built-in model monitoring tools to track data and model quality over time. It can alert users when performance degrades or when the data distribution changes, which is especially valuable for keeping models aligned with real-world data.
LangChain: As a framework for building RAG-based systems and LLM-powered agents, LangChain includes monitoring features that track agent performance and ensure that the retrieval pipeline and LLM generation are effective.
RAGAS (Retrieval-Augmented Generation Assessment): RAGAS evaluates the feedback loop between retrieval and generation in RAG-based systems, helping ensure the relevance of retrieved information and the accuracy of responses grounded in it.
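To make the monitoring ideas above concrete, here is a sketch that exposes request counts and latency with the prometheus_client library; the metric names and the model-version A/B label are illustrative assumptions, and the model call is stubbed with a sleep.

```python
# Exposing LLM-serving metrics for Prometheus to scrape; useful for latency tracking and A/B comparison.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("llm_requests_total", "Total LLM requests", ["model_version"])
LATENCY = Histogram("llm_response_seconds", "LLM response latency in seconds")

def handle_request(prompt: str, model_version: str) -> str:
    REQUESTS.labels(model_version=model_version).inc()
    with LATENCY.time():                       # records how long generation takes
        time.sleep(random.uniform(0.1, 0.5))   # stand-in for the actual model call
        return f"response to: {prompt}"

if __name__ == "__main__":
    start_http_server(8000)                    # metrics scraped by Prometheus at :8000/metrics
    while True:
        handle_request("hello", model_version="A" if random.random() < 0.5 else "B")
```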
Week 10 of GenAI Ops Roadmap: Automating Retraining and Scaling
Concepts to Learn:
Automated Retraining: Learn how to set up pipelines that continuously update LLMs with new data to maintain performance.
Scaling: Understand horizontal (adding more nodes) and vertical (increasing resources of a single machine) scaling techniques in production environments to manage large models efficiently.
Tools & Technologies:
Apache Airflow: Automates workflows for model retraining.
Pipeline Parallelism: Split models across multiple stages or workers to optimize memory usage and compute efficiency. Techniques like GPipe and TeraPipe improve training scalability.
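A minimal Airflow DAG sketch for scheduled retraining is shown below; the task bodies and the weekly schedule are placeholders for real data-ingestion, fine-tuning, and evaluation pipelines.

```python
# A minimal Airflow DAG for automated retraining (task bodies are placeholders).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def pull_new_data():
    print("pulling fresh training data")

def fine_tune_model():
    print("fine-tuning the model on the new data")

def evaluate_and_register():
    print("evaluating the candidate and registering it if it beats the baseline")

with DAG(
    dag_id="llm_retraining",
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",          # placeholder cadence; trigger on drift signals in practice
    catchup=False,
) as dag:
    pull = PythonOperator(task_id="pull_new_data", python_callable=pull_new_data)
    train = PythonOperator(task_id="fine_tune_model", python_callable=fine_tune_model)
    evaluate = PythonOperator(task_id="evaluate_and_register", python_callable=evaluate_and_register)

    pull >> train >> evaluate
```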
Week 11 of GenAI Ops Roadmap: Security and Ethics in LLMOps
Concepts to Learn:
Understand the ethical considerations when deploying LLMs, such as bias, fairness, and safety.
Study security practices in handling model data, including user privacy and compliance with regulations like GDPR.
Tools & Technologies:
Explore tools for secure model deployment and privacy-preserving techniques.
Study ethical frameworks for responsible AI development.
Week 12 of GenAI Ops Roadmap: Continuous Improvement and Feedback Loops
Concepts to Learn:
Building Feedback Loops: Learn how to implement mechanisms to track and improve LLMs’ performance over time by capturing user feedback and real-world interactions.
Model Performance Tracking: Study strategies for evaluating models over time, addressing issues like model drift, and refining the model based on continuous input.
Tools & Technologies:
Model Drift Detection: Use tools like Arize AI and Verta to detect model drift in real-time, ensuring that models adapt to changing patterns.
MLflow and Kubeflow: These tools help in managing the model lifecycle, enabling continuous monitoring, versioning, and feedback integration. Kubeflow Pipelines can be used to automate feedback loops, while MLflow allows for experiment tracking and model management.
Other Tools: Seldon and Weights & Biases offer advanced tracking and real-time monitoring features for continuous improvement, ensuring that LLMs remain aligned with business needs and real-world changes.
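Here is a small sketch of how feedback-loop metrics might be logged with MLflow so trends can be compared across model versions; the experiment name and metric values are illustrative.

```python
# Logging feedback-loop metrics per model version with MLflow.
import mlflow

mlflow.set_experiment("llm-feedback-loop")

with mlflow.start_run(run_name="weekly-eval-v2"):
    mlflow.log_param("model_version", "v2")
    mlflow.log_metric("avg_user_rating", 4.2)            # aggregated user feedback
    mlflow.log_metric("retrieval_precision_at_5", 0.81)  # retrieval quality over the week
    mlflow.log_metric("drift_score", 0.07)               # e.g. embedding-distribution distance
```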
Week 13 of GenAI Ops Roadmap: Introduction to AgentOps
Concepts to Learn:
Understand the principles behind AgentOps, including the management and orchestration of AI agents.
Explore the role of agents in automating tasks, decision-making, and enhancing workflows in complex environments.
Tools & Technologies:
Introduction to frameworks like LangChain and Haystack for building agents.
Learn about agent orchestration using the OpenAI API and prompt-chaining techniques.
Week 14 of GenAI Ops Roadmap: Building Agents
Concepts to Learn:
Study how to design intelligent agents capable of interacting with data sources and APIs.
Explore the design patterns for autonomous agents and the management of their lifecycle.
Tools & Technologies:
Learn about LangChain, LlamaIndex, and other frameworks for agent development.
Explore tools for creating and managing workflows with Celery and Flowise.
Week 15 of GenAI Ops Roadmap: Advanced Agent Orchestration
Concepts to Learn:
Dive deeper into multi-agent systems, where agents collaborate to solve tasks.
Understand agent communication protocols and orchestration techniques.
Tools & Technologies:
Study tools like Ray for large-scale agent coordination.
Learn about OpenAI’s Agent API for advanced automation.
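As a sketch of large-scale coordination, the example below runs two stub agents as Ray actors; the agent logic is illustrative, but the actor-plus-ray.get pattern is how distributed coordination is typically wired.

```python
# Coordinating stub agents as Ray actors; each actor runs in its own worker process.
import ray

ray.init()

@ray.remote
class RetrieverAgent:
    def run(self, query: str) -> str:
        return f"[context for '{query}']"

@ray.remote
class ReasonerAgent:
    def run(self, query: str, context: str) -> str:
        return f"Answer to '{query}' using {context}"

retriever = RetrieverAgent.remote()
reasoner = ReasonerAgent.remote()

query = "Summarize this quarter's support tickets"
context = ray.get(retriever.run.remote(query))        # remote call, result fetched with ray.get
answer = ray.get(reasoner.run.remote(query, context))
print(answer)
ray.shutdown()
```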
Week 16 of GenAI Ops Roadmap: Performance Monitoring and Optimization
Concepts to Learn:
Explore performance monitoring strategies for agent systems in production.
Understand agent logging, failure handling, and optimization.
Tools & Technologies:
Study frameworks like Datadog and Prometheus for tracking agent performance.
Learn about optimization strategies using ModelOps principles for efficient agent operation.
Week 17 of GenAI Ops Roadmap: Security and Privacy in AgentOps
Concepts to Learn:
Understand the security and privacy challenges specific to autonomous agents.
Study techniques for securing agent communications and ensuring privacy during operations.
Tools & Technologies:
Explore encryption tools and access controls for agent operations.
Learn about API security practices for agents interacting with sensitive data.
Week 18 of GenAI Ops Roadmap: Ethical Considerations in AgentOps
Concepts to Learn:
Study the ethical implications of using agents in decision-making.
Explore bias mitigation and fairness in agent operations.
Tools & Technologies:
Use frameworks like Fairness Indicators for evaluating agent outputs.
Learn about governance tools for responsible AI deployment in agent systems.
Week 19 of GenAI Ops Roadmap: Scaling and Continuous Learning for Agents
Concepts to Learn:
Learn about scaling agents for large-scale operations.
Study continuous learning mechanisms, where agents adapt to changing environments.
Tools & Technologies:
Use tools like TensorFlow or PyTorch for implementing continual learning in agents.
Week 20 of GenAI Ops Roadmap: Capstone Project
The final week is dedicated to applying everything you’ve learned in a comprehensive project. This capstone project should incorporate LLMOps, AgentOps, and advanced topics like multi-agent systems and security.
Create a Real-World Application
This project will allow you to combine various concepts from the course to design and build a complete system. The goal is to solve a real-world problem while integrating operational practices, AI agents, and LLMs.
Practical Step: Capstone Project
Task: Develop a project that integrates multiple concepts, such as creating a personalized assistant, automating a business workflow, or designing an AI-powered recommendation system.
Scenario: A personalized assistant could use LLMs to understand user preferences and agents to manage tasks, such as scheduling, reminders, and automated recommendations. This system would integrate external tools like calendar APIs, CRM systems, and external databases.
Skills: System design, integration of multiple agents, external APIs, real-world problem-solving, and project management.
You’re now ready to explore the exciting world of AI agents with this GenAI Ops roadmap. With the skills you’ve learned, you can design smarter systems, automate tasks, and solve real-world problems. Keep practicing and experimenting as you build your expertise.
Remember, learning is a journey. Each step brings you closer to achieving something great. Best of luck as you grow and create amazing AI solutions!