Top 7 Strategies to Mitigate Hallucinations in LLMs

Ranjeeta Bhattacharya Last Updated : 09 Apr, 2024

The introduction of Large Language Models (LLMs) has brought about a significant paradigm shift in the fields of artificial intelligence (AI) and machine learning (ML). With their remarkable advancements, LLMs can now generate content on diverse topics, address complex inquiries, and substantially enhance user satisfaction. However, alongside their progression, a new challenge has surfaced: hallucinations. This phenomenon occurs when LLMs produce erroneous, nonsensical, or disjointed text. Such occurrences pose potential risks and challenges for organizations leveraging these models. Particularly concerning are situations involving the dissemination of misinformation or the creation of offensive material.

As of January 2024, hallucination rates for publicly available models range from approximately 3% to 16% [1]. In this article, we will delineate various strategies to mitigate this risk effectively.


Contextual Prompt Engineering/Tuning

Prompt engineering is the process of designing and refining the instructions fed to a large language model to retrieve the best possible outcome. A blend of expertise and creativity is required to craft the best prompts that elicit specific responses or behaviors from LLMs. Designing prompts that include explicit instructions, contextual cues, or specific framing techniques helps guide the LLM's generation process. By providing clear guidance and context, prompt engineering reduces ambiguity and helps the model generate more reliable and coherent responses.

Prompt Engineering Cheatsheet

Elements of a Prompt

These are the elements that make up a well-crafted prompt (a minimal sketch combining them follows this list):

  • Context: Introducing background details or providing a brief introduction helps the LLM understand the subject and serves as a starting point for discussion.
  • Instructions: Crafting clear and concise questions ensures that the model’s response stays focused on the desired topic. For example, one might ask the model to “summarize the chapter in less than 100 words using simple English”.
  • Input Examples: Providing specific examples to the model helps generate tailored responses. For instance, if a customer complains, “The product I received is damaged,” the model can propose an appropriate reply and suggest potential reimbursement choices.
  • Output Format: Specifying the desired format for the response, such as a bullet-point list, paragraph, or code snippet, guides the LLM in structuring its output accordingly. For example, one might request “step-by-step instructions using numbered lists”.
  • Reasoning: Prompting the model to work through a problem step by step can significantly enhance output quality. Chain-of-thought prompting, for instance, breaks down multistep problems into intermediate steps, enabling complex reasoning capabilities beyond standard prompting methods.
  • Prompt Fine-Tuning: Adjusting prompts based on specific use cases or domains improves the model’s performance on particular tasks or datasets.
  • Refinement Through Interactive Querying: Iteratively adjusting and refining prompts based on the model’s responses enhances output quality and enables the LLM to use reasoning to derive the final answer, significantly reducing hallucinations.
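
As a minimal illustration of how these elements come together, the sketch below assembles them into a single prompt string in Python. The customer-support scenario, the variable names, and the exact wording are illustrative assumptions, not prescriptions from the cheatsheet above:

```python
# A minimal sketch: each prompt element gets its own clearly separated slot.
# The scenario and wording below are illustrative only.
context = (
    "You are a support assistant for an online electronics store. "
    "Customers contact you about orders, shipping, and returns."
)
instruction = "Draft a reply to the customer in under 100 words using simple English."
input_example = 'Customer message: "The product I received is damaged."'
output_format = "Format the reply as a short greeting, two numbered steps, and a closing line."
reasoning_hint = "Think through the refund policy step by step before answering."

# Join the elements into one prompt, separated by blank lines for readability.
prompt = "\n\n".join([context, instruction, input_example, output_format, reasoning_hint])
print(prompt)  # pass this string to whichever LLM client you use
```

The composed string can then be sent to any LLM client; the point is simply that context, instructions, examples, output format, and reasoning cues each occupy a distinct, explicit slot in the prompt.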

Positive Prompt Framing

It has been observed that using positive instructions instead of negative ones yields better results (i.e. ‘Do’ as opposed to ‘Do not’).

Example of negative framing: Do not ask the user more than 1 question at a time.

Example of positive framing: When you ask the user for information, ask a maximum of 1 question at a time.
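
As a small, hedged sketch of where such an instruction typically lives, the positively framed rule can be placed in the system message of a chat-style request. The role/content message structure below follows the common chat-completion convention and is illustrative only:

```python
# Illustrative only: the wording mirrors the framing example above, and the
# role/content structure is the widely used chat-message convention.
negative_framing = "Do not ask the user more than 1 question at a time."
positive_framing = ("When you ask the user for information, "
                    "ask a maximum of 1 question at a time.")

# Prefer the positive version when building the system message.
messages = [
    {"role": "system", "content": positive_framing},
    {"role": "user", "content": "I want to book a flight and a hotel."},
]
```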


Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is the process of empowering an LLM with domain-specific and up-to-date knowledge to increase the accuracy and auditability of its responses. This is a powerful technique that combines prompt engineering with context retrieval from external data sources to improve the performance and relevance of LLMs. By grounding the model in additional information, it allows for more accurate and context-aware responses.

This approach can be beneficial for various applications, such as question-answering chatbots, search engines, and knowledge engines. By using RAG, LLMs can present accurate information with source attribution, which enhances user trust and reduces the need for continuous model training on new data.
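
The sketch below is a deliberately minimal, self-contained illustration of the RAG pattern: retrieve the most relevant passages, then ground the prompt in them and instruct the model to stay within that context. The tiny knowledge base and the keyword-overlap retriever are toy assumptions; a real system would use a vector store, an embedding model, and an LLM client:

```python
# A toy RAG sketch. The knowledge base, the keyword-overlap retriever, and the
# prompt wording are illustrative stand-ins for a real vector-store pipeline.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of receiving the returned item.",
    "Damaged products can be exchanged free of charge within 30 days of delivery.",
    "Standard shipping takes 3 to 7 business days within the continental US.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    query_terms = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Ground the prompt in retrieved passages and constrain the answer to them."""
    passages = retrieve(question, KNOWLEDGE_BASE)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("How long do refunds take for a damaged product?"))
```

Because the instruction explicitly tells the model to say it doesn't know when the context is insufficient, the grounded prompt discourages the model from inventing an answer.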

Model Parameter Adjustment

Different model parameters, such as temperature, frequency penalty, and top-p, significantly influence the output created by LLMs. Higher temperature settings encourage more randomness and creativity, while lower settings make the output more predictable. Raising the frequency penalty value prompts the model to use repeated words more sparingly. Similarly, increasing the presence penalty value increases the likelihood of generating words that haven’t been used yet in the output.

The top-p parameter regulates response variety by setting a cumulative probability threshold for word selection. Overall, these parameters allow for fine-tuning and strike a balance between generating varied responses and maintaining accuracy. Hence, adjusting these parameters decreases the likelihood of the model hallucinating answers.
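
As a hedged illustration, the call below shows where these parameters appear when using the OpenAI Python SDK (v1.x-style Chat Completions API). The model name and the specific values are placeholders, not recommended settings:

```python
# Illustrative sketch using the OpenAI Python SDK (v1.x). Values are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",      # placeholder model name
    messages=[{"role": "user", "content": "Summarize the attached policy."}],
    temperature=0.2,          # low temperature -> more deterministic output
    top_p=0.9,                # cumulative-probability cutoff for sampling
    frequency_penalty=0.5,    # discourage repeating the same tokens
    presence_penalty=0.0,     # leave introduction of new tokens neutral
)
print(response.choices[0].message.content)
```

Lower temperature and a tighter top-p make sampling more conservative, which is usually the direction you want when factual accuracy matters more than creativity.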

Model Development/Enrichment

  • Fine-tuning a pre-trained LLM: Fine-tuning is the process of further training a pre-trained model on a smaller, task-specific labelled dataset. By fine-tuning on a task-specific dataset, the LLM can grasp the nuances of that domain. This is especially vital in areas with specialized jargon, concepts, or structures, such as legal documents, medical texts, or financial reports. As a result, when faced with unseen examples from the specific domain or task, the model is likely to make predictions or generate outputs with higher accuracy and relevance (see the sketch after this list).
  • Fully custom LLM: An LLM can be developed from the ground up on knowledge that is accurate and relevant to its domain. Doing so will help the model better understand the relationships and patterns within a particular subject. This will reduce the chances of hallucination, although not remove them completely. However, building your own LLM is computationally costly and requires tremendous expertise.
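
For the fine-tuning route, the sketch below shows one common way to do it with the Hugging Face Transformers Trainer. The base model (gpt2), the placeholder file legal_corpus.txt, and the hyperparameters are illustrative assumptions, not the article's prescription:

```python
# A minimal fine-tuning sketch with the Hugging Face Transformers Trainer.
# Model name, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # hypothetical base model; swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Domain-specific text file, one example per line (placeholder path).
dataset = load_dataset("text", data_files={"train": "legal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```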

Human Oversight

Incorporating human oversight, preferably by subject matter experts, combined with robust review processes to validate the outputs generated by the language model can greatly help in dealing with misinformation, particularly in sensitive or high-risk applications where hallucinations can have significant consequences. Human reviewers can identify and correct hallucinatory text before it is disseminated or used in critical contexts.

General User Education and Awareness

Educating users and stakeholders about the limitations and risks of language models, including their potential to generate misleading text, is crucial. We should encourage users to carefully assess and verify outputs, especially when accuracy is essential. It’s important to develop and follow ethical guidelines and policies governing language model use, particularly in areas where misleading information could cause harm. We must establish clear guidelines for responsible AI usage, including content moderation, misinformation detection, and preventing offensive content.

Continued research into mitigating LLM hallucinations acknowledges that while complete elimination may be challenging, implementing preventive measures can substantially decrease their frequency. It’s crucial to emphasize the significance of responsible and thoughtful engagement with AI systems and to cultivate greater awareness to maintain a necessary equilibrium in utilizing technology effectively without causing harm.

Conclusion

The prevalence of hallucinations in Large Language Models (LLMs) poses a significant challenge despite various empirical efforts to mitigate them. While these strategies offer valuable insights, the fundamental question of complete elimination remains unanswered.

I hope this article has shed light on hallucinations in LLMs and provided strategies for addressing them. Let me know your thoughts in the comment section below.


Reference:

[1] https://huggingface.co/spaces/vectara/leaderboard

I'm a senior data scientist within the AI Hub wing of the world's largest custodian bank. As a machine learning practitioner, my work is intensely data-driven and requires me to utilize my cognitive ability to think through complex use cases and support end-to-end AI/ML solutions from the inception phase till deployment. My total experience as a Data Science / Technology consultant spans over 15+ years where I have performed multi-faceted techno-functional roles in the capacity of software developer, solution designer, technical analyst, delivery manager, project manager, etc. for IT Consulting Fortune 500 companies across the globe.

I hold an undergraduate degree in Computer science and engineering, a master's degree in data science, and multiple certifications and publications in these domains, demonstrating my commitment to continuous learning and knowledge sharing.
