Enhancing Logical Reasoning in LLMs Through Prompt Engineering

About

Large Language Models (LLMs) have revolutionized text generation and comprehension tasks. However, their ability to perform logical reasoning, a crucial aspect of human intelligence, remains a challenge. This session delves into Prompt Engineering, a powerful technique for unlocking the logical reasoning potential of LLMs. We will explore a diverse set of prompt engineering techniques designed to guide LLMs towards more robust reasoning, including Chain-of-Thought Prompting, Least-to-Most Prompting, Decomposed Prompting, Interleaved Retrieval with CoT Prompting, Successive Prompting, Step-Back Prompting, and Multi-Agent Prompting, among others. We will use the LangChain framework in Python to demonstrate the practical implementation of these techniques. By examining their effectiveness across a range of reasoning tasks, this session will equip you with the knowledge and tools to unlock the true potential of LLMs in logical reasoning.
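
To give a flavour of what the hands-on part looks like, here is a minimal sketch of zero-shot Chain-of-Thought Prompting written with LangChain. It assumes the langchain-openai integration is installed and an OPENAI_API_KEY is set; the model name and example question are purely illustrative, not the session's exact material.

```python
# Minimal zero-shot Chain-of-Thought prompt using LangChain's pipe (LCEL) syntax.
# Assumes: pip install langchain-openai langchain-core, and OPENAI_API_KEY is set.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model name is illustrative

# The trailing instruction nudges the model to spell out intermediate steps
# before committing to a final answer (zero-shot Chain-of-Thought).
cot_prompt = ChatPromptTemplate.from_template(
    "Answer the following question.\n"
    "Question: {question}\n"
    "Let's think step by step, then state the final answer on its own line."
)

cot_chain = cot_prompt | llm | StrOutputParser()

print(cot_chain.invoke(
    {"question": "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"}
))
```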

Key Takeaways:

  • Understand diverse prompt engineering techniques, from established methods like Chain-of-Thought Prompting to recent advancements such as Step-Back Prompting
  • Gain a concrete understanding of how to code and execute prompts for LLM reasoning tasks using the popular LangChain framework
  • Explore the potential of Multi-Agent Prompting, where combining the outputs of multiple, slightly biased LLMs can lead to more robust reasoning (see the sketch after this list)
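
As a taste of the Multi-Agent Prompting takeaway, the rough sketch below has three differently primed "agents" (reusing the LangChain setup assumed above) answer the same question and combines their outputs by majority vote. The persona descriptions, the "ANSWER:" extraction marker, and the voting rule are illustrative assumptions, not the session's exact recipe.

```python
# Rough Multi-Agent Prompting sketch: several persona-primed calls to the same
# model, aggregated by a simple majority vote over the extracted answers.
# Assumes the same langchain-openai setup and API key as the previous snippet.
from collections import Counter

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)  # some sampling diversity

# Illustrative personas: each "agent" approaches the problem with a different bias.
personas = [
    "a cautious mathematician who double-checks every step",
    "a fast, intuitive problem solver",
    "a skeptical reviewer looking for flaws in common answers",
]

agent_prompt = ChatPromptTemplate.from_template(
    "You are {persona}.\n"
    "Question: {question}\n"
    "Reason step by step, then give only the final answer after 'ANSWER:'."
)
agent_chain = agent_prompt | llm | StrOutputParser()

def multi_agent_answer(question: str) -> str:
    answers = []
    for persona in personas:
        reply = agent_chain.invoke({"persona": persona, "question": question})
        # Keep only the text after the ANSWER: marker, if present.
        answers.append(reply.split("ANSWER:")[-1].strip())
    # Majority vote across the agents' extracted answers.
    return Counter(answers).most_common(1)[0][0]

print(multi_agent_answer(
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
))
```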
