Join us for a special session with Thomas Palmeira, who will share insights from his recent paper, "LLM Self-Correction with DeCRIM: Decompose, Critique, and Refine for Enhanced Following of Instructions with Multiple Constraints." As large language models (LLMs) continue to evolve, a crucial area of focus has been improving their ability to understand and follow complex, multi-step instructions. This presentation will cover innovative techniques for addressing these challenges, providing a fresh perspective on enhancing model reliability and performance.
Thomas Palmeira's work tackles a key limitation of LLMs: adhering to the detailed constraints often found in real-world tasks, such as following specific formats or satisfying multiple conditions simultaneously. Attendees will gain valuable insights into the latest methods for refining LLM outputs to better meet user expectations, without sacrificing usability.
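To give a flavor of the talk's topic, the Decompose, Critique, and Refine idea named in the paper's title can be sketched as a simple self-correction loop. This is a minimal illustration only, assuming pluggable LLM calls; the function names and the toy demo below are hypothetical and are not taken from the paper's implementation.

```python
# Hedged sketch of a decompose-critique-refine style self-correction loop.
# The callables (generate, decompose, critique, refine) are assumptions
# standing in for LLM prompts; the toy versions in the demo only
# illustrate the control flow, not the paper's actual prompts.

def self_correct(instruction, generate, decompose, critique, refine, max_rounds=3):
    """Generate a response, then iteratively fix constraints that fail critique."""
    response = generate(instruction)
    constraints = decompose(instruction)  # instruction -> list of constraints
    for _ in range(max_rounds):
        failed = [c for c in constraints if not critique(response, c)]
        if not failed:  # every constraint satisfied: stop refining
            break
        response = refine(instruction, response, failed)
    return response

# Toy demo: constraints are required words; critique checks containment;
# refine appends whatever is still missing.
result = self_correct(
    "Write a short poem mentioning the sea and the moon.",
    generate=lambda inst: "a short poem",
    decompose=lambda inst: ["sea", "moon"],
    critique=lambda resp, c: c in resp,
    refine=lambda inst, resp, failed: resp + " " + " ".join(failed),
)
print(result)  # -> "a short poem sea moon"
```

The design point is that decomposing the instruction into individual constraints lets the critic check each one separately, so the refiner receives a targeted list of failures rather than a single vague "try again" signal.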
Thomas Palmeira is a PhD candidate at École Polytechnique / Télécom Paris (IP Paris), specializing in NLP, large language models, and speech processing. He holds a Master’s degree in Applied Math & AI (MVA) from ENS Paris-Saclay and an engineering degree from the University of São Paulo. His research spans low-resource NLP, adversarial robustness, and few-shot learning, with publications in top-tier AI conferences and internships at Meta, Amazon, Apple, and NAVER Labs.
©2024 SSI Club. All Rights Reserved