A Human-Guided Approach to Context-Aware SQL Generation in Multi-Agent Frameworks
Rachel Jarvis
David Johnson
Querying information from relational databases often requires proficiency in SQL, creating a steep learning curve for users who lack programming or database-management experience. Text-to-SQL systems aim to bridge this gap by automatically converting natural-language questions into executable SQL statements. In recent years, multi-agent frameworks have gained traction for this task because they decompose complex query generation into specialized subtasks: schema selection based on user intent, SQL synthesis, and refinement of SQL queries through execution-based error correction. This work explores the integration of a human feedback component within a multi-agent Text-to-SQL framework. Human input is introduced after the selector agent identifies the relevant schemas and tables, offering targeted guidance before SQL generation; the objective is to examine how such feedback improves the system's accuracy and contextual understanding of queries. The implementation uses OpenAI's GPT-4.1 mini and GPT-4.1 nano models as the underlying language components. The evaluation is carried out on a standard Text-to-SQL benchmark dataset, using execution accuracy and the valid efficiency score (VES) as the primary metrics.
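For concreteness, the sketch below illustrates one way the pipeline described above could be wired together: a selector agent prunes the schema, a human reviews the selection before generation, and a refiner agent repairs the query from an execution error. The prompts, function names, single-pass refinement, and the SQLite backend are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a human-in-the-loop multi-agent Text-to-SQL pipeline.
# Prompts, helper names, and the one-shot refinement loop are assumptions
# made for illustration; only the model names come from the paper.
import sqlite3
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Send a single prompt to a chat model and return the text reply."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def select_schema(question: str, schema: str) -> str:
    # Selector agent: prune the full schema to what the question needs.
    return ask("gpt-4.1-nano",
               f"Schema:\n{schema}\n\nQuestion: {question}\n"
               "List only the tables and columns needed to answer it.")

def human_review(selected: str) -> str:
    # Human feedback step: the user may correct the selector's output
    # before SQL generation (blank input accepts it as-is).
    print("Selected schema:\n", selected)
    edited = input("Corrections (blank to accept): ").strip()
    return edited or selected

def generate_sql(question: str, selected: str) -> str:
    # Generator agent: synthesize SQL over the human-confirmed schema.
    return ask("gpt-4.1-mini",
               f"Relevant schema:\n{selected}\n\n"
               f"Write one SQLite query answering: {question}\n"
               "Return only the SQL, with no explanation or formatting.")

def refine_sql(sql: str, error: str) -> str:
    # Refiner agent: repair the query using the execution error message.
    return ask("gpt-4.1-mini",
               f"This SQL failed with error:\n{error}\n\nFix it:\n{sql}")

def answer(question: str, schema: str, db_path: str) -> str:
    # Full pipeline: select -> human review -> generate -> execute -> refine.
    selected = human_review(select_schema(question, schema))
    sql = generate_sql(question, selected)
    try:
        sqlite3.connect(db_path).execute(sql)
    except sqlite3.Error as exc:
        sql = refine_sql(sql, str(exc))
    return sql
```

In this sketch the cheaper model handles schema pruning while the stronger model handles synthesis and repair; placing the human checkpoint between the two means corrections arrive before any SQL is generated, which is the design choice the abstract motivates.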