A State-Update Prompting Strategy for Efficient and Robust Multi-turn Dialogue
ArXiv cs.AI
arXiv:2509.17766v2 Announce Type: replace-cross

Abstract: Large Language Models (LLMs) struggle with information forgetting and inefficiency in long-horizon, multi-turn dialogues. To address this, we propose a training-free prompt engineering method, the State-Update Multi-turn Dialogue Strategy. It utilizes "State Reconstruction" and "History Remind" mechanisms to manage dialogue history effectively. Our strategy shows strong performance across multiple multi-hop QA datasets. For instance, on t
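The abstract names two mechanisms but gives no implementation details. As a rough, hypothetical sketch of what a state-update prompting loop could look like, the snippet below keeps a compact running "state" summary plus a short list of "reminder" facts, and assembles the per-turn prompt from those instead of the full transcript. All function names, prompt headings, and the string-concatenation state update are illustrative assumptions, not the paper's actual templates (in practice the LLM itself would presumably be prompted to rewrite the state summary each turn).

```python
def build_prompt(state: str, reminders: list[str], user_msg: str) -> str:
    """Assemble a per-turn prompt from the reconstructed state and
    history reminders, rather than replaying the whole transcript."""
    reminder_block = "\n".join(f"- {r}" for r in reminders)
    return (
        f"[Current dialogue state]\n{state}\n\n"
        f"[Key facts from earlier turns]\n{reminder_block}\n\n"
        f"[User]\n{user_msg}"
    )

def update_state(state: str, user_msg: str, model_reply: str) -> str:
    """Placeholder for 'State Reconstruction': here we just append a
    one-line record; a real system would have the model re-summarize."""
    return state + f" | asked: {user_msg!r} -> answered: {model_reply!r}"

# Illustrative two-step usage.
state = "No prior context."
reminders = ["User's goal: identify the film's director."]
prompt = build_prompt(state, reminders, "Who directed it?")
state = update_state(state, "Who directed it?", "(model reply here)")
print(prompt)
```

Because only the short state and reminders are resent, the prompt length stays roughly constant as the dialogue grows, which is one plausible source of the efficiency gains the abstract claims.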