Reasoning Graphs: Self-Improving, Deterministic RAG through Evidence-Centric Feedback

📰 ArXiv cs.AI

arXiv:2604.07595v2 (Announce Type: replace)

Abstract: Language model agents reason from scratch on every query, discarding their chain of thought after each run. This produces lower accuracy and high variance, as the same query type can succeed or fail unpredictably. We introduce reasoning graphs, a graph structure that persists per-evidence chain of thought as structured edges connected to the evidence items they evaluate. Unlike prior memory mechanisms that retrieve distilled strategies by query…
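A minimal sketch of the data structure the abstract describes, under my own assumptions (the paper's actual schema is not shown here, and all names below are hypothetical): evidence items are nodes, and each chain-of-thought fragment is persisted as an edge attached to the evidence it evaluates, so a later query that retrieves the same evidence can reuse the prior reasoning rather than re-deriving it.

```python
# Hypothetical sketch of a "reasoning graph": reasoning is keyed by evidence,
# not by query. Names and fields are assumptions, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class ReasoningEdge:
    """One persisted reasoning step about a specific evidence item."""
    query: str    # the query this reasoning was produced for
    thought: str  # the chain-of-thought fragment evaluating the evidence
    verdict: str  # e.g. "supports", "contradicts", "irrelevant"


@dataclass
class EvidenceNode:
    """An evidence item plus every reasoning edge attached to it."""
    evidence_id: str
    text: str
    edges: list[ReasoningEdge] = field(default_factory=list)


class ReasoningGraph:
    """Stores per-evidence chains of thought as edges on evidence nodes."""

    def __init__(self) -> None:
        self.nodes: dict[str, EvidenceNode] = {}

    def add_evidence(self, evidence_id: str, text: str) -> None:
        self.nodes.setdefault(evidence_id, EvidenceNode(evidence_id, text))

    def record_reasoning(self, evidence_id: str, query: str,
                         thought: str, verdict: str) -> None:
        # Persist the reasoning as a structured edge on the evidence node.
        self.nodes[evidence_id].edges.append(ReasoningEdge(query, thought, verdict))

    def prior_reasoning(self, evidence_id: str) -> list[ReasoningEdge]:
        # Retrieved evidence carries its past reasoning with it.
        node = self.nodes.get(evidence_id)
        return node.edges if node else []


if __name__ == "__main__":
    graph = ReasoningGraph()
    graph.add_evidence("doc-17", "The 2019 audit found no irregularities.")
    graph.record_reasoning("doc-17", "Was the 2019 audit clean?",
                           "The audit explicitly reports no irregularities.",
                           "supports")
    # A later query that retrieves doc-17 can reuse this stored reasoning.
    for edge in graph.prior_reasoning("doc-17"):
        print(edge.verdict, "-", edge.thought)
```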

Published 15 Apr 2026