Reasoning Primitives in Hybrid and Non-Hybrid LLMs

📰 ArXiv cs.AI

arXiv:2604.21454v1 Announce Type: cross

Abstract: Reasoning in large language models is often treated as a monolithic capability, but its observed gains may arise from more basic operations. We study reasoning through two such primitives, recall and state-tracking, and ask whether hybrid architectures that combine attention-based retrieval with recurrent state updates are better suited than attention-only models for tasks that jointly require both. Using matched Olmo3 transformer and hybrid models […]
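
The abstract's core contrast, attention-based retrieval versus recurrent state updates, can be made concrete with a small sketch. Below is a minimal PyTorch hybrid block that pairs a causal self-attention sublayer (recall over the context) with a GRU sublayer (a state update carried across tokens). The GRU choice, dimensions, and block layout are illustrative assumptions, not the paper's Olmo3 hybrid design.

```python
# Minimal sketch of a hybrid block: attention for recall, recurrence for
# state-tracking. Illustrative only; not the Olmo3 hybrid architecture.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Attention sublayer: content-based retrieval over earlier positions.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        # Recurrent sublayer: a hidden state updated token by token.
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Causal mask: True entries are positions attention may not use.
        T = x.size(1)
        mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1
        )
        a, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + a)      # recall via attention
        r, _ = self.rnn(x)
        return self.norm2(x + r)   # state-tracking via recurrence

if __name__ == "__main__":
    block = HybridBlock()
    tokens = torch.randn(2, 16, 64)   # (batch, seq, d_model)
    print(block(tokens).shape)        # torch.Size([2, 16, 64])
```

An attention-only baseline in this sketch would simply drop the GRU sublayer, which is the kind of matched comparison the abstract describes for tasks requiring recall and state-tracking jointly.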

Published 25 Apr 2026