When Models Know More Than They Say: Probing Analogical Reasoning in LLMs

📰 ArXiv cs.AI

arXiv:2604.03877v1 Announce Type: cross

Abstract: Analogical reasoning is a core cognitive faculty essential for narrative understanding. While LLMs perform well when surface and structural cues align, they struggle in cases where an analogy is not apparent on the surface but requires latent information, suggesting limitations in abstraction and generalisation. In this paper we compare a model's probed representations with its prompted performance at detecting narrative analogies, revealing an a…

Published 7 Apr 2026