Relational Preference Encoding in Looped Transformer Internal States

📰 ArXiv cs.AI

arXiv:2604.09870v1 Announce Type: cross Abstract: We investigate how looped transformers encode human preferences in their internal iteration states. Using Ouro-2.6B-Thinking, a 2.6B-parameter looped transformer with iterative refinement, we extract hidden states from each loop iteration and train lightweight evaluator heads (~5M parameters) to predict human preferences on the Anthropic HH-RLHF dataset. Our pairwise evaluator achieves 95.2% test accuracy on 8,552 unseen examples, surpassing a full
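The evaluator-head idea in the abstract can be sketched in a minimal, hypothetical form: a single linear scorer over a hidden-state vector, trained with a Bradley-Terry-style pairwise loss on chosen/rejected pairs. The hidden-state dimension, random stand-in features, and training loop below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size; the actual model's hidden dimension is much larger

# Hypothetical stand-ins for a loop iteration's hidden states of a
# chosen/rejected response pair (the paper extracts these from the model)
h_chosen = rng.normal(size=d)
h_rejected = rng.normal(size=d)

# Lightweight evaluator head: a single linear scoring vector
w = np.zeros(d)

def score(h: np.ndarray, w: np.ndarray) -> float:
    """Scalar preference score for one hidden state."""
    return float(h @ w)

def pairwise_loss(w: np.ndarray, h_pos: np.ndarray, h_neg: np.ndarray) -> float:
    """Bradley-Terry loss: -log sigmoid(score(pos) - score(neg))."""
    margin = score(h_pos, w) - score(h_neg, w)
    return float(np.log1p(np.exp(-margin)))

def grad(w: np.ndarray, h_pos: np.ndarray, h_neg: np.ndarray) -> np.ndarray:
    """Gradient of the pairwise loss with respect to w."""
    margin = score(h_pos, w) - score(h_neg, w)
    return -(1.0 / (1.0 + np.exp(margin))) * (h_pos - h_neg)

# Plain gradient descent on the single pair
lr = 0.1
for _ in range(100):
    w -= lr * grad(w, h_chosen, h_rejected)
```

After training, the head scores the chosen hidden state above the rejected one; a real evaluator would batch over many pairs and likely use a small MLP rather than a single linear layer.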

Published 14 Apr 2026