Pref-CTRL: Preference Driven LLM Alignment using Representation Editing

📰 arXiv cs.AI

arXiv:2604.23543v1 Announce Type: cross Abstract: Test-time alignment methods offer a promising alternative to fine-tuning by steering the outputs of large language models (LLMs) at inference time with lightweight interventions on their internal representations. Recently, a prominent and effective approach, RE-Control (Kong et al., 2024), proposed leveraging an external value function trained on the LLM's hidden states to guide generation via gradient-based editing. While effective, this m…
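As a rough illustration of the general idea (a sketch, not the authors' implementation), gradient-based representation editing can look like this: a small value head scores a hidden state, and the state is nudged up the value gradient before the LM head decodes the next token. All names, shapes, and the linear value function below are illustrative assumptions.

```python
import numpy as np

def edit_hidden_state(h, w, step_size=0.1, n_steps=5):
    """Nudge hidden state h up the gradient of an illustrative
    linear value function V(h) = w @ h.

    In practice the value head would be a learned network and the
    gradient obtained by backpropagation; here dV/dh is simply w.
    """
    h = h.copy()
    for _ in range(n_steps):
        h += step_size * w  # gradient ascent step on V
    return h

rng = np.random.default_rng(0)
h = rng.standard_normal(8)   # toy hidden state from a transformer layer
w = rng.standard_normal(8)   # toy value-head weights
h_edited = edit_hidden_state(h, w)
# The edited state scores strictly higher under V than the original.
print(float(w @ h_edited - w @ h))
```

The decoder would then compute logits from `h_edited` instead of `h`, so the intervention steers generation without touching the base model's weights.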

Published 28 Apr 2026