Extracting and Steering Emotion Representations in Small Language Models: A Methodological Comparison

📰 ArXiv cs.AI

arXiv:2604.04064v1 Announce Type: cross

Abstract: Small language models (SLMs) in the 100M-10B parameter range increasingly power production systems, yet whether they possess the internal emotion representations recently discovered in frontier models remains unknown. We present the first comparative analysis of emotion vector extraction methods for SLMs, evaluating 9 models across 5 architectural families (GPT-2, Gemma, Qwen, Llama, Mistral) using 20 emotions and two extraction methods (generati…
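The excerpt does not detail the paper's extraction methods, but a common way to obtain such emotion directions is mean-difference (contrastive) activation steering: average a layer's hidden states over emotion-laden prompts, subtract the average over neutral prompts, and add the scaled direction back at inference time. The sketch below illustrates that idea on synthetic activations; all names, shapes, and the scale `alpha` are assumptions, not the paper's actual procedure.

```python
import numpy as np

# Hypothetical sketch of mean-difference steering-vector extraction.
# Synthetic hidden states stand in for activations captured at one
# layer while a model reads emotion-laden vs. neutral prompts.
rng = np.random.default_rng(0)
d_model = 64
joy_acts = rng.normal(loc=0.5, scale=1.0, size=(32, d_model))
neutral_acts = rng.normal(loc=0.0, scale=1.0, size=(32, d_model))

def extract_emotion_vector(pos: np.ndarray, neg: np.ndarray) -> np.ndarray:
    """Mean-difference direction: mean positive activation minus
    mean negative activation, normalized to unit length."""
    v = pos.mean(axis=0) - neg.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden: np.ndarray, vector: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Add the scaled emotion direction to a single hidden state."""
    return hidden + alpha * vector

joy_vec = extract_emotion_vector(joy_acts, neutral_acts)
h = rng.normal(size=d_model)
h_steered = steer(h, joy_vec)

# Since joy_vec is unit-norm, steering shifts the projection onto the
# emotion direction by exactly alpha.
print(float(h_steered @ joy_vec - h @ joy_vec))  # ≈ 4.0
```

In practice the positive/negative activations would be collected with forward hooks on a real model, and the intervention layer and `alpha` tuned per architecture; this sketch only shows the linear-algebra core of the technique.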

Published 7 Apr 2026