Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling

📰 ArXiv cs.AI

arXiv:2604.24717v1 Announce Type: new

Abstract: Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted upon by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure, populated only by discrete ordinal indices. We argue that this rotation space is a largely overlooked second dimension of expressivity in the attention mechanism, one whose systematic exploration may […]
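
For context, here is a minimal numpy sketch of the standard, unlearned RoPE transform the abstract contrasts against: each 2D pair of query/key features is rotated by an angle m·θ_i that depends only on the discrete position index m and a fixed frequency schedule θ_i = 10000^(-2i/d). This is the conventional formulation, not code from the paper; names like `rope` and `base` are illustrative.

```python
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE: rotate each 2D feature pair of x by m * theta_i.

    x: (seq_len, d) with d even; positions: (seq_len,) integer indices.
    """
    seq_len, d = x.shape
    # Fixed, hand-crafted frequency spectrum -- the "rotation manifold"
    # the abstract argues is left unlearned in standard RoPE.
    theta = base ** (-np.arange(0, d, 2) / d)        # (d/2,)
    angles = positions[:, None] * theta[None, :]     # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                  # split into 2D pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin               # 2x2 rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Only discrete ordinal indices 0..L-1 ever enter the rotation.
q = rope(np.random.randn(8, 64), np.arange(8))
```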

Published 28 Apr 2026