Design Conditions for Intra-Group Learning of Sequence-Level Rewards: Token Gradient Cancellation
📰 ArXiv cs.AI
arXiv:2604.13088v1 Announce Type: cross Abstract: Under sparse terminal rewards, intra-group comparison has become the dominant paradigm for fine-tuning reasoning models via reinforcement learning. However, prolonged training often leads to issues such as ineffective update accumulation (a "learning tax"), solution-probability drift, and entropy collapse. This paper presents a necessary condition for algorithm design from a token-level credit-assignment perspective: to prevent reward-irrelevant drift
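As context for the abstract, the intra-group comparison paradigm it refers to (popularized by GRPO-style methods) normalizes each sampled solution's sequence-level reward against its group's statistics. The sketch below is illustrative only and is not the paper's method; the function name and the binary rewards are assumptions for the example.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Hypothetical sketch of intra-group (group-relative) normalization:
    each sampled solution's scalar reward is compared against the group
    mean, so only relative quality within the group drives the update."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# In sequence-level reward schemes, the same scalar advantage is then
# broadcast to every token of the solution; this coarse token-level
# credit assignment is what the abstract's analysis examines.
rewards = [1.0, 0.0, 0.0, 1.0]  # e.g. binary pass/fail over 4 samples
advs = group_relative_advantages(rewards)
```

Because the advantages are mean-centered, they sum to (approximately) zero across the group, so a uniformly rewarded group produces no update.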