Pre-trained LLMs Meet Sequential Recommenders: Efficient User-Centric Knowledge Distillation

📰 ArXiv cs.AI

arXiv:2604.21536v1 Announce Type: cross

Abstract: Sequential recommender systems have achieved significant success in modeling temporal user behavior but remain limited in capturing rich user semantics beyond interaction patterns. Large Language Models (LLMs) offer opportunities to enhance user understanding through their reasoning capabilities, yet existing integration approaches incur prohibitive real-time inference costs. To address these limitations, we present a novel knowledge distillation…
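The abstract's core idea, distilling LLM knowledge into a lightweight recommender so the LLM is never queried at serving time, can be sketched minimally. The example below is an illustrative assumption, not the paper's actual method: it trains a toy student (mean-pooled item embeddings over a user's history) to match a precomputed teacher embedding that stands in for an offline LLM user representation. All names, dimensions, and the learning rate are hypothetical.

```python
import numpy as np

# Hypothetical sketch of user-centric knowledge distillation:
# a small student recommender learns to match user embeddings
# produced offline by an LLM teacher, avoiding LLM calls at
# inference time. All quantities below are illustrative.

rng = np.random.default_rng(0)
d = 8                                 # embedding dimension
teacher_emb = rng.normal(size=d)      # stand-in for a precomputed LLM user embedding

item_table = rng.normal(size=(100, d)) * 0.1   # student item embeddings
history = [3, 17, 42]                          # user's interaction history (item ids)

def student_emb(table, hist):
    # Student user representation: mean-pool the history's item embeddings.
    return table[hist].mean(axis=0)

def distill_loss(s, t):
    # Mean-squared error between student and teacher user embeddings.
    return float(np.mean((s - t) ** 2))

# Plain gradient descent on the pooled item rows (lr chosen arbitrarily).
lr = 0.5
for _ in range(200):
    s = student_emb(item_table, history)
    grad = 2.0 * (s - teacher_emb) / d / len(history)
    item_table[history] -= lr * grad

final = distill_loss(student_emb(item_table, history), teacher_emb)
print(final < 1e-3)  # student now approximates the teacher embedding
```

At serving time only the distilled `item_table` lookup and a mean-pool are needed, which is the efficiency argument the abstract gestures at: the LLM's cost is paid once offline, not per request.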

Published 25 Apr 2026