DP-OPD: Differentially Private On-Policy Distillation for Language Models
📰 ArXiv cs.AI
arXiv:2604.04461v1 Announce Type: cross Abstract: Large language models (LLMs) are increasingly adapted to proprietary and domain-specific corpora that contain sensitive information, creating a tension between formal privacy guarantees and efficient deployment through model compression. Differential privacy (DP), typically enforced via DP-SGD, provides record-level protection but often incurs substantial utility loss in autoregressive generation, where optimization noise can amplify exposure bias.
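The DP-SGD mechanism mentioned in the abstract enforces record-level privacy by clipping each example's gradient to a fixed norm and adding calibrated Gaussian noise to the averaged gradient. A minimal illustrative sketch (not the paper's implementation; the function name, clipping constant, and noise multiplier here are assumptions for demonstration):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, add Gaussian noise scaled by noise_multiplier * clip_norm / batch,
    and return the (negated) parameter update."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the source of DP's utility cost).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return -lr * (mean_grad + noise)
```

The added noise term is what the abstract refers to as "optimization noise"; in autoregressive generation its effect compounds across decoding steps, which is the claimed source of amplified exposure bias.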