Constraining Sequential Model Editing with Editing Anchor Compression

📰 ArXiv cs.AI

arXiv:2503.00035v2 · Announce Type: replace-cross

Abstract: Large language models (LLMs) struggle with hallucinations caused by false or outdated knowledge. Given the high resource demands of retraining these models, there is increasing focus on developing model editing methods. However, the general abilities of LLMs across downstream tasks are prone to significant degradation during sequential editing. This paper statistically observes that the parameter matrix after editing exhibits a significant deviation…
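
The abstract's key observation, that the edited parameter matrix drifts further from its pre-editing state as edits accumulate, can be illustrated with a minimal sketch. Here, rank-one weight updates stand in for individual knowledge edits, and relative Frobenius norm stands in for the deviation measure; both are assumptions, since the excerpt does not specify the paper's editing method or metric.

```python
import torch

def matrix_deviation(W_orig: torch.Tensor, W_edited: torch.Tensor) -> float:
    """Relative Frobenius-norm deviation of an edited weight matrix
    from its pre-editing state (assumed metric, not from the paper)."""
    return (torch.norm(W_edited - W_orig) / torch.norm(W_orig)).item()

# Toy sequential-editing loop: each "edit" applies a small rank-one
# update to the matrix, and we track how far it drifts from the original.
torch.manual_seed(0)
W0 = torch.randn(64, 64)   # pre-editing parameter matrix
W = W0.clone()
for step in range(1, 101):
    u, v = torch.randn(64, 1), torch.randn(64, 1)
    W = W + 0.01 * (u @ v.T)          # stand-in for one knowledge edit
    if step % 20 == 0:
        print(f"edit {step:3d}: deviation = {matrix_deviation(W0, W):.3f}")
```

Because each update adds roughly independent perturbations, the deviation grows monotonically with the number of edits, which mirrors the degradation the paper attributes to unconstrained sequential editing.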

Published 13 Apr 2026