CSC: Turning the Adversary's Poison against Itself

📰 ArXiv cs.AI

arXiv:2604.21416v1 Announce Type: cross Abstract: Poisoning-based backdoor attacks pose significant threats to deep neural networks: by embedding triggers in training data, they cause models to misclassify triggered inputs as adversary-specified labels while maintaining performance on clean data. Existing poison-restraint-based defenses often suffer from inadequate detection of specific attack variants, and they compromise model utility through unlearning methods that degrade accuracy. […]
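The threat model the abstract describes, a trigger stamped into a fraction of the training images with their labels flipped to the attacker's target class, can be sketched as below. The patch shape, poison rate, and function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Inject a simple patch-trigger backdoor into a fraction of a dataset.

    images: (N, H, W) float array in [0, 1]; labels: (N,) int array.
    A 3x3 white patch in the bottom-right corner acts as the trigger, and
    poisoned samples are relabeled to the adversary-specified target class.
    Returns the poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # stamp the trigger patch
    labels[idx] = target_label    # flip to the adversary-specified label
    return images, labels, idx

# Demo on a toy 28x28 "dataset": poison 10% of 100 all-zero images.
X = np.zeros((100, 28, 28))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
print(len(idx), int(yp[idx[0]]))  # → 10 7
```

A model trained on `(Xp, yp)` would learn to associate the patch with class 7 while behaving normally on clean inputs, which is exactly the stealth property the abstract attributes to these attacks.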

Published 25 Apr 2026