Poisoning the Inner Prediction Logic of Graph Neural Networks for Clean-Label Backdoor Attacks

📰 ArXiv cs.AI

arXiv:2603.05004v2 (replace-cross)

Abstract: Graph Neural Networks (GNNs) have achieved remarkable results on a variety of tasks. Recent studies reveal that graph backdoor attacks can poison a GNN model into predicting trigger-attached test nodes as the target class. However, besides injecting triggers into training nodes, these graph backdoor attacks generally require altering the labels of the trigger-attached training nodes to the target class, which is impractical in real-world scenarios.
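To make the distinction concrete, the sketch below illustrates the trigger-injection step the abstract describes, contrasting the dirty-label setting (victim nodes are relabeled to the target class) with the clean-label setting (labels are left untouched). All names here (`attach_trigger`, the adjacency-list graph encoding, the trigger shape) are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of trigger injection for a graph backdoor attack.
# Graph: adjacency list {node: set(neighbors)}; labels: {node: class}.
# This is NOT the paper's algorithm, just the generic poisoning setup
# the abstract contrasts against.

def attach_trigger(adj, labels, victims, trigger_size=3, target_class=1,
                   clean_label=True):
    """Attach a small fully connected trigger subgraph to each victim node.

    clean_label=True keeps each victim's original label (the clean-label
    setting); clean_label=False also flips the victim's label to the
    target class, as conventional (dirty-label) graph backdoors do.
    """
    next_id = max(adj) + 1
    for v in victims:
        trigger = list(range(next_id, next_id + trigger_size))
        next_id += trigger_size
        for t in trigger:
            # Connect each trigger node to its siblings and to the victim.
            adj[t] = (set(trigger) - {t}) | {v}
            adj[v].add(t)
            labels[t] = target_class
        if not clean_label:
            labels[v] = target_class  # dirty-label: relabel the poisoned node

# Toy graph: a 4-node path with binary labels.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
labels = {0: 0, 1: 0, 2: 1, 3: 1}
attach_trigger(adj, labels, victims=[0], clean_label=True)
print(labels[0])     # victim keeps its original label: 0
print(len(adj) - 4)  # number of trigger nodes added: 3
```

In the dirty-label variant (`clean_label=False`), `labels[0]` would be flipped to `1`, which a defender auditing the training set could spot; the clean-label constraint removes that giveaway, which is what makes it the harder and more realistic threat model.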

Published 15 Apr 2026