Safe-FedLLM: Delving into the Safety of Federated Large Language Models

📰 ArXiv cs.AI

arXiv:2601.07177v2. Abstract: Federated learning (FL) addresses privacy and data-silo issues in the training of large language models (LLMs). Most prior work focuses on improving the efficiency of federated learning for LLMs (FedLLM). However, security in open federated environments, particularly defenses against malicious clients, remains underexplored. To investigate the security of FedLLM, we conduct a preliminary study to analyze potential attack surfaces and defe
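To make the threat concrete, here is a minimal toy sketch (not the paper's method, and with entirely made-up numbers) of why malicious clients matter in open federated settings: a single poisoned client update can hijack plain FedAvg, whereas a robust aggregator such as a coordinate-wise median largely ignores the outlier.

```python
import random
import statistics

# Toy illustration only: 9 honest clients send similar updates near 1.0,
# one malicious client sends a heavily scaled poisoning update.
random.seed(0)
DIM = 4
honest = [[random.gauss(1.0, 0.1) for _ in range(DIM)] for _ in range(9)]
malicious = [[-100.0] * DIM]  # hypothetical model-poisoning update
updates = honest + malicious

# Plain FedAvg: simple mean over client updates, dragged toward -100.
fedavg = [statistics.mean(col) for col in zip(*updates)]

# A classic robust alternative: coordinate-wise median, which stays
# near the honest value of 1.0 despite the outlier.
robust = [statistics.median(col) for col in zip(*updates)]

print("FedAvg :", [round(v, 2) for v in fedavg])
print("Median :", [round(v, 2) for v in robust])
```

This is a textbook Byzantine-robustness illustration, not an implementation of the attack surfaces or defenses the paper itself studies.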

Published 15 Apr 2026