AdaptFuse: Training-Free Sequential Preference Learning via Externalized Bayesian Inference
arXiv cs.AI
arXiv:2604.03925v1 Announce Type: cross

Abstract: Large language models struggle to accumulate evidence across multiple rounds of user interaction, failing to update their beliefs in a manner consistent with Bayesian inference. Existing solutions require fine-tuning on sensitive user interaction data, limiting their applicability in privacy-conscious settings. We propose AdaptFuse, a training-free framework that externalizes probabilistic computation entirely from the LLM: a symbolic module main
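The core idea of externalizing Bayesian inference from the LLM can be illustrated with a minimal sketch: a symbolic module holds an explicit posterior over preference hypotheses and applies Bayes' rule each round, while the LLM would only supply per-round likelihoods. All names, hypotheses, and likelihood values below are illustrative assumptions, not details from the AdaptFuse paper.

```python
def bayes_update(prior, likelihoods):
    """One round of Bayes' rule over a discrete hypothesis space:
    posterior(h) ∝ prior(h) * p(observation | h)."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypothetical preference hypotheses: the user prefers
# "concise" vs "detailed" answers. Uniform prior.
belief = {"concise": 0.5, "detailed": 0.5}

# Sequential evidence across interaction rounds. In a system like the
# one described, these likelihoods might be elicited from an LLM;
# here they are fixed constants for illustration.
rounds = [
    {"concise": 0.8, "detailed": 0.3},  # e.g., user trimmed the reply
    {"concise": 0.7, "detailed": 0.4},  # e.g., user asked for a summary
]
for lik in rounds:
    belief = bayes_update(belief, lik)

print(belief)  # posterior mass shifts toward "concise"
```

Because the belief state lives entirely outside the model, no fine-tuning on user interaction data is needed: the update is a deterministic symbolic computation, which matches the privacy motivation stated in the abstract.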