Automatic Replication of LLM Mistakes in Medical Conversations

📰 ArXiv cs.AI

arXiv:2512.20983v2 Announce Type: replace-cross Abstract: Large language models (LLMs) are increasingly evaluated in clinical settings using multi-dimensional rubrics that quantify reasoning quality, safety, and patient-centeredness. Yet replicating specific mistakes in other LLMs is not straightforward and often requires manual effort. We introduce MedMistake, an automatic pipeline that extracts mistakes LLMs make in patient-doctor conversations and converts them into a benchmark of sin…

Published 8 Apr 2026