Social Meaning in Large Language Models: Structure, Magnitude, and Pragmatic Prompting
📰 ArXiv cs.AI
arXiv:2604.02512v1 (cross-listed). Abstract: Large language models (LLMs) increasingly exhibit human-like patterns of pragmatic and social reasoning. This paper addresses two related questions: do LLMs approximate human social meaning not only qualitatively but also quantitatively, and can prompting strategies informed by pragmatic theory improve this approximation? To address the first, we introduce two calibration-focused metrics distinguishing structural fidelity from magnitude calibration.