AI-Induced Human Responsibility (AIHR) in AI-Human teams
📰 ArXiv cs.AI
arXiv:2604.08866v1 Announce Type: cross

Abstract: As organizations increasingly deploy AI as a teammate rather than a standalone tool, morally consequential mistakes often arise from joint human-AI workflows in which causality is ambiguous. We ask how people allocate responsibility in these hybrid-agent settings. Across four experiments (N = 1,801) in an AI-assisted lending context (e.g., discriminatory rejection, irresponsible lending, and low-harm filing errors), participants consistently attr…