Implicit Humanization in Everyday LLM Moral Judgments

📰 ArXiv cs.AI

arXiv:2604.22764v1 Announce Type: cross Abstract: The recent adoption of conversational information systems has expanded the scope of user queries to include complex tasks such as personal advice-seeking. However, we identify a specific type of sought advice, a request for a moral judgment (i.e., "who was wrong?") in a social conflict, as an implicitly humanizing query that carries potentially harmful anthropomorphic projections. In this study, we examine the reinforcement of these assumptions in the […]

Published 28 Apr 2026