How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use
A study by the MIT Media Lab finds that heavy chatbot use is associated with loneliness, emotional dependence, and other negative social outcomes.
Overall, higher daily usage, across all modalities and conversation types, correlated with higher loneliness, dependence, and problematic use, and lower socialization. Exploratory analyses revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively.
Artificial personality has always been the third rail of interaction design, from potential Clippy-style annoyance to the damaging attachments of AI companions. Thing is, people tend to assign personality to just about anything, and once something starts talking, it becomes nearly unavoidable to infer personality and even emotion. The more human something behaves, the more human our responses to it:
These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviors (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots' ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being.
Go carefully. Don’t assume that your AI-powered interface must be a chat interface. There are other ways for interfaces to have personality and presence without making them pretend to be human. (See our Sentient Scenes demo that changes style, mood, and behavior on demand.)
And if your interface does talk, be cautious and intentional about the emotional effect that choice may have on people, especially the most vulnerable.
How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study | MIT Media Lab