Emotional Disruption
AI That Can Recognize and Respond to Emotions Creates New Possibilities for Harm
As technology becomes ever more deeply intertwined with human life, "affective computing" (the use of algorithms that can read human emotions or predict our emotional responses) is likely to become increasingly prevalent. In time, the arrival of artificial intelligence (AI) "woebots" and similar tools could transform the delivery of emotional and psychological care, much as heart monitors and step counters have changed the management of physical health. But the adverse consequences of emotionally "intelligent" code, whether accidental or intentional, could be profound.
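To make "reading emotions" concrete, the sketch below shows the simplest possible form of text-based affect detection: matching a message's words against an emotion lexicon and reporting the share of each emotion found. Everything here is hypothetical and chosen purely for illustration, including the toy lexicon, the score_emotions function, and the example post; deployed affective-computing systems typically rely on trained models over text, voice, and facial signals rather than hand-written word lists.

```python
import re
from collections import Counter

# Toy lexicon mapping words to coarse emotion labels (hypothetical entries,
# for illustration only; real systems learn such associations from data).
EMOTION_LEXICON = {
    "furious": "anger", "outraged": "anger", "betrayed": "anger",
    "afraid": "fear", "worried": "fear", "threatened": "fear",
    "hopeless": "sadness", "alone": "sadness", "grieving": "sadness",
    "delighted": "joy", "grateful": "joy", "hopeful": "joy",
}

def score_emotions(text: str) -> dict[str, float]:
    """Return each coarse emotion's share of the matched affect words."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
    total = sum(hits.values())
    return {emotion: n / total for emotion, n in hits.items()} if total else {}

if __name__ == "__main__":
    post = "I feel betrayed and alone, and I'm worried no one is listening."
    print(score_emotions(post))
    # e.g. {'anger': 0.33..., 'sadness': 0.33..., 'fear': 0.33...}
```

Even this toy version hints at the dual use discussed below: the same score that could route a distressed user toward support could equally flag that user as an emotionally receptive target.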
Consider the disruptions the digital revolution has already triggered: what would be the affective-computing equivalent of echo chambers or fake news? Of electoral interference or the micro-targeting of advertisements? New possibilities for radicalization would also open up, with machine learning used to identify emotionally receptive individuals and the specific triggers that might push them toward violence. Oppressive governments could deploy affective computing to exert control or to whip up angry divisions.
To help mitigate these risks, research into the potential direct and indirect impacts of these technologies could be encouraged. Mandatory standards could be introduced, placing ethical limits on research and development. Developers could be required to give individuals the right to opt out. And greater education about the potential risks, both for people working in this field and for the general population, would also help.