In 2023, emotion AI (technology that can sense and interact with human emotions) will become one of the major applications of machine learning. For example, Hume AI, founded by former Google researcher Alan Cowen, is developing tools to measure emotions from verbal, facial, and vocal expressions. The Swedish company Smart Eye recently acquired Affectiva, an MIT Media Lab spin-off that developed SoundNet, a neural network that classifies emotions such as anger from audio samples in less than 1.2 seconds. Even the video platform Zoom has introduced Zoom IQ, a feature that gives users real-time analysis of sentiment and engagement during virtual meetings.
In 2023, tech companies will release advanced chatbots that closely mimic human emotions in order to build more empathetic relationships with users in banking, education, and healthcare. Microsoft's chatbot Xiaoice has already been successful in China, with the average user reportedly conversing with "her" more than 60 times a month. It even passed the Turing test: users failed to recognize it as a bot for ten minutes of conversation. According to an analysis by the consultancy Juniper Research, chatbot interactions in healthcare will grow nearly 167 percent from 2018 to reach 2.8 billion per year in 2023. This will free up time for medical staff and could save healthcare systems around the world about $3.7 billion.
By 2023, emotional AI will become commonplace in schools as well. In Hong Kong, some secondary schools are already using an artificial intelligence program developed by Find Solution AI. The program measures minute movements of a student's facial muscles to identify a range of negative and positive emotions. Teachers use the system to track students' emotional changes, motivation, and focus so that they can intervene early if a student loses interest.
The problem is that much of emotional AI is based on flawed science. Even when trained on large and diverse datasets, emotional AI algorithms reduce facial expressions and tone of voice to emotions without considering a person's social and cultural background or the situation at hand. For example, an algorithm can recognize and report that a person is crying, but it cannot reliably infer the reason and meaning behind the tears. Similarly, a frown doesn't necessarily mean a person is angry, but that is the conclusion the algorithm is likely to reach. We all adapt our emotional expressions to social and cultural norms, so our expressions don't always truly reflect our inner states. People often do "emotion work" to hide their true feelings, and the way they express emotions may be a learned response rather than a spontaneous one. For example, women are more likely than men to modify their emotions, especially negatively valued emotions such as anger, because they are expected to do so.
As such, AI technologies that infer emotional states can exacerbate gender and racial inequalities in society. For example, a 2019 UNESCO report showed the detrimental effects of gendered AI technologies, with "feminine" voice assistants designed according to stereotypes of emotional passivity and submissiveness.
Facial recognition AI could also perpetuate racial inequality. An analysis of 400 NBA games using two popular emotion recognition programs, Face++ and Microsoft's Face API, found that black players were, on average, assigned more negative emotions, even when they were smiling. These results reaffirm other research showing that black men are stereotyped as aggressive and threatening, and so feel pressure to project more positive emotions in the workplace.
Emotional AI technologies will become even more pervasive in 2023, but if the problems with them are left unresolved, they will reinforce systemic racial and gender biases, replicate and strengthen global inequalities, and put already marginalized people at an even greater disadvantage.