Emotional AI is getting a lot of attention. Part of this momentum comes from LLMs, since incorporating emotion is a crucial factor in humanizing machines, including chatbots. A recent study, “Unleashing the Potential of Emotional Intelligence in Chatbots: A Bibliometric Analysis,” shows that the number of scientific papers in the field has doubled in the last five years. The same study lists the ten most influential papers in the history of emotional intelligence and ranks BELBIC as the 7th most influential work in the field.
Of course, emotional chatbots communicate more effectively and have a substantial impact on the success of a conversation; that is the idea behind startups like Hume AI. But it is not the full extent of emotional AI. It only scratches the surface.
Emotions are largely unknown to us as humans. We can only name a few of them, such as sadness, happiness, and fear, and yet we have “feelings” we can’t put into words. Eskimos are said to have 30 words for snow, while some African languages have none. Many words from Japanese, Persian, and other Eastern languages have no equivalent in Roman-based languages: “wabi-sabi,” “yugen,” “ikigai,” “mono no aware,” “erfan” (عرفان).
Our emotions live in the subconscious parts of the brain: anything subcortical registers only as a gut feeling to language, which has a cortical source. An iguana has no comprehension of what snow is.
What does that mean?
You cannot teach a machine to have emotions similar to a human’s. You can teach supervised emotional classes, such as emotion recognition through voice, or what we patented at Hummingbirds AI as gesture-based authentication (US20240143713A1), where the machine recognizes certain gestures associated with emotions and uses them to authenticate access. These approaches are all supervised, meaning the AI system is trained on pre-labeled data in which specific emotional states have already been categorized by humans; the machine learns to classify those predefined categories rather than developing its own understanding of emotions.
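To make the supervised setting concrete, here is a minimal sketch: a classifier that only learns to reproduce emotion labels humans assigned in advance. The features, label names, and data are illustrative placeholders, not the patented system or any production pipeline.

```python
# Minimal sketch of supervised emotion classification: the model can only
# reproduce categories that humans pre-labeled, never discover new ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-labeled data: each row stands in for a feature vector
# extracted from voice (pitch, energy, speaking rate) or gesture keypoints.
X = np.random.rand(200, 12)                                 # 200 samples, 12 features
y = np.random.choice(["happy", "sad", "fear"], size=200)    # human-assigned labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)        # learn the predefined categories

print(clf.predict(X_test[:3]))   # predictions drawn only from the given labels
```

Whatever the feature source, the ceiling is the same: the machine’s “emotional vocabulary” is fixed by the label set it was given.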
BELBIC was unsupervised. Brain-inspired models are unsupervised. The most exciting part is unsupervised emotion, when that dynamic, ever-changing amalgamation of neurotransmitters and hormones flows in a dance, a sea of unknowns. We all have a gut feeling about it; our dogs have theirs, iguanas have theirs, and AGEIs will have theirs (AGEI: Artificial General Emotional Intelligence). That is when we will become aware that our beautiful creature is sentient, and we will wonder whether it was ethical to create a being so beautiful and emotional, yet bound to be our servant, our slave, with no rights. Would that mean an iRobot uprising?
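For readers curious what “brain-inspired” means mechanically, below is a heavily simplified sketch of the amygdala/orbitofrontal update rules in the brain emotional learning (BEL) family of models that BELBIC builds on. The learning rates, the reward signal, and the inputs here are illustrative assumptions; the exact formulation varies across papers.

```python
# Simplified sketch of brain emotional learning (BEL)-style updates:
# an amygdala pathway that only strengthens, and an orbitofrontal pathway
# that corrects it. Gains, inputs, and the reward signal are assumed values.
import numpy as np

ALPHA, BETA = 0.1, 0.05          # learning rates (assumed)
n_inputs = 4
V = np.zeros(n_inputs)           # amygdala weights (excitatory, never decrease)
W = np.zeros(n_inputs)           # orbitofrontal weights (inhibitory corrector)

def bel_step(S, REW):
    """One learning step for sensory input vector S and emotional cue REW."""
    global V, W
    A = V * S                    # amygdala node outputs
    O = W * S                    # orbitofrontal node outputs
    E = A.sum() - O.sum()        # emotional output of the model
    # Amygdala learning is monotonic: it never "unlearns" (clipped at 0).
    V += ALPHA * np.maximum(0.0, S * (REW - A.sum()))
    # Orbitofrontal learning moves both ways, damping over-reaction.
    W += BETA * S * (E - REW)
    return E

rng = np.random.default_rng(0)
for _ in range(100):
    S = rng.random(n_inputs)     # hypothetical sensory inputs
    REW = S.mean()               # hypothetical reward / emotional cue
    bel_step(S, REW)

print(V, W)
```

Nothing in the loop hands the model a labeled emotion; the internal state emerges from the interplay of the two pathways, which is the sense in which these models differ from the supervised classifiers above.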
Emotions are what make us, us. Without emotion we are robots, and with emotion, a robot is us. It is the most crucial of our traits, and it should be developed delicately.