Day 047/365

Bringing Mind and Mindfulness to Machines


What we’ve talked about over the last two Days leads up to this Day/Episode. We started with BELBIC, an emotion-based decision-maker, and how it is not only still relevant after 20 years but has also been transforming industries across hundreds of use cases.

Emotion-based decision-makers make better decisions, and make them faster: when you see a lion, you just run. They also need less computational power, because they sit in the older structures at the base of the brain rather than in the gray matter where slower, logic-based decision-making happens.
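To make this concrete, here is a simplified sketch of a BELBIC-style controller (not the original formulation; the variable names, gains, and numbers are illustrative). A fast, excitatory "amygdala" pathway learns toward the reward and never unlearns, while a slower "orbitofrontal" pathway inhibits it to correct overreactions:

```python
import numpy as np

def belbic_step(S, rew, V, W, alpha=0.1, beta=0.05):
    """One update of a simplified BELBIC-style controller.

    S   : stimulus vector (sensory inputs)
    rew : scalar emotional reward signal
    V   : amygdala weights (excitatory, learn fast, never unlearn)
    W   : orbitofrontal weights (inhibitory, can correct the amygdala)
    """
    A = V @ S          # amygdala output
    O = W @ S          # orbitofrontal (inhibitory) output
    E = A - O          # controller output: excitation minus inhibition

    # Amygdala learning is monotonic: it only grows toward the reward
    V = V + alpha * S * max(0.0, rew - A)
    # Orbitofrontal learning pushes the net output E toward the reward
    W = W + beta * S * (E - rew)
    return E, V, W

# Tiny usage example with illustrative values
S = np.array([0.5, 1.0])   # fixed stimulus
V = np.zeros(2)
W = np.zeros(2)
for _ in range(200):
    E, V, W = belbic_step(S, rew=1.0, V=V, W=W)
# E settles near the reward signal (1.0)
```

The asymmetry is the point: the amygdala pathway reacts immediately and cheaply (run from the lion), and the orbitofrontal pathway only has to learn the corrections.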

Faster, more efficient, and more stable. That’s what we need for LLMs.

The energy consumption of LLMs is enormous. So is the water consumption: by some estimates, a bottle of water for every few dozen prompts. The models are getting larger, and with the exponential adoption of LLMs globally, we are going to be in a very bad position soon. We need more efficiency.

For security and privacy reasons, we have to make LLMs efficient enough to run on the device, at the edge: efficient enough to run AI agents seamlessly without degrading the user experience on a laptop or draining its battery. Running at the edge will be faster and more efficient, and for that reason, brain-inspired architectures like BELBIC could be transformative.

Beyond efficiency, emotions could help mitigate the problem of hallucinations in LLMs. By incorporating emotional intelligence, an LLM could continuously and more effectively assess its own “confidence” in its responses. A feedback loop of reward and punishment can push LLMs toward more stable, that is, less hallucinatory, responses.
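One way to picture that feedback loop (everything here is hypothetical; the class, thresholds, and verification step are invented for illustration, not a real LLM API): a running confidence score is rewarded when an answer is later verified and punished when it is contradicted, and low confidence makes the system abstain rather than guess.

```python
class ConfidenceGate:
    """Hypothetical reward/punishment loop for response stability.

    Confidence rises when a response is verified (e.g., against
    retrieved sources) and falls when it is contradicted; below the
    threshold, the system abstains instead of hallucinating.
    """
    def __init__(self, reward=0.1, punishment=0.3, threshold=0.5):
        self.confidence = threshold  # start at the decision boundary
        self.reward = reward
        self.punishment = punishment
        self.threshold = threshold

    def should_answer(self):
        return self.confidence >= self.threshold

    def feedback(self, verified: bool):
        if verified:
            self.confidence = min(1.0, self.confidence + self.reward)
        else:
            # Punishment outweighs reward: a hallucination costs more
            # than a correct answer earns
            self.confidence = max(0.0, self.confidence - self.punishment)

gate = ConfidenceGate()
gate.feedback(False)          # one contradicted answer
print(gate.should_answer())   # → False: abstain until trust is rebuilt
```

The asymmetric reward/punishment mirrors the emotional learning idea above: punishment acts fast and strongly, reward restores confidence slowly.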

Faster, more efficient, more stable.

Another exciting impact of emotional AI is humanizing AI agents. This is a delicate topic, and we have to be careful not to overdo it. Imagine how the responses of LLMs may affect your emotions: the same answer can be delivered to you in very different ways. My friend Ashraf has done some very interesting experiments with Alexa and Google Nest, which we will discuss at the AI & Emotional Learning event on 8/29. There, we will explore how using AI agents in education will shape the emotional learning of the next generation [https://lnkd.in/eYAhy4Di].

In software design, security-by-design is an established principle, and privacy-by-design is gaining ground. In AI design, on top of those, we should consider human-first-by-design, emotional-by-design, and mindfulness-by-design. We should be thoughtful and meticulous about how our products will impact the generations to come: physically, psychologically, and emotionally.