Day 049/365

Can we trust AI to build trust among us?


Ari Lehavi and I had a 3.5-hour conversation about enterprise multi-agent applications, the agency of the agents, and how emotions can transform this future.

Think about it: Sara uses ChatGPT, I use Claude. Why? Ankit uses Gemini, and Ari, like a growing number of people, cancelled his ChatGPT subscription for Claude.

Among the most important reasons we will pick our Personal Agents from this handful of companies are: a. Trust b. Emotional resonance, which is fundamentally a product of trust. Which one resonates with you emotionally depends a lot on whether you trust the system.

Everything is about trust: trust in the AI and confidence in its creator. We discussed how culture shapes our trust signature. Middle Easterners have a different trust appetite than Europeans, Africans than Latin Americans, and Asians are different again.

We agreed that at some point, our intelligence will be indistinguishable from the machines’, before theirs surpasses ours. They will be good CFOs, good accountants, good HR, good marketers, and we will have hyper-efficient organizations. We disagreed on whether humans will still be relevant in such organizations, with trust and the need for a human in the loop among the reasons.

We’re very close to a stage where we trust our agents to communicate directly with each other, at first under our presence, supervision, and control. At some point, human supervision will dissipate along with human superiority. AI supervises AI better than humans do, and it can potentially govern itself better than we can govern it. When will we trust AI enough to give it full autonomy?

How much agency and control we grant AI agents is going to be one of the most critical dilemmas of our generation.

Do we want to let them run wild and correct them when necessary, or do we want to release them already regulated? Whom do we trust to build this trust? Will AI be able to restore trust among us, or gift us the ability to trust humanity again?