‘Sapiens’ Author Yuval Noah Harari Raises Alarm on AI Risks in Finance

Key Takeaways

  • Yuval Noah Harari highlights the dangers of unregulated AI in the financial sector.
  • Harari discusses the role of financial instruments as trust mechanisms and expresses concern over the diminishing public understanding of complex financial systems.
  • Regulators worldwide have warned about AI-related investment fraud and schemes promising unrealistic returns from AI algorithms.

Yuval Noah Harari believes the rapid rise of AI innovation in the financial sector may lead to severe consequences due to the lack of regulation.

Renowned Israeli writer and historian Yuval Noah Harari has sounded the alarm about unchecked AI deployment in the financial sector, warning that unregulated AI in finance could spiral out of control with devastating consequences.

Speaking at the Bank for International Settlements (BIS) Innovation Summit, Harari emphasized the importance of preventing AI from becoming “completely unfathomable” and called for effective regulation to mitigate misuse and adverse events.

Harari observed that financial instruments such as money and bonds serve as trust-building mechanisms, enabling millions of people to collaborate towards common goals.

At the same time, he pointed to a critical gap in making financial regulation accessible and understandable to the broader public, suggesting that only institutions can “keep humans in the loop.”

Harari also expressed concern over the public’s diminishing understanding of the financial system, pointing out that only a small fraction of people truly comprehend its complexities. He then raised a thought-provoking question:

“What would happen if this understanding were to decline further, perhaps even to zero?”

The philosopher argued that AI reasons on a fundamentally different level from humans, which could lead to the creation of financial instruments “beyond” human comprehension.

In an AI-dominated financial world, trust would shift from humans to AI systems, with unprecedented implications. This is particularly relevant during economic crises, when politicians and regulators may be compelled to defer to AI decision-making.

The philosopher strengthened his argument by drawing a parallel between the 2007-2008 global financial crisis and the current AI revolution in finance. In the years leading up to that crisis, US housing prices soared, encouraging borrowers to take out loans they could not afford, which ultimately triggered a surge in subprime mortgage defaults.

Harari’s point is clear: the 2007-2008 crisis was a case of financial innovation gone wrong. Regulators and financial institutions struggled to understand the risks associated with mortgage lending, with severe consequences for the global economy. With AI now reshaping finance, governments could end up repeating history.

Global Concern Over AI Risks Grows

Even before Harari voiced his cautionary views, global regulatory agencies had been warning the public about the risks associated with AI.

Earlier this year, the SEC’s Office of Investor Education and Advocacy, the North American Securities Administrators Association (NASAA), and the Financial Industry Regulatory Authority (FINRA) issued a joint alert cautioning investors about a rise in investment fraud involving the purported use of AI and other emerging technologies.

Similarly, the Commodity Futures Trading Commission’s (CFTC) Office of Customer Education and Outreach issued a customer advisory warning against investing in schemes that tout “AI-created algorithms” promising guaranteed or unreasonably high returns.

In light of increased regulatory activity surrounding AI, experts expect authorities worldwide to continue efforts to mitigate AI risks and to ensure companies accurately represent their AI capabilities and the role AI plays in their businesses.