US CIA’s Chatbot Shows AI Copilots Are the New Search Engines

KEY TAKEAWAYS

The U.S. Central Intelligence Agency (CIA) is developing its own ChatGPT-style generative AI chatbot, a project that highlights how generative AI copilots are on track to become core tools for data-driven organizations.

Earlier this week, the U.S. Central Intelligence Agency (CIA) confirmed it was in the process of developing its own ChatGPT-style generative AI chatbot.

The virtual assistant, developed by the CIA’s Open Source Enterprise unit, is designed to help intelligence analysts scan open-source intelligence and publicly available information to streamline their investigations. The idea is to equip human investigators to interpret large data sets at speed.

Randy Nixon, director of the CIA’s Open Source Enterprise, told Bloomberg: “We’ve gone from newspapers and radio to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going.

“We have to find needles in the needle field. The scale of how much we collect and what we collect has grown astronomically over the last 80-plus years. So much so, that this could be daunting and at times unusable for our consumers.”

AI Copilots Are Going Mainstream

The announcement comes just as China is expanding its AI-powered surveillance capabilities, with Reuters finding that dozens of Chinese firms have begun using AI to sort data collected on residents.

From this perspective, the CIA’s development of a ChatGPT-inspired virtual assistant can be read as an attempt to ensure that China gains no automated surveillance advantage.

However, more broadly, the CIA’s decision to experiment with generative AI demonstrates how the adoption of large language models (LLMs) is accelerating in both the private and public sectors.


Just as search engines became an everyday tool for professionals processing data, generative AI “copilots” are emerging as a core tool that gives human users the ability to summarize and interpret vast data sets and identify recurring patterns.

For instance, in the enterprise sector, OpenAI reported that 80% of Fortune 500 companies are experimenting with ChatGPT. While no two organizations are the same, one of the core challenges that generative AI addresses is the need to “find needles in the needle field,” a task that is becoming increasingly difficult as data volumes grow.

Making Data Make Sense

For years, enterprises have struggled to process the high volumes of data they collect. Some estimates suggest that unstructured data accounts for 80-90% of enterprise data, all of which needs to be understood by a human user or stakeholder at some level.

Generative AI helps human users make sense of isolated data signals by providing natural language descriptions of what the activity means.

Two providers illustrate this approach well: Google’s Sec-PaLM uses LLMs to tell the user whether a script is malicious, while Microsoft Security Copilot leverages the same technology to summarize threat signals collected from across an enterprise network.

In the CIA’s case, LLMs can process data drawn from disparate sources on the open web, helping investigators contextualize isolated pieces of information and spot patterns through a copilot experience. Users can ask the chatbot questions in natural language and receive coherent responses to aid their investigations.
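To make that copilot pattern concrete, here is a minimal sketch of the kind of workflow described above, assuming a general-purpose LLM API. The OpenAI Python SDK is used purely as a stand-in: the CIA’s actual model, tooling, and data sources are not public, and the documents, prompt, and model name below are illustrative assumptions.

```python
# Minimal sketch of an analyst-facing copilot query: open-source snippets plus
# a natural-language question go in, and a summary of possible patterns comes
# out. The SDK, model name, documents, and prompt are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical open-source snippets an analyst might have collected.
documents = [
    "Local news: port authority announces expanded night shipments.",
    "Shipping registry: three new vessels registered to the same holding firm.",
    "Social post: dockworkers report unusual crane activity after midnight.",
]

question = "Is there a pattern connecting these reports, and what should I verify next?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are an open-source intelligence copilot. Summarize the "
                "documents, note possible connections, and flag anything that "
                "needs human verification. Do not invent facts."
            ),
        },
        {
            "role": "user",
            "content": "\n".join(documents) + "\n\nQuestion: " + question,
        },
    ],
)

print(response.choices[0].message.content)
```

The design point is that the model only summarizes and connects what the analyst feeds it; as the article notes below, its output still has to be verified by a human before it informs any decision.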

Problems on the Road Ahead

Although generative AI offers a lot of potential to aid enterprises and public sector organizations in processing large data sets, it also opens the door to some serious ethical concerns.

One of the key issues is whether users’ personally identifiable information (PII) will be scraped from the public web.

At the same time, if this is a black-box AI model that isn’t disclosed to the public, what safeguards are in place to ensure that the CIA is using AI ethically and responsibly? Are there measures to prevent it from collecting or processing data it is not authorized to handle, an area where the EU has criticized the NSA in the past?

Likewise, the CIA can’t afford to overlook some of the significant flaws in modern language models, such as their tendency to hallucinate, or make up, facts and figures.

Acknowledging “The Crazy Drunk Friend”

Fortunately, the CIA appears to recognize these limitations as part of its roadmap. As the agency’s CTO, Nand Mulchandani, explained at the Billington Cybersecurity Summit, while generative AI is a useful tool for spotting patterns in large data sets, users may be “challenged” in “areas where it requires precision.”

While Mulchandani suggested that intelligence analysts treat chatbots like “the crazy drunk friend” and scrutinize their output, the tendency of these tools to spread misinformation still presents serious risks to surveillance organizations.

Allowing the spread of hallucinated facts could have serious reputational and legal repercussions for enterprises.

In a national security context, the margin for error is much thinner, and just one scenario where an intelligence analyst fails to fact-check before acting on false information could have a devastating real-world impact.


Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, he wrote for VentureBeat, Forbes Advisor, and other notable technology platforms, covering the latest trends and innovations in technology.