


The U.S. intelligence community says it needs new tradecraft and training to prepare analysts and operatives to combat the dangers of generative artificial intelligence, including the production of false information that some warn could trigger bad decisions or a global disaster.
America’s spy agencies are getting an AI update as the technology has exploded in both the public and private sectors globally. The Office of the Director of National Intelligence and the CIA have published a new strategy detailing how their rank and file will apply AI to “open-source intelligence,” referred to as OSINT — insights gleaned from the massive quantities of data that are commercially and publicly available.
The Intelligence Community Open-Source Intelligence Strategy for 2024-2026, published this month, said new generative AI (GAI) tools are presenting both opportunities and challenges for those collecting and analyzing open-source information to benefit America’s spy agencies.
“OSINT tradecraft and training must be updated and refined to mitigate the potential risks of GAI, including inaccuracies and hallucinations,” the strategy said. “The OSINT community should be at the forefront of the [intelligence community] in testing the use of GAI and developing and evolving the tradecraft for its use. This tradecraft will set the standards for the human-machine teaming that will be the foundation of OSINT in the future.”
Open-source intelligence can come from an array of devices connected to the internet, social media platforms, and sensors and software tools, among other things.
Generative AI largely refers to models that create text, images and videos in response to queries from users through tools such as ChatGPT. The tools draw on vast troves of data and sometimes “hallucinate” responses, inventing answers that contain demonstrably false information.
The spy agencies and their analysts need trustworthy tools to help them separate fact from fiction and sort imminent threats from imaginary dangers.
America’s intelligence agencies are reviewing the applicability of such powerful AI tools. The CIA said last year it was studying the application of large language models — the powerful algorithms that undergird generative AI tools.
Lakshmi Raman, the CIA’s AI chief, said at a summit in July 2023 that the agency was in an exploration and experimentation phase in reviewing the tools.
By the fall, the National Security Agency was pressing forward with plans for a new AI Security Center to serve as a hub for testing and scrutinizing AI tools. NSA officials said in September that the AI Security Center would work closely with private industry, national labs, academics and others as it toiled to understand new threats posed by cutting-edge AI.
A goal of the AI activity inside America’s intelligence agencies is to prepare for a world where every spy uses AI — and can be used by it. The Office of the Director of National Intelligence’s Rachel Grunspan said last year that America’s intelligence community wanted to be “AI-first,” with everyone from senior leaders on down having AI tools at their fingertips.
The spy chiefs’ interest in adopting new AI tools across the workforce is evident in the intelligence community’s new open-source intelligence strategy.
The strategy said generative AI figures to become a powerful tool for producing timely insights, such as by “aiding the identification of common themes or patterns in underlying data and quickly summarizing large amounts of text.”
Precisely what AI capabilities are deployed across the intelligence community is not fully known, but some details have emerged in recent months.
The AI company Primer told The Washington Times in February it was working with the NSA. The company said its platform, among other tasks, helps analysts identify cyber threat indicators across social media and uncover adversaries’ influence operations.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.