Before allowing his more than 13,000 Pentagon employees to look up a piece of information about an American citizen, Defense Counterintelligence and Security Agency (DCSA) Director David Cattler has them ask themselves a question: does my mom know that the government can do this?
“The mom test,” as Cattler calls it, is a common-sense check on how DCSA, a sprawling agency that grants and denies U.S. security clearances to millions of workers, does its job. It’s also how Cattler thinks about his agency’s use of AI.
DCSA is the agency in charge of investigating and approving security clearances for 95% of federal government employees, which requires it to complete millions of investigations each year. That work gives the agency access to a huge trove of private information, and in 2024 DCSA turned to AI tools to organize and interpret that data.
Those tools don’t include ChatGPT, Bard, Claude, or other flashy generative AI models. Instead, the agency is mining and organizing data in ways that Silicon Valley tech companies have done for years, using systems that show their work more clearly than most large language models do. For instance, Cattler said the agency’s most promising use case for these tools is prioritizing existing threats.
If not used carefully, these tools could compromise data security and introduce bias into government systems. Cattler was nonetheless optimistic that some of AI’s less sexy functions could be game-changing for the agency, as long as they aren’t “black boxes.”
“We have to understand why it is credible and how it does what it does,” Cattler told Forbes. “We have to prove, when we use these tools for the purposes I’m describing, that they do what they say they do and they do it objectively and they do it in a highly compliant and consistent way.”
Many people may not even think of the tools Cattler is describing as AI. He was excited about the idea of building a heatmap of facilities that DCSA secures, with risks plotted across it in real time, updating when other government agencies receive a new piece of information about a potential threat. Such a tool, he said, could help DCSA “determine where to put the [metaphorical] firetrucks.” It wouldn’t be uncovering new information; it would just be presenting existing information in a more useful way.
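To make that concrete, here is a minimal sketch of the kind of prioritization tool Cattler describes. It is not DCSA’s actual system; the facility names, severity scores, and report format are invented for illustration. It simply rolls up threat reports the government already holds and ranks facilities by combined risk:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ThreatReport:
    """One piece of information received from another agency (hypothetical schema)."""
    facility: str    # which secured facility the report concerns
    severity: float  # analyst-assigned severity, 0.0 (low) to 1.0 (high)
    source: str      # originating agency, kept so scores stay auditable


def prioritize(reports: list[ThreatReport]) -> list[tuple[str, float]]:
    """Aggregate existing reports into a per-facility risk score and rank them.

    Nothing new is "discovered" here; the function only re-presents
    information the agency already holds, in priority order.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for report in reports:
        scores[report.facility] += report.severity
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    incoming = [
        ThreatReport("Facility A", 0.8, "Agency X"),
        ThreatReport("Facility B", 0.3, "Agency Y"),
        ThreatReport("Facility A", 0.5, "Agency Y"),
    ]
    for facility, score in prioritize(incoming):
        print(f"{facility}: combined risk {score:.1f}")
```

In a tool like this, every score can be traced back to the specific reports that produced it, the kind of transparency Cattler says keeps the system from becoming a “black box.”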
Matthew Scherer, a senior policy counsel at the Center for Democracy and Technology, told Forbes that while AI can be useful for collating and organizing information that has already been gathered and validated, the danger comes at the next step: making critical decisions, like pointing out red flags during a background check or collecting data from social media profiles. For instance, AI systems still struggle to differentiate between multiple people with the same name, leading to misidentification.
“I would have concerns if the AI system was making recommendations of some sort or putting its thumb on the scale for particular applicants,” Scherer said. “Then you’re moving into the realm of automated decision systems.”
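The misidentification problem Scherer describes is easy to see in miniature. The records below are invented, but they show how a system that links people by name alone can attach someone else’s history to an applicant, while even one extra field, here a date of birth, avoids this particular collision (real entity resolution is much harder):

```python
# Hypothetical records; the names, dates, and details are invented.
court_records = [
    {"name": "John A. Smith", "dob": "1984-03-12", "note": "felony conviction"},
]
applicant = {"name": "John A. Smith", "dob": "1991-07-02"}  # a different person


def naive_match(person, records):
    """Link records by name alone; wrongly attaches the conviction to the applicant."""
    return [r for r in records if r["name"] == person["name"]]


def stricter_match(person, records):
    """Also require a matching date of birth, which avoids this particular collision."""
    return [r for r in records
            if r["name"] == person["name"] and r["dob"] == person["dob"]]


print(naive_match(applicant, court_records))     # false positive: conviction attached
print(stricter_match(applicant, court_records))  # []: correctly finds no match
```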
Cattler said the department has stayed away from using AI to identify new risks. Even in prioritization, though, issues of privacy and bias can arise. In contracting with AI companies (Cattler declined to name any partners), DCSA has to consider what private data it feeds into proprietary algorithms, and what those algorithms can do with that data once they have it. Companies offering AI products to the general public have inadvertently leaked private data that customers entrusted to them — a breach of trust that would be catastrophic if it happened with data held by the Pentagon itself.
AI could also introduce bias into Department of Defense systems. Algorithms reflect the blind spots of the people who make them and the data that they’re trained on, and DCSA relies on oversight from the White House, Congress and other administrative bodies to protect against prejudices in its systems. A 2022 RAND Corporation report explicitly warned that AI could introduce biases into the security clearance vetting system “potentially as the result of programmer biases or historical racial differences.”
Cattler acknowledged that the societal values informing algorithms, including those at the Pentagon, change over time. Today, he said, the department is far less tolerant of extremist views than it used to be, but somewhat more tolerant of people who were once addicted to alcohol or drugs, and are now in recovery. “It was literally illegal in many places in the United States to be gay, until not that long ago,” he said. “That was a bias the system may have needed to work out of itself.”