Exotic threats have U.S. security officials bracing for AI chaos

By Ryan Lovelace

LAS VEGAS — “Invisibility cloaks” and “digital twins”: National security officials gathered here say such technologies, emerging from the rapidly advancing field of artificial intelligence, could soon give America’s enemies new weapons beyond people’s wildest imagination.

The technology for the invisibility cloak, in fact, already exists, and the Pentagon has a program to counter it, according to Kathleen Fisher of the Defense Advanced Research Projects Agency (DARPA).

Ms. Fisher displayed the “invisibility cloak” on a presentation slide at the Black Hat USA 2024 hacker conference on Tuesday, describing how the Pentagon is working to disrupt state-of-the-art adversarial AI. The gathering, considered the premier computer security event of its kind, attracts leading hackers, tech-sector companies and government agencies from around the world.

The cloak is simply a colorful sweater with a hidden “adversarial” pattern that confuses the most sophisticated identifying programs. A person wearing it remains plainly visible to the naked eye, surrounded by peers in an auditorium, but because of the pattern, cutting-edge AI surveillance systems trained to recognize objects cannot detect him.

“We live in interesting AI times,” Ms. Fisher said. “It’s kind of the best of times, the worst of times — amazing new technology that we need to figure out how to leverage to make the world a better place, but also massive new threats that we need to figure out how to counter.”

GARD duty

DARPA’s program defending against the dark arts of AI is called GARD, or “Guaranteeing AI Robustness Against Deception.”

The program investigated adversarial AI to learn how to stop AI-generated crime and chaos. Ms. Fisher, who leads DARPA’s Information Innovation Office, said the GARD program discovered that for roughly $60, an attacker could seed data on the internet that would cause large language models to do what hackers wanted instead of what their developers originally intended.

Large language models already undergird many generative AI tools that are widely available to the public, such as OpenAI’s popular ChatGPT.

The invisibility cloak was crafted by a research team from the University of Maryland, separately from DARPA, with the assistance of Facebook AI. The researchers published a paper in 2020 explaining they worked to “generate an adversarial pattern that, when placed over an object either digitally or physically, makes that object invisible to detectors.”
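The mechanics behind such a pattern can be sketched in a few lines of code. What follows is a minimal illustration of the general adversarial-patch technique, assuming PyTorch and torchvision; the detector choice, the placeholder image and the patch placement are stand-ins for demonstration, not the Maryland team’s actual setup.

```python
# Minimal sketch of the adversarial-patch idea: optimize a pattern so a
# pretrained object detector loses confidence in everything in the scene.
# Illustrative only; not the University of Maryland researchers' code.
import torch
import torchvision

# A pretrained COCO detector stands in for an "AI surveillance system."
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is being optimized

image = torch.rand(3, 480, 640)  # placeholder scene; a real photo would be used
patch = torch.rand(3, 120, 120, requires_grad=True)  # the printable "sweater" pattern
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    patched = image.clone()
    # Paste the pattern over the region a person's torso would occupy.
    patched[:, 180:300, 260:380] = patch.clamp(0, 1)
    detections = model([patched])[0]
    scores = detections["scores"]  # detector confidence for each object it sees
    if scores.numel() == 0:
        break  # the detector no longer reports anything
    loss = scores.sum()  # push every confidence toward zero
    optimizer.zero_grad()
    loss.backward()  # gradient shows how to change the pattern to hide objects
    optimizer.step()
```

In practice, researchers add constraints so the pattern survives printing, lighting changes and camera angles, which is what makes a wearable sweater far harder to produce than a purely digital attack.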

Asked if DARPA was interested in developing its own AI tricks alongside stopping such tools, a spokesperson told The Washington Times, “You have to understand how tools can be broken in order to develop defenses.”

Where the U.S. government sees peril, hackers and cybersecurity professionals see promise and profit.

Nvidia engineering manager Bartley Richardson said the world is “not that far away” from creating a digital twin for each and every one of the hundreds of hackers assembled at Black Hat’s AI summit on Tuesday.

A digital twin is a virtual replica of a person, a real-world asset or a system. Nvidia, an AI powerhouse whose market capitalization exceeded $3 trillion in June, is hard at work building digital twins. In 2021, the company began working with BMW on the technology, including exploring how to make a digital twin of an automotive factory.
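In software terms, the pattern is simple to sketch: a virtual object that mirrors a physical asset’s state from telemetry and can be queried or simulated without touching the real thing. The example below is a hypothetical toy in Python and bears no relation to Nvidia’s actual platform; the asset, fields and thresholds are all invented.

```python
# Hedged sketch of the digital-twin pattern: a virtual stand-in that is
# updated from sensor telemetry and used for risk-free what-if queries.
from dataclasses import dataclass, field

@dataclass
class RobotArmTwin:
    asset_id: str
    joint_angles: list = field(default_factory=lambda: [0.0] * 6)
    temperature_c: float = 20.0

    def ingest(self, telemetry: dict) -> None:
        """Update the twin from a sensor reading on the physical arm."""
        self.joint_angles = telemetry.get("joints", self.joint_angles)
        self.temperature_c = telemetry.get("temp_c", self.temperature_c)

    def predict_overheat(self, load_factor: float) -> bool:
        """Run a what-if on the twin without touching the real machine."""
        return self.temperature_c + 15.0 * load_factor > 80.0

twin = RobotArmTwin("arm-07")
twin.ingest({"joints": [0.1, 0.4, 1.2, 0.0, 0.3, 0.9], "temp_c": 55.0})
print(twin.predict_overheat(load_factor=2.0))  # True: flag before the real arm fails
```

The value lies in the decoupling: analysts can run simulations against the twin at full speed and zero physical risk. The same property is what makes a compromised twin such a tempting target.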

Such technology would have wide application, from gaming to cybersecurity. World Wide Technology co-founder Jim Kavanaugh told the hacker conference that digital twin technology would also prove useful for health care, all the way down to recording and preserving an individual’s DNA.

Cybersecurity company Balbix unveiled “BIX” at Black Hat, demonstrating a new AI assistant for cyber risk and exposure management.

Balbix founder Gaurav Banga demonstrated how the AI assistant tailors answers for professionals working to recover from a cyberattack. He showed the assistant giving an IT worker specific actions to patch problems while giving a company executive details on the financial impact to the firm’s bottom line.
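Balbix has not published BIX’s internals, but the role-tailoring pattern itself can be illustrated with a hedged sketch; every name and figure below is an invented placeholder, not the product’s actual data or API.

```python
# Hedged sketch of role-tailored security answers, loosely modeled on the
# BIX demonstration; all identifiers and figures are hypothetical.
FINDING = {
    "issue": "unpatched OpenSSL build",   # hypothetical finding
    "affected_hosts": 142,                # hypothetical count
    "fix": "apply the vendor patch and restart affected services",
    "estimated_loss_usd": 1_250_000,      # hypothetical exposure figure
}

def answer_for(role: str, finding: dict) -> str:
    """Phrase the same underlying finding for different audiences."""
    if role == "it_operator":
        return (f"{finding['issue']}: {finding['affected_hosts']} hosts affected. "
                f"Action: {finding['fix']}.")
    if role == "executive":
        return (f"The {finding['issue']} puts an estimated "
                f"${finding['estimated_loss_usd']:,} at risk; remediation is underway.")
    return "No tailored view exists for this role."

print(answer_for("it_operator", FINDING))
print(answer_for("executive", FINDING))
```

The design point is that one underlying finding feeds every audience; only the framing changes.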

But for the intelligence community, the fear is that digital twins and AI assistants are ripe for cyberattacks, with hackers manipulating unwitting victims by compromising real people’s digital clones and AI assistants.

Kathryn Knerler, the U.S. intelligence community’s chief information security officer, warned that such tech will make so-called phishing attacks more difficult to stop. Phishing refers to scammers’ efforts to dupe people into revealing sensitive information, frequently through the use of emails containing malicious links.

“When I heard the idea of digital twins this morning and the AI assistant just recently here, of course I thought about, ‘Well what a great thing to target to be able to figure out how do I use all this great information to create the perfect phishing attack?’” Ms. Knerler said onstage. “And not only that, but to do it at scale. So knowing the person you’re going after, knowing their language, and then knowing their habits — all those put together into a great attack.”

National security officials repeatedly sought to assure cybersecurity professionals that they are not naive about the threats posed by cutting-edge tech employed by hackers and legitimate businesses alike. For example, Ms. Fisher said her office was focused on deepfakes before the term even existed.

In 2024, she said, a DARPA program built to detect deepfakes is focused on commercializing the technology at the direction of Congress, so people can better understand whether they are viewing manipulated content.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.