Google revised its artificial intelligence principles, notably cutting a section that barred the company from using AI for weapons development.
The change comes as the tech giant is picking sides this week in the global race for control of AI development.
Google is now interested in AI that benefits national security, and it wants democratic governments to lead AI development, according to Google Senior Vice President James Manyika and Google DeepMind CEO Demis Hassabis.
“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” the duo wrote on the company’s blog. “And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”
Supporting national security was not always top of mind for the tech behemoth, which started as a search engine attracting eyeballs by indexing the internet.
An earlier version of Google’s guiding principles included a section saying the company would not pursue AI applications that could be used for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” The pledge’s deletion from Google’s AI principles was first spotted by Bloomberg.
Google’s workforce has long expressed reservations about the company’s involvement with military programs.
A software engineer fired by Google after raising alarms that the company’s AI might be sentient told The Washington Times in 2023 that he feared AI makers would harness the powerful new technology for warfare. Blake Lemoine, ousted after experimenting with Google’s AI in 2022, told The Times he worried about the use of AI in assassinations.
The concerns about AI used by militaries and for attacks predate Mr. Lemoine’s ouster from Google.
After some employees complained about the company’s work to provide AI to the Defense Department for a program called Project Maven, Google reportedly quit the project when its contract expired in 2019.
Last year, the U.S. government showed off what Project Maven accomplished.
U.S. Central Command chief technology officer Schuyler Moore told Bloomberg that Project Maven located enemy targets in Iraq, Syria and Yemen, which were selected for dozens of U.S. airstrikes.
Not every Google employee has reservations about working with the U.S. military or other defense and intelligence agencies.
U.S. Central Command benefited from the expertise of Andrew Moore, a former director of Google Cloud AI, whom the command hired in 2023 as its first-ever “adviser on AI, robotics, cloud computing, and data analytics.”
Google’s newfound openness to AI national security uses comes amid broader changes to the AI industry and the potential for new work between AI makers and the U.S. government.
Google is not the first AI maker to change its policy — market leader OpenAI previously rewrote its rules to allow it to work with the Pentagon on defense applications.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.