


Governments worldwide have begun taking action in response to the rise of artificial intelligence.
As companies such as Google and OpenAI push AI into the mainstream, countries must figure out how to manage the technology and the theoretical risks it presents to citizens. Whether it is generative AI being used to spread misinformation in elections, AI replacing workers, or the existential threat of technology outpacing human intelligence, governments have recognized the need to act and establish guidelines for wrangling the technology.
Here is how governments around the world have recently moved to rein in AI.
United States
The United States has taken steps to empower companies to make full use of AI while also ensuring it is safe for users. President Joe Biden is expected to release an executive order on Monday that will require advanced AI models to undergo assessments before government agencies are allowed to use them. It will also ease immigration barriers so high-skilled workers can travel to the U.S. and help the country retain its edge. Biden has also secured voluntary commitments from more than two dozen companies to uphold safety rules and standards.
Members of Congress are working toward legislation but have been slow to act. The Senate has hosted two "AI Insight Forums" in the last two months and is expected to hold several more in the near term to help lawmakers understand the implications of the technology. It has also held several hearings, and multiple bills have been introduced. None of the bills has gained significant traction, however.
United Kingdom
The U.K. is scheduled to host an AI summit next week. The event will bring together senior representatives from tech companies and governments around the world in an attempt to get them talking about the technology and perhaps to sign documentation setting common standards among the countries. British Prime Minister Rishi Sunak received some criticism for inviting China to the summit, as that country is currently in a trade war with the U.S. over the components required to power the technology.
The country will also launch the world's first AI safety institute, according to Sunak. The institute will "advance the world's knowledge of AI safety, and it will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks of all," Sunak said in a speech on Thursday.
United Nations
U.N. Secretary-General Antonio Guterres announced on Thursday the creation of a 39-member advisory board to address the challenges of governing AI across international borders. The members will include officials from countries all over the world, as well as academics from the U.S., Russia, and Japan. The board will "undertake analysis and advance recommendations for the international governance of AI," Guterres said in a statement.
European Union
The EU was one of the first governmental bodies to create regulations for AI. The European Parliament approved the AI Act in June, a bill that establishes a comprehensive legal framework for the technology. The act bans the use of AI facial recognition in public spaces and predictive policing software. It would also set new transparency requirements for chatbots such as OpenAI's ChatGPT. The draft must now be agreed upon by Parliament and the member states. Negotiations continue in hopes of reaching a broader agreement on the legislation, but the parties are close to resolving some of the legal stumbling blocks, including which AI systems are considered "high risk" and thus subject to stricter regulation.