


The popular adoption of AI in the form of apps like the image generator Midjourney or the chatbots ChatGPT and Bard has led to a greater interest in regulation of the industry.
Artificial intelligence has become a prominent tool for generating content across the technology industry. Companies such as Google and OpenAI have made their AI products publicly available, giving millions of users access to high-end technology. The explosion in use of these tools has brought a surge of interest in new legal frameworks to ensure that AI does not go awry. Here are some of the leading options that have been suggested.
A pause on development
One proposal is to pause the development of large language models like the one behind ChatGPT until their implications for society can be better understood. Most notably, Elon Musk, 2020 Democratic presidential candidate Andrew Yang, and a group of tech leaders called for a government-enforced, six-month pause on training such models.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" they argued in an open letter in March. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
The idea of a pause, though, met resistance from some analysts and officials worried about losing the technological edge over China and other adversaries. "Artificial intelligence machine-learning is something that is resonant today and is something that our adversaries are going to continue to look to exploit," Gen. Paul Nakasone of U.S. Cyber Command told House lawmakers.
An international ban on supercomputer-powered advanced AI
Eliezer Yudkowsky, a lead researcher at the Machine Intelligence Research Institute and prominent futurist, has advocated a permanent international ban on training large language models more powerful than the one underlying ChatGPT.
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky argued in a Time op-ed. The ban on AI training would need to be "indefinite and worldwide."
Yudkowsky, who is widely read among AI researchers, said governments worldwide need to do more to clamp down on AI development. He argued that GPUs, specialized processing units used for the complex calculations involved in large language models, need to be tracked, that large GPU clusters need to be shut down, and that countries should be willing to "destroy a rogue data center by airstrike" to mitigate the AI threat.
An AI bill of rights
The White House has released a blueprint for an AI bill of rights, which asks AI developers to emphasize several priorities in product design. These include ensuring that the data collected by AI products is not misused for malicious purposes, that designers account for algorithmic bias with regard to race or gender, that all collected data is stored and protected with appropriate privacy measures, and that users are aware when algorithms are shaping their experience and can opt out if they wish.
President Joe Biden met with his advisers on science and technology last week to discuss AI regulation.
Ban on facial recognition
One option for regulation would be a ban on the use of AI facial recognition software, a technology that has been used for repression in China and deployed controversially by the private sector in the U.S.
Multiple cities and states have banned the use of facial recognition software. New York, for example, passed a law in 2021 prohibiting the software within schools. California passed a law in 2020 that banned law enforcement from using facial recognition in their body cameras. The European Union is considering a similar ban on facial recognition software.
Requiring 'watermarking' of AI-generated images
To prevent the abuse of images generated by AI, Congress could pass legislation requiring that image-generating AI programs include a "watermark" in all images identifying their origin.
China's Cyberspace Administration issued similar regulations in December that restrict the sharing of AI-generated content unless it has a watermark attached. AI-generated content was restricted because it had previously been "used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others' reputation and honor, and to counterfeit others' identities," the administration claimed.
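In its simplest form, such a watermark is provenance information embedded directly in the image file. Below is a minimal sketch in Python, assuming the Pillow imaging library; the metadata key names and the generator label are hypothetical placeholders, not part of any actual standard or legislative proposal.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_ai_image(src_path: str, dst_path: str) -> None:
    # Copy an image and embed text metadata marking it as AI-generated.
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # hypothetical key
    metadata.add_text("generator", "example-model")  # hypothetical label
    image.save(dst_path, format="PNG", pnginfo=metadata)

def is_tagged(path: str) -> bool:
    # Check whether an image carries the provenance tag.
    return Image.open(path).info.get("ai_generated") == "true"

Metadata of this kind is easy to strip, which is one reason researchers have also explored watermarks embedded in the pixels themselves; any legislation would have to decide how robust the required marking must be.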