


More than 40 leading technology companies have partnered to create an industry group that will regulate the use of open-source artificial intelligence.
IBM announced on Tuesday that it was creating the AI Alliance, which will focus on the responsible development of safety and security tools for open-source AI models.
In recent months, Congress and Silicon Valley have called for guardrails for AI.
Open-source AI, software whose code is publicly available, is of particular interest because anyone can use or modify it. Meta has released its large language models, the software powering chatbots, as open source.
"We believe it's better when AI is developed openly — more people can access the benefits, build innovative products, and work on safety," Nick Clegg, Meta president of global affairs, said in a statement.
The coalition will establish a governing board and technical oversight committee to oversee the technology and develop safe practices. Other members of the Alliance include Oracle, Advanced Micro Devices, Intel, and Stability AI, as well as research organizations such as the University of Notre Dame.
Some leading AI companies did not join. Most notably, OpenAI is not listed as an AI Alliance participant. The Alliance launched just over two weeks after the ChatGPT maker abruptly removed its CEO, Sam Altman, only to reinstate him five days later.
Congress is developing legislation to regulate AI, although it has not addressed open-source AI specifically. Senate Majority Leader Chuck Schumer (D-NY) is hosting his eighth and ninth AI Insight Forums on Wednesday, addressing "doomsday scenarios" and national security. Schumer has assured reporters that he intends to introduce legislation within "months" of the forums but has yet to identify any specific bills he supports.