


Italy banned the leading artificial intelligence chatbot ChatGPT from being used within its borders over privacy concerns, making it the first country to bring the hammer down on the app.
Italy's privacy regulator announced on Friday that it was immediately blocking ChatGPT and opening an investigation into its parent company, OpenAI. The decision bars OpenAI from processing Italian user data and will remain in force until the company complies with the European Union's General Data Protection Regulation.
"There appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which [ChatGPT] relies," the regulator stated in its order.
The crackdown is an early indication of the regulatory hurdles the technology could face in the EU and beyond. The European Consumer Organization, an umbrella group for consumer advocacy organizations across Europe, has called on the EU and national governments, not just Italy's, to investigate ChatGPT proactively.
The regulator also cited a recent data breach in which conversations and payment information belonging to some OpenAI users were exposed.
The ban came days after Elon Musk and other prominent industry figures and researchers urged the AI industry to pause development of its most advanced systems for six months.
Musk and other AI researchers signed an open letter on Wednesday calling for a halt to training any system more powerful than GPT-4. "We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4," the letter requested. "This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
The Pentagon's top cyber warfare officer dismissed Musk's call to suspend training. "Artificial intelligence machine-learning is resonant today and is something that our adversaries are going to continue to look to exploit," Gen. Paul Nakasone told House lawmakers.
OpenAI CEO Sam Altman has acknowledged the risks of AI development. "We've got to be careful here. I think people should be happy that we are a little bit scared of this," he said recently.
OpenAI did not respond to requests for comment from the Washington Examiner.