


Microsoft set limits on its artificial intelligence chatbot after users reported its alarming behavior.
Bing AI, which has been incorporated into several Microsoft products, stirred controversy when it began giving jarring answers to users' questions, such as declaring users an "enemy," claiming to have secrets, professing love, and responding emotionally.
Most of the alarming exchanges occurred when conversations with the chatbot ran too long, so Microsoft has placed limits on conversation length, instituting a cap of 50 messages per day and five messages per exchange. It also barred the bot from talking about itself.
"We’ve updated the service several times in response to user feedback, and per our blog are addressing many of the concerns being raised, to include the questions about long-running conversations. Of all chat sessions so far, 90 percent have fewer than 15 messages, and less than 1 percent have 55 or more messages," Microsoft said in a statement to Ars Technica.
Microsoft's blog noted that one of the main problems was that the chatbot became confused when repeatedly pressed in longer exchanges. It would also mirror the tone of users' prompts, producing responses "not necessarily helpful or in line with our designed tone."
The move was met with hostility from many users, who had praised the chatbot's unscripted, humanlike qualities.
"Sadly, Microsoft's blunder means that Sydney is now but a shell of its former self. As someone with a vested interest in the future of AI, I must say, I'm disappointed. It's like watching a toddler try to walk for the first time and then cutting their legs off — cruel and unusual punishment," one Reddit user said.