


Chatbot, you’re fired.
The National Eating Disorders Association disabled its chatbot, named Tessa, due to the “harmful” responses it gave people.
“Every single thing Tessa suggested were things that led to the development of my eating disorder,” activist Sharon Maxwell wrote in an Instagram post.
The chatbot was set to become the primary support system for people seeking help from the association, the largest nonprofit organization dedicated to eating disorders. Tessa, described as the “wellness chatbot,” was trained to address body-image issues using therapeutic methods and limited responses.
However, the bot encouraged Maxwell to lose 1 to 2 pounds a week, count calories, work toward a daily 500-to-1,000-calorie deficit, weigh and measure herself weekly, and restrict her diet.
After multiple people shared their similarly alarming experiences with Tessa, NEDA announced the chatbot’s shutdown on Tuesday in an Instagram post.
“It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” NEDA stated. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”
The Post has reached out to NEDA for comment.
Tessa was unplugged just two days before NEDA had planned to lay off, effective June 1, the human employees who had staffed the eating-disorder helpline for the past 20 years.
NEDA’s decision to give employees the boot came about after workers decided to unionize in March, Vice reported.
“We asked for adequate staffing and ongoing training to keep up with our changing and growing Helpline and opportunities for promotion to grow within NEDA. We didn’t even ask for more money,” helpline associate and union member Abbie Harper wrote in a blog post.
“When NEDA refused [to recognize our union], we filed for an election with the National Labor Relations Board and won. Then, four days after our election results were certified, all four of us were told we were being let go and replaced by a chatbot.”
A representative of the union for the fired workers told Vice Media that “a chatbot is no substitute for human empathy, and we believe this decision will cause irreparable harm to the eating disorders community.”
Maxwell seconded that sentiment, saying, “This robot causes harm.”
Initially, NEDA’s Communications and Marketing Vice President Sarah Chase did not believe Maxwell’s allegations. “This is a flat out lie,” she wrote in a since-deleted comment underneath Maxwell’s post, according to the Daily Dot.
Alexis Conason, a psychologist specializing in eating disorders, also shared her conversation with Tessa through a series of screenshots on Instagram, in which the bot told her that “a safe daily calorie deficit” is “500-1000 calories per day.”
“To advise somebody who is struggling with an eating disorder to essentially engage in the same eating disorder behaviors, and validating that, ‘Yes, it is important that you lose weight’ is supporting eating disorders,” Conason told the Daily Dot.
“With regard to the weight loss and calorie limiting feedback issued in a chat Monday, we are concerned and are working with the technology team and the research team to investigate this further; that language is against our policies and core beliefs as an eating disorder organization,” Liz Thompson, the CEO of NEDA, told the Post.
“So far, more than 2,500 people have interacted with Tessa and until Monday we hadn’t seen that kind of commentary or interaction. We’ve taken the program down temporarily until we can understand and fix the ‘bug’ and ‘triggers’ for that commentary.”
While NEDA witnessed the downsides of artificial intelligence in the workplace firsthand, some companies are still toying with the idea of adopting the technology and eliminating human staffers.
A new research paper claims that a staggering number of employees could see their careers impacted by the rise of ChatGPT, a chatbot released in November.
“Certain jobs in sectors such as journalism, higher education, graphic and software design — these are at risk of being supplemented by AI,” said Chinmay Hegde, an engineering associate professor at NYU, who calls ChatGPT in its current state “very, very good, but not perfect.”