The co-founder of an online mental health support service is facing backlash for using an artificial intelligence (AI) chatbot to automate responses to users without their consent.
Rob Morris, who started the mental health service Koko while a graduate student at the Massachusetts Institute of Technology, was accused in a comment thread of conducting the experiment on unwitting users after he posted about it on Twitter.
“Some important clarifications on my recent tweet thread: We weren’t pairing people up to chat with GPT-3 unknowingly. (In retrospect, I could have worded my first tweet to better reflect this),” Morris posted on Twitter.
The backlash came after Morris posted a Twitter thread about the experiment in which he said Koko “used GPT-3 to provide mental health support to about 4,000 people.”
GPT-3, or Generative Pre-trained Transformer 3, is an AI model that can mimic human communication by generating text, using an initial piece of human-written text as a prompt.
Koko lets users anonymously share their mental health issues and seek help from their peers, who can respond with messages of encouragement and advice.
In the experiment, GPT-3 drafted responses under what Morris called a “co-pilot” approach, with humans supervising the AI, across approximately 30,000 messages.
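The article does not describe Koko’s code, but as a rough illustration, a human-in-the-loop “co-pilot” pattern like the one Morris describes can be sketched in a few lines of Python. The sketch below assumes the pre-1.0 openai library and a GPT-3-era model name (“text-davinci-003”); the prompt, function names, and review step are hypothetical, not Koko’s actual implementation.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: caller supplies an API key

def draft_reply(user_post: str) -> str:
    """Ask a GPT-3 model to draft an encouraging reply to a user's post."""
    prompt = (
        "You are helping a peer supporter write a kind, encouraging reply "
        "to someone sharing a mental health struggle.\n\n"
        f"Post: {user_post}\n\nDraft reply:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family completion model
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

def copilot_respond(user_post: str) -> str:
    """A human supporter reviews and may edit the AI draft before sending."""
    draft = draft_reply(user_post)
    print("AI draft:\n" + draft)
    edited = input("Edit the draft, or press Enter to send it as-is: ")
    return edited or draft  # the human always has the final say

The key design point of such a workflow is that the human supporter reviews, and can edit or discard, the machine’s draft before anything is sent.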
He said that messages composed with AI assistance were rated higher than those written by humans alone, and that response times fell by 50 percent, to “well below a minute.”
The feature was pulled from the platform “pretty quickly,” Morris said, because “it just didn’t work” once users found out that the messages were co-written by an AI chatbot.
“Simulated empathy feels strange and empty. It sounds unreal,” he wrote in the thread.
Furthermore, Morris said, people tend to recognize and appreciate when someone has put in time and effort, and that sense of effort is not shared by a machine-generated reply.
But Morris hypothesized that machines could eventually overcome this emotional disconnect, hinting at the future potential of AI therapy.
Backlash
Several accounts replying to Morris’s thread asked whether he had approval from an institutional review board (IRB) to conduct the experiment, and several others said the experiment was unethical.
In reply to Morris’s thread, one user said, “It’s sad that we had to conduct dehumanizing experiments on a vulnerable population to come to this conclusion.”
Another user wrote, “Research conducted on humans without informed consent????”
Informed consent is a general principle in research settings requiring that experimental subjects be informed about the experiment and given the option not to participate.
Morris responded to the backlash by saying, “There were some big misunderstandings about what we did.”
Morris said users were not chatting directly with the AI; rather, peer supporters could use GPT-3 to help craft their answers, to see whether that made them more effective. He said using GPT-3 was optional for peer supporters.
Morris Responds
Morris later told tech outlet Gizmodo that he conducted the experiment on himself and his Koko team rather than on unwitting users, and that all AI-generated responses were sent with a disclaimer that they were written with the help of “Koko Bot.”
“Whether this type of work, outside of academia, should go through the IRB process is an important question, and I shouldn’t have tried to discuss it on Twitter,” Morris told Gizmodo. “It should be a broader discussion within the industry, and one we want to be a part of.”
According to Morris, the otherwise inexhaustible technology currently seems limited only by its lack of genuine emotion.
“True empathy may also be something that we humans can appreciate as uniquely ours,” Morris said in his first Twitter thread. “It might be one of the things only we can do.”
The Epoch Times reached out to Morris for comment.