When logging into Koko, an online emotional support chat service based in San Francisco, people expect to exchange messages with anonymous volunteers. They can seek relationship advice, discuss depression, and find support for pretty much anything else.
But the mental health support that thousands of people received wasn’t entirely human. Instead, it was augmented by robots.
In October, Koko ran an experiment in which GPT-3, the newly popular artificial intelligence chatbot, wrote responses in whole or in part. Humans could edit the responses and still pressed the buttons to send them, but they weren’t always the authors.
About 4,000 people received responses from Koko that were at least partially written by AI, according to Koko co-founder Robert Morris.
The experiment on the small, little-known platform has been the subject of intense controversy since Morris disclosed it a week ago, a possible preview of more ethical disputes to come as AI technology works its way into more consumer products and health services.
Morris said in an interview with NBC News that because GPT-3 is often fast and eloquent, he thought the idea was worth trying.
“People who saw the collaboratively written GPT-3 answers rated them much higher than those written purely by humans. It was an interesting observation,” he said.
Morris said he did not have official data to share from the test.
But once people learned that the messages were co-written by a machine, the benefits of the improved writing were lost. “Simulated empathy feels strange and empty,” Morris wrote on Twitter.
When he shared the results of the experiment on Twitter on Jan. 6, he was inundated with criticism. Academics, journalists and fellow technologists accused him of acting unethically and tricking people into becoming test subjects without their knowledge or consent at a time when they were vulnerable and seeking mental health support. His Twitter thread has been viewed more than 8 million times.
The senders of the AI-generated messages knew, of course, whether they had written or edited them. But recipients saw only a notification that said “(co-written with Koko Bot),” with no details about what “Koko Bot” was.
In a demonstration Morris posted online, GPT-3 responded to someone who said they were struggling to become a better person. The chatbot replied: “It’s not easy, especially when you’re trying to do it alone. It’s hard to change our lives. But you’re not alone.”
According to Morris, users were not offered a way to opt out of the experiment other than not reading the response at all. “If you get a message, you can choose to skip it and not read it,” he said.
Leslie Wolfe, a law professor at Georgia State University who writes about and teaches research ethics, said she is concerned about how little Koko told people who were getting answers augmented by AI.
“This is an organization trying to provide much-needed support in a mental health crisis where there aren’t enough resources to meet the need, but when it manipulates vulnerable people, it’s not going to go over well,” she said. Manipulation can make people in emotional distress feel worse, she said, especially if the AI produces biased or careless text that goes unreviewed.
Now Koko is on the defensive about its decision, and the wider tech industry is once again facing questions about the casual way it can turn unsuspecting people into lab rats, especially as more tech companies wade into health-related services.
Congress mandated oversight of some tests on human subjects in 1974, after revelations of harmful experiments such as the Tuskegee Syphilis Study, in which government researchers withheld treatment from hundreds of Black Americans with syphilis. As a result, universities and others that receive federal funding must follow strict rules when conducting experiments on human subjects, a requirement enforced by panels known as institutional review boards, or IRBs.
However, in general, private companies and non-profit organizations that do not receive federal support and do not seek approval from the Food and Drug Administration have no such legal obligation.
Morris said Koko has received no federal funding.
Alex John London, director of the Center for Ethics and Policy at Carnegie Mellon University and the author of a book on research ethics, said in an email that even if an entity isn’t required to undergo IRB review, it ought to in order to reduce risks. He said he would like to know what steps Koko took to ensure that participants in the research “were not the most vulnerable users in acute psychological crisis.”
“High-risk users are always directed to crisis lines and other resources,” Morris said, adding that “Koko carefully monitored the reactions when the feature was rolled out.”
There are notorious examples of tech companies exploiting this oversight vacuum. In 2014, Facebook revealed that it had run a psychological experiment on 689,000 people, showing that it could spread negative or positive emotions like a contagion by altering the content of people’s news feeds. Facebook, now known as Meta, apologized and overhauled its internal review process, but it also said people should have known about the possibility of such experiments from Facebook’s terms of service, a position that drew criticism because few people actually understand the agreements they make with platforms like Facebook.
But even after the fuss surrounding the Facebook study, there were no changes to federal law or policies to make oversight of experiments on humans universal.
Koko is not Facebook, with its enormous profits and user base. Koko is a nonprofit platform and a passion project for Morris, a former Airbnb data scientist who received his doctorate from the Massachusetts Institute of Technology. It is a service for peer-to-peer support, not to be confused with professional therapy, and it is available only through other platforms such as Discord and Tumblr, not as a standalone app.
Koko has had about 10,000 volunteers in the past month, and it helps about 1,000 people a day, Morris said.
“The broader point of my work is finding ways to help people who are experiencing emotional distress online,” he said. “Millions of people are suffering online seeking help.”
There is a nationwide shortage of trained professionals to provide mental health support, despite a surge in anxiety and depressive symptoms during the COVID-19 pandemic.
“We allow people to write short messages of hope to each other in a safe environment,” Morris said.
Critics, however, have focused on the question of whether participants gave informed consent to the experiment.
Camille Nebeker, a University of California, San Diego professor who specializes in human research ethics as applied to emerging technologies, said Koko created unnecessary risks for people seeking help. At a minimum, she said, informed consent by research participants includes a description of the potential risks and benefits, written in clear, simple language.
“Informed consent is very important for conventional research,” she said. “It’s a cornerstone of ethical practice, but if we don’t have to do it, the public can be at risk.”
She also noted that AI has raised alarms over its potential for bias. Chatbots are ubiquitous in areas such as customer service, but they are still a relatively new technology. This month, New York City schools banned ChatGPT, a bot built on GPT-3 technology, from school devices and networks.
“We’re in the Old West,” said Nebeker. “It is too dangerous not to have some standard or consensus on the rules of the road.”
The FDA regulates some mobile medical apps that it says meet the definition of a “medical device,” such as those that help people kick opioid addiction. But not all apps meet that definition, and the agency issued guidance in September to help companies know the difference. In a statement provided to NBC News, an FDA representative said some apps that provide digital therapy may be considered medical devices, but that, per FDA policy, the agency does not comment on specific companies.
In the absence of formal oversight, other organizations are grappling with how to apply AI in health-related fields. Google, which has struggled to address ethical issues in AI, hosted a health and bioethics summit in October with the Hastings Center, a nonprofit bioethics research center and think tank. In June, the World Health Organization included informed consent in one of its six “guiding principles” for the design and use of AI.
Koko has an advisory board of mental health experts to weigh in on the company’s practices, but Morris said there is no formal process for approving proposed experiments.
Stephen Schueller, a member of the advisory board and a psychology professor at the University of California, Irvine, said it wouldn’t be realistic for the board to conduct a review every time Koko’s product team rolls out or tests a feature. He declined to say whether Koko made a mistake, but said the episode shows the need for a public conversation about private-sector research.
“We need to think seriously about how we use new technologies responsibly when they come online,” he said.
Morris said he never thought an AI chatbot would solve the mental health crisis, and he said he didn’t like how being a Koko peer supporter became an “assembly line” of approving prewritten answers.
But he said prewritten, copy-and-paste answers have long been a feature of online help services, and that organizations need to keep trying new ways to care for more people. Requiring a formal, university-style review of every idea would halt that search, he said.
“AI is not the perfect or only solution. It lacks empathy and believability,” he said. But, he added, its use does not mean his work should require “the ultimate scrutiny of the IRB.”
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.
This article was originally published on NBCNews.com.