    Mental Health Tech Firm Uses AI Chat in Experiment with Real Users

    By brainwealthy_vws1ex | January 14, 2023


    When logging into Koko, an online emotional support chat service based in San Francisco, people expect to exchange messages with anonymous volunteers. They can seek relationship advice, discuss depression, and find support for pretty much anything else.

    But the mental health support that thousands of people received wasn’t entirely human. Instead, it was augmented by AI.

    In October, Koko ran an experiment in which GPT-3, the newly popular artificial intelligence language model, wrote responses either in whole or in part. Humans could edit the responses and still pressed the buttons to send them, but they weren’t always the authors.
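
    Koko hasn’t published its implementation, but the flow described above, in which an AI drafts a reply that a human volunteer can edit, send, or discard, matches a common human-in-the-loop pattern. What follows is a minimal, hypothetical sketch of that pattern using the GPT-3-era OpenAI Python library (openai<1.0); the prompt wording, model choice, and function names are assumptions for illustration, not Koko’s actual code.

    # Hypothetical sketch, not Koko's actual code: GPT-3 drafts a reply,
    # and a human volunteer reviews it before anything is sent.
    # Uses the GPT-3-era OpenAI Python library (openai<1.0).
    import openai

    openai.api_key = "sk-..."  # placeholder key

    def draft_reply(post_text: str) -> str:
        """Ask GPT-3 for a draft peer-support reply (assumed prompt wording)."""
        completion = openai.Completion.create(
            model="text-davinci-003",  # a GPT-3 model available in late 2022
            prompt=("Write a brief, compassionate peer-support reply "
                    f"to this post:\n\n{post_text}\n\nReply:"),
            max_tokens=150,
            temperature=0.7,
        )
        return completion.choices[0].text.strip()

    def human_review(draft: str) -> str | None:
        """The volunteer edits the draft, sends it as-is, or discards it."""
        print("AI draft:\n" + draft)
        edited = input("Edit reply (empty = keep draft, 'skip' = discard): ")
        if edited.strip().lower() == "skip":
            return None  # the human declines to send anything
        return edited or draft

    final = human_review(draft_reply("I'm struggling to become a better person."))
    if final is not None:
        # Per the article, sent messages carried only a brief disclosure label.
        print(final + "\n(co-written with Koko Bot)")

    The design point at issue in the controversy is the last step: a human always gatekeeps the message, but the recipient sees only the short disclosure label.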

    About 4,000 people received responses from Koko that were at least partly written by AI, according to Koko co-founder Robert Morris.

    The experiment on the small, little-known platform has been the subject of intense controversy since Morris disclosed it a week ago, a possible preview of the ethical disputes likely to arise as AI technology makes its way into more consumer products and health services.

    GPT-3 is often fast and eloquent, so the idea seemed worth trying, Morris said in an interview with NBC News.

    “People who saw the collaboratively written GPT-3 answers rated them much higher than those written purely by humans. It was an interesting observation,” he said.

    Morris said he did not have official data to share from the test.

    But once people learned that the messages were co-written by a machine, the benefits of the improved writing vanished. “Simulated empathy feels strange and empty,” Morris wrote on Twitter.

    When he shared the results of the experiment on Twitter on Jan. 6, he was inundated with criticism. Academics, journalists, and fellow technologists accused him of acting unethically and tricking people into becoming research subjects without their knowledge or consent, at a moment when they were vulnerable and in need of mental health support. His Twitter thread has been viewed more than 8 million times.

    The senders of the AI-generated messages knew, of course, whether they had written or merely edited them. But recipients saw only a notification reading “(co-written with Koko Bot),” which gave no details about what “Koko Bot” was.

    In a demonstration Morris posted online, GPT-3 responded to someone who said they struggled to become a better person, replying in part: “It’s not easy, especially when you’re trying to do it alone. It’s hard to make changes in our lives. But you’re not alone.”

    Recipients weren’t offered a way to opt out of the experiment other than not reading the response at all, Morris said. “If you get a message, you can choose to skip it and not read it,” he said.

    Leslie Wolf, a law professor at Georgia State University who writes about and teaches research ethics, said she was concerned about how little Koko told people who were receiving answers augmented by AI.

    “This is an organization trying to provide much-needed support for a mental health crisis where we don’t have enough resources to meet the need, yet it is manipulating vulnerable people,” she said. The practice could make people in emotional distress feel even worse, she said, especially if the AI produces biased or careless text that goes unreviewed.

    Now Koko is on the defensive about its decision, and the tech industry as a whole is again facing questions about the casual way it sometimes turns unsuspecting people into lab rats, especially as more companies move into health-related services.

    Congress mandated oversight of some tests on human subjects in 1974, after revelations of harmful experiments such as the Tuskegee Syphilis Study, in which government researchers withheld proper treatment from hundreds of Black Americans with syphilis. As a result, universities and others that receive federal support must follow strict rules when they conduct experiments on human subjects, a process enforced by what are known as institutional review boards, or IRBs.

    However, in general, private companies and non-profit organizations that do not receive federal support and do not seek approval from the Food and Drug Administration have no such legal obligation.

    Morris said Koko has received no federal funding.

    Alex John London, director of the Center for Ethics and Policy at Carnegie Mellon University and the author of a book on research ethics, said in an email that even if an entity isn’t required to undergo IRB review, it ought to do so to reduce risks. He said he’d like to know which steps Koko took to ensure that participants in the research “were not the most vulnerable users in an acute psychological crisis.”

    “High-risk users are always directed to crisis lines and other resources,” Morris said, adding that “Koko carefully monitored the responses when the feature was rolled out.”

    There are notorious examples of tech companies exploiting the oversight void. In 2014, Facebook revealed that it had run a psychological experiment on 689,000 people, showing that it could spread negative or positive emotions like a contagion by altering the content of people’s news feeds. Facebook, now known as Meta, apologized and overhauled its internal review process, but it also said people should have known such experiments were possible because of Facebook’s terms of service, a position that drew skepticism about whether ordinary people really understand the agreements they make with platforms.

    But even after the furor over the Facebook study, there was no change in federal law or policy to make oversight of experiments on human subjects universal.

    Koko is not Facebook, with its enormous profits and user base. Koko is a nonprofit platform and a passion project for Morris, a former Airbnb data scientist who received his doctorate from the Massachusetts Institute of Technology. It is a peer-to-peer support service, not a substitute for professional therapy, and it is available only through other platforms such as Discord and Tumblr, not as a standalone app.

    About 10,000 volunteers have joined Koko in the last month, and about 1,000 people are being helped each day, Morris said.

    “The broader point of my work is finding ways to help people who are experiencing emotional distress online,” he said. “Millions of people are suffering online seeking help.”

    There is a nationwide shortage of professionals trained to provide mental health support, even as symptoms of anxiety and depression have surged during the COVID-19 pandemic.

    “We allow people to write short messages of hope to each other in a safe environment,” Morris said.

    Critics, however, have focused on the question of whether participants gave informed consent to the experiment.

    Camille Nebeker, a University of California, San Diego professor who specializes in human research ethics as applied to emerging technologies, said Koko created unnecessary risks for people seeking help. At a minimum, she said, informed consent by research participants should include a description of the potential risks and benefits, written in clear, simple language.

    “Informed consent is very important for conventional research,” she said. “It’s a cornerstone of ethical practice, but when there’s no requirement to obtain it, the public can be at risk.”

    She also noted that AI has alarmed people with its potential for bias. And although chatbots are now ubiquitous in fields such as customer service, they are still a relatively new technology. This month, New York City schools banned ChatGPT, a bot built on GPT-3 technology, from school devices and networks.

    “We are in the Wild West,” Nebeker said. “It’s just too dangerous not to have some standards and agreement about the rules of the road.”

    The FDA regulates some mobile medical apps that it says meet the definition of a “medical device,” such as ones that help people kick opioid addiction. But not all apps meet that definition, and the agency issued guidance in September to help companies know the difference. In a statement provided to NBC News, an FDA official said that some apps providing digital therapy may be considered medical devices, but that per FDA policy, the agency does not comment on specific companies.

    In the absence of formal oversight, other organizations are wrestling with how to apply AI in health-related fields. Google, which has struggled with its own AI ethics controversies, hosted a Health and Bioethics Summit in October with the Hastings Center, a nonprofit bioethics research center and think tank. In June, the World Health Organization included informed consent in one of its six “guiding principles” for the design and use of AI.

    Koko has an advisory board of mental health experts to weigh in on the company’s practices, but Morris said there is no formal process for it to approve proposed experiments.

    Stephen Schueller, a member of the advisory board and a professor of psychology at the University of California, Irvine, said it would not be realistic for the board to conduct a review every time Koko’s product team rolled out or tested a feature. He declined to say whether Koko made a mistake, but said the episode shows the need for a public conversation about private-sector research.

    “We need to think seriously about how we use new technologies responsibly when they come online,” he said.

    Morris said he has never thought an AI chatbot would solve the mental health crisis, and he said he disliked how the feature turned being a Koko peer supporter into an “assembly line” of approving prewritten answers.

    But he said prewritten answers that are copied and pasted have long been a feature of online help services, and that organizations need to keep trying new ways to care for more people. A university-level review process would halt that search, he said.

    “AI is neither perfect nor the only solution. It lacks empathy and believability,” he said. However, he added, using AI should not in itself subject an organization to the ultimate scrutiny of an IRB.

    If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741, or visit SpeakingOfSuicide.com/resources for additional resources.

    This article was originally published on NBCNews.com.




