The companies’ first line of defense is Section 230 of the Communications Decency Act, a 1996 federal law that gives Internet platforms broad immunity from liability for harmful content posted by users. The law has effectively shielded them from such legal claims, and voices on both sides of the political spectrum are calling for its reform.
“We have invested heavily in providing a safe experience for children across our platforms, introducing strong protections and dedicated features that prioritize children’s health,” Google spokesman Jose Castaneda said in an email.
“For example, through Family Link, parents can set reminders, limit screen time, and block certain types of content on supervised devices.”
Meta declined to comment. Representatives for Snap and TikTok did not immediately respond to requests for comment. The companies have previously said they are working to protect their youngest users, including by providing resources on mental health topics and improving safeguards to stop the spread of harmful content.
And in a recent example of backlash over how technological developments are encroaching on children’s lives, New York City’s public school system (the largest in the United States) last week barred students from accessing ChatGPT, an artificial intelligence program that generates text.
In a lawsuit filed Friday, Seattle School District No. 1 asks a judge to rule that the companies have created a public nuisance and to order remedies including monetary damages and funding to prevent and address the excessive use of social media.
The district said there had been a dramatic increase in suicides and emergency room mental health visits, and it asks the court to hold the companies accountable for the nationwide experiment they are conducting on children.
“Seattle School District No. 1 is taking this step to do just that,” the complaint states. “Young people in plaintiffs’ communities are experiencing the same mental health crises that are being seen nationwide.”