Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing a key protection against lawsuits over content moderation decisions that involve artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers a blanket “liability shield” with regard to how companies moderate content on their platforms.
However, as reported by CNN, Google wrote in a legal filing that, should the SC rule in favour of the plaintiff in the case of Gonzalez v. Google, which revolves around YouTube’s algorithms recommending pro-ISIS content to users, the internet could become overrun with dangerous, offensive, and extremist content.
[HEADING=1]Automation in moderation[/HEADING]
As part of a law that is almost 27 years old, and one already targeted for reform by US President Joe Biden, Section 230 wasn’t written with modern developments such as artificially intelligent algorithms in mind, and that’s where the problems start.
The crux of Google’s argument is that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become a necessity. “Virtually no modern website would function if users had to sort through content themselves,” it said in the filing.
“An abundance of content” means that tech companies have to use algorithms to present it to users in a manageable way, from search engine results to flight deals to job recommendations on employment websites.
Google also noted that, under existing law, tech companies can legally avoid liability simply by refusing to moderate their platforms at all, but argued that this would put the internet at risk of becoming a “virtual cesspool”.
The tech giant also pointed out that YouTube’s community guidelines expressly disavow terrorism, adult content, violence and “other dangerous or offensive content”, and that it is continually tweaking its algorithms to pre-emptively block prohibited content.
It also claimed that “approximately” 95% of videos violating YouTube’s “Violent Extremism policy” were automatically detected in Q2 2022.
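Google’s filing doesn’t explain how that automated detection works, but the general pattern it alludes to — scoring uploads and blocking likely violations before they are seen, with borderline cases going to human review — can be sketched in a few lines. The thresholds, function names and keyword check below are purely hypothetical placeholders, not YouTube’s actual system:

[CODE=python]
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative only, not YouTube's actual values.
BLOCK_THRESHOLD = 0.9   # score above which content is blocked automatically
REVIEW_THRESHOLD = 0.5  # score above which content goes to human review


@dataclass
class Upload:
    video_id: str
    title: str


def policy_violation_score(upload: Upload) -> float:
    """Stand-in for a trained classifier that scores how likely an upload
    is to violate a policy (0.0 = clean, 1.0 = clear violation)."""
    # A real system would run ML models over video, audio and metadata;
    # this keyword check exists only to make the sketch runnable.
    flagged_terms = {"extremist", "violent"}
    return 1.0 if flagged_terms & set(upload.title.lower().split()) else 0.0


def moderate(upload: Upload) -> str:
    """Return the action taken for an upload: 'block', 'review' or 'allow'."""
    score = policy_violation_score(upload)
    if score >= BLOCK_THRESHOLD:
        return "block"   # removed before anyone sees it (pre-emptive blocking)
    if score >= REVIEW_THRESHOLD:
        return "review"  # queued for a human moderator
    return "allow"


if __name__ == "__main__":
    print(moderate(Upload("a1", "Cute cat compilation")))         # allow
    print(moderate(Upload("a2", "Extremist recruitment video")))  # block
[/CODE]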
Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content, and in doing so has assisted “the rise of ISIS” to prominence.
In an attempt to further distance itself from any liability on this point, Google responded by saying that YouTube’s algorithms recommend content to users based on similarities between a piece of content and the content a user is already interested in.
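The filing doesn’t detail the recommendation system itself, but the behaviour described — surfacing items similar to what a user has already watched — is the classic content-based filtering pattern. A minimal, purely hypothetical sketch of that idea (toy feature vectors and cosine similarity; nothing here reflects YouTube’s real implementation) could look like this:

[CODE=python]
import math

# Toy feature vectors for videos (e.g. topic weights). In a real recommender
# these would come from learned embeddings; the values here are made up.
VIDEO_FEATURES = {
    "cooking_basics":  [0.9, 0.1, 0.0],
    "knife_skills":    [0.8, 0.2, 0.0],
    "guitar_lesson_1": [0.0, 0.1, 0.9],
}


def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def recommend(watched_id, k=2):
    """Rank the other videos by similarity to one the user already watched."""
    watched = VIDEO_FEATURES[watched_id]
    scored = [
        (cosine_similarity(watched, feats), vid)
        for vid, feats in VIDEO_FEATURES.items()
        if vid != watched_id
    ]
    return [vid for _, vid in sorted(scored, reverse=True)[:k]]


if __name__ == "__main__":
    # A user who watched a cooking video is shown similar cooking content first.
    print(recommend("cooking_basics"))
[/CODE]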
This is a complicated case and, although it’s easy to subscribe to the idea that the internet has gotten too big for manual moderation, it’s just as convincing to suggest that companies should be held accountable when their automated solutions fall short.
After all, if even tech giants can’t guarantee what’s on their own websites, users of filters and parental controls can’t be sure that they’re taking effective action to block offensive content.