
For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a much harder problem for platforms, and new victims continued to be harmed in the meantime. Now, AI may be ready to change that.
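For context, hash matching works by comparing a fingerprint of each uploaded file against a database of fingerprints from already-verified material, so it can only catch content that has been seen and catalogued before. Here is a minimal sketch of that lookup pattern; the hash set and function names are hypothetical placeholders, not any platform's actual implementation, and real systems use perceptual hashes (e.g., PhotoDNA-style fingerprints) rather than exact cryptographic hashes so that resized or lightly edited copies still match:

```python
import hashlib

# Hypothetical set of digests for previously verified files. A real system
# would use perceptual hashes so near-duplicates still match; an exact
# SHA-256 is shown here only to illustrate the lookup pattern.
KNOWN_HASHES: set[str] = {
    "0" * 64,  # placeholder digest, not a real entry
}

def is_known_material(file_bytes: bytes) -> bool:
    """Return True if this exact file matches a previously verified entry."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

This is also why hash matching cannot flag new or previously unreported material: there is nothing in the database yet to match against.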

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the earliest use of AI aimed at exposing unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the new "Predict" feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."
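To make that description concrete, here is a rough sketch of how a classifier that emits a risk score might be wired into an upload pipeline, with a human reviewer making the final call. The function names, thresholds, and routing logic below are invented for illustration and are not Safer's actual API:

```python
# Hypothetical upload pipeline built around an ML classifier that returns a
# risk score. predict_risk() and send_to_review_queue() are stand-ins for a
# real classifier (such as Safer's Predict models) and a platform's own
# moderation tooling; the thresholds are purely illustrative.

REVIEW_THRESHOLD = 0.70  # above this, route the upload to a human reviewer
HOLD_THRESHOLD = 0.95    # above this, hold publication until a human decides

def predict_risk(file_bytes: bytes) -> float:
    """Placeholder for a classifier returning a risk score in [0, 1]."""
    raise NotImplementedError("swap in the actual model call")

def send_to_review_queue(file_bytes: bytes, score: float, priority: str) -> None:
    """Placeholder: enqueue the item and its score for a human moderator."""
    ...

def handle_upload(file_bytes: bytes) -> str:
    """Score an upload and route it; a human always makes the final call."""
    score = predict_risk(file_bytes)
    if score >= HOLD_THRESHOLD:
        send_to_review_queue(file_bytes, score, priority="high")
        return "held"            # withheld pending human review
    if score >= REVIEW_THRESHOLD:
        send_to_review_queue(file_bytes, score, priority="normal")
        return "pending_review"  # handled per platform policy
    return "published"
```

The point of the risk score is triage: reviewers see the highest-scoring uploads first rather than wading through everything, which is what "making human decisions easier and faster" amounts to in practice.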

The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, relying on real CSAM data to detect patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight. It could also be used to investigate suspected CSAM rings proliferating online.

starman2112@sh.itjust.works 6 points 8 hours ago* (last edited 8 hours ago)

https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

Here's the article. Imagine losing access to everything that your physical driver's license can't help you get back. I would be in jail for one reason or another if Google fucked my life over that badly.

> As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.

> Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.

They could have just made this up wholesale. What is Mark gonna do about it? He literally doesn't have access to the video they claim incriminates him, and the police department has already cleared him of any wrongdoing. Google is just being malicious at this point.