this post was submitted on 27 Sep 2024
782 points (98.4% liked)

Technology


Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
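The article describes "specially trained image-recognition models"; as a rough illustration of the general approach (and emphatically not the authors' code), an off-the-shelf object detector can be asked whether a given class appears in a challenge tile. The minimal sketch below assumes the `ultralytics` package and its COCO-pretrained `yolov8n.pt` weights; the file names and confidence threshold are illustrative.

```python
# Minimal sketch, not the paper's code: check whether a single CAPTCHA-style
# tile contains a requested object class using a pretrained detector.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small COCO-pretrained model (80 classes)

def tile_contains(tile_path: str, target: str, min_conf: float = 0.5) -> bool:
    """Return True if the detector finds `target` (e.g. 'bicycle' or
    'traffic light') in the tile with at least `min_conf` confidence."""
    result = model(tile_path, verbose=False)[0]
    for box in result.boxes:
        if model.names[int(box.cls)] == target and float(box.conf) >= min_conf:
            return True
    return False

# Hypothetical usage: print(tile_contains("tile_03.png", "traffic light"))
```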

ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's reCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low "human" confidence rating.
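For context on how that score-based fallback works: a site verifies the reCAPTCHA v3 token server-side against Google's documented siteverify endpoint, gets back a score between 0.0 and 1.0, and can fall back to the explicit v2 challenge below a threshold of its choosing. A minimal sketch of such a check (the 0.5 threshold and function name are illustrative choices, not Google's):

```python
# Sketch of a server-side reCAPTCHA v3 check. The siteverify endpoint and its
# `secret`/`response` parameters are Google's documented API; the threshold
# and surrounding names are illustrative.
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def needs_v2_fallback(secret_key: str, token: str, threshold: float = 0.5) -> bool:
    """Return True when the v3 score is too low and the site should show
    the explicit v2 image challenge instead."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": secret_key, "response": token},
        timeout=10,
    )
    data = resp.json()
    if not data.get("success", False):
        return True  # invalid or expired token: treat as low confidence
    return data.get("score", 0.0) < threshold
```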

[–] mosiacmango@lemm.ee 89 points 1 month ago* (last edited 1 month ago) (14 children)

This is actually a good sign for self-driving. Google was using this data as a training set for Waymo. If AI can accurately identify vehicles and traffic markings, it should be able to handle interactions with them more easily.

[–] iAmTheTot@sh.itjust.works 70 points 1 month ago (6 children)

As I understand it, the point of those captchas was never really "bots can't identify these things" (though you're right that the answers were used for training). They use cursor movement, clicks, and other behaviours while you're solving it to detect whether you are a bot or not.
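(As a toy illustration of what those behavioural signals can catch, and not reCAPTCHA's actual logic: a script that warps the cursor in a perfectly straight line at uniform speed is trivial to flag. The sketch below assumes cursor samples arrive as (x, y, timestamp_ms) tuples.)

```python
# Toy heuristic, not reCAPTCHA's real detector: flag cursor traces that are
# suspiciously straight and evenly timed, as naive scripted movement tends to be.
import statistics

def looks_scripted(points: list[tuple[float, float, float]]) -> bool:
    """`points` are assumed (x, y, timestamp_ms) cursor samples."""
    if len(points) < 3:
        return True  # essentially no movement before the click
    # Humans produce irregular gaps between samples.
    gaps = [b[2] - a[2] for a, b in zip(points, points[1:])]
    uniform_timing = statistics.pstdev(gaps) < 1.0
    # Compare travelled path length with the straight-line distance.
    path = sum(((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
               for a, b in zip(points, points[1:]))
    direct = ((points[-1][0] - points[0][0]) ** 2
              + (points[-1][1] - points[0][1]) ** 2) ** 0.5
    perfectly_straight = direct > 0 and path / direct < 1.01
    return uniform_timing and perfectly_straight
```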

[–] Takumidesh@lemmy.world 11 points 1 month ago* (last edited 1 month ago) (1 children)

It's a combination.

The goal of most captchas generally isn't 100% prevention; it's to put a workload in front of each request so that spamming the site costs money. A well-bankrolled attempt could just as easily outsource the captchas to real humans.

[–] Anivia 1 points 1 month ago

> a bankrolled attempt could just as easily outsource the captchas to real humans.

Exactly. I've been using 2captcha for that for over a decade now.
