this post was submitted on 03 Nov 2024
1254 points (99.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 8 months ago
[–] skillissuer@discuss.tchncs.de 113 points 3 weeks ago (3 children)

it's seo games all over again

[–] Quill7513@slrpnk.net 41 points 3 weeks ago

and the former kings of tuning the algorithms to favor seo that bubbled useful info up harder have thrown it all out in the name of impressing everyone with how space age and sci fi their tech is. it's not about advancing science or even pushing a useful product. it's strictly a tool for scams. is it a surprise that scammers are gaming the google scam better than anyone else? not really. they've always had a step up compared to the average internet denizen thanks to practice. this is why i get so frustrated when people dismiss ai skepticism as being a product of luddites.

  1. you're getting scammed to think ai will benefit you
  2. systems built by scammers will always benefit scammers
  3. the luddites were right. scientific advancements should benefit the workers, not the rich
[–] jonne@infosec.pub 23 points 3 weeks ago (1 children)

I mean, this isn't specifically an AI issue, this is scammers updating the info in Google business listings because the airlines don't actually care to maintain those pages (and Google doesn't want actual humans doing any work to make sure their shit is accurate). This has been going on before AI, AI is just following the garbage in, garbage out model that everyone said was going to be the result of this push.

[–] orcrist@lemm.ee 14 points 3 weeks ago

Your historical information is accurate, but I disagree with your framing. This particular scam is so powerful because the information is organized, parsed, and delivered in a fashion that makes it look professional and makes it look believable.

Google and the other AI companies have put themselves in a bind. They know that their system is encouraging this type of scam, but they don't dare put giant disclaimers at the top of every AI generated paragraph, because they're trying to pretend that their s*** is good, except when it's not, and then it's not their fault. In other words, it's basic dishonesty.

[–] spankmonkey@lemmy.world 18 points 3 weeks ago (1 children)
[–] Tippon@lemmy.dbzer0.com 10 points 3 weeks ago (1 children)

🎶 Old McDonald had a server farm... 🎶

[–] neanderthal@lemmy.world 9 points 3 weeks ago (1 children)

And Gem'ni was his nam-i....g-e-m-n-i

(I know it is Gemini, but we need 2 syllables and 5 letters to fit the song parody, so I made a contraction)

[–] brbposting@sh.itjust.works 8 points 3 weeks ago

Artistic license: validated

[–] BananaTrifleViolin@lemmy.world 84 points 3 weeks ago (5 children)

This is why "AI" should be avoided at all cost. It's all bullshit. Any tool that "hallucinates" - i.e. is error-strewn - is not fit for purpose. Gaming the AI is just the latest example of the crap being spewed by these systems.

The underlying technology has its uses, but they're niche, focused applications, nowhere near as capable or as ready as the hype suggests.

We don't use Wikipedia as a primary source because it has to be fact checked. AI isn't anywhere near as accurate as Wikipedia, so why use it?

[–] brbposting@sh.itjust.works 18 points 3 weeks ago (2 children)

The underlying technology has its uses

Yes indeed agreed.

Sometimes BS is exactly what I need! Like, hallucinated brainstorm suggestions can work for some workflows and be safe when one is careful to discard or correct them. Copying a comment I made a week ago:

I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.

[–] notTheCat@lemmy.ml 13 points 3 weeks ago

Because some are lazy fucks

[–] perviouslyiner@lemmy.world 55 points 3 weeks ago* (last edited 2 weeks ago)

Wait until you hear about the AI's programming abilities!

It "knows" that a Python program starts with some lines like: from (meaningless package name) import *

If you can register the package name it invents, your code could be running on some of the world's biggest companies' internal servers!
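This attack (sometimes called "slopsquatting") is real enough that it's worth screening generated code before running it. A minimal sketch, assuming you keep your own allowlist of vetted packages (`KNOWN_PACKAGES` and the snippet contents here are invented for illustration):

```python
import ast

# Hypothetical allowlist: packages you have actually vetted and installed
KNOWN_PACKAGES = {"requests", "numpy", "pandas"}

def unvetted_imports(source: str) -> set[str]:
    """Return top-level imported package names not on the allowlist."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - KNOWN_PACKAGES

snippet = "from totally_real_utils import *\nimport requests"
print(unvetted_imports(snippet))  # {'totally_real_utils'}
```

Anything this flags should be looked up by hand before `pip install` ever runs; a hallucinated name that someone has since registered will install without error, which is exactly the trap.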

[–] stoy@lemmy.zip 45 points 3 weeks ago (1 children)

AI was launched with the promise of taking away the boring parts and letting us focus on the fun stuff.

In reality it takes away the fun stuff and gives us more boring things to do.

[–] Tartas1995@discuss.tchncs.de 15 points 3 weeks ago (2 children)

Being scammed isn't boring. It is blood boiling and (wrongly) shame-filled.

But yeah, you are right. The boring and the bad.

[–] Suavevillain@lemmy.world 30 points 2 weeks ago (1 children)

AI results are always so bad. I don't like that there are AI medical results. That needs more pushback.

[–] cynar@lemmy.world 13 points 2 weeks ago (1 children)

Ironically, that is possibly one of the few legit uses.

Doctors can't learn about every obscure condition and illness. This means they can miss the symptoms of them for a long time. An AI that can check for potential matches to the symptoms involved could be extremely useful.

The proviso is that it is NOT a replacement for a doctor. It's a supplement that they can be trained to make efficient use of.

[–] DaPorkchop_@lemmy.ml 6 points 2 weeks ago (2 children)

Couldn't that just as easily be solved with a database of illnesses which can be filtered by symptoms?
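For what it's worth, the database version really is only a few lines. A toy sketch with invented illness data, matching on symptom overlap:

```python
# Hypothetical illness database: name -> set of characteristic symptoms
ILLNESSES = {
    "influenza": {"fever", "cough", "fatigue"},
    "measles": {"fever", "rash", "cough"},
    "lyme disease": {"fatigue", "rash", "joint pain"},
}

def matches(observed: set[str], min_overlap: int = 2) -> list[str]:
    """Illnesses sharing at least min_overlap symptoms with the patient."""
    return sorted(
        name for name, symptoms in ILLNESSES.items()
        if len(symptoms & observed) >= min_overlap
    )

print(matches({"fever", "rash"}))  # ['measles']
```

The catch, as the reply below this notes, is that this only works if someone types the symptoms in using the database's exact vocabulary.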

[–] cynar@lemmy.world 7 points 2 weeks ago (1 children)

That requires the symptoms to be entered correctly, and significant effort from (already overworked) doctors. A fuzzy logic system that can process standard medical notes, as well as medical research papers would be far more useful.

Basically, a quick click, and the paperwork is scanned. If it's a match for the "bongo dancing virus" or something else obscure, it can flag it up. The doctor can now invest some effort into looking up "bongo dancing virus" to see if it's a viable match.

It could also do its own pattern matching. E.g. if a particular set of symptoms is often followed 18-24 hours later by a sudden cardiac arrest. The flag could be completely false. However, it could key doctors in on something more serious happening, before it gets critical.

An 80% false positive rate is still quite useful, so long as the 20% helps and the rest is easy for a human to filter.
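That claim checks out arithmetically: for a rare condition, a flag that is wrong most of the time can still multiply the probability enormously. A quick Bayes'-rule sketch (all rates invented for illustration):

```python
def posterior(base_rate: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(condition | flagged) via Bayes' rule."""
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flag

# Hypothetical: 1-in-1000 condition, 90% sensitivity, 0.4% false alarm rate.
# Roughly 80% of flags are wrong, yet the posterior is ~184x the base rate.
print(round(posterior(0.001, 0.9, 0.004), 3))  # 0.184
```

Going from a 0.1% prior to an 18% posterior is exactly the kind of narrowing that makes a noisy screen worth a doctor's follow-up look.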

[–] Duamerthrax@lemmy.world 4 points 2 weeks ago

In either case, a real doctor would be reviewing the results. Nobody is going to authorize surgeries or prescription meds from AI alone.

[–] Blaster_M@lemmy.world 30 points 3 weeks ago (1 children)

and that's why I always go straight to the company website to find that info instead of googling it

[–] Tartas1995@discuss.tchncs.de 10 points 3 weeks ago

I google for the company website (e.g. Wikipedia) and then I google with the site in mind for info. Works well

[–] LANIK2000@lemmy.world 26 points 2 weeks ago (1 children)

Yet again tech companies are here to ruin the day. LLMs are such a neat little language processing tool. It's amazing for reverse-looking-up definitions (where you know the concept but can't remember some dumb name), or when looking for starting points, or when you want to process your ideas and get additional things to look at, but most definitely not a finished product of any kind. Fuck tech companies for selling it as a search engine replacement!

[–] jj4211@lemmy.world 17 points 2 weeks ago

It is great at search. See this awesome example I hit just today from Google's AI overview:

Housing prices in the United States dropped significantly between 2007 and 2020 due to the housing bubble and the Great Recession:

2007: The median sales price for a home in the first quarter of 2007 was $257,400. The average price of a new home in September 2007 was $240,300.

2020: The average sales price for a new home in 2020 was $391,900.

See, without AI I would have thought housing prices went up between 2007 and 2020, and that $391,900 was a bigger number than $257,400.

[–] Etterra@lemmy.world 18 points 2 weeks ago (11 children)

That's why you always get it from their website. Never trust a LLM to do a search engine's job.

[–] njm1314@lemmy.world 18 points 2 weeks ago (4 children)

Would that make Google liable? I mean, that wouldn't be a case of users posting information; that would be a case of Google posting information, wouldn't it? So it seems to me they'd be legally liable at that point.

[–] TheRealLinga@sh.itjust.works 11 points 2 weeks ago

Ah, but Google is a giant company, and under U.S. law doesn't have to face consequences for anything

[–] FlyingSquid@lemmy.world 7 points 2 weeks ago

In a sane world? Yes.

[–] OsrsNeedsF2P@lemmy.ml 15 points 3 weeks ago

Honestly I wanted to write a smug comment about "But but it even says AI can sometimes make mistakes!", but after clicking through multiple links and disclaimers I can't find Google actually admitting that

[–] GetOffMyLan@programming.dev 15 points 3 weeks ago

That is literally the worst use case for AIs. There's no way they should be letting it provide contact info like that.

Also they're stupid for dialing a random number.

[–] hark@lemmy.world 14 points 3 weeks ago

They gave up working search - algorithms that are easier to reason about and correct for - in favor of a messy neural network that is broken in so many ways and basically impossible to generally correct while retaining its core characteristics. A change with this many regressions should've never been pushed to production.

[–] FenrirIII@lemmy.world 11 points 3 weeks ago

Same happened to my wife. She gave them enough info that they threatened to call and cancel her flight unless she paid them. Never did cancel it

[–] answersplease77@lemmy.world 11 points 2 weeks ago

Google has been sponsoring scammers as first search results since its creation. Google has caused hundreds of millions of dollars in losses to people, and needs to be sued for it.

[–] JimmyBigSausage@lemm.ee 10 points 3 weeks ago
[–] orcrist@lemm.ee 10 points 3 weeks ago

I feel like Jason ought to have considered it. Spammers have been using this kind of tactic for decades. Of course they're going to change to whatever medium is popular.

[–] Noit@lemm.ee 9 points 2 weeks ago (1 children)

This has been an issue since long before LLMs. Before the AI summary box, scammers used targeted ads to place ahead of the actual company you were searching for.

[–] Nalivai@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago)

Yeah, but that was easy to spot, both by people and by Google. There were at least some guardrails - imperfect, not ideal, but they existed. With LLMs there are basically none

[–] LovableSidekick@lemmy.world 8 points 3 weeks ago* (last edited 3 weeks ago)

Yep, that's Imitation Intelligence for ya.

[–] dragonfucker@lemmy.nz 6 points 3 weeks ago

Drag is making sure to eat one rock per day and put glue in pizza just like the Google AI says!
