this post was submitted on 03 Nov 2024
1254 points (99.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] njm1314@lemmy.world 2 points 2 weeks ago (1 children)

Regardless of what they call it, they're the ones presenting it. I'm not arguing they can't be tricked. I'm arguing they are fundamentally different concepts. One is offering you a choice of sources, the other is making a claim. That's a pretty big distinction in a whole mess of different ways, not the least of which is legal.

[–] DarkThoughts@fedia.io 0 points 2 weeks ago

I'm sorry, but no. It's not Google making that claim; it's just the LLM replying in a confident way, because that's how these models are expected to sound. As I said, word prediction. You can install the tiniest, dumbest model on your local PC and ask the same question. It will give you some random hallucinated number and act like that's exactly what you were looking for, because its default system prompt tells it to sound like an AI assistant.

In the case of search engines, the LLM is hooked directly into the search engine itself and just does the same thing you'd do: it searches for a hopefully fitting result. So scammers gaming those search algorithms to get a good spot end up becoming the recommendation the LLM passes on to the user. It's the same thing, just displayed slightly differently. All the cool AI assistant stuff they try to present this as is just an illusion, a word-based roleplay. The only benefit is that these models can somewhat understand abstract questions, which is helpful for certain search queries, but in the end it is always the user's responsibility to check the actual search results.
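Not from the original comment, but a minimal sketch in Python of the flow being described: the "assistant" just runs an ordinary search, stuffs the top results into a prompt, and asks a local model to answer confidently. The `search()` stub, the model name, and the local Ollama-style endpoint are all assumptions for illustration; whatever an SEO'd scam page manages to place in those top results is what the model will confidently repeat back.

```python
import requests

def search(query):
    # Stand-in for the search engine call; assumed to return ranked results.
    # Whatever ranks highly here (including gamed scam pages) is all the model sees.
    return [
        {"title": "Example result",
         "snippet": "Call our totally legit support line: 1-800-EXAMPLE",
         "url": "https://example.com"},
    ]

def answer_with_llm(query):
    # Same search a user would run, just done on their behalf.
    results = search(query)
    context = "\n".join(f"- {r['title']}: {r['snippet']} ({r['url']})" for r in results)

    # The system-style instruction tells the model to sound like a confident assistant;
    # it then predicts plausible-sounding words over whatever context it was handed.
    payload = {
        "model": "llama3",  # assumed local model name
        "prompt": (
            "You are a helpful AI search assistant. Answer confidently.\n"
            f"Search results:\n{context}\n\nQuestion: {query}\nAnswer:"
        ),
        "stream": False,
    }
    # Assumes an Ollama-style local endpoint; any chat-completion API works the same way.
    resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=60)
    return resp.json()["response"]

print(answer_with_llm("what is the customer support number for ExampleCorp?"))
```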