this post was submitted on 03 Nov 2024
Fuck AI
I'm sorry but this has been a thing since long before "AI"-based results. Scammers have always used tricks to end up at the top of search results.
Scammers have been a thing long before writing. That doesn't mean people shouldn't be made aware of new ways to be scammed.
That's what I'm saying though, it isn't a new way.
One could argue there is a new aspect. When it comes to retraining the public on what to trust, there's a likely blind spot: a person may know to only call a number listed on a trusted website, so they'll check that they're on the bank's domain before picking up the phone. But if Google, being a big name, presents the number in an official-looking way at the top of its pages, it may pass the sniff test and get people into trouble.
Featured snippets would prominently display source URLs. But AI summaries? More opaque.
Right. Scammers were in the listings before, and AI has helped them appear more trustworthy.
That's meaningless with how easy it is to register legit-looking faux domains, and it's even easier to create legit-looking subdomains. People who fall for those types of scams will likely not even understand what a domain is.
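To illustrate the subdomain trick (a hypothetical sketch; `mybank.com` and the scam URL are made up): a casual check that just looks for the bank's name somewhere in the address is fooled, while matching the actual registered host is not:

```python
from urllib.parse import urlparse

def naive_check(url):
    # Roughly what a non-technical user does: "does it say mybank somewhere?"
    return "mybank" in url

def strict_check(url):
    # Stricter: the host itself must be mybank.com or a subdomain of it.
    host = urlparse(url).hostname or ""
    return host == "mybank.com" or host.endswith(".mybank.com")

# A scammer's subdomain passes the naive check but fails the strict one:
# the actual registered domain here is "secure-login.example".
scam = "https://mybank.com.secure-login.example/verify"
print(naive_check(scam))   # True  -> looks legit at a casual glance
print(strict_check(scam))  # False -> not actually the bank's domain
```

Which is exactly why "check the domain" only protects people who know what a domain actually is.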
At the top, but that isn't what this post is saying. This is saying that Google's AI gave the scammer answer. Not that they provided a link you could click on, but that Google itself said this is the number.
It's not an AI, it's just word prediction, which also just follows stupid algorithms, just like the ones that determine search results. Both can be tricked / manipulated if you understand how they work. It's the same principle in both cases.
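For what it's worth, "word prediction" can be sketched in a few lines (a toy bigram model on made-up text, nothing like a real LLM in scale, but the loop is the same idea: predict the most likely next word, append it, repeat):

```python
from collections import Counter, defaultdict

# Toy "word prediction": count which word tends to follow which,
# then greedily emit the most common successor. Made-up corpus.
corpus = "call the number on the official site the number is listed".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

word, out = "the", ["the"]
for _ in range(4):
    if word not in follows:
        break  # no known successor, stop generating
    word = follows[word].most_common(1)[0][0]
    out.append(word)

# The model "confidently" continues the sentence whether or not the
# continuation is true; it only reflects what was in its training text.
print(" ".join(out))
```

Scale that up a few billion parameters and you get the confident-sounding answers in question: statistics about text, not knowledge about phone numbers.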
Regardless of what they call it, they're the ones presenting it. I'm not arguing they can't be tricked. I'm arguing they are fundamentally different concepts. One is offering you a choice of sources, the other is making a claim. That's a pretty big distinction in a whole mess of different ways. Not the least of which is legal.
I'm sorry but no. It's not Google making that claim; it's just the LLM replying in a confident way, because that's how they are expected to work. As I said, word prediction. You can install the tiniest, dumbest model on your local PC and ask the same question. It will give you some random hallucinated number and act like that's exactly what you're looking for, because its default system prompt tells it to sound like an AI assistant.

In the case of search engines, the LLM is hooked directly into the search engine itself and just does the same thing you'd do: search for a hopefully fitting result. So scammers gaming those search algorithms for a good spot end up becoming the recommendation the LLM passes on to the user. It's the same thing, just displayed slightly differently.

All the cool AI-assistant stuff they try to present this as is just an illusion, a word-based roleplay. The only benefit is that these models can somewhat understand abstract questions, which is helpful for certain search queries, but in the end it is always the user's responsibility to check the actual search result.
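A toy sketch of that wiring (made-up data and names; real systems are fancier, but the flow is the same): the "AI answer" just restates the top-ranked snippet, so a scammer page that games the ranking becomes the confident answer, with the source stripped off:

```python
# Made-up index: rank 1 is a scammer page that gamed the ranking.
search_index = [
    (1, "Acme Air refunds: call 1-800-555-0100 now!", "scam-site.example"),
    (2, "Acme Air official support: 1-800-555-0199.", "acmeair.example"),
]

def search(query):
    # Classic search: ranked snippets WITH their sources visible.
    # (query is ignored in this toy; a real engine would match against it.)
    return [(snippet, source) for _, snippet, source in sorted(search_index)]

def ai_answer(query):
    # "AI" layer: restate the top snippet as a direct claim, source dropped.
    top_snippet, _source = search(query)[0]
    return f"According to the search results: {top_snippet}"

# The classic results at least show "scam-site.example" next to the number;
# the summarized answer presents the same number with borrowed authority.
print(ai_answer("acme air phone number"))
```

Same poisoned ranking underneath either way; the summary just hides the one cue (the source) a careful user could have checked.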