this post was submitted on 25 May 2024
0 points

Technology

Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

top 16 comments
[–] SeattleRain@lemmy.world 0 points 3 months ago (1 children)

Does anyone have a realistic idea of how this happened? I get that Google has been falling off for a while, but they're still a multi-billion-dollar company.

[–] trollbearpig@lemmy.world -1 points 3 months ago* (last edited 3 months ago) (1 children)

I'm probably late, but in this case it's the combination of two things.

  1. The usual capitalistic incentives ruined yet another company. There was a recent article about how Google pushed out the people who built and maintained search in favor of growth-focused MBA assholes. Like, they put the guy who ran Yahoo's search while it was crumbling in charge of Google search, to get him to increase the number of searches they serve, and ads obviously. People keep suggesting DDG, or Kagi, or some other commercial product, and for now we must, because Google is basically useless right now. But just give the other companies time to fall into the same trap hahaha.
  2. LLMs are not smart, not even close. They are just a parlor trick that has non-technical people fooled. There is a lot of evidence, but to me the most obvious is that they don't have anything resembling human short-term memory. The way they make them look like they are having a conversation is by providing the entire conversation up to that point, including their own previous responses lol, as input/context so the bot autocompletes the conversation (see the sketch below this list). It literally can't remember a single word of what you said on its own. But sureee, they are just like humans lol.
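
To make that concrete, here is a minimal sketch of the pattern in point 2, using the OpenAI Python client purely as an illustration (the model name is an assumption; any chat-completion API works the same way):

```python
# Minimal sketch: a "chat" is just the full transcript re-sent every turn.
# Illustrative only; assumes the OpenAI Python client and an example model
# name, but the pattern is the same for any chat-completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # the bot's only "memory" lives here, client-side

while True:
    user_msg = input("you> ")
    history.append({"role": "user", "content": user_msg})

    # The entire conversation so far, including the model's own previous
    # replies, goes back in as input. Drop this list and the model
    # "remembers" nothing at all.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```

The only state is the history list in the caller's code; clear it and the bot starts from zero.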

So what we have here is obvious: a company trying to grow like a cancer by any means necessary. And now they have a technology that lets them create enough smoke and mirrors to fool non-technical people. Sadly, as part of this they are also destroying the last places on the internet not fully controlled by corporations. Let's hope Lemmy survives, but it's just a matter of time before they flood this place too.

[–] afraid_of_zombies@lemmy.world -1 points 3 months ago (1 children)

including their own previous responses lol, as input/context so the bot autocompletes the conversation. It literally can’t remember a single word of what you said on its own.

ChatGPT has had memory across previous conversations for about a month now, and its context window is no longer fixed. Additionally, it has the ability to assign sentences to memory on its own. So if it "thinks" what you said is important, it saves it.

[–] trollbearpig@lemmy.world -1 points 3 months ago* (last edited 3 months ago) (1 children)

Can you point me to the paper/article/whatever where this is discussed, please? I'm actually interested in learning about it. Even if I don't like the way they are using the technology, I'm still a programmer at heart and would love to read about this.

To the point of the conversation: honestly man, that was just an example of the many problems I see with this. But you have to understand that people like you keep asking us for proof that LLMs are not smart. Come on man, you are the ones claiming you solved the hard problem of mind, on the first try no less hahaha. You are the ones with the burden of proof here, and you have provided nothing of the sort. Do better, people, or stop trying to confuse us with rhetoric.

[–] afraid_of_zombies@lemmy.world 0 points 3 months ago (1 children)

I mean, it's just in the release notes. Go to their website. I have used the memory feature myself in the app, so I know it's working, and as for the context window, it can actually tell you what it is for each session.

But you have to understand that people like you keep asking us for proof that LLMs are not smart.

Where? Where have I asked that? Don't strawman me; I am not your punching bag and won't defend something I didn't say. You can "come on man" all you want, but it won't change my answer. I have made zero claims about whether this thing is smart, and I haven't asked anyone to weigh in on the issue either way.

I pointed out two features it has now, and I don't think anyone can dispute that it has them: a larger context window, and memory that it can update. That is all I said, a very small claim that you can verify for yourself in under five minutes by going to their website.

[–] trollbearpig@lemmy.world -1 points 3 months ago* (last edited 3 months ago)

Oh, you are talking about this: https://help.openai.com/en/articles/8590148-memory-faq hahahaha. I'm sorry man, but you are either a moron or arguing in bad faith. That's yet another feature where they inject even more shit into the context/input to make it feel like the thing has memory. That's literally yet another example of what I was pointing out, so thanks for confirming my suspicions. Seriously dude, do better if you really want to have a conversation. Your response made me waste my time, and on top of that you insult me hahaha.
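
For what it's worth, the pattern that FAQ describes looks roughly like this. A rough sketch with hypothetical helper names and data, since OpenAI has not published its actual implementation; the saved "memories" are just more text stuffed into the prompt:

```python
# Sketch of "memory" as saved notes injected into the context/input.
# Hypothetical names and data; not OpenAI's actual implementation.
saved_memories = [
    "User's name is Alex.",
    "User prefers metric units.",
]

def build_messages(history, user_msg):
    # The "memories" are prepended as ordinary prompt text; the model
    # itself still retains nothing between API calls.
    system = "Known facts about the user:\n" + "\n".join(saved_memories)
    return (
        [{"role": "system", "content": system}]
        + history
        + [{"role": "user", "content": user_msg}]
    )
```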

[–] Nualkris@lemm.ee 0 points 3 months ago (1 children)

Why does search need AI? I've had no problem finding any information I wanted under the former process.

[–] otacon239@feddit.de 0 points 3 months ago

Because [buzzword here]!

[–] HawlSera@lemm.ee 0 points 3 months ago (1 children)

It's just a fucking Chinese Room

[–] FaceDeer@fedia.io -1 points 3 months ago

And humans aren't?

[–] adam_y@lemmy.world 0 points 3 months ago (2 children)

Can we swap out the word "hallucinations" for the word "bullshit"?

I think all AI/LLM stuff should be prefaced as "someone down the pub said..."

So, "someone down the pub said you can eat rocks" or, "someone down the pub said you should put glue on your pizza".

Hallucinations are cool; shit like this is worthless.

[–] kbin_space_program@kbin.run 0 points 3 months ago (1 children)

Google search isn't hallucinating here, though.

It instead proves that LLMs just reproduce what's in the data they are supplied with. For example, the "glue on pizza" answer comes from a comment a Reddit user called FuckSmith made roughly 11 years ago.

[–] billiam0202@lemmy.world 0 points 3 months ago (1 children)

Without knowing which specific comment it was, I'm going to guess it was about how advertisers make pizza look better in ads than in real life?

[–] otacon239@feddit.de 0 points 3 months ago* (last edited 3 months ago)

Nope. They were just trolling and fucking around. It was obvious sarcasm:

https://www.reddit.com/r/Pizza/comments/1a19s0/comment/c8t7bbp/

[–] Eheran@lemmy.world 0 points 3 months ago (1 children)

No, hallucination is a really good term. The model can be supremely confident and seemingly correct, yet still completely making things up.

[–] richieadler@lemmy.myserv.one -1 points 3 months ago* (last edited 3 months ago)

It's a really bad term because it's usually associated with a mind, and LLMs are nothing of the sort.