[–] psvrh@lemmy.ca 0 points 7 months ago (1 children)

This gets into a tricky area of "what is consciousness, anyway?". Our own consciousness is really just a gestalt rationalization engine that runs on a squishy neural net, which could be argued to be "faking it" so well that we think we're conscious.

[–] Omega_Haxors@lemmy.ml 0 points 7 months ago* (last edited 7 months ago) (1 children)

Oh no, we are NOT doing this shit again. It's literally autocomplete brought to its logical conclusion; don't bring your stupid sophistry into this.

[–] UraniumBlazer@lemm.ee 0 points 7 months ago (1 children)

Your brain is just a biological system that works somewhat like a neural net. So, according to your statement, you too are nothing more than an autocomplete machine.

[–] Omega_Haxors@lemmy.ml 0 points 7 months ago* (last edited 7 months ago) (1 children)

I'm starting to wonder if any of you even know how that shit works internally, or if you just take what the hype media says at face value. It literally has one purpose and one purpose alone: determine what the next word is going to be by calculating the probability of each candidate word following the current context. That's it. All it does is try to string together a convincing sentence using those probabilities. It does not and cannot understand context.
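
To make that mechanism concrete, here is a minimal toy sketch of next-token sampling. It is not how any real LLM is implemented; the vocabulary, the `toy_logits` scores, and the context are made up purely to illustrate "score the candidates, turn the scores into probabilities, pick a word".

```python
import math
import random

# Toy "model output": raw scores for a few candidate next words,
# given some context like "the cat sat on the". Entirely made up.
toy_logits = {
    "mat": 4.2,
    "dog": 1.1,
    "moon": 0.3,
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def sample_next_word(scores):
    """Pick the next word at random, weighted by its probability."""
    probs = softmax(scores)
    r = random.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return word
    return word  # fallback for floating-point rounding

print(softmax(toy_logits))           # e.g. {'mat': 0.93, 'dog': 0.04, 'moon': 0.02}
print(sample_next_word(toy_logits))  # most often 'mat'
```

A real LLM does the scoring step with a transformer over a vocabulary of tens of thousands of tokens, but the loop is the same: score, normalize, pick, append, repeat.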

The underlying tech is really cool, but a lot of people are grotesquely overselling its capabilities. That's not to say a neural network can't eventually obtain consciousness (because ultimately our brains are a union of a bunch of little neural networks working together for a common goal), but it sure as hell isn't going to be an LLM. That's what I meant by sophistry: they're not engaging with the facts, just some nebulous ideal.

[–] alphafalcon@feddit.de 0 points 7 months ago

I'm with you on LLMs being overhyped, although that's already dying down a bit. But regarding your claim that LLMs cannot "understand context", I recently read an article showing that LLMs can have an internal world model:

https://thegradient.pub/othello/

Depending on your definition of "understanding", that seems to be an indicator of being more than a pure "stochastic parrot".
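
The article's evidence comes from probing classifiers: small networks trained to read the board state of Othello out of the model's hidden activations. Here is a rough sketch of that idea; the hidden size, probe architecture, and data below are placeholders standing in for the paper's actual setup, not a reproduction of it.

```python
import torch
import torch.nn as nn

# Sketch of a probing classifier: train a small network to predict a latent
# state (e.g. the contents of one Othello board square) from the language
# model's hidden activations. Real activations would come from the trained
# transformer; here they are random placeholders just to show the mechanics.

hidden_dim = 512   # assumed size of the model's hidden state
num_states = 3     # e.g. square is empty / mine / opponent's

probe = nn.Sequential(           # a small nonlinear probe, as in the article
    nn.Linear(hidden_dim, 128),
    nn.ReLU(),
    nn.Linear(128, num_states),
)

optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder data standing in for (activation, board-square label) pairs.
activations = torch.randn(1024, hidden_dim)
labels = torch.randint(0, num_states, (1024,))

for epoch in range(10):
    optimizer.zero_grad()
    logits = probe(activations)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# If a probe like this reaches high accuracy on held-out positions, the
# activations must encode the board state, i.e. the model has built some
# internal representation of the "world" it is predicting tokens about.
print(f"final training loss: {loss.item():.3f}")
```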