[–] CeeBee_Eh@lemmy.world 5 points 2 months ago (2 children)

The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a one-time event; from then on it's locked into that state, with no additional learning.

A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains, as well as a feedback loop within the mind itself. This creates a system unlike anything computers have yet replicated. It makes our brains a dynamic engine, as opposed to the static, locked state of an LLM.
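To make the static-vs-dynamic distinction concrete, here's a minimal sketch in PyTorch. The tiny linear model is a hypothetical stand-in, not any real LLM: a deployed model answers requests with frozen weights, whereas a continual learner would update its parameters on every new observation.

```python
# Minimal sketch (PyTorch, toy stand-in model) contrasting a frozen, deployed
# model with a continual-learning loop. Illustrative only, not a real LLM's API.
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for an already-trained network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Deployment today: weights are frozen, every request leaves the model unchanged.
model.eval()
with torch.no_grad():
    for _ in range(3):
        _ = model(torch.randn(1, 8))  # inference only, no learning

# Continual learning (what brains do, and what deployed LLMs don't):
# each new observation immediately updates the parameters.
model.train()
for _ in range(3):
    x, target = torch.randn(1, 8), torch.randn(1, 8)
    loss = nn.functional.mse_loss(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # the "model of the world" changes in place
```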

[–] Rolando@lemmy.world 13 points 2 months ago (2 children)

Contemporary LLMs are static, but nothing in the definition of an LLM requires them to be.

[–] wizardbeard@lemmy.dbzer0.com 1 points 2 months ago (1 children)

Could you point me towards one that isn't? Or is this still theoretical?

I'm really trying not to be rude, but there's a massive amount of BS being spread around based on what is potentially, theoretically possible with these things. AI is in a massive bubble right now, with life-changing amounts of money on the line. A lot of people have a strong vested interest in everyone believing that the theoretical possibilities are just a few months or years away from reality.

I've read enough Popular Science magazine, and heard enough "this is the year of the Linux desktop", to take claims about where technological advances are absolutely headed with a grain of salt.

[–] match@pawb.social 7 points 2 months ago

Remember that Microsoft chatbot that 4chan turned into a Nazi over the course of a week? That was a self-updating language model built on 2010s technology (versus the small-country-sized energy drain of ChatGPT-4).

[–] CeeBee_Eh@lemmy.world 0 points 2 months ago

But they are. There's no feedback loop or continuous training happening. Once an instance or conversation is done, all that context is gone; it's never integrated directly into the model as it happens. That, by contrast, is more or less how our brains work: every stimulus, every thought, every sensation, every idea is added to the brain's model as it happens.
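As a rough illustration of that point, here's a sketch of how conversational "memory" usually works with a static model. The `generate` function is a hypothetical stub, not a real chat API: the whole history is replayed as input every turn, it vanishes when the conversation ends, and the weights never change.

```python
# Sketch of chat "memory" with a static model: the conversation is re-sent as
# context on every turn and nothing is ever written back into the weights.
def generate(prompt: str) -> str:
    # Stand-in for a frozen model: same parameters on every call.
    return f"<reply based on {len(prompt)} chars of context>"

history: list[str] = []

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = generate("\n".join(history))  # context lives only in this prompt
    history.append(f"Assistant: {reply}")
    return reply

chat_turn("Hello")
chat_turn("What did I just say?")
history.clear()  # conversation over: the "memory" is gone, the model never changed
```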

[–] merari42@lemmy.world 3 points 2 months ago

This is actually why I find a lot of the arguments about AI's limitations as stochastic parrots shortsighted. Language, picture, or video models are indeed good at memorizing reasonable features from their respective domains and building a simplistic (and often inaccurate) world model in which some features of the world are generalized. They don't reason per se, but they have really good ways of looking up what typical reasoning would look like.

To get actual reasoning, you need to do what many AI labs are currently working on and add a neuro-symbolic spin to model outputs. In these approaches, a model generates ideas for what to do next, and the solution space is searched with more traditional methods. This introduces a dynamic element that's more akin to human problem-solving, where the system can adapt and learn within the context of a specific task, even if it doesn't permanently update the knowledge base of the idea-generating model.
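Here's a minimal sketch of that generate-then-search pattern on a toy arithmetic task. The `propose_steps` function stands in for a learned model, and the names and task are purely illustrative assumptions, not how any particular lab implements it: the proposer suggests candidate moves, and a classical search plus an exact verifier does the actual reasoning.

```python
# Toy generate-then-search: a "proposer" (stand-in for an LLM) suggests
# candidate next steps, and a breadth-first search with an exact, symbolic
# verifier turns those suggestions into a checked derivation.
from collections import deque

def propose_steps(state: int) -> list[tuple[str, int]]:
    """Stand-in for a learned model: rank plausible next moves for this state."""
    return [("+3", state + 3), ("*2", state * 2), ("-1", state - 1)]

def verify(state: int, target: int) -> bool:
    """Symbolic check: exact, no approximation."""
    return state == target

def search(start: int, target: int, max_depth: int = 6):
    queue = deque([(start, [])])  # breadth-first over proposed steps
    while queue:
        state, path = queue.popleft()
        if verify(state, target):
            return path  # a verified derivation, not a guess
        if len(path) < max_depth:
            for move, nxt in propose_steps(state):
                queue.append((nxt, path + [move]))
    return None

print(search(2, 11))  # finds ['+3', '+3', '+3'] (2 -> 5 -> 8 -> 11)
```

The split of labor is the point: the learned component only narrows the branching factor, while correctness comes from the search and the verifier.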

A notable example is AlphaGeometry, a system that solves complex geometry problems without human demonstrations (where training data would otherwise be insufficient) by combining an LLM with structured search. Similar approaches are also used for coding, and for recent strong improvements in reasoning on examples from the ARC challenge.