this post was submitted on 27 Dec 2024
365 points (95.1% liked)

Technology

[–] finitebanjo@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (2 children)

First of all, I'm about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually under keywords like AI "emergent behavior" and "overfitting". More specifically, about how emergent behavior doesn't really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

Anyways, humans don't assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical-model input.
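A toy bigram count model makes that contrast concrete. This is nowhere near a real LLM — it's a minimal sketch with a made-up corpus — but it shows the core move being described: each word becomes nothing but a count and a probability.

```python
from collections import Counter

# Toy illustration (not a real LLM): a bigram model reduces each word
# to counts and probabilities - exactly the kind of numerical assignment
# described above. The corpus here is invented for the example.
corpus = "the pie was good the cake was good the pie was sweet".split()

# Count how often each word follows "was"
following = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "was")
total = sum(following.values())
probs = {word: count / total for word, count in following.items()}

print(probs)  # "good" gets 2/3, "sweet" gets 1/3
```

A transformer does the same basic thing with billions of learned weights instead of raw counts, but what comes out is still just a probability per token.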

Humans suck at math.

Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize the data altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire bodies of documentation or describe company policies that don't exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn't know any better. Just doesn't know, period.

Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI's statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They're probably underestimating the costs by orders of magnitude.)

[–] naught101@lemmy.world 1 points 15 hours ago (1 children)

emergent behavior doesn’t really exist in certain model archetypes

Hey, would you have a reference for this? I'd love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

[–] finitebanjo@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago)

There is this 2023 study from Stanford which argues that AI likely does not have emergent abilities: LINK

And there is this 2020 study by... OpenAI... which states that the error rate is predictable based on three factors (model size, dataset size, and training compute), and that AI cannot cross below that line or approach a 0% error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they're predictable to a fault: LINK

There is another paper by DeepMind in 2022 that comes to the conclusion that even at infinite scale, the models can never get below 1.69 irreducible error: LINK
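That 2022 DeepMind result (the "Chinchilla" paper, Hoffmann et al.) fits training loss to a curve of the form L(N, D) = E + A/N^α + B/D^β. A minimal sketch using the approximate fitted constants reported in that paper — treat them as illustrative, not exact — shows the 1.69 floor directly:

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022). The constants are
# the approximate fitted values from that paper; treat them as illustrative.
E, A, B = 1.69, 406.4, 410.7      # E = irreducible error (nats)
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Even at absurd scales, the loss only creeps toward the 1.69 floor:
for n in (1e9, 1e12, 1e15):
    print(f"N = D = {n:.0e}: loss = {loss(n, n):.4f}")
```

Each extra order of magnitude of parameters and data buys a smaller and smaller reduction, and no finite scale reaches E.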

This all lends to the idea that AI lacks the same emergent behavior found in human language.

[–] 11111one11111@lemmy.world 4 points 1 day ago (1 children)

So that doesn't really address the concept I'm questioning. You're leaning hard into the fact that the computer is using numbers in place of words, but I'm saying: why is that any different from assigning your native language to a book written in a foreign language? The vernacular, language, formula, or code being used to formulate a thought shouldn't determine whether something was a legitimate thought.

I think the gap between our reasoning is a perfect example of what I mean about FUTURE models (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to arrive at a right or wrong solution.

When I was learning to play the trumpet in middle school, and later the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin' loosely, I am very bad lol, but the reason was that I do not have an ear for music: I can't listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read words, except instead of words I read the notes as numbers. Especially when I got older and learned the guitar. I knew how to read music at that point, but to this day I can't learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

I know I'm making huge leaps here, and I'm not really trying to prove any point. I just feel strongly that at our most basic core, a human's understanding of their existence is derived from "I think, therefore I am," which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated into operating on an infinite string of commands spelled out by 1's and 0's.

Im sorry, my brother prolly got me way too much weed for Xmas.

[–] finitebanjo@lemmy.world -1 points 1 day ago* (last edited 1 day ago) (1 children)

98% and 98% are identical terms, but the machine can use them to describe two separate words' accuracy.

It doesn't have languages. It's not emulating concepts. It's emulating statistical averages.

"pie" to us is a delicious dessert with a variety of possible fillings.

"pie" to an LLM is 32%. "cake" is also 32%. An LLM might say cake when it should be pie, because it doesn't know what either of those things is, aside from their placement next to terms like flour, sugar, and butter.
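A sketch of that failure mode — the logits below are invented for illustration, but the mechanism is real: when two tokens end up with near-identical scores, the model has no grounds to prefer the true answer, only co-occurrence statistics.

```python
import math

# Hypothetical next-token scores after "For dessert we baked a ..."
# (made-up numbers; real models have vocabularies of ~100k tokens)
logits = {"pie": 2.1, "cake": 2.1, "flour": 0.3, "sugar": 0.2}

# Softmax turns scores into the probabilities the model actually samples from
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.0%}")
# "pie" and "cake" come out with identical probability, so sampling
# will sometimes emit the wrong one - and nothing in the model "knows"
# which one was right.
```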

[–] 11111one11111@lemmy.world 1 points 1 day ago (1 children)

So by your logic, a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren't generating thoughts comes from actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don't even have to go that far into the abnormal portion of it, either.

I've never sat in on my buddy's daughter's "classes," but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started with this special-ed preschool program she's in, but at 4 years old she still cannot speak and she is still in diapers. I'm not saying this to say she's really bad or far along the spectrum; I'm using this example because it's exactly what you are outlining. She isn't a dumb kid by any means, and she's 100x more athletic and coordinated than any other kid I've seen her age. What he was told, and once he told me I noticed it immediately, was that autistic babies don't have the ability to mimic what other humans around them are doing. I'm talking not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, cuz it was the first test they told him they would do to try to diagnose why his daughter wasn't developing like other kids.

Fuck, my bad, I went full-tailspin tangent there, but what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators, and philosophers hasn't done all that much to understand our own cognizant existence past "I think, therefore I am"?

People like my buddy's daughter could go their entire life as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why does the argument always point toward the need for language to prove thought and existence?

Like I said in my other comment, I'm not trying to prove or argue any specific point; this shit is just wildly interesting to me. I worked for years in a low-income nursing home that catered to residents who were considered burdens of the state after NY closed the doors on psychiatric institutions everywhere, which pushed anyone under 45 y/o to the streets and anyone over 45 into nursing homes. So there were so many, excuse the crass term but it's what they were, brain-dead former drug addicts or brain-dead Alzheimer's residents, all of whom spent the last decades of their lives mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence cuz every 12 days they'd reach a hand up and scratch their nose?

[–] finitebanjo@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

IDK what you dudes aren't understanding, tbh. To the LLM, every word is a fungible statistic. To the human, every word is unique. It's not a child; its hardware and programming are worlds apart.