SlopppyEngineer

joined 1 year ago
[–] SlopppyEngineer@lemmy.world 78 points 1 week ago* (last edited 1 week ago) (3 children)

And then somebody made this and here we are.

From: https://lemmy.world/post/19351753

[–] SlopppyEngineer@lemmy.world 11 points 1 week ago (2 children)

Herman Cain Award flashbacks.

[–] SlopppyEngineer@lemmy.world 4 points 1 week ago

I want this with white chocolate

[–] SlopppyEngineer@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

If their nuclear bombers had gone airborne the second Ukrainian troops crossed the border, people would've taken them seriously. It would've shown how serious they were. But here we are, weeks after the invasion of Russian soil, and their strongman argument is changing a few words on paper. It's not very impressive or convincing.

[–] SlopppyEngineer@lemmy.world 70 points 1 week ago (1 children)

How that proposal looks in winter

[–] SlopppyEngineer@lemmy.world 6 points 1 week ago

And AFD won't "send migrants back" because that would remove their favorite boogeyman. Expect symbolic tinkering and not much more; otherwise they'd have to start all over again with another minority to blame. It won't improve these people's economic situation. EU exit and austerity are back on the menu with AFD.

[–] SlopppyEngineer@lemmy.world 1 points 1 week ago (2 children)

It does remind me of that recent Joe Scott video about the split brain. One half of the brain would do something, and the other half, which didn't get the information because of the split, would just make up a semi-plausible explanation. It's as if one part of the brain works, at least partially, like an LLM.

Our brain is more like a corporation, with a spokesperson, a president and vice president, and a number of departments that work semi-independently. Having an LLM is like having only the spokesperson without the rest of the workforce in the building that would make up an AGI.

[–] SlopppyEngineer@lemmy.world 8 points 1 week ago (3 children)

they have to provide an answer

Indeed. That's the G in ChatGPT: it stands for generative. It looks at all the previous words and "predicts" the most likely next word. You could see this very clearly with GPT-2. It just generated good-looking nonsense based on a few words.

Then you have the P in ChatGPT: pre-trained. If it happens to have received training data on what you're asking, that data is reflected in the answer. If it's not trained on that data, it just uses whatever is most likely to come next and generates something that looks good enough for the prompt. It appears to hallucinate, lie, make stuff up.
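A minimal sketch of that next-word loop, assuming the Hugging Face transformers and torch packages, the public gpt2 checkpoint, and a made-up prompt the model has no real answer for:

```python
# Minimal sketch of "predict the most likely next word", assuming the
# Hugging Face "transformers" and "torch" packages and the public
# "gpt2" checkpoint; the prompt is made up for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The capital of Mars is", return_tensors="pt")

# At every step the model scores its whole vocabulary and we append the
# single highest-scoring token. Nothing here checks facts; it only
# continues the text in the statistically most plausible way.
with torch.no_grad():
    for _ in range(12):
        logits = model(ids).logits        # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()  # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # fluent-looking continuation, true or not
```

Since Mars has no capital, whatever comes out of that loop is "good enough for the prompt" rather than knowledge.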

It's just how the thing works. There is serious research into fixing this, and a recent paper claimed to have a solution so the LLM knows when it doesn't know.
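The paper isn't named above, so this isn't its method; as one generic, naive flavor of "knowing that it doesn't know", you can read the entropy of the next-token distribution as a confidence signal, reusing the model and tokenizer from the sketch above:

```python
# Not the unnamed paper's method -- just a generic, naive uncertainty
# signal: the entropy of the model's next-token distribution. A flat,
# high-entropy distribution suggests no strongly trained-in answer.
from torch.distributions import Categorical

def next_token_entropy(model, tokenizer, prompt):
    ids = tokenizer.encode(prompt, return_tensors="pt")
    logits = model(ids).logits[0, -1]  # scores over the whole vocabulary
    return Categorical(logits=logits).entropy().item()

# Hypothetical usage: a prompt the training data covers heavily versus
# one the model can only bluff about; the bluff tends to score higher.
print(next_token_entropy(model, tokenizer, "The capital of France is"))
print(next_token_entropy(model, tokenizer, "The capital of Mars is"))
```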

[–] SlopppyEngineer@lemmy.world 14 points 1 week ago

The tech sector right now is just running on hype and jumping from one hype to the next. It's a race to keep investors throwing money at them by providing new targets, to keep those investors from realizing the stuff isn't that useful.

[–] SlopppyEngineer@lemmy.world 4 points 1 week ago

That's why they do regulatory capture, to prevent that from happening. It all starts with money being equal to influence. This can be temporarily reset after a big crash of the system, but sooner or later they start again.

[–] SlopppyEngineer@lemmy.world 35 points 1 week ago (1 children)

That sounds like classic game theory. Nobody's going to do it, because if a few don't, those few gain an advantage; the exception is when it's forced from above, changing the playing field for everyone.
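As a toy illustration with made-up numbers, that trap has the standard prisoner's-dilemma shape:

```python
# Toy payoff matrix (hypothetical numbers) for the trap described above.
# Each entry is (my_payoff, their_payoff); "act" = do the costly right
# thing, "defect" = don't.
payoffs = {
    ("act", "act"):       (3, 3),  # everyone acts: decent outcome for all
    ("act", "defect"):    (0, 5),  # I act alone: the defector profits
    ("defect", "act"):    (5, 0),
    ("defect", "defect"): (1, 1),  # nobody acts: the bad equilibrium
}

# Whatever the other side does, "defect" pays me more (5 > 3, 1 > 0),
# so unforced players all defect; a rule from above that changes these
# payoffs (fines, mandates) is what breaks the deadlock.
for theirs in ("act", "defect"):
    best = max(("act", "defect"), key=lambda mine: payoffs[(mine, theirs)][0])
    print(f"If they {theirs}, my best reply is: {best}")
```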

[–] SlopppyEngineer@lemmy.world 3 points 1 week ago

Eternal Sunshine of the Spotless Mind
