diz

joined 1 year ago
[–] diz@awful.systems 23 points 2 months ago (10 children)

Both parties are buying into a premise we already know to be incorrect.

We may know it is incorrect, but LLM salesmen are claiming things like "90th percentile on the LSAT", high scores on a "college level reasoning benchmark", and so on and so forth.

They are claiming "yeah yeah, there are all the anecdotal reports of glue pizza, but objectively, our AI is more capable than your workers, so you can replace them with our AI", and this is starting to actually impact the job market.

[–] diz@awful.systems 23 points 2 months ago

Another thing to add: there are just one or two people on the train providing service for hundreds of other people, or for millions of dollars' worth of goods. Automating those people away is simply not economical, not even in terms of the headcount replaced versus the headcount that has to be hired to maintain the automation software and hardware.

Unless you're a techbro who deeply resents labor, someone who would rather hire 10 software engineers than 1 train driver.

[–] diz@awful.systems 22 points 2 months ago* (last edited 2 months ago) (3 children)

Also, my thought on this is that since an LLM has no internal state with which to represent the state of the problem, it can't ever actually solve any variation of the river crossing. Not even those that it "solves" correctly.

If it outputs the correct sequence, the model of the problem inside your head will be in the solved state, but on the LLM's side there is just a sequence of steps it wrote down, with those steps weighing against the production of another "Trip" token until that crosses a threshold. There isn't an inventory or even a count of items; there's just an unrelated number that weights for or against "Trip".

If we are to anthropomorphize it (which we shouldn't, but anyway), it's bullshitting up an answer and it gradually gets a feeling that it has bullshitted enough, which can happen at the right moment, or not.
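For contrast, here is what actually maintaining a state of the problem looks like. This is my own toy sketch (names and structure are mine, not anything an LLM does internally): a breadth-first search over the classic wolf/goat/cabbage crossing, where every state explicitly records which bank each item is on, which is exactly the inventory the LLM doesn't have.

```python
from collections import deque

# Explicit problem state for the classic wolf/goat/cabbage crossing.
# A state is (items still on the starting bank, farmer on starting bank?).
ITEMS = {"wolf", "goat", "cabbage"}
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # unsafe pairs if left alone

def safe(bank):
    """A bank is safe if no unsafe pair is left there unattended."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    start = (frozenset(ITEMS), True)
    goal = (frozenset(), False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (bank, farmer), path = queue.popleft()
        if (bank, farmer) == goal:
            return path
        # Items on the farmer's current side -- he may carry one of them.
        here = bank if farmer else ITEMS - bank
        for cargo in [None] + sorted(here):
            new_bank = set(bank)
            if cargo:
                (new_bank.remove if farmer else new_bank.add)(cargo)
            # After crossing, the unattended side is the farmer's old side.
            unattended = new_bank if farmer else ITEMS - new_bank
            if not safe(unattended):
                continue
            state = (frozenset(new_bank), not farmer)
            if state not in seen:
                seen.add(state)
                move = (cargo or "nothing", "->" if farmer else "<-")
                queue.append((state, path + [move]))
    return None
```

Because the search tracks a real inventory, it can check every intermediate state for safety and is guaranteed to find the shortest solution (seven crossings, starting with the goat), rather than emitting a plausible-sounding sequence and hoping it stops at the right moment.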

[–] diz@awful.systems 1 point 2 months ago* (last edited 2 months ago)

I love the "criti-hype". AI peddlers absolutely love any concerns that imply that the AI is really good at something.

Safety concern that LLMs would go Skynet? Say no more, I hear you and I'll bring it up in Congress!

Safety concern that terrorists might use it to make bombs? Say no more! I agree that the AI is so great for making bombs! We'll restrict it to keep people safe!

Sexual roleplay? Yeah, good point, I love it. Our technology is better than sex itself! We'll restrict it to keep mankind from falling into the sin of robosexuality and going extinct! I mean, of course, you can't restrict something like that, but we'll try, at least until we release a hornybot.

But raise any concern about language modeling being fundamentally not the right tool for some job (do you want to cite a paper, or do you want to sample from the underlying probability distribution?), and it's: hey hey, how's about we talk about the Skynet thing instead?
