Programmer Humor
[–] scrubbles@poptalk.scrubbles.tech 6 points 6 months ago (4 children)

The fun thing about AI that companies are starting to realize is that there's no way to "program" AI, and I just love that. The only way to guide it is by retraining models (and LLMs will always have stuff you don't like in them), or by using more AI to say "Was that response okay?", which is imperfect.
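A minimal sketch of that "more AI judging the AI" loop, with `generate` and `judge` as hypothetical stand-ins for calls to two hosted models (no real API is assumed):

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for the primary model's completion call.
    return f"(model response to: {prompt})"

def judge(prompt: str, response: str) -> bool:
    # Hypothetical stand-in for a second model asked "Was that response okay?".
    # In practice this verdict is noisy and imperfect.
    return "forbidden" not in response.lower()

def moderated_reply(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        response = generate(prompt)
        if judge(prompt, response):
            return response
    # The guardrail itself can fail, so there must always be a fallback path.
    return "Sorry, I can't help with that."

print(moderated_reply("tell me a joke about compilers"))
```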

And I am just loving the fallout.

[–] zalgotext@sh.itjust.works 2 points 6 months ago

The best part is they don't understand the cost of that retraining. The non-engineer marketing types in my field suggest AI as a potential solution to any technical problem they possibly can. One of the product owners who's more technically inclined finally had enough during a recent meeting and straight up told those guys, "AI is the least efficient way to solve any technical problem, and should only be considered if everything else has failed." I wanted to shake his hand right then and there.

[–] bbuez@lemmy.world 1 points 6 months ago (1 children)

The fallout of image generation will be even more incredible imo. Even if models become more capable, post-'21 training data will be increasingly polluted with generated content that gets harder to distinguish as models improve their output, which inevitably leads to model collapse. At least until we have a standardized way of flagging generated images as opposed to real ones, but I don't really like that future.
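As a toy illustration of that feedback loop: repeatedly refitting a "model" to its own samples degrades the distribution. This is a deliberately simplified stand-in for model collapse, not a claim about any real training pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
n = 20                # small samples per generation exaggerate the effect

for gen in range(1, 101):
    samples = rng.normal(mu, sigma, n)         # "train" on data from the current model
    mu, sigma = samples.mean(), samples.std()  # refit the model to its own output
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
# sigma shrinks toward 0: each generation loses a little of the
# original distribution's spread, and the errors compound.
```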

Just on a tangent, OpenAI claiming video models will help "AGI" understand the world around it is laughable to me. 3blue1brown released a very informative video on how text transformers work, and in principle all "AI" is at the moment is very clever statistics and lots of matrix multiplication. How our minds process and retain information is far more complicated; we don't fully understand ourselves yet, and we are a grand leap away from ever emulating a true mind.
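For what it's worth, the "clever statistics and matrix multiplication" point can be shown in miniature: a single attention head is a few matmuls plus a softmax. Random numbers stand in for trained weights here:

```python
import numpy as np

rng = np.random.default_rng(42)
seq_len, d_model, d_head = 4, 8, 8

x = rng.normal(size=(seq_len, d_model))   # token embeddings
W_q = rng.normal(size=(d_model, d_head))  # "learned" projection matrices
W_k = rng.normal(size=(d_model, d_head))
W_v = rng.normal(size=(d_model, d_head))

Q, K, V = x @ W_q, x @ W_k, x @ W_v   # three matrix multiplications
scores = Q @ K.T / np.sqrt(d_head)    # a fourth
scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax: the "statistics"
output = weights @ V                  # and a fifth
print(output.shape)                   # (4, 8): one updated vector per token
```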

All that to say: I can't wait for people to realize, oh hey, this is just Silicon Valley trying to replace talent in film production.

[–] MalReynolds@slrpnk.net -1 points 6 months ago* (last edited 6 months ago)

I see this a lot, but do you really think the big players haven't backed up the pre-'22 datasets? Also, synthetic (LLM-generated) data is routinely used in fine-tuning to good effect; it's likely that architectures exist that can happily do primary training on synthetic data as well.

[–] joyjoy@lemm.ee 1 points 6 months ago

> using more AI to say “Was that response okay?”

This is what GPT-2 did. One day it bugged out and started outputting the lewdest responses you could ever imagine.

[–] xmunk@sh.itjust.works 1 points 6 months ago (2 children)

Using another AI to detect if an AI is misbehaving just sounds like the halting problem but with more steps.

[–] match@pawb.social 0 points 6 months ago (1 children)

Generative adversarial networks are really effective actually!
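A bare-bones sketch of that adversarial setup, assuming PyTorch and a toy 1-D Gaussian as the "real" data; it's meant to show the generator/discriminator loop, not a production recipe:

```python
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 2 + 3  # "real" distribution: N(3, 2)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real, noise = real_data(64), torch.randn(64, 1)
    fake = G(noise)

    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 1))
# With luck these drift toward the target's mean 3 and std 2.
print(f"generated mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```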

[–] Natanael@slrpnk.net 1 points 6 months ago

As long as you can correctly model the target behavior in a sufficiently complete way, and capture all necessary context in the inputs!

[–] marcos@lemmy.world 0 points 6 months ago

Lots of things in AI make no sense and really shouldn't work... except that they do.

Deep learning is one of those.