[–] Mikina@programming.dev 14 points 1 week ago (3 children)

It's a fucking math function. Numbers go in, numbers go out. It's a glorified text suggestion.
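To make that concrete, here's a toy sketch of what "numbers go in, numbers go out" means (the vocabulary and logits are made up, this isn't any real model): the model maps a context of token IDs to a probability distribution over the next token, and text comes out by sampling from it.

```python
# Toy illustration of next-token sampling: numbers in, numbers out.
# The "model" here is a fake stand-in; a real LLM computes its logits
# from billions of learned weights instead of math.sin().
import math
import random

vocab = ["the", "cat", "sat", "on", "mat"]

def fake_model(context_ids):
    # Produce one logit (an arbitrary number) per vocabulary entry.
    return [math.sin(sum(context_ids) + i) for i in range(len(vocab))]

def next_token(context_ids):
    logits = fake_model(context_ids)
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]  # softmax: logits -> probabilities
    return random.choices(range(len(vocab)), weights=probs)[0]

context = [0]
for _ in range(5):
    context.append(next_token(context))
print(" ".join(vocab[i] for i in context))
```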

If your results show that it's hiding information or trying to lock the files used for its configuration, then either you specifically allowed it to do that, or, more probably, you have no idea how file locking works in the first place.

I hate this kind of AI doomsaying with a passion, because it makes zero sense and only steers the discussion away from actual problems, while also being comparable in its bullshit to anti-vaxxers.

I mean, the problem they talk about is sort of misalignment, but they're making nonsensical claims about the AI trying to go rogue instead of talking about the real dangers of misalignment (like manipulating people into extremism to maximize their engagement on platforms, or not being factually correct), which will always be a limitation of any ML algorithm and is a reason why it shouldn't be used in 90% of the cases it's being used in.

The article is literally cold reading. They're trying so hard to push their bullshit narrative that it's painful to read. Software that locks its configuration file while running? Oh, I guess my git is also an AI gone rogue that doesn't want me to delete it.
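For anyone who hasn't seen the lock-file pattern the git comparison refers to, here's a minimal sketch of how perfectly ordinary software "locks" its files (the file name is illustrative, and this is the general pattern, not git's actual implementation): create a sentinel file exclusively on startup, remove it on exit.

```python
# Minimal lock-file sketch: boring, deterministic, no rogue AI required.
import os
import sys

LOCK_PATH = "myapp.conf.lock"  # illustrative name

def acquire_lock():
    try:
        # O_EXCL makes creation fail if the lock file already exists,
        # so two processes can't hold the lock at once.
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
    except FileExistsError:
        sys.exit(f"{LOCK_PATH} exists; is another instance running?")

def release_lock():
    os.remove(LOCK_PATH)

acquire_lock()
try:
    pass  # ... normal work with the config file goes here ...
finally:
    release_lock()
```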

Lol.

[–] Bosht@lemmy.world 3 points 1 week ago

Yup, agreed. AI is not actual AI, but it seems the general public increasingly doesn't understand this. Just because something has been fed enough data to mimic conversation doesn't mean it's actually thinking for itself.

Yes, agreed. The problem is the marketing bullshit term that is "AI." What we have now are sophisticated algorithms for remixing data: very impressive, but in no way AI.

The big problem is how inaccurate they are, how they drift over time and need resetting, and how biased they are in terms of what they're taught and what they're allowed to say (which then has unpredictable consequences).

A good example with the visual models is how many seem incapable of drawing a penis and give men vaginas. That's an inherent bias in what the models have been taught, and while amusing, it speaks to all the other biases models pick up from being fed curated data.

Another example is the shit summaries these tools produce in search engines, which are frequently wrong.

This is basically alpha software that has been released on the world and oversold to inflate share prices. All the companies care about is being first and grabbing market share, in the hope that the 90-9-1 split will occur and they'll end up with the 90% slice.

I agree that the article is pushing a narrative, but you have to recognize that AIs are absolutely not being kept in sandboxes where they cannot affect the outside world.

Some AIs are being asked to write code. Do all the users of that code check it thoroughly before putting it into production?

Apple has recently rolled out Apple Intelligence, and Siri can do all kinds of things on your phone.

People are racing each other to put AI in everything, and the restrictions on them will be looser and looser.