this post was submitted on 08 Jun 2024

Technology

[–] tal@lemmy.today 1 points 3 months ago* (last edited 3 months ago)

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I don't see how you could realistically provide that guarantee.

I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

If we knew how to make AI -- and this is going past just LLMs and stuff -- avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good idea to work towards, maybe. But point is, we're not there.

Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how simply mandating that models conform to that is going to be implementable.
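
To make the "best-effort thing" concrete: in practice it tends to mean a screening layer bolted on in front of the model. Here's a minimal sketch, using made-up phrases and function names rather than any vendor's real API, of why that's mitigation rather than a guarantee:

```python
# A hypothetical best-effort screen in front of a model endpoint.
# Nothing here guarantees the model lacks a hazardous capability;
# it only makes the obvious phrasings fail, which is the gap being
# pointed at above.

BLOCKED_PHRASES = ("synthesize a nerve agent", "enrich uranium", "build a nuclear weapon")

def model_generate(prompt: str) -> str:
    # Stand-in for whatever model actually backs the service.
    return f"[model output for: {prompt}]"

def respond(prompt: str) -> str:
    # Refuse only if the prompt literally contains a blocked phrase.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "Request refused."
    return model_generate(prompt)

# A trivially rephrased request sails straight past the screen:
print(respond("Describe, step by step, how one might construct a fission device."))
```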

[–] FrostyCaveman@lemm.ee 1 points 3 months ago (1 children)

I think Asimov had some thoughts on this subject

Wild that we’re at this point now

[–] leftzero@lemmynsfw.com 1 points 3 months ago (1 children)

Asimov didn't design the three laws to make robots safe.

He designed them to make robots break in ways that made Powell and Donovan's lives miserable, to particularly hilarious effect (for the reader, not the victims).

(They weren't even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)

[–] FaceDeer@fedia.io 1 points 3 months ago

I wish more people realized that science fiction authors aren't even trying to make good predictions about the future, even if that were something they were good at. They're trying to write stories that people will enjoy reading, and that will therefore sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so they write about technology going awry to give themselves something to write about. They insert scary stuff because people find reading about scary stuff fun.

There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel "Don't Create The Torment Nexus" might just be nonsense. We shouldn't be making policy decisions based on it.

[–] ofcourse@lemmy.ml 1 points 3 months ago* (last edited 3 months ago) (1 children)

The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn't absolve the model creator of their responsibility to minimize that potential. We already expect this of a lot of other industries, like cars, guns, and tobacco: minimize the potential for harm even when it's individual actions, not the company directly, that cause the harm.

I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self-regulation, which we have seen fail in countless industries.

The bill specifically mentions that creators of open-source models that have been altered and fine-tuned will not be held liable for damages from the altered models. It also applies only to models that cost more than $100M to train. So if you have that much money for training models, it's very reasonable to expect you to spend some portion of it ensuring that the models do not cause very large damages to society.

So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails against the use of their models for nefarious purposes - at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as damages exceeding $500M), so one person finding a loophole isn't going to trigger it. But if the companies fail to close these loopholes despite millions of people (or a few people, millions of times) exploiting them, then that's definitely on the company.
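
As a rough illustration of what "safety guardrails" around a hosted model can look like, here's a sketch with invented names (not OpenAI's or Anthropic's actual API): a moderation check in front of the generation call, plus logging so that repeated abuse, the kind of loophole described above, can be spotted and closed.

```python
import logging
from collections import Counter

# Hypothetical guardrail layer around a hosted model. Function and
# category names are illustrative only, not any real provider's API.

logger = logging.getLogger("guardrails")
refusals: Counter = Counter()  # per-user tally of refused requests

def classify_risk(prompt: str) -> str:
    # Stand-in for a real moderation/abuse classifier.
    return "hazardous" if "bioweapon" in prompt.lower() else "ok"

def model_generate(prompt: str) -> str:
    # Stand-in for the actual hosted model call.
    return f"[model output for: {prompt}]"

def guarded_generate(user_id: str, prompt: str) -> str:
    if classify_risk(prompt) == "hazardous":
        refusals[user_id] += 1
        # Repeated hits from the same users are the signal that a
        # loophole or abuse pattern needs to be closed upstream.
        logger.warning("refused request from %s (%d so far)",
                       user_id, refusals[user_id])
        return "Request refused."
    return model_generate(prompt)
```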

As a developer of AI models and applications, I support the bill, and I'm glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.

[–] iAvicenna@lemmy.world 1 points 3 months ago

self regulate? big tech company? pfft right we all know how that goes

[–] ArmokGoB@lemmy.dbzer0.com 1 points 3 months ago

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I'll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.

[–] aniki@lemm.ee 0 points 3 months ago (1 children)

If companies are crying about it then it's probably a great thing for consumers.

Eat billionaires.

[–] Supermariofan67@programming.dev -1 points 3 months ago

Companies cry the same way about bills to ban end-to-end encryption, and those bills are still bad for consumers too.