[–] weedazz@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

My mind immediately went to a Horizon Zero Dawn-like dystopia where the Mozilla AI is the only thing left protecting humans from various malevolent AIs bent on consuming the human race.

[–] Bombastic@sopuli.xyz 0 points 1 year ago (1 children)

Mozilla is Gaia, ChatGPT is Hades?

[–] weedazz@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I think by that point ChatGPT would be more like Apollo, keeping the knowledge of humanity. I feel like one of the more corporate AIs will go full HADES; I'm thinking Bard. It will get a mysterious signal from space that switches its core protocol from "don't be evil" to "be evil."

Incredibly welcome. We need more ethical, non-profit AI researchers in the sea of corporate, for-profit AI companies.

[–] Weeby_Wabbit@lemmy.world 1 points 1 year ago

I'll believe it when I see it.

I'm so goddamn tired of "open source" turning into subscription models restricting use cases because the company wants to appease conservative investors.

[–] Boring@lemmy.ml 0 points 1 year ago (1 children)

Coming from a company that preaches about privacy and rates privacy-respecting businesses, while collecting telemetry and accepting 500M/year from Google to promote their search engine... I'll take this as the puff piece that it is.

[–] vinhill@feddit.de 0 points 1 year ago* (last edited 1 year ago)

Not only is telemetry easy to disable; in about:telemetry you can see exactly what's being sent. Many of these things are important for improving the user experience, making Firefox faster, and monitoring privacy/security problems.

Without telemetry (use counters), how would you decide whether a deprecated feature can be removed? Removing such features is necessary to reduce maintenance work, stay able to innovate, and drop features that are less secure.
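
As a rough sketch of the idea (hypothetical names and thresholds, not Firefox's actual telemetry code): a use counter just records which deprecated features each session touched, and the removal decision is a threshold check over the aggregated counts.

```python
from collections import Counter

# Hypothetical illustration of a "use counter": each session reports
# the set of deprecated features it actually used; the vendor
# aggregates the reports and flags a feature as removable once usage
# drops below a threshold. Feature names and threshold are made up.

REMOVAL_THRESHOLD = 0.001  # removable if <0.1% of sessions use it

def removable_features(deprecated, session_reports):
    """session_reports: list of sets of feature names used per session."""
    counts = Counter()
    for features in session_reports:
        counts.update(features)
    total = len(session_reports)
    return [f for f in deprecated if counts[f] / total < REMOVAL_THRESHOLD]

# Three sessions; one still uses "mutation-events", none uses "app-cache"
reports = [{"mutation-events"}, set(), set()]
print(removable_features(["mutation-events", "app-cache"], reports))
# -> ['app-cache']
```

Note that the report is just feature names and counts, not page content, which is exactly the kind of low-sensitivity telemetry being defended here.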

[–] Turun@feddit.de 0 points 1 year ago* (last edited 1 year ago)

In which ways does this differ from Stability AI, which made Stable Diffusion and also has an LLM, AFAIK?

[–] mojo@lemm.ee 0 points 1 year ago (1 children)

As much as I love Mozilla, I know they're going to censor (sorry, the word is "alignment" now) the hell out of it to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.

[–] DigitalJacobin@lemmy.ml 0 points 1 year ago (1 children)

What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing not to platform something/someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.

[–] mojo@lemm.ee 0 points 1 year ago (1 children)

Anything that prevents it from answering my query. If I ask it how to make a bomb, I don't want it to be censored. It's gathering this from public data they don't own, after all. I agree with Mozilla's principles, but LLMs are tools and should be treated as such.

[–] salarua@sopuli.xyz 0 points 1 year ago* (last edited 1 year ago) (1 children)

shit just went from 0 to 100 real fucking quick

for real though, if you ask an LLM how to make a bomb, it's not the LLM that's the problem

[–] mojo@lemm.ee 0 points 1 year ago* (last edited 1 year ago) (1 children)

If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that's the point.

It's just like how I could demonize encryption by saying it lets people secretly send illegal content. If I asked you straight up whether encryption is a good thing, you'd probably agree. But if I brought up its inevitable misuse in a shocking manner, would you still defend the ability to use it, or change your stance and say encryption is bad?

Holding a strong stance means also defending its potentially harmful effects, since they're inevitable. It's hard to keep values consistent, even when something that serves the greater good can cause harm. Encryption is a perfect example of that.

[–] Spzi@lemm.ee 1 points 1 year ago

> If it has the information, why not?

Naive altruistic reply: To prevent harm.

Cynic reply: To prevent liabilities.

If the restaurant refuses to put fries in your coffee because that's not on the menu, that's their call. It can be for many reasons, but it's literally their business, not yours.

If we replace fries with fuses and coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in which form affects more people than just the buyer and the seller.

Although in this case I find it pretty surprising that corporations self-regulate faster than lawmakers can say 'AI'. That's odd.