swlabr

joined 1 year ago
[–] swlabr@awful.systems 7 points 4 months ago

that's a big oof for me dawg

[–] swlabr@awful.systems 9 points 4 months ago

OH got it. Thanks dawg. The automated hypothetical question bot is right once every 5 years

[–] swlabr@awful.systems 7 points 4 months ago (1 children)

Less recycling more reusing bottles for molotovs please

[–] swlabr@awful.systems 11 points 4 months ago (2 children)

(I’m still in the moment, please explain?)

[–] swlabr@awful.systems 10 points 4 months ago (2 children)

wait. You’re telling me a poly nest based on FF7 fandom exist(s/ed)? And I wasn’t part of it?!?!?!

[–] swlabr@awful.systems 9 points 4 months ago

Well it’s from china so it must be evil and its publicity must be minimised /s

[–] swlabr@awful.systems 18 points 4 months ago (2 children)

I rewrote the ad so they can lean into their marketing strategy.

Hard book have hard word and make head hurt, AI make book easy! More book read for you. No hard word. This good idea!

[–] swlabr@awful.systems 22 points 4 months ago (3 children)

I have decided to fossick in this particular guano mine. Let’s see here… “10 Cruxes of Artificial Sentience.” Hmm, could this be 10 necessary criteria that must be satisfied for something “Artificial” to have “Sentience?” Let’s find out!

I have thought a decent amount about the hard problem of consciousness

Wow! And I’m sure we’re about to hear about how this one has solved it.

Ok let’s gloss over these ten cruxes… hmm. Ok so they aren’t criteria for determining sentience, just ten concerns this guy has come up with in the event that AI achieves sentience. Crux-ness indeterminate, but unlikely to be cruxes, based on my bias that EA people don't word good.

  1. If a focus on artificial welfare detracts from alignment enough … [it would be] highly net negative… this [could open] up an avenue for slowing down AI

Ah yes, the urge to align AI vs. the urge to appease our AI overlords. We’ve all been there, buddy.

  2. Artificial welfare could be the most important cause and may be something like animal welfare multiplied by longtermism

I’ve always thought that if you take the tensor product of PETA and the entire transcript of the sequences, you get EA.

  most or… all future minds may be artificial… If they are not sentient this would be a catastrophe

Lol no. We wouldn’t need to care.

  If they are sentient and … suffering … this would be a suffering catastrophe

lol

  If they are sentient and prioritize their own happiness and wellbeing this could actually quite good

also lol

maybe TBC, there's 8 more "cruxes"

[–] swlabr@awful.systems 10 points 4 months ago (1 children)

He has a smart oven with AI and wants to feed it data?

[–] swlabr@awful.systems 9 points 4 months ago* (last edited 4 months ago)

in the brained stem. straight up "shorkening it". and by "it", haha, well. let's just say. My liffspan

[–] swlabr@awful.systems 9 points 4 months ago

Don’t know what we need Gates for. Surely an AI should be able to spout this bullshit?

Ugh, so many people are working the "AI will solve X problem" mill. I don't need nor want AI to be there increasing output.

[–] swlabr@awful.systems 16 points 4 months ago (1 children)

“In the future, there will be brain-generated music”, said Ray Kay, to a young disciple.

“But Master Ray, isn’t music generally brain generated?”, the disciple asked.

“No, you fucking idiot. You fucking buffoon. How dare you question me,” replied Ray. It was then that the disciple reached enlightenment.
