Architeuthis

joined 1 year ago
[–] Architeuthis@awful.systems 4 points 5 days ago

This reference stirred up some neurons that really hadn't moved in a while, thanks.

[–] Architeuthis@awful.systems 12 points 5 days ago

I think the author is just honestly trying to equate freezing shrimp with torturing weirdly specific categories of disabled babies and senile adults, medieval style. If you said you'd pledge like $17 to shrimp welfare for every terminated pregnancy, I'm sure they'd be perfectly fine with it.

I happened upon a thread in the EA forums started by someone trying to argue EAs into taking a more forced-birth position, and what it came down to was that it wouldn't be as efficient as using the same resources to advocate for animal welfare, due to some perceived human/chicken embryo exchange rate.

[–] Architeuthis@awful.systems 16 points 6 days ago (9 children)

> If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.

wat

[–] Architeuthis@awful.systems 16 points 6 days ago

This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn't crumble.

> Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.

Dog, you've lost the plot.

FWIW, a charity providing the means to stun shrimp before they're frozen to death, as is the case here, isn't indefensible, but the way it's framed as some sort of ethical slam dunk, even compared to, say, donating to refugee care, just makes it too obvious you'd be giving money to people who are weird in a bad way.

[–] Architeuthis@awful.systems 19 points 1 week ago

> No shot is over two seconds, because AI video can’t keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks’ wheels don’t actually move. You’ll see more wrong with the ad the more you look.

Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and that it's the most generic and uninspired shit imaginable.

[–] Architeuthis@awful.systems 2 points 1 week ago

His overall point appears to be that a city fully optimized for self-driving cars would be a hellscape at ground level, even allowing for fewer accidents, so there's no real reason to belabor that point, which is mostly made in service of pointing out how dumb it is when your solution to reducing accident rates is "buy a new car" instead of anything systemic, like improving mass transit.

[–] Architeuthis@awful.systems 11 points 1 week ago* (last edited 1 week ago)

If you've convinced yourself that you'll mostly be fighting the AIs of a rival always-chaotic-evil alien species or their outgroup equivalent, you probably think they are.

Otherwise, I'd hope that shooting first and asking questions later will continue to be frowned upon in polite society, even if it's automated agents doing the shooting.

[–] Architeuthis@awful.systems 13 points 1 week ago* (last edited 1 week ago) (10 children)

The job site decided to recommend me an article calling for the removal of most human oversight from military AI on grounds of inefficiency, which is a pressing issue, since apparently we're already living in the Culture.

> The Strategic Liability of Human Oversight in AI-Driven Military Operations
>
> Conclusion
>
> As AI technology advances, human oversight in military operations, though rooted in ethics and legality, may emerge as a strategic liability in future AI-dominated warfare.

~~Oh unknowable genie of the sketchily curated datasets~~ Claude, come up with an optimal ratio of civilian to enemy combatant deaths that will allow us to bomb that building with the giant red cross that you labeled an enemy stronghold.

[–] Architeuthis@awful.systems 20 points 2 weeks ago

Maybe Momoa's PR agency forgot to send an appropriate tribute to Alphabet this month.

[–] Architeuthis@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago)

> I could go over Wolfram's discussion of biological pattern formation, gravity, etc., etc., and give plenty of references to people who've had these ideas earlier. They have also had them better, in that they have been serious enough to work out their consequences, grasp their strengths and weaknesses, and refine or in some cases abandon them. That is, they have done science, where Wolfram has merely thought.

Huh, it looks like Wolfram also pioneered rationalism.

Scott Aaronson also turns up later for having written a paper that refutes a specific Wolfram claim on quantum mechanics, reminding us once again that very smart dumb people are actually a thing.

As a sidenote, if anyone else finds the plain-text-disguised-as-an-HTML-document format of this article a tad grating, your browser probably has a reader mode that will make it way more presentable; it's F9 on Firefox.

[–] Architeuthis@awful.systems 2 points 1 month ago (1 children)

This was exactly what I had in mind but for the life of me I can't remember the title.
