this post was submitted on 31 Dec 2024
11 points (100.0% liked)

SneerClub

1012 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago
top 20 comments
[–] blakestacey@awful.systems 2 points 14 hours ago (1 children)

We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on "Functional Decision Theory" upon?

[–] dgerard@awful.systems 3 points 13 hours ago (1 children)

page tagged and question added to talk page

[–] blakestacey@awful.systems 2 points 13 hours ago

There's a "critique of functional decision theory"... which turns out to be a blog post on LessWrong... by "wdmacaskill"? That MacAskill?!

[–] istewart@awful.systems 2 points 16 hours ago

Forgive me, but how is lightcone different from conebros???

[–] blakestacey@awful.systems 9 points 1 day ago* (last edited 1 day ago) (3 children)

You might think that this review of Yud's glowfic is an occasion for a "read a second book" response:

Yudkowsky is good at writing intelligent characters in a specific way that I haven't seen anyone else do as well.

But actually, the word intelligent is being used here in a specialized sense to mean "insufferable".

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

Ah, the book that isn't actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn't sufficiently self-aware to know that's what she was writing.

[–] blakestacey@awful.systems 3 points 13 hours ago

If you want to read Yudkowsky's explanation for why he doesn't spend more effort on academia, it's here.

spoiler alert: the grapes were totally sour

[–] blakestacey@awful.systems 1 points 11 hours ago

Gah. I've been nerd sniped into wanting to explain what LessWrong gets wrong.

You could argue that another moral of Parfit's hitchhiker is that being a purely selfish agent is bad, and humans aren't purely selfish so it's not applicable to the real world anyway, but in Yudkowsky's philosophy—and decision theory academia—you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.

I'm impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It's a manner of thinking that couldn't have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.

[–] blakestacey@awful.systems 14 points 2 days ago (1 children)

Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

I'm trying, but I can't not donate any harder!

The most popular LessWrong posts, SSC posts or books like HPMoR are usually people's first exposure to core rationality ideas and concerns about AI existential risk.

Unironically the better choice: https://archiveofourown.org/donate

Yes, but if I donate to Lightcone I can get a T-shirt for $1000! A special-edition T-shirt! Whereas if I donated $1000 to Archive Of Our Own, all I'd get is... a full-sized cotton blanket, a mug, a tote bag, and a mystery gift.

Holy smokes, that's a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies, so now their convention center is doomed (yearly $1 million interest payments!), but they can't admit that, so they're desperately trying to delay the inevitable.

Also don't miss this promise from the middle:

Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. [...] Building an LLM-based editor. [...] AI prompts and tutors as a content type on LW

It's like an anti-donation message. "Hey if you donate to me I'll fill your forum with digital noise!"

[–] blakestacey@awful.systems 13 points 2 days ago

The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent

Uh-huh.

[–] blakestacey@awful.systems 11 points 2 days ago* (last edited 2 days ago) (1 children)

The post:

I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.

The replies: "Kolmogorov complexity", "Pareto frontier", "reference class".

[–] Soyweiser@awful.systems 6 points 2 days ago (1 children)

"Kolmogorov complexity" lol ow god they are just tossing terms around again. Kolmogorov has nothing todo with being mega verbose.

[–] dgerard@awful.systems 9 points 2 days ago (1 children)

solving the halting problem by talking it to death

[–] self@awful.systems 9 points 2 days ago (1 children)

the CS experts on the orange site and LW: “how can there be a halting problem when I refuse to ever stop?”

[–] Soyweiser@awful.systems 5 points 1 day ago

Well, that certainly is a problem.

[–] sailor_sega_saturn@awful.systems 9 points 2 days ago* (last edited 2 days ago)

Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz

"reputational cost" eh? Let's see Mr. Moskovitz's reasoning in his own words:

Spoiler - It's not just about PR risk

But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don't want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.

I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.

I've long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I'm only trying to get back into integrity with my own.

If you look at my comments here and in my post, I've elaborated on other issues quite a few times and people keep ignoring those comments and projecting "PR risk" on to everything. ~~I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I'm going to stop now.~~ [Sorry I got frustrated; everyone is trying their best to do the most good here] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.

again, beyond "reputational risks", which narrows the mind too much on what is going on here

“PR risk” is an unnecessarily narrow mental frame for why we’re focusing.

I guess "we're too racist and weird for even a Facebook exec" doesn't have quite the same ring to it though.

[–] Soyweiser@awful.systems 13 points 2 days ago (1 children)

and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

Look, I already wasn't donating; no need to make it worse.

[–] blakestacey@awful.systems 16 points 2 days ago

The lead-in to that is even "better":

This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

"The reason for optimism is that we can cozy up to fascists!"