SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

101

I'm called a Nazi because I happily am proud of white culture. But every day I think fondly of the brown king Cyrus the Great who invented the first ever empire, and the Japanese icon Murasaki Shikibu who wrote the first novel ever. What if humans just loved each other? History teaches us that we have all been, and always will be - great

read the whole thread, her responses are even worse

102
submitted 11 months ago* (last edited 11 months ago) by saucerwizard@awful.systems to c/sneerclub@awful.systems

Is uh, anyone else watching? This dude (chaos) was/is friends with Brent Dill.

103

Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!

oh boy

archive: https://archive.is/uOP4y

104

warning: seriously nasty narcissism at length

archive: https://archive.is/eoXQj

this is a response to the post discussed in: https://awful.systems/post/220620

105

Utilitarian brainworms or one of the many very real instances of a homicidal parent going after their disabled child? I can't decide, but it's a depressing read.

May end up on SRD, but you read it here first.

106
107

In today's episode, Yud tries to predict the future of computer science.

108

Nitter link

With interspersed sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality-carving mean I get to move the goalposts as much as I want. Besides, I now need to reiterate that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our hindsight-is-20/20 event is so ~~conveniently~~ inconveniently placed in the future, the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have: it is not "My Theory", it is "The Theory". I have stared into the abyss, and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite!
We should extrapolate only from the past century (eugenically scaled, certainly)!
Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome; I'm just not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (one of whom I must be, though I cannot say so openly); see our dearest elevated and canonized von Neumann.
  • JvN was so far above the average plebeian man (IQ and eugenics good?), and the AI god will be greater still.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming it's my opponents who are doing so!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

109

“We have unusually strong marketing connections; Vitalik approves of us; Aella is a marketing advisor on this project; SlateStarCodex is well aware of us. We are quite networked in the Effective Altruism space. We could plausibly get an Elon tweet.”

From the short investor spiel document. Also they want to just bypass the FDA?

110
  • original post detailing mistreatment of employees
  • meta post about how a good rationalist should correctly epistemically assess the fairness of the post cataloguing and confirming the bad behaviour

tl;dr these fucking guys

111

this btw is why we now see some of the TPOT rationalists microdosing street meth as a substitute. also that they're idiots, of course.

somehow this man still has a medical license

112
113
114
115

Taleb dunking on IQ “research” at length. Technically a seriouspost I guess.

116

really: https://archive.ph/p0jPI

Roko’s twitter is an absolutely reliable guide to how recently a woman with dyed hair and facial piercings kicked him in the nuts again.

117
118

Thought it worth sharing: among so much very, very questionable material I've found in reading through the reference material of this book, I came across this Blake Masters + Peter Thiel connection.

It's my obsession sneer because of how celebrated this goddamn book is among the fight-for-the-user UX community.

I’ve mostly been reading the material but need to back up and do an author background check for each one.

https://web.archive.org/web/20200101054932/https://blakemasters.com/post/20582845717/peter-thiels-cs183-startup-class-2-notes-essay

119

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or, like me, somehow forgot how fucking bad they were) needs to, including yud running interference so the rats don’t realize what Scott is

120

In the far-off days of August 2022, Yudkowsky said of his brainchild,

If you think you can point to an unnecessary sentence within it, go ahead and try. Having a long story isn't the same fundamental kind of issue as having an extra sentence.

To which MarxBroshevik replied,

The first two sentences have a weird contradiction:

Every inch of wall space is covered by a bookcase. Each bookcase has six shelves, going almost to the ceiling.

So is it "every inch", or are the bookshelves going "almost" to the ceiling? Can't be both.

I've not read further than the first paragraph so there's probably other mistakes in the book too. There's kind of other 'mistakes' even in the first paragraph, not logical mistakes as such, just as an editor I would have... questions.

And I elaborated:

I'm not one to complain about the passive voice every time I see it. Like all matters of style, it's a choice that depends upon the tone the author desires, the point the author wishes to emphasize, even the way a character would speak. ("Oh, his throat was cut," Holmes concurred, "but not by his own hand.") Here, it contributes to a staid feeling. It emphasizes the walls and the shelves, not the books. This is all wrong for a story that is supposed to be about the pleasures of learning, a story whose main character can't walk past a bookstore without going in. Moreover, the instigating conceit of the fanfic is that their love of learning was nurtured, rather than neglected. Imagine that character, their family, their family home, and step into their library. What do you see?

Books — every wall, books to the ceiling.

Bam, done.

This is the living-room of the house occupied by the eminent Professor Michael Verres-Evans,

Calling a character "the eminent Professor" feels uncomfortably Dan Brown.

and his wife, Mrs. Petunia Evans-Verres, and their adopted son, Harry James Potter-Evans-Verres.

I hate the kid already.

And he said he wanted children, and that his first son would be named Dudley. And I thought to myself, what kind of parent names their child Dudley Dursley?

Congratulations, you've noticed the name in a children's book that was invented to sound stodgy and unpleasant. (In The Chocolate Factory of Rationality, a character asks "What kind of a name is 'Wonka' anyway?") And somehow you're trying to prove your cleverness and superiority over canon by mocking the name that was invented for children to mock. Of course, the Dursleys were also the start of Rowling using "physically unsightly by her standards" to indicate "morally evil", so joining in with that mockery feels ... It's aged badly, to be generous.

Also, is it just the people I know, or does having a name picked out for a child that far in advance seem a bit unusual? Is "Dudley" a name with history in his family — the father he honored but never really knew? His grandfather who died in the War? If you want to tell a grown-up story, where people aren't just named the way they are because those are names for children to laugh at, then you have to play by grown-up rules of characterization.

The whole stretch with Harry pointing out they can ask for a demonstration of magic is too long. Asking for proof is the obvious move, but it's presented as something only Harry is clever enough to think of, and as the end of a logic chain.

"Mum, your parents didn't have magic, did they?" [...] "Then no one in your family knew about magic when Lily got her letter. [...] If it's true, we can just get a Hogwarts professor here and see the magic for ourselves, and Dad will admit that it's true. And if not, then Mum will admit that it's false. That's what the experimental method is for, so that we don't have to resolve things just by arguing."

Jesus, this kid goes around with L's theme from Death Note playing in his head whenever he pours a bowl of breakfast crunchies.

Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy, sponsored in whatever maths or science competitions he entered. He was given anything reasonable that he wanted, except, maybe, the slightest shred of respect.

Oh, sod off, you entitled little twit; the chip on your shoulder is bigger than you are. Your parents buy you college textbooks on physics instead of coloring books about rocketships, and you think you don't get respect? Because your adoptive father is incredulous about the existence of, let me check my notes here, literal magic? You know, the thing which would upend the body of known science, as you will yourself expound at great length.

"Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics.

Wesley Crusher would shove this kid into a locker.