gerikson

joined 2 years ago
[–] gerikson@awful.systems 7 points 1 week ago* (last edited 1 week ago) (8 children)

TIL Richard fucking Hanania is a Rationalist, at least according to this excrescence from LW

https://www.lesswrong.com/posts/tKhbDBkstMJuBv7jg/three-months-in-evaluating-three-rationalist-cases-for-trump

The left-wing monoculture catastrophically damaged institutional integrity when public-health officials lied during the pandemic and when bureaucrats used threats and intimidation to censor speech on Facebook and Twitter and elsewhere—in the long-term this could move the country toward the draconian censorship regimes, restrictions on political opposition, and unresponsiveness to public opinion that we see today in England, France, and Germany.[1]

Yeah, I'm sure trying to dictate to Harvard who they can hire and what courses they can teach is not leading to a "draconian censorship regime".


[1] To be clear, this is attributed to Richard Ngo, not Hanania.

[–] gerikson@awful.systems 10 points 2 weeks ago (1 children)

JWZ link, but the meat is a 404 Media article; don't wanna bypass their paywall.

Hello fellow kids! Doing crimes is TIGHT!

American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, and suspected drug and human traffickers [...]

[–] gerikson@awful.systems 14 points 2 weeks ago

Eminently punchable face

[–] gerikson@awful.systems 9 points 2 weeks ago (4 children)

I got it from a comment here; apparently some pie-in-the-sky charity needs more money.

https://www.lesswrong.com/posts/HL7zzZhCR59CCntpB/allfed-emergency-appeal-help-us-raise-usd800-000-to-avoid

[–] gerikson@awful.systems 5 points 2 weeks ago

His name is Scott.

[–] gerikson@awful.systems 9 points 2 weeks ago (4 children)

not every programmer posts to social media...

[–] gerikson@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

Not sure why anyone thought CS as a community could "save us". It's just as likely to be red/black-pilled (gold/black-pilled?) as any other heavily male, tech-adjacent community. The idea that nerds should be politically liberal because they were bullied in '80s high-school comedies is ludicrous.

[–] gerikson@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago)

“The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”

AGI is just around the corner guys

[–] gerikson@awful.systems 11 points 2 weeks ago (9 children)

The fact that this commenter doesn't mention the absolute fuckton of machining videos on YouTube tells me they're just talking out of their ass.

(not that forcing those into LLM slop would help, but a hypothetical AGI would learn a lot)

Also

Dario Amodei, CEO of Anthropic, recently worried about a world where only 30% of jobs become automated, leading to class tensions between the automated and non-automated. Instead, he predicts that nearly all jobs will be automated simultaneously, putting everyone "in the same boat."

Dario is delusional. We don't even have self-driving cars outside of a very small, very expensive demo in SF.

[–] gerikson@awful.systems 15 points 2 weeks ago (11 children)

LessWronger puts in the work and determines that LLMs can't spatially visualize for shit; comments are like "well, you're prompting it wrong" (paraphrased) as well as "why not pay experienced machinists to videotape what they're doing so their work can be automated".

https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a#comments

The article itself is worth reading for some insights into the challenges of using current "AI" (LLMs) to work in the real world.

[–] gerikson@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

"We convinced the dumbest people in the country that our made-up problems were real, and now we have a sad because they took us seriously."

 

The wider community is still on Reddit; I wonder if there's interest in a small alternative?

If not, what’s a good Lemmy instance for these things?

 

After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.
