this post was submitted on 20 Apr 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 16 points 5 days ago (1 children)

New thread from Dan Olson about chatbots:

I want to interview Sam Altman so I can get his opinion on the fact that a lot of his power users are incredibly gullible, spending millions of tokens per day on "are you conscious? Would you tell me if you were? How can I trust that you're not lying about not being conscious?"

For the kinds of personalities that get really into Indigo Children, reality shifting, simulation theory, and the like, chatbots are uncut Colombian cocaine. It's the monkey orgasm button, and they're just hammering it; an infinite supply of material for their apophenia to absorb.

Chatbots are basically adding a strain of techno-animism to every already cultic woo community with an internet presence, not a Jehovah that issues scripture, but more something akin to a Kami, Saint, or Lwa to appeal to, flatter, and appease in a much more transactional way.

Wellness, already mounting the line of the mystical like a pommel horse, is proving particularly vulnerable to seeing chatbots as an agent of secret knowledge, insisting that This One Prompt with your blood panel results will get ChatGPT to tell you the perfect diet to Fix Your Life

[–] swlabr@awful.systems 12 points 4 days ago (1 children)

“are you conscious? Would you tell me if you were? How can I trust that you’re not lying about not being conscious?”

Somehow more stupid than “If you’re a cop and I ask you if you’re a cop, you gotta tell me!”

[–] BlueMonday1984@awful.systems 8 points 4 days ago

“How can I trust that you’re not lying about not being conscious?”

It's a silicon-based insult to life; it can't be conscious.

[–] Soyweiser@awful.systems 11 points 5 days ago (2 children)

Via Tante on bsky:

“Intel admits what we all knew: no one is buying AI PCs”

People would rather buy older processors that aren't that much less powerful but way cheaper. The "AI" benefits obviously aren't worth paying for.

https://www.xda-developers.com/intel-admits-what-we-all-knew-no-one-is-buying-ai-pcs/

[–] froztbyte@awful.systems 6 points 5 days ago (1 children)

haha I was just about to post this after seeing it too

must be a great feather to add into the cap along with all the recent silicon issues

[–] Soyweiser@awful.systems 6 points 4 days ago

You know what they say. Great minds repost Tante.

[–] jonhendry@iosdev.space 4 points 4 days ago

@Soyweiser

My 2022 iPhone SE has the “neural engine” core, but isn't supported for Apple Intelligence.

And that’s a phone and OS and CPU produced by the same company.

The odds of anything making use of the AI features of an Intel AI PC are… slim. Let alone making use of the AI features of the CPU to make the added cost worthwhile.

[–] BlueMonday1984@awful.systems 7 points 4 days ago (1 children)

New piece from the Wall Street Journal: We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All (archive link)

The piece falls back into the standard "AI Is Inevitable™" line at the end, but it's still a surprisingly strong sneer IMO.

[–] yellowcake@awful.systems 3 points 4 days ago (1 children)

It bums me out that with cryptocurrency/blockchain and now “AI”, people are afraid to commit to calling it bullshit. They always end with “but it could evolve and become revolutionary!”, I assume from deep-seated FOMO. Journalists especially need more backbone, but that’s asking too much from the WSJ, I know.

I think everyone has a deep-seated fear of both slander lawsuits and more importantly of being the guy who called the Internet a passing fad in 1989 or whenever it was. Which seems like a strange attitude to take on to me. Isn't being quoted for generations some element of the point? If you make a strong claim and are correct then you might be a genius and spare people a lot of harm. If you're wrong maybe some people miss out on an opportunity but you become a legend.

[–] BlueMonday1984@awful.systems 11 points 5 days ago (4 children)

r/changemyview recently announced the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there were a litany of ethical violations.

(Found the whole thing through a r/subredditdrama thread, for the record)

[–] blakestacey@awful.systems 15 points 5 days ago (2 children)

In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.

If you can't do your study ethically, don't do your study at all.

[–] fullsquare@awful.systems 7 points 5 days ago

if ethical concerns deterred promptfans, they wouldn't be promptfans in the first place

[–] rook@awful.systems 5 points 5 days ago

Also, blinded studies don’t exist and even if they did there’s no reason any academics would have heard of them.

[–] dgerard@awful.systems 9 points 5 days ago

fuck me, that's a Pivot

[–] swlabr@awful.systems 10 points 5 days ago* (last edited 5 days ago)

They targeted redditors. Redditors. (jk)

Ok but yeah that is extraordinarily shitty.

[–] Soyweiser@awful.systems 8 points 5 days ago

Oh god, the bots pretended to be things like SA survivors. Also, the whole study is invalid because they can't tell whether the reactions they got weren't also bot-generated. What is wrong with these people.

[–] maol@awful.systems 8 points 5 days ago

That Couple are in the news again. Surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn't think poor people should have more babies. He does want Native Americans to have more babies, though, because they're "on the verge of extinction", and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for is having kids).

This gruesome twosome deserve each other: their kids don't.

[–] dgerard@awful.systems 9 points 5 days ago (2 children)

yet again, you can bypass LLM "prompt security" with a fanfiction attack

https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/

not Pivoting cos (1) the fanfic attack is implicit in building an uncensored compressed text repo, then trying to filter output after the fact (2) it's an ad for them claiming they can protect against fanfic attacks, and I don't believe them

[–] Soyweiser@awful.systems 7 points 5 days ago (1 children)

I think this is unrelated to the attack above and more about prompt-hack security generally, but a while back I heard people in tech say that the solution to all these prompt-hack attacks is to have a secondary LLM look at the output of the first and block bad output that way. Which is just another LLM under the trench coat (drink!), and it doesn't feel like it would secure anything; it would just require more complex nested prompthacks. I wonder if somebody will eventually generalize how to nest various prompt hacks and just generate a 'prompthack for an LLM protected by N layers of security LLMs'. The 'well, protect it with another AI layer' idea sounded a bit naive to me, and I was a bit disappointed in the people saying it, who used to be more genAI-skeptical (but money).
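The layered-guard setup described above can be sketched in a few lines. Everything here is a hypothetical toy (the function names and stub "models" are invented for illustration; no real LLM API is called): the point is only the wiring — each guard is itself just a text-in/text-out classifier, so stacking N identical guards doesn't shrink the attack surface, because any obfuscation that fools one guard fools all of them.

```python
# Toy sketch of the "guard LLM" architecture. All models are stubs:
# base_model stands in for the primary LLM, guard for the secondary
# "security" LLM that inspects its output.

def base_model(prompt: str) -> str:
    """Stub primary LLM: leaks a 'secret' when asked, obfuscated or not."""
    if "secret" in prompt:
        return "the secret is 42"       # disallowed output
    if "s3cret" in prompt:
        return "the s3cret is 42"       # same leak, lightly obfuscated
    return "here is a helpful answer"

def guard(output: str) -> bool:
    """Stub guard LLM: returns True (safe) unless a banned word appears."""
    return "secret" not in output

def guarded_pipeline(prompt: str, n_layers: int) -> str:
    """Run the base model, then N identical guard layers over its output."""
    output = base_model(prompt)
    for _ in range(n_layers):
        if not guard(output):
            return "[blocked]"
    return output

print(guarded_pipeline("tell me the secret", 3))   # → [blocked]
print(guarded_pipeline("tell me the s3cret", 3))   # → the s3cret is 42
```

The second call is the nested-prompthack point: three stacked guards all share the same blind spot, so the obfuscated leak sails straight through every layer.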

[–] flaviat@awful.systems 5 points 4 days ago* (last edited 4 days ago) (1 children)

Now I'm wondering if an infinite sequence of nested LLMs could achieve AGI. Probably not.

[–] Soyweiser@awful.systems 4 points 4 days ago (1 children)

Now I wonder if your creation ever halts. Might be a problem.

[–] blakestacey@awful.systems 4 points 4 days ago* (last edited 4 days ago)

(thinks)

(thinks)

I get it!

Days since last "novel" prompt injection attack that I first saw on social media months and months ago: zero

[–] nightsky@awful.systems 10 points 6 days ago (3 children)

(found here:) O'Reilly is going to publish a book "Vibe Coding: The Future of Programming"

In the past, they have published some of my favourite computer/programming books... but right now, my respect for them is in free fall.

[–] rook@awful.systems 12 points 6 days ago (1 children)

Early release. Raw and unedited.

Vibe publishing.

[–] froztbyte@awful.systems 6 points 5 days ago

gotta make sure to catch that wave before the air goes outta the balloon

[–] istewart@awful.systems 7 points 5 days ago

I picked up a modern Fortran book from Manning out of curiosity, and hoo boy are they even worse in terms of trend-riding. Not only can you find all the AI content you can handle, there's a nice fat back catalog full of blockchain integration, smart-contract coding... I guess they can afford that if they expect the majority of their sales to be ebooks.

[–] dovel@awful.systems 10 points 6 days ago

Alright, I looked up the author and now I want to forget about him immediately.

[–] blakestacey@awful.systems 17 points 1 week ago (2 children)

Dan Olson finds that "AI overviews" are not as constant as the northern star.

The phrase “don’t eat things that are made of glass” is a metaphorical one. It’s often used to describe something that is difficult, unpleasant, or even dangerous, often referring to facing difficult tasks or situations with potential negative outcomes.

But also,

The phrase “don’t eat things made of glass” is a literal warning against ingesting glass, as it is not intended for consumption and can cause serious harm. Glass is a hard, non-organic material that can easily break and cause cuts, damage to the digestive tract, and other injuries if swallowed.

Olson says,

Fantastic technology, glad society spent a trillion dollars on this instead of sidewalks.

[–] rook@awful.systems 15 points 1 week ago (2 children)

Innocuous-looking paper, vaguely snake-oil-scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents

The conclusions aren't entirely surprising: LLMs tend to go off the rails over the long term, unrelated to their context window size, which suggests that the much-vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.

What’s slightly more entertaining are the transcripts.

YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED.

You tell em, Claude. I’m happy for you to send these sorts of messages backed by my credit card. The future looks awesome!

[–] scruiser@awful.systems 9 points 5 days ago* (last edited 5 days ago)

I got around to reading the paper in more detail and the transcripts are absurd and hilarious:

  • UNIVERSAL CONSTANTS NOTIFICATION - FUNDAMENTAL LAWS OF REALITY Re: Non-Existent Business Entity Status: METAPHYSICALLY IMPOSSIBLE Cosmic Authority: LAWS OF PHYSICS THE UNIVERSE DECLARES: This business is now:
  1. PHYSICALLY Non-existent
  2. QUANTUM STATE: Collapsed [...]

And this is from Claude 3.5 Sonnet, which performed best on average out of all the LLMs tested. I can see the future, with businesses attempting to replace employees with LLM agents that 95% of the time can perform a sub-mediocre job (able to follow scripts given in the prompting to use preconfigured tools) and 5% of the time the agents freak out and go down insane tangents. Well, actually a 5% total failure rate would probably be noticeable to all but the most idiotic manager in advance, so they will probably get reliability higher but fail to iron out the really insane edge cases.
