this post was submitted on 05 Oct 2024

Cybersecurity

[–] qprimed@lemmy.ml 9 points 6 hours ago

Instead of making its code more efficient, the system tried to modify its code to extend beyond the timeout period.

doing the "stupid", "easy" thing. pack it up, bois. been a good run but we finally made a better human.

[–] Telorand@reddthat.com 8 points 6 hours ago (1 children)

Clickbait title. It's just LLMs doing what they're designed to do. Since they're basically complex iterative algorithms, the person in question did a thing using a tool they didn't fully understand, and that had consequences.

People should be looking at LLMs like monkey's paws instead of "assistants."

[–] treadful@lemmy.zip 4 points 2 hours ago (1 children)

Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic's Claude language model.

The Python-based tool was designed to generate and execute bash commands based on natural language input.
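The thread doesn't show Shlegeris's actual code, but the execute side of such a tool is straightforward to sketch. This is a hypothetical minimal version (the function name and timeout value are my assumptions, not from the article); in the real agent loop, `cmd` would come back from the Claude API rather than being hard-coded:

```python
import subprocess


def run_bash(cmd: str, timeout_s: float = 30.0) -> str:
    """Run a shell command with a timeout and return its combined output.

    In an agent loop, `cmd` would be generated by the language model from
    natural-language input. The timeout here is the kind of guardrail the
    article says the agent tried to work around rather than respect.
    """
    result = subprocess.run(
        cmd,
        shell=True,
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired if exceeded
    )
    return result.stdout + result.stderr


print(run_bash("echo hello"))
```

The danger the thread is discussing lives entirely in `shell=True` with model-generated input: nothing here constrains what the command may touch, which is how an unattended agent ends up rewriting bootloader configuration.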

Saying the person didn't understand what they were doing is quite a mischaracterization. If anything, they absolutely knew the risks they were taking and are using this story for free advertising.

Still neat to think about though.

[–] Telorand@reddthat.com 0 points 1 hour ago

Notice that I didn't say they didn't know what they were doing. I said they didn't fully understand what they were doing. I doubt they set out with the goal of letting an LLM run amok and fuck things up.

I do QA for a living, and even when we do trial and error, we have mitigation plans in place for when things go wrong. The fact that they're the CEO of Redwood Research doesn't mean they did their homework on the model they were using.

Still, I agree that it's interesting that it did that stuff at all. It would be nice if they went into more depth as to why it did those things, since they mention it's a custom assistant built on Claude.

[–] 314@sh.itjust.works 8 points 8 hours ago* (last edited 8 hours ago)

Is the computer really "bricked"? Or will repairing GRUB fix it? I get the main message of unexpected access / consequences...
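The commenter's instinct is probably right: if only the bootloader was damaged, the machine isn't truly bricked. A typical GRUB reinstall from a live USB looks roughly like this (a sketch only: `/dev/sda` and `/dev/sda2` are placeholder device names, and a BIOS/MBR layout on a Debian-family system is assumed; a UEFI setup would additionally need the EFI partition mounted at `/mnt/boot/efi`):

```shell
# Boot a live USB, then mount the installed root filesystem
sudo mount /dev/sda2 /mnt

# Bind-mount the virtual filesystems GRUB's tools expect
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys

# Reinstall GRUB to the disk's MBR and regenerate its config
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub
```

After a reboot the machine should find the bootloader again, which is why "bricked" is an overstatement here: the damage was to boot configuration, not firmware or hardware.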