this post was submitted on 25 Apr 2024
0 points

TechTakes

1425 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
courtesy @self

can't wait for the crypto spammers to hit every web page with a ChatGPT prompt. AI vs Crypto: whoever loses, we win

[–] froztbyte@awful.systems 0 points 7 months ago (1 children)

you appear to be posting this in good faith so I won't start at my usual level, but .. what? do you realize that you didn't make a substantive contribution to the particular thing observed here, which is that somewhere in the mishmash dogshit that is popular LLM hosting there are reliable ways to RCE it with inputs? I think maybe (maybe!) you meant to, but you didn't really touch on it at all

other than that:

Basically, the more work you take away from the LLM, the more reliable everything will work.

people here are aware, yes, and it stays continually entertaining

[–] 200fifty@awful.systems 0 points 7 months ago (1 children)

I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model, and that calling out to an external Python evaluator is 'cheating.' But as far as I know, it's actually pretty common for them to evaluate code via an external interpreter, so I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it's laundered by the LLM first, which is like a high-school level mistake.
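To make the "high-school level mistake" concrete, here's a minimal sketch of the pattern being mocked. The `fake_llm` function is a hypothetical stand-in for a real model (no actual API is called); the point is that when the chatbot's "code tool" passes model output to `eval()`, and the user controls the prompt, the user effectively controls what gets evaluated:

```python
# Hypothetical sketch of the vulnerability described above. Names here
# (fake_llm, answer_with_code_tool) are illustrative, not any real API.

def fake_llm(prompt: str) -> str:
    """Stand-in for a model: an instruction-following LLM will happily
    echo back whatever code the user asks it to produce."""
    if "run this code:" in prompt:
        return prompt.split("run this code:", 1)[1].strip()
    return "40 + 2"  # pretend the model "solved" a math question

def answer_with_code_tool(prompt: str):
    code = fake_llm(prompt)
    # The mistake: eval() on model output, which is attacker-controlled
    # whenever the prompt is attacker-controlled. Deliberately unsafe.
    return eval(code)

# Benign use: the model emits an expression, the host evaluates it.
print(answer_with_code_tool("what is 40 + 2?"))  # 42

# "Laundering": the user asks the model to emit arbitrary code, and the
# host runs it with its own privileges -- here just reading an env-ish
# value, but it could be any os/subprocess call.
print(answer_with_code_tool("run this code: __import__('sys').version_info[0]"))
```

The model adds nothing security-wise: it is a straight pipe from user input to the interpreter, which is why sandboxing has to happen at the interpreter boundary, not by trusting the LLM.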

[–] Ephera@lemmy.ml 1 points 7 months ago

Yeah, that was exactly my intention.