stevedidwhat_infosec

joined 1 year ago

This is literally like any other day.

This is what happens as we discover dangerous technologies: there's an arms race, mutual destruction is assured, and then it gets walked back over time.

Game theory.

[–] stevedidwhat_infosec@infosec.pub 25 points 1 day ago* (last edited 1 day ago) (1 children)

Dude's in real time trying to wash his financial transactions without raising suspicion toward secret Bitcoin wallets

Lmaooo, wanna know which snake whispered this idea into his head so I can watch them closer

[–] stevedidwhat_infosec@infosec.pub -4 points 1 day ago (2 children)

You’ve discovered an artifact!! Yaaaay

If you ask GPT to do this in a more math-question-y way, it'll break it down and do it correctly. Just gotta narrow top_p and temperature down a bit.
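
Something like this, as a rough sketch with the OpenAI Python client — the model name and the example prompt are just placeholders:

```python
# Rough sketch with the OpenAI Python client (pip install openai).
# Model name and prompt are placeholders; tweak to taste.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "user", "content": "Treat this as a math problem and solve it step by step: 17 * 24"},
    ],
    temperature=0.2,  # less randomness in token sampling
    top_p=0.5,        # narrower nucleus: only the most likely tokens get considered
)

print(resp.choices[0].message.content)
```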

Literally just pandering to the party split, that's all it means. Trump couldn't care less who's in as long as they back him and aren't a nobody.

[–] stevedidwhat_infosec@infosec.pub 7 points 3 days ago (1 children)

That’s… impossible…

…IT'S OVER NINE THOUSANDDDDDD

This is legit art 10/10

I'm not convinced you can. The NSA had the whistle blown on the little fingerprint gloves they would wear; I highly doubt that in today's age you could know definitively who is doing what.

Trying to interpret grammar/meaning in what is obviously a real-world schizo post is where you made your first mistake.

[–] stevedidwhat_infosec@infosec.pub 3 points 1 week ago (1 children)

Don't forget mobile device zero-days via text

To me, this seems like it will be yet another band-aid fix. Hoping for your solution, but expecting mine ultimately, and unfortunately.

 

Hey all!

While investigating some malvertising campaigns today, I noticed that one of the sponsored Google search results, on hover, appeared to be resolving through a redirect chain rather than simply showing the link the result actually uses.

Any ideas as to how this hover URL preview works, and whether you can disable the resolving / force the top-level destination to show when hovering over anchor elements?
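
In the meantime, a rough way to see where one of these sponsored links actually ends up, without clicking it in the browser, is to unwrap the redirect chain with requests — sketch below. The URL is a placeholder, only HTTP 3xx hops show up (JS redirects won't), and obviously run it from a throwaway VM since these are malvertising links:

```python
# Rough sketch: follow an ad link's HTTP redirect chain outside the browser.
# Only 3xx redirects show up here; JS/meta-refresh hops won't.
import requests

url = "https://ads.example.com/click?id=123"  # placeholder, not a real ad link

resp = requests.get(
    url,
    allow_redirects=True,
    timeout=10,
    headers={"User-Agent": "Mozilla/5.0"},  # some redirectors block non-browser UAs
)

# Each intermediate 3xx hop, in order, then the final landing page.
for hop in resp.history:
    print(hop.status_code, hop.url)
print("final:", resp.status_code, resp.url)
```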

Malvertising is hot hot hot!

 

Anyone else getting tired of all the clickbait articles about PoisonGPT, WormGPT, etc. without them ever providing any sort of evidence to back up their claims?

They're always talking about how the models are so good and can write malware, but damn near every GPT model I've seen can barely write basic code. No shot it's writing actually valuable malware, let alone FUD (fully undetectable) malware as some are claiming.

Thoughts?
