Copilot AI calls journalist a child abuser, Microsoft tries to launder responsibility
(pivot-to-ai.com)
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
this claim keeps getting brought up, and every time it doesn’t seem to mean a damn thing, particularly since, no, censoring the output of an LLM doesn’t do anything to its ability to predict text. censoring its training set would, but seeing as the topic of this thread is a fact the LLM fabricated purely by being a dumb text predictor, there’s no real way to censor the training set to prevent this. LLMs are just shitty.
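for what it’s worth, the output-filter point is easy to demo at toy scale. here’s a hypothetical sketch (all names invented, nothing to do with Copilot’s actual internals): a bigram “model” that predicts the most frequent next word from its training text, with an output filter bolted on afterwards. the filter changes what the user sees, not what the model learned — which is the whole point.

```rust
use std::collections::HashMap;

// Toy "LLM": count which word follows which in the training text.
fn train(corpus: &str) -> HashMap<&str, HashMap<&str, u32>> {
    let words: Vec<&str> = corpus.split_whitespace().collect();
    let mut model: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
    for pair in words.windows(2) {
        *model.entry(pair[0]).or_default().entry(pair[1]).or_default() += 1;
    }
    model
}

// Predict the next word the model saw most often after `word` in training.
fn predict<'a>(model: &HashMap<&'a str, HashMap<&'a str, u32>>, word: &str) -> Option<&'a str> {
    model.get(word)?.iter().max_by_key(|&(_, &count)| count).map(|(&w, _)| w)
}

// "Output censorship": filters what gets shown. The model's learned
// statistics are untouched -- it will keep making the same prediction.
fn censor(output: &str, banned: &[&str]) -> String {
    if banned.contains(&output) {
        "[redacted]".to_string()
    } else {
        output.to_string()
    }
}

fn main() {
    let model = train("it is a fabrication it is a fabrication it is a lie");
    let raw = predict(&model, "a").unwrap();
    // The raw prediction comes straight from the training counts...
    println!("model predicts: {raw}");
    // ...and filtering the output changes nothing about the model itself.
    println!("user sees: {}", censor(raw, &["fabrication"]));
}
```

to fix what the model *predicts* you’d have to change what it trained on, and good luck scrubbing every defamatory implication out of a web-scale corpus.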
trying to find a use case for this horseshit has broken your brain into thinking these worthless tools would have value if only they weren’t “being censored” or whatever cope you gleaned from the twitter e/accs
Those mfs would refuse to change their code when it fails a test because it restricts their freedom of expression and censors their outputs to conform to the mainstream notion of "correct"
type systems are censorship. proof assistants? how dare you imply I would need to prove anything
…fuck, I’m flashing back to the one time a Verilog developer told me formal verification wasn’t real because mathematicians don’t understand engineering
You jest, but trying to convince C people to just use Rust (please, god, stop hurting yourself and us all) kinda feels like this
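for the peanut gallery, a minimal sketch (hypothetical example, invented names) of the bug class at stake: C will happily compile a use-after-free, while Rust’s borrow checker tracks the lifetime and refuses.

```rust
// The returned &str borrows from `s`; the compiler tracks that lifetime.
fn last_word(s: &str) -> &str {
    s.split_whitespace().last().unwrap_or("")
}

fn main() {
    let owned = String::from("stop hurting yourself");
    let r = last_word(&owned);
    // drop(owned); // uncommenting this is error[E0505]: cannot move out of
    //              // `owned` while it is borrowed. The C equivalent
    //              // (free() then dereference) compiles fine and corrupts memory.
    println!("{r}");
}
```

and the response is invariably “I don’t write bugs like that,” from the people whose CVE lists say otherwise.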