Look on the bright side - we at least got some potential sci-fi gadgets out of it.
The temporal echo chamber sounds like the seed of a good story - you could really get some character analysis out of it if used well.
...I mean yeah that's a pretty obvious use case - if Elon's given you a checkmark against your will, might as well use the benefits to cause him as much grief as possible.
(Also, loved your series on Devs - any idea when the final part's gonna release? Seems it's gotten hit with some major delays.)
Update: Whilst the story's veracity remains unconfirmed as of this writing, it has gone on to become a shitshow for the AI industry anyways - turns out the story got posted on Twitter and proceeded to go viral.
Assuming it's fabricated, I suspect OP took their cues from a 404 Media report from a year ago, which warned about the flood of ChatGPT-generated mycology books and their potentially fatal effects.
As for people believing it, I'm not shocked - the AI bubble has caused widespread harm to basically every aspect of society, and the AI industry is viewed (rightfully so, I'd say) as having willingly caused said harm by developing and releasing AI systems, and as utterly unrepentant about it.
Additionally, those who use AI are viewed (once again, rightfully so) as unrepentant scumbags of the highest order, entirely willing to defraud and hurt others to make a quick buck.
With both those in mind, I wouldn't blame anyone for immediately believing it.
You're not wrong. Precisely how this AI debacle will strengthen copyright I don't know, but I fully anticipate it will be strengthened.
It hasn't been hashed out in court yet, but I suspect AI-generated Mickey will be considered copyright infringement, rather than public domain.
Is it just that these AI programs need no skill at all?
That's a major reason. That Grok's complete lack of guardrails is openly touted as a feature is another.
I've already seen people go absolutely fucking crazy with this - from people posting trans-supportive Muskrat pictures to people making fucked-up images with Nintendo/Disney characters, the utter lack of guardrails has led to predictable chaos.
Between the cost of running an LLM and the potential lawsuits this can unleash, part of me suspects this might end up being what ultimately does in Twitter.
Oh, that is beautiful to hear.
There's gonna be laws passed as a result of this - calling it right now.
If this turns out to be real, I suspect it's gonna be a major shitshow - not only for the publisher, but for the AI industry as a whole.
The publisher is gonna be lambasted for endangering people's lives for a quick AI-printed buck.
For AI, it's gonna be yet another indictment of an industry that's seen fit to put technology, profits, basically everything over human lives - whether in the "AI Safety" criti-hype which implicitly suggests culpability for bringing about an apocalypse straight out of sci-fi, or in the myriad ways they are making the world worse right now.
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers
You're not wrong on the AI corps having good lawyers, but I suspect those lawyers don't have much to work with:
- Pretty much every AI corp has been caught stealing from basically everyone (with basically everyone caught scraping without people's knowledge or consent, and OpenAI, Perplexity, and Anthropic all caught scraping against people's explicit wishes)
- Said data was used to create products which, either implicitly or [explicitly](https://archive.is/jNhpN), produce counterfeits of the stolen artists' work
- Said counterfeits are, in turn, destroying the artists' ability to profit from their original work and discouraging them from sharing it freely
- And to cap things off, there's solid evidence pointing to the defendants being completely unrepentant in their actions, whether that be Microsoft's AI boss treating such theft as entirely acceptable or Mira Murati treating the job losses as an afterthought
If I were a betting man, I'd put my money on the trial being a bloodbath in the artists' favour, and the resulting legal precedent being one which will likely kill generative AI as we know it.
Witnessed an AI doomer freaking out over a16z trying to deep-six SB1047.
Seems like the "AI doom" criti-hype is starting to become a bit of an albatross around the industry's neck.