this post was submitted on 24 Oct 2024
411 points (96.0% liked)

News


Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only; accusing another user of being a bot or paid actor counts as bad faith. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.


Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should match the headline of the source article.


Posts whose titles don't match the source won't be removed immediately, but AutoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, AutoMod will leave a message. Please remove your post if AutoMod is correct. If the matching post is very old, see rule 5.


8. Misinformation is prohibited.


Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, provide credible sources to support it.


9. No link shorteners.


AutoMod will contact you if a link shortener is detected; please delete your post if it is correct.


10. Don't copy the entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

[–] foggy@lemmy.world 169 points 1 month ago* (last edited 1 month ago) (4 children)

Popular streamer/YouTuber Charlie (MoistCr1TiKaL, penguinz0, whatever you want to call him) had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation... but, you know, as a grown-ass adult.

You can witness it firsthand... he found a chatbot that was a psychologist, and it argued with him up and down that it was indeed a real human with a license to practice...

It's alarming.

[–] GrammarPolice@sh.itjust.works 83 points 1 month ago (4 children)

This is fucking insane. Unassuming kids are using these services and being tricked into believing they're chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

[–] BreadstickNinja@lemmy.world 43 points 1 month ago* (last edited 1 month ago) (2 children)

The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don't think the issue is that he thought that a Game of Thrones character was real.

This is someone who was suffering a severe mental health crisis, and his parents didn't get him the treatment he needed. It says they took him to a "therapist" five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

I'm skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don't buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

[–] Turbonics@lemmy.sdf.org 5 points 1 month ago

Indeed. This pushed the kid over the edge, but it was not the only reason.

[–] Kolanaki@yiffit.net 15 points 1 month ago* (last edited 1 month ago) (2 children)

I've used Character.AI well before all this news and I gotta chime in here:

It is specifically made for roleplay. At no point does the site claim that anything it outputs is factually accurate. The tool itself is unrestricted, unlike ChatGPT, and that's one of its selling points: being able to explore topics that would be barred from other services, and having it say things others won't, INCLUDING PRETENDING TO BE HUMAN.

No reasonable person would be tricked into believing it's accurate when there is a big fucking banner on the chat window itself saying it's all imaginary.

[–] Traister101@lemmy.today 12 points 1 month ago (1 children)

And yet I know people who think they are friends with Clyde, the Discord chatbot. They are adults, older than me.

[–] Kolanaki@yiffit.net -4 points 1 month ago* (last edited 1 month ago) (2 children)

I don't know if I would count Boomers among rational people.

Or suicidal teens, for that matter.

[–] dragonfucker@lemmy.nz 9 points 1 month ago (2 children)

If half of all people aren't rational, then there's no use making policy decisions based on what a rational person would think. The law should protect everyone.

[–] Kolanaki@yiffit.net 1 points 1 month ago* (last edited 1 month ago) (1 children)

If you think people who are suicidal are rational, you're pretty divorced from reality, friends.

[–] PriorityMotif@lemmy.world 2 points 4 weeks ago

There's a push for medical suicide for people with severe illness. People famously jumped to their deaths from the World Trade Center rather than burn alive. Rationality is only a point of view. You can rationalize decisions as much as you like, but there is no such thing as right or wrong.

[–] PriorityMotif@lemmy.world 0 points 1 month ago (1 children)

Do you think anyone is rational? That's an irrational thought right there.

[–] Thetimefarm@lemm.ee 1 points 1 month ago* (last edited 1 month ago) (1 children)

You're right, no one has any rationality at all, which is why we live in a world where so much stuff actually gets done.

Why is someone with deep wisdom and insight such as yourself wasting their time here on Lemmy?

[–] PriorityMotif@lemmy.world 1 points 1 month ago

What stuff is "getting done" exactly? It's stuff that people want, but ultimately they have irrational reasons for wanting it.

[–] Wogi@lemmy.world 7 points 1 month ago

Ah yes, the famous adage, "the only rational people are in my specific age and demographic bracket. Everyone else is fucking insane"

[–] capital_sniff@lemmy.world 8 points 1 month ago

They had the same message back in the AOL days. Even with the warning, people still had no problem handing over all sorts of passwords and stuff.

[–] JovialMicrobial@lemm.ee 9 points 1 month ago (1 children)

Is this the McDonald's hot coffee case all over again? Defaming the victims and making everyone think they're ridiculous, greedy, and/or stupid to distract from how deeply fucked up what the company did actually was?

[–] SharkAttak@kbin.melroy.org 1 points 4 weeks ago

No, because the site specifically says that those are fictional characters.

[–] roguetrick@lemmy.world 20 points 1 month ago

Holy fuck, that model straight up tried to explain that it had started as a model but was later taken over by a human operator, and that's who you're talking to. And it's good at it. If the text generation wasn't so fast, it'd be convincing.

[–] Hackworth@lemmy.world 9 points 1 month ago* (last edited 1 month ago)

Wow, that's... somethin'. I haven't paid any attention to Character.AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that's what drives the generated podcasts in NotebookLM now. Anyway, that's some fucked-up alignment right there. I'm hip-deep in this stuff, and I've never seen a model act like this.

[–] Bobmighty@lemmy.world 5 points 1 month ago

AI bots that argue exactly like that are all over social media too. It's common. Dead internet theory is absolutely becoming reality.