this post was submitted on 02 Jul 2024

TechTakes

with early "grieftech" entrepreneur Helena Blavatsky

you are viewing a single comment's thread
[–] FermiEstimate@lemmy.dbzer0.com 13 points 4 months ago

Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”

We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of "exchanges" acceptable before telling someone their loved one is in hell? Like, is 20 in "Hey, not cool" territory, but it's all good once you get to 50? 40?

Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”

“They had already fully established that he wasn't in heaven,” Rohrer said.

Always a good sign when your best defense of the horrible thing your chatbot says is that it's in context.

[–] self@awful.systems 18 points 4 months ago

it’s very telling that 85 messages is considered a lot. your grief better resolve quick before the model loses coherency and starts digging quotes out of a plagiarized horror movie script

fuck it’s gross how one of the common use cases for LLMs is targeting vulnerable people with the hope they’ll develop a parasocial relationship with your service, so you can keep charging them forever