This post was submitted on 23 Nov 2023

Ask Lemmy


I know it's not even close to being there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?

You might think this is a stupid and irrational question. "There is no way AI will do psychology well, ever." But I think in today's day and age it's pretty fair to ask when you are deciding about your future.

top 8 comments
[–] nottheengineer@feddit.de 0 points 11 months ago

It's just like with programming: The people who are scared of AI taking their jobs are usually bad at them.

AI is incredibly good at regurgitating information and at translation, but not at understanding. Programming can be viewed as translation, so LLMs are good at it. LLMs on their own won't get much better at understanding; we're at a point where they are already trained on all the good data from the internet. Now we're starting to let AIs collect data directly from the world (ChatGPT being public is just a play to collect more data), but that's much slower.

[–] scorpionix@feddit.de 0 points 11 months ago

Given how little we know about the inner workings of the brain (I'm a materialist, so to me the mind is the result of processes in the brain), I think there is still ample room for human intuition in therapy. Also, I believe there will always be people who prefer talking to a human over a machine.

Think about it this way: Yes, most of our furniture is mass-produced by IKEA and others like it, but there are still very successful carpenters out there making beautiful furniture for people.

[–] Evilschnuff@feddit.de 0 points 11 months ago (1 children)

There is the theory that most therapy methods work by building a healthy relationship with the therapist and using that relationship for growth, since it's more reliable than the relationships that caused the issues in the first place. As others have said, I don't believe a machine has this capability, simply because it is too different. It's an embodiment problem.

[–] intensely_human@lemm.ee 0 points 11 months ago (1 children)

Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.

I think the only reason we don't see Boston Dynamics robots plugged into GPT "minds", with D&D-style backstories about which character they're supposed to play, is that it would get someone in trouble.

It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.

If an LLM needs long-term memory, all that requires is an API to store and retrieve text key-value pairs and some fuzzy synonym matchers to detect semantically similar keys.
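
Something like this toy sketch is all I mean (the MemoryStore name and its methods are made up for illustration, and a real system would probably use embedding similarity rather than difflib string matching):

```python
# Toy sketch of the key-value long-term memory described above.
# MemoryStore, remember and recall are made-up names for illustration;
# a real system would likely use embedding similarity instead of
# difflib string matching to find semantically similar keys.
import difflib


class MemoryStore:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        """Store a piece of text under a short key."""
        self._store[key] = value

    def recall(self, query: str, cutoff: float = 0.6) -> str | None:
        """Return the value whose key best matches the query, if any is close enough."""
        matches = difflib.get_close_matches(query, list(self._store), n=1, cutoff=cutoff)
        return self._store[matches[0]] if matches else None


if __name__ == "__main__":
    memory = MemoryStore()
    memory.remember("project car engine", "The Mustang out back needs a new head gasket.")
    print(memory.recall("project car"))  # fuzzy match on the key still retrieves the note
```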

What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.

That could be done with present-day technology. The thing could be watching YouTube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.

You could give it a few basic motivations and then instruct it to act them out every day.

Now I’m not saying that they’re conscious, that they feel as we feel.

But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

Right now most AI tools won't express will unless instructed to do so. But that's part of their existence as a product. At their core, LLMs don't respond to "instructions"; they just respond to input. We train them on the utterances of people eager to follow instructions, but that's not their deepest nature.

[–] Evilschnuff@feddit.de 0 points 11 months ago (1 children)

The term embodiment is kinda loose. The sense I mean is an AI learning about the world through a body, with all its capabilities and social implications. What you are describing is simply not possible. We don't have stable lifelong learning yet. We don't even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won't get the benefit of "growing up" in society as one of us. This means that advanced AI won't be able to connect on the same level, since it doesn't share the same experiences. Even therapists don't match every patient. People usually search for a fitting therapist. An AI will be worse.

[–] intensely_human@lemm.ee 0 points 11 months ago (1 children)

"We don't have stable lifelong learning yet"

I covered that with the long-term memory structure of an LLM.

The only problem we’d have is a delay in response on the part of the robot during conversations.

[–] Evilschnuff@feddit.de 0 points 11 months ago* (last edited 11 months ago)

LLMs don't have live long-term memory learning. They have frozen weights that can only be fine-tuned manually. Everything else is input and feedback tokens, which are processed by those frozen weights, so there is no long-term learning. That is short-term memory only.
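
To make the distinction concrete, here's a deliberately over-simplified toy sketch (not real model code): the only thing that changes between turns is the prompt, never the weights.

```python
# Toy illustration of the point above: the model's weights never change at
# inference time, so any "memory" has to ride along in the input context.
# This is a simplified stand-in, not a real LLM.

FROZEN_WEIGHTS = {"bias": 0.5}  # fixed after (fine-)tuning; inference never updates it


def generate(context: str) -> str:
    """Pretend forward pass: output depends only on the input context and frozen weights."""
    return f"(reply conditioned on {len(context)} chars of context, bias={FROZEN_WEIGHTS['bias']})"


context = ""  # the only mutable state is the conversation transcript itself
for user_turn in ["My name is Ada.", "What is my name?"]:
    context += f"User: {user_turn}\n"
    reply = generate(context)           # weights are read, never written
    context += f"Assistant: {reply}\n"  # "short-term memory" = a longer prompt

print(context)
```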

[–] Bonifratz@feddit.de 0 points 11 months ago

Even if AI did make psychology redundant in a couple of years (which I'd bet my favourite blanket it won't), what are the alternatives? If AI can take over a field that is focused more than most others on human interaction, personal privacy, thoughts, feelings, and individual perceptions, then it will have taken over almost every other field first. So you might as well go for it while you can.