Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Please don't post about US Politics. If you need to do this, try [email protected]
There is a theory that most therapy methods work by building a healthy relationship with the therapist and using that relationship for growth, since it's more reliable than the relationships that caused the issues in the first place. As others have said, I don't believe a machine has this capability, simply because it's too different. It's an embodiment problem.
Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.
I think the only reason we don't see Boston Dynamics robots plugged into GPT "minds", with D&D-style backstories about which character they're supposed to play, is that it would get someone in trouble.
It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.
If an LLM needs long-term memory, all that requires is an API to store and retrieve text key-value pairs, plus some fuzzy synonym matching to detect semantically similar keys.
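A minimal sketch of that kind of store, using Python's stdlib `difflib` as a crude stand-in for a real semantic matcher (the `MemoryStore` class and all names here are invented for illustration; in practice an embedding-based similarity search would do the fuzzy matching far better):

```python
import difflib

class MemoryStore:
    """Toy long-term memory: text key-value pairs with fuzzy key lookup."""

    def __init__(self, threshold: float = 0.6):
        self.facts = {}             # key text -> value text
        self.threshold = threshold  # minimum similarity to count as a match

    def store(self, key: str, value: str) -> None:
        self.facts[key] = value

    def retrieve(self, query: str):
        # Crude stand-in for semantic similarity: character-level ratio.
        best_key, best_score = None, 0.0
        for key in self.facts:
            score = difflib.SequenceMatcher(None, query.lower(), key.lower()).ratio()
            if score > best_score:
                best_key, best_score = key, score
        return self.facts[best_key] if best_score >= self.threshold else None

mem = MemoryStore()
mem.store("project car muffler", "User is eyeing a stainless muffler for the project car")
mem.retrieve("the project car muffler")  # fuzzy match on a reworded key
```

An unrelated query ("weather tomorrow") falls below the threshold and returns `None`, which is the behavior you'd want before stuffing retrieved notes back into the model's context.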
What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.
That could be done with present-day technology. The thing could be watching YouTube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.
You could give it a few basic motivations, then instruct it to act them out every day.
Now I’m not saying that they’re conscious, that they feel as we feel.
But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.
Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core LLMs don’t respond to “instructions” they just respond to input. We train them on the utterances of people eager to follow instructions, but it’s not their deepest nature.
The term embodiment is kinda loose. My use is the version of AI learning about the world with a body and its capabilities and social implications. What you are saying is outright not possible. We don't have stable lifelong learning yet. We don't even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won't get the benefit of "growing up" in society as one of us. This means that advanced AI won't be able to connect on the same level, since it doesn't share the same experiences. Even therapists don't match every patient. People usually search for a fitting therapist. An AI will be worse.
I covered that with the long term memory structure of an LLM.
The only problem we’d have is a delay in response on the part of the robot during conversations.
LLMs don't have live long-term learning. They have frozen weights that can be fine-tuned manually. Everything else is input and feedback tokens, which operate on those frozen weights, so there is no long-term learning. This is short-term memory only.
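A toy illustration of that distinction (`frozen_model` is a made-up stand-in for a real LLM): everything the model "remembers" across turns is just past text re-sent as input, not anything learned into the weights.

```python
def frozen_model(context: str) -> str:
    """Hypothetical stand-in for an LLM: weights never change at inference."""
    if "muffler" in context:
        return "You mentioned a muffler."
    return "I have no memory of that."

history = []

def chat(user_msg: str) -> str:
    history.append(user_msg)
    # The only "memory" is the transcript fed back in as input tokens.
    return frozen_model("\n".join(history))

chat("My project car needs a new muffler")
chat("What was I fixing again?")          # answered only via the re-sent history
frozen_model("What was I fixing again?")  # same question without the transcript: no memory
```

Drop the transcript and the "relationship" resets, which is the short-term-memory point above.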