this post was submitted on 23 Dec 2023
193 points (86.7% liked)

Technology

you are viewing a single comment's thread
[–] [email protected] 180 points 10 months ago (5 children)

Friendly reminder that your predictive text, while very compelling, is not alive.

It's not a mind.

[–] [email protected] 84 points 10 months ago (2 children)

Cyberpunk 2077 sorta explores this a bit.

There’s a vending machine that has a personality and talks to people walking by. The quest chain basically has you chatting with the vending machine and even giving him advice about a person he has a crush on. You eventually become friends with him.

Just when it starts to seem like the vending machine is an AI developing sentience, it turns out he just has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

So, to reiterate what you said: predictive text and LLMs are not alive, nor are they minds.

[–] [email protected] 47 points 10 months ago* (last edited 10 months ago)

I don't care, Brendan was real to me okay 😭

[–] [email protected] 22 points 10 months ago (1 children)

Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.

[–] [email protected] 18 points 10 months ago (1 children)

The Turing test isn't just that there exists some conversation you can have with a machine where you wouldn't know it's a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn't come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn't understand the meaning of the text it generates.

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago)

Exactly, thank you for pointing this out. It also assumes that the tester has knowledge of the wider context in which the test exists. GPT could probably fool someone from the Middle Ages, but that person wouldn't know anything about what exactly they were testing for.

[–] [email protected] 20 points 10 months ago (4 children)

Prove to me you have a mind and I'll accept what you're saying.

[–] [email protected] 30 points 10 months ago (18 children)

Well no one can prove they have a mind to anyone other than themselves.

And to extend that: there's evidently some way for electrochemical information processing to give rise to consciousness, since our brains do it. Yet no one knows how that's possible.

Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.

So, while there's no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.

I think the argument currently boils down to, "we understand how AI models work, but we don't understand how our minds work. Therefore, ???, and so no consciousness for AI"

[–] [email protected] 29 points 10 months ago

“No brain?”

“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

“So … what does the thinking?”

“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

“Thinking meat! You’re asking me to believe in thinking meat!”

[–] [email protected] 1 points 10 months ago (2 children)

I can prove to you ChatGPT doesn't have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.

[–] [email protected] 10 points 10 months ago (2 children)

I'm confused by this idea. Maybe I'm just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.

[–] [email protected] 5 points 10 months ago

Not the original intent, but you’d likely throw your hands up right away and say you don’t know, whereas an LLM would hallucinate an answer.

[–] [email protected] 1 points 10 months ago (1 children)

But some humans can, since solving them requires simultaneously understanding words' meanings as well as how they are spelled.

[–] [email protected] 2 points 10 months ago

What should we conclude about most humans who cannot solve these crosswords?

It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.

[–] [email protected] 1 points 10 months ago

Can you please explain the reasoning behind the test?

[–] [email protected] 10 points 10 months ago (1 children)

I don't think most people will care, so long as their NPC interaction ends up compelling. We've been reading stories about people who don't exist for centuries, and that's stopped no one from sympathizing with them - and now there's a chance you could have an open conversation with them.

Like, I think a lot of us assume that we care about the authors who write the character dialogue, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC's personality isn't manufactured.

Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

[–] [email protected] 2 points 10 months ago (1 children)

Poem poem poem poem then the NPC start quoting Mein Kampf and killing all the cat wizards.

[–] [email protected] 1 points 10 months ago (1 children)

Lol, yeah. If generative AI text stays as shitty as it is now, then this whole discussion is moot. Whether that will be the case has yet to be seen. What is an indisputable fact, though, is that right now is the worst that generative AI will ever be again. It's only able to improve from here.

[–] [email protected] 1 points 10 months ago (1 children)

It's only able to improve from here.

That isn't actually true. With the rise in articles, posts and comments written by these algorithms, experts are warning about model collapse. Basically, the lack of decent human-written training data will destroy future generative AI before it can even start.

[–] [email protected] 2 points 10 months ago

That's an interesting point. We are seeing a similar kind of issue with search engines losing effectiveness due to search engine optimization on websites.

So it is possible that generative AI will become enshittened.

[–] [email protected] 1 points 10 months ago

If you cut out a tiny bit of someone's brain and then hooked it up to a CPU, would it be a mind? No, of course not, lol. Even if we got biocomputers to work, we still wouldn't have any synthetic hardware even close to being powerful or fast enough to actually create, or even simulate, a brain.

[–] [email protected] 0 points 10 months ago (3 children)

While it is not alive, whether it is a mind is not so clear-cut. It could be called a kind of mind, one different from a human's.

[–] [email protected] 8 points 10 months ago

What can't be a kind of mind to you?

[–] [email protected] 1 points 10 months ago (7 children)

Unless you want to call the predictive text on your keyboard a mind, you really can't call an LLM a mind. It is nothing more than a linear progression from that, mathematically proven not to show any form of emergent behavior.

[–] [email protected] 4 points 10 months ago (4 children)

No such thing has been "mathematically proven." The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago) (1 children)

I do not think that it is a “linear” progression. An ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.

[–] [email protected] 1 points 10 months ago (1 children)

Sure thing: here's a paper explicitly proving:

  1. No emergent properties (illusory due to bad measures)
  2. Predictable linear progress with model size

https://arxiv.org/abs/2304.15004

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the ability improves smoothly rather than in a threshold-like way. While interesting, that only suggests that things like intelligence are smooth functions, but so what? Other metrics show exponential or threshold-like dependence, and which metric is the right one depends entirely on how it will be used. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all examples from physics that I know of, emergence appears gradually.
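The metric argument the paper makes can be sketched numerically: if per-token accuracy improves smoothly with model scale, an all-or-nothing exact-match metric over a multi-token answer (per-token accuracy raised to the power of the answer length) still looks like a sudden jump. A minimal sketch, with a made-up power-law scaling curve — none of these numbers come from the paper:

```python
# Illustration of the metric argument in arXiv:2304.15004:
# a smoothly improving per-token accuracy looks "emergent"
# under a nonlinear exact-match metric. The scaling curve
# below is invented for demonstration only.

model_scales = [10 ** k for k in range(4, 13)]  # hypothetical parameter counts

def per_token_accuracy(n_params):
    # Hypothetical smooth power-law improvement with scale.
    return 1.0 - 0.5 * n_params ** -0.12

answer_length = 30  # all 30 tokens must be right for an exact match

for n in model_scales:
    p = per_token_accuracy(n)
    exact_match = p ** answer_length  # nonlinear, all-or-nothing metric
    print(f"{n:>16,d} params | per-token {p:.3f} | exact-match {exact_match:.3f}")
```

Per-token accuracy creeps up smoothly across eight orders of magnitude, while exact-match stays near zero for small models and then climbs steeply — the same underlying improvement, read through two different metrics.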

[–] [email protected] 1 points 10 months ago

Sorry you're getting downvoted, you're correct. It's not implausible that generative AI systems have some kind of umwelt, but it is highly implausible that it would resemble that of a human (or animal). I think people are getting hung up on this because they assume that a lack of language understanding implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.

To be clear, I don't think these systems have experience, but it's impossible to rule out until an actual robust theory of mind comes around.