this post was submitted on 28 Nov 2023
109 points (100.0% liked)

Technology

[–] [email protected] 1 points 11 months ago (1 children)

Honestly my eyes glommed onto the capital letters first. I brought to mind images from the words, and Homer Simpson is clearer and brighter, and somehow that's the internal representation of coherence or something. That aspect of using brightness to indicate the match/answer/solution/better bet might be an instruction I gave my brain at some point too. I'm autistic and I've built a lot of my shit like code. It's kinda like the Iron Man mask in here, to be honest. But so much more elaborate. I often wish I could project it onto a screen. It's like K'NEX models doing transformer jiu jitsu and me flicking those little battles off into the darkness to run on their own. I'm afraid I might not be a good candidate for questions about how human cognition normally works. Though I've done a lot of zen and drugs and enjoy watching it and analyzing it too.

I’m curious, why do you ask? What does that tell you?

[–] [email protected] 1 points 11 months ago

I will admit this is almost entirely gibberish to me, but I don't really have to understand. What's important here is that you had any process at all by which you determined which answer was correct before writing an answer. The LLM cannot do any version of that.

You find a way to answer a question and then provide the answer you arrive at; the LLM never saw the prompt as a question or its own text as an answer in the first place.

An LLM is only ever guessing which word probably comes next in a sequence. When the sequence was the prompt you gave it, it determined that Homer was the most likely word to say. And then it ran again. When the sequence was your prompt plus the word Homer, it determined that Simpson was the next most likely word to say. And then it ran again. When the sequence was your prompt plus Homer plus Simpson, it determined that the next most likely word in the sequence was nothing at all. That triggered it to stop running.
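That loop can be sketched as toy code. `next_token` here is a hypothetical stand-in for the neural network: a real LLM assigns a probability to every token in its vocabulary, but for illustration the "most likely" choices for this made-up prompt are simply hard-coded.

```python
# Toy sketch of autoregressive generation: one word per pass, then run again.
# `next_token` stands in for the model; the prompt and its continuations
# are invented purely for illustration.

END = None  # sentinel: "most likely continuation is nothing at all"

def next_token(sequence):
    """Return the hard-coded 'most likely' next word for a sequence."""
    table = {
        ("Name", "a", "cartoon", "dad:"): "Homer",
        ("Name", "a", "cartoon", "dad:", "Homer"): "Simpson",
        ("Name", "a", "cartoon", "dad:", "Homer", "Simpson"): END,
    }
    return table[sequence]

def generate(prompt):
    sequence = list(prompt)
    while True:                              # "and then it ran again"
        token = next_token(tuple(sequence))
        if token is END:                     # nothing at all -> stop running
            break
        sequence.append(token)
    return sequence[len(prompt):]            # only the generated words

print(generate(["Name", "a", "cartoon", "dad:"]))  # -> ['Homer', 'Simpson']
```

Note that at no point does the loop hold "Homer Simpson" as a complete answer; "Simpson" only exists after "Homer" has already been emitted and the whole sequence is fed back in.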

It did not assign any sort of meaning or significance to the words before it began answering, and did not have a complete idea in mind before it began answering. It had no intent to continue past the word Homer when writing the word Homer, because it only works one word at a time. ChatGPT is a very well-made version of hitting the predictive text suggestions on your phone over and over. You have ideas. It guesses words.