this post was submitted on 25 Dec 2023
1918 points (97.9% liked)
People Twitter
So reading through your post and the article, I think you're a bit confused about the "curated response" thing. I believe what they're referring to is users' ability to flag answers as "good answer" or "bad answer", which would then later be used for retraining. This could also explain the AI's drop in quality, if enough people are upvoting bad answers or downvoting good ones.
The article also describes "commanders" reviewing responses and having the code team respond by changing the algorithm. Again, this isn't picking responses for the AI. Instead, it's reviewing responses it has already given, deciding whether they're good or bad, and making changes to the algorithm to get more accurate answers in the future.
I have not heard of anything like what you're describing, with real people generating the responses in real time for GPT users. I'm open to being wrong, though, if you have another article.
I might be guilty of spreading misinformation here. Perhaps it was a forerunner to ChatGPT, or even a different (competing) chatbot entirely, where a human would read an answer from the machine before deciding whether to send it on to the end user, and the novelty of ChatGPT was in throwing off the shackles present in that older incarnation. I do recall a story along the lines I mentioned, but I cannot find it now, which lends some credence to that thought. In any case it would have been multiple generations behind the modern ones, so you are correct that it is not so relevant anymore.