this post was submitted on 20 Jul 2023
251 points (96.7% liked)


Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2%, over the course of a few months.

[–] [email protected] 57 points 1 year ago (4 children)

Why is "98%" supposed to sound good? We made a computer that can't do math good

[–] [email protected] 48 points 1 year ago* (last edited 1 year ago) (3 children)

It’s a language model, text prediction. It doesn’t do any counting or reasoning about the preceding text; it just completes it with what looks like the most probable continuation.

So if enough of the internet had said 1+1=12, it would repeat it in kind.
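
A minimal sketch of what that text prediction looks like, assuming the Hugging Face transformers library and the small public "gpt2" model as a stand-in (ChatGPT itself isn't downloadable):

```python
# Next-token prediction: the model scores every possible next token.
# Nothing in here computes arithmetic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("1 + 1 =", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# Print the five most likely continuations: ranked by training-data
# statistics, not by doing the sum.
for token_id in torch.topk(logits, 5).indices:
    print(repr(tokenizer.decode([int(token_id)])))
```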

[–] [email protected] 11 points 1 year ago

There are five lights!

[–] [email protected] 6 points 1 year ago

Someone asked it to list the even prime numbers... it then went on a long rant about how to calculate even primes, listing hundreds of them...

ChatGPT knows nothing about what it's saying, only how to put likely-sounding words together. I'd use it for a cover letter, or something like that... but for maths... no.
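
For the record, the arithmetic behind the joke, as a quick brute-force check (plain Python, standard library only):

```python
# There is exactly one even prime: 2. Every other even number is
# divisible by 2, so "hundreds" of even primes is impossible.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

even_primes = [n for n in range(2, 10_000) if n % 2 == 0 and is_prime(n)]
print(even_primes)  # [2]
```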

[–] [email protected] 2 points 1 year ago

Not quite.

Legal Othello board moves by themselves don't say anything about the board size or rules.

And yet when Harvard/MIT researchers trained a toy GPT model on those move sequences, they found that the network that best predicted legal moves had built an internal representation of the board state and rules.

Too many people commenting on this topic as armchair experts are confusing training with what results from the training.

Training a model to complete text doesn't mean the end result can't understand the things that fed into generating that text in the first place; a fair bit of research so far suggests the opposite is almost certainly true to some degree.
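
A simplified sketch of the probing technique from that work (Li et al., "Emergent World Representations"): train a small classifier to read the board state out of the model's hidden activations. The tensor names, layer width, and the linear probe here are illustrative assumptions, not the paper's exact setup (the original used nonlinear probes):

```python
# Illustrative probe: can the board state be decoded from the toy GPT's
# activations? `hidden_states` and `board_labels` are hypothetical
# stand-ins for captured activations and the true square contents.
import torch
import torch.nn as nn

HIDDEN_DIM = 512   # width of the toy GPT's residual stream (assumed)
N_SQUARES = 64     # Othello board is 8x8
N_STATES = 3       # each square: empty / black / white

probe = nn.Linear(HIDDEN_DIM, N_SQUARES * N_STATES)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(hidden_states: torch.Tensor, board_labels: torch.Tensor) -> float:
    # hidden_states: (batch, HIDDEN_DIM) activations at one layer
    # board_labels:  (batch, N_SQUARES) integer state of each square
    logits = probe(hidden_states).view(-1, N_SQUARES, N_STATES)
    loss = loss_fn(logits.reshape(-1, N_STATES), board_labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# High probe accuracy on held-out games means the board state is
# decodable from the activations -- evidence the model built an
# internal model of the game, not just surface statistics of moves.
```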

[–] [email protected] 18 points 1 year ago

Reminds me of that West Wing moment when the President and Leo are talking about literacy.

President Josiah Bartlet: Sweden has a 100% literacy rate, Leo. 100%! How do they do that?

Leo McGarry: Well, maybe they don't and they also can't count.

[–] [email protected] 9 points 1 year ago

And it said simple math, too 🤣