this post was submitted on 20 Jul 2023
664 points (97.4% liked)


Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology's abi…

[–] [email protected] 64 points 1 year ago* (last edited 1 year ago) (5 children)

It's a machine learning chat bot, not a calculator, and especially not "AI."

Its primary focus is trying to look like something a human might say. It isn't trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

It doesn't need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.

[–] [email protected] 23 points 1 year ago

You're right, but at least the satnav won't gaslight you into thinking it does understand Alfred Hitchcock.

[–] [email protected] 19 points 1 year ago (1 children)

so it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

I think it trained on Reddit data

[–] [email protected] 9 points 1 year ago

acts humble and apologetic

We must be using different Reddits, my friend

[–] [email protected] 11 points 1 year ago (1 children)

This. It is able to tap into plugins and call functions though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
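
A minimal sketch of that reformat → call → format pipeline, assuming a hypothetical call_wolfram_alpha helper (stubbed locally here so the example is self-contained; the real plugin call would go over the network):

```python
import ast
import operator

# Step 1: the chat model rewrites the natural-language question as a tool query.
def reformat_query(user_message: str) -> str:
    # e.g. "What is 5 + 5?" -> "5 + 5"
    return user_message.lower().removeprefix("what is").strip(" ?")

# Step 2: hand the query to the external tool. call_wolfram_alpha is a
# hypothetical stand-in; it evaluates basic arithmetic locally instead of
# calling the real Wolfram Alpha service.
def call_wolfram_alpha(query: str) -> str:
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(query, mode="eval").body))

# Step 3: the chat model wraps the tool's exact answer back into a sentence.
def format_result(question: str, result: str) -> str:
    return f"{question} The answer is {result}."

question = "What is 5 + 5?"
print(format_result(question, call_wolfram_alpha(reformat_query(question))))
# -> "What is 5 + 5? The answer is 10."
```

The interesting benchmark is then whether steps 1 and 3 are done faithfully, not whether the model can do the arithmetic itself.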

[–] [email protected] 3 points 1 year ago

It sounds like it's time to merge Wolfram Alpha's and ChatGPT's capabilities together to create the ultimate calculator.

[–] [email protected] 4 points 1 year ago (1 children)

to be fair, fucking up maths problems is very human-like.

I wonder if it could also be trained on a large body of computer-generated mathematical axioms?

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

It doesn't calculate anything though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on its training data. Now we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity and never even approaches 100% certainty, even with the simplest equations, because 1+1=window.
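
As a toy illustration (not how ChatGPT is actually implemented), "most statistically likely response" amounts to something like counting continuations seen in the training text, with no arithmetic happening anywhere:

```python
from collections import Counter

# Noisy "internet" training text: mostly right, sometimes wrong, sometimes nonsense.
training_data = ["5+5=10", "5+5=10", "5+5=10", "5+5=11", "5+5=window"]

def most_likely_answer(prompt: str, corpus: list[str]) -> str:
    # Count every continuation of the prompt seen in training and return
    # the most common one; nothing is ever computed, only counted.
    continuations = Counter(
        line.removeprefix(prompt) for line in corpus if line.startswith(prompt)
    )
    return continuations.most_common(1)[0][0]

print(most_likely_answer("5+5=", training_data))  # "10", but only because it was the most frequent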

[–] [email protected] 1 points 1 year ago

I know it doesn't calculate; that's why I suggested having known correct calculations in the training data to offset noise in the signal?

[–] [email protected] 3 points 1 year ago

If it's trying to emulate a human then it's spot on. I suck at maths.