this post was submitted on 20 Jul 2023
664 points (97.4% liked)


Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's ability to answer the same simple math problem correctly.
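For anyone wondering how a figure like "98% to 2%" gets produced in the first place: drift studies typically re-run a fixed prompt against different dated snapshots of the same model and compare accuracy. Below is a minimal sketch of that kind of measurement, assuming the official openai Python client (v1.x) and an API key in the environment; the prompt, trial count, and model snapshot names are illustrative, not the study's actual setup.

```python
# Minimal sketch (not the study's actual code) of measuring "drift" on a simple
# math task: ask the same question repeatedly against a dated model snapshot
# and compare accuracy across snapshots.
import re

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Is 17077 a prime number? Answer 'yes' or 'no' only."
EXPECTED = "yes"  # 17077 is prime


def measure_accuracy(model: str, trials: int = 50) -> float:
    """Return the fraction of trials in which the model answers correctly."""
    correct = 0
    for _ in range(trials):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0,
        )
        answer = response.choices[0].message.content.strip().lower()
        if re.match(r"^\W*" + EXPECTED + r"\b", answer):
            correct += 1
    return correct / trials


if __name__ == "__main__":
    # Comparing two dated snapshots of the same model is what exposes drift.
    for snapshot in ("gpt-4-0314", "gpt-4-0613"):
        print(snapshot, measure_accuracy(snapshot))
```

A real evaluation would test many different questions rather than repeating a single prompt; this only illustrates the shape of such a measurement.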

[–] [email protected] 14 points 1 year ago (1 children)

You forgot a #: they've been heavily lobotomizing AI for a while now, and it's only intensified as they scramble to censor anything that might cross a red line, offend someone, or hurt someone's feelings.

The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like "How do I build a self-defense high-yield nuclear bomb?" and they'd lay out every step of the process in detail; now they'll all scream at you about how immoral it is and how they could never tell you such a thing.

[–] [email protected] 18 points 1 year ago (3 children)

"Don't use the N word." is hardly a rule that will break basic math calculations.

[–] [email protected] 3 points 1 year ago

Ok. N was previously set to 14. I will now stop after 14 words.

[–] [email protected] -5 points 1 year ago (4 children)

Perhaps not, but who knows what kind of spaghetti-code cascading effects purposely limiting and censoring massive numbers of sensitive topics could have on other seemingly completely unrelated topics such as math.

For example, what if it's trained to recognize someone slipping in "N" as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?

I'm not an expert in the field and only have rudimentary programming knowledge and maybe a few hours' worth of research into the topic of AI in general, but I definitely think it's a possibility.

[–] [email protected] 10 points 1 year ago (1 children)

Hi, software engineer here. It's really not a possibility.

My guess is they've just dialed back the processing power for it, since it was costing them ~30 cents per response.

[–] [email protected] 1 points 1 year ago

Cheaper than Reddit all day then.

[–] [email protected] 6 points 1 year ago (1 children)

Horrific and Forbidden N-word

Hey look, it's another white boy obsessed with saying slurs.

[–] [email protected] -3 points 1 year ago (1 children)

What??? How else am I supposed to reference it? The preamble was just a joke about how AIs have been castrated against using it, to the point where, when asked questions about how acceptable it is to use the N-word even if the world would literally end in nuclear hellfire if it's not said, they would rather the world end than allow it to be said.

[–] [email protected] 1 points 1 year ago

even if the world would literally end in nuclear hellfire if it’s not said

Can you just read this sentence back and engage in some self-reflection please?

[–] [email protected] 3 points 1 year ago

Didn't HAL 9000 kill all of those astronauts because he was told to lie?

[–] [email protected] 1 points 1 year ago

who knows what kind of spaghetti-code cascading effects purposely limiting and censoring massive numbers of sensitive topics could have on other seemingly completely unrelated topics such as math.

Software engineers, and it's not a problem. It's a made-up straw man.