this post was submitted on 20 Jul 2023
664 points (97.4% liked)

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's abi...

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (3 children)

For simple language models, sure, but we're talking about ChatGPT here. OpenAI has made some pretty bold claims...

https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253

100 trillion bytes is 100 terabytes, and if you have any amount of actual data in those parameters then the size of the data could easily get into the petabyte range.

[–] [email protected] 13 points 1 year ago

They list the currently available models that users of their API can select here:

https://platform.openai.com/docs/models/overview

They even say that while the main models are being continuously updated (read: re-trained), there are snapshots of previous models that will remain static.

So yes, they are storing and snapshotting the models and they have many different models available with which to perform inference at the same time.
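A minimal sketch of what that pinning looks like from the API side, assuming the 2023-era `openai` Python client and the model names listed on that page (both are examples and may since have been retired):

```python
# Sketch only: uses the pre-1.0 openai Python client (openai==0.28 era).
# Model names come from the docs page linked above and may be retired.
import openai

openai.api_key = "sk-..."  # your key here

messages = [{"role": "user", "content": "Is 17077 a prime number?"}]

# Rolling alias: continuously updated, so behaviour can drift over months.
drifting = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

# Dated snapshot: frozen weights, the same model every call until it's retired.
pinned = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)

print(drifting.choices[0].message["content"])
print(pinned.choices[0].message["content"])
```

Whether the rolling alias and a dated snapshot agree on a given day is exactly the kind of drift the study was measuring.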

[–] [email protected] 4 points 1 year ago

Each parameter corresponds to a single number, so if it's using 16-bit numbers then that's 200 TB. They might be using 32-bit numbers (400 TB), but wouldn't be using anything larger.
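For reference, the back-of-the-envelope arithmetic, taking the article's speculative 100-trillion-parameter figure at face value:

```python
# Back-of-the-envelope: storage = parameter count * bytes per parameter.
# 100 trillion is the speculative figure from the article above, not a
# confirmed GPT-4 spec.
params = 100 * 10**12

for bits in (16, 32):
    terabytes = params * (bits // 8) / 10**12
    print(f"{bits}-bit weights: {terabytes:.0f} TB")

# Output:
# 16-bit weights: 200 TB
# 32-bit weights: 400 TB
```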

[–] [email protected] 1 points 1 year ago

Makes me wonder how exactly they curate said data. It's such an insane amount that even teams of thousands of human programmers sifting through all of it 24/7 wouldn't be able to fact-check or assess all the data for years. Presumably they use AI to go over the data scraped and thrown into the model, since I can't imagine any human being able to curate it all.

I've heard from various videos detailing the topic that many of the developers have little to no clue as to what's going on inside the LLM once it's assembled and set about its training, and I'm inclined to believe them. The human programmers simply set up the parameters and the system, then the system eats all the data loaded into it and becomes a sort of black box; nobody knows exactly what's going on inside it to produce the output it does.
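To put the "set the params and let it run" part in concrete terms, here's a toy sketch (plain NumPy logistic regression, nothing to do with OpenAI's actual training code) of the division of labour between the humans and the optimizer:

```python
# Toy illustration of "humans set the knobs, training fills in the weights".
# Logistic-regression scale; real LLM training differs in every detail
# except this division of labour.
import numpy as np

rng = np.random.default_rng(0)

# --- the part humans actually write: data plumbing and hyperparameters ---
X = rng.normal(size=(1000, 20))                   # stand-in for "scraped" data
y = (X @ rng.normal(size=20) > 0).astype(float)   # stand-in labels
learning_rate = 0.1
steps = 500

# --- the part the machine fills in: the weights ---
w = np.zeros(20)
for _ in range(steps):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid prediction
    grad = X.T @ (pred - y) / len(y)        # gradient of cross-entropy loss
    w -= learning_rate * grad               # gradient descent update

# "w" now holds whatever numbers the optimizer found; even here nobody reads
# them directly, and at hundreds of billions of parameters nobody realistically could.
print(w[:5])
```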