[–] [email protected] 1 points 3 months ago (1 children)

Such a software construct would look nothing like an LLM. We'd need something that matches the complexity and capabilities of a human brain before it's even been given anything to learn from.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

I have already learned a lot from the human knowledge LLMs were trained on (and yes, I know about hallucinations, and of course I fact-check everything), but learning coding with an LLM teacher fucking rocks.

Thanks to Copilot, I “understand” Linux kernel modules and what is needed to backport one, for example.
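To give a sense of the kind of thing I mean: a kernel module's skeleton is tiny. A minimal sketch (a generic hello-world, not any real module; builds against your installed kernel headers via the usual obj-m Makefile):

```c
/* Minimal "hello world" Linux kernel module.
 * Sketch only: build with a Makefile containing `obj-m += hello.o`
 * against your installed kernel headers. */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	pr_info("hello: module loaded\n");   /* appears in dmesg */
	return 0;                            /* 0 = load succeeded */
}

static void __exit hello_exit(void)
{
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");
```

Backporting is then largely about adapting code like this to the older APIs of an earlier kernel tree.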

[–] [email protected] 1 points 3 months ago

Of course, the training data contains all that information, and the LLM is able to explain it in a thousand different ways until anyone can understand it.

But flip that around.

You could never explain a brand-new concept to an LLM that isn't already contained somewhere in its training data. You can't just give it a book about a new thing, or have a conversation about it, and then have it understand it.

A single book isn't enough. It needs terabytes of redundant examples and centuries of CPU time to model the relevant concepts.
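Rough numbers bear that out. A back-of-the-envelope sketch using the common ~6 × parameters × tokens estimate for training FLOPs (the model size, token count, and CPU throughput below are all assumptions, chosen just for scale):

```c
/* Back-of-the-envelope training cost on a single CPU.
 * All inputs are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    double params = 7e9;     /* assume a 7B-parameter model         */
    double tokens = 1e12;    /* assume ~1 trillion training tokens  */
    double flops  = 6.0 * params * tokens;  /* ~6*N*D rule of thumb */

    double cpu_flops_per_s = 1e11;  /* assume ~100 GFLOPS per CPU   */
    double seconds = flops / cpu_flops_per_s;
    double years   = seconds / (3600.0 * 24.0 * 365.0);

    printf("training FLOPs: %.1e\n", flops);         /* ~4.2e22 */
    printf("single-CPU time: ~%.0f years\n", years); /* ~13000  */
    return 0;
}
```

Even with these rough assumptions, the total comes out to thousands of CPU-years, which is why training only happens on large clusters of accelerators.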

Where a human can read a single physics book and then write part 2, re-explaining it and perhaps exploring newly extrapolated phenomena, an LLM cannot.

Write a completely new OS that works in a completely new way, and there is no way you could ever get an LLM to understand it by just talking to it. To train it, you'd need to produce those several terabytes of training data about it, first.

And once you do, how do you know it isn't just pseudo-plagiarizing the contents of that training data?