this post was submitted on 05 Mar 2024
45 points (94.1% liked)

The article discusses the mysterious nature of large language models and their remarkable capabilities, focusing on the challenges of understanding why they work. Researchers at OpenAI stumbled upon unexpected behavior while training language models, highlighting phenomena such as "grokking" and "double descent" that defy conventional statistical explanations. Despite rapid advancements, deep learning remains largely trial-and-error, lacking a comprehensive theoretical framework. The article emphasizes the importance of unraveling the mysteries behind these models, not only for improving AI technology but also for managing potential risks associated with their future development. Ultimately, understanding deep learning is portrayed as both a scientific puzzle and a critical endeavor for the advancement and safe implementation of artificial intelligence.

you are viewing a single comment's thread
[–] [email protected] 1 points 8 months ago (1 children)

This makes some very strong assumptions about what's going on inside the model. We don't know whether concepts are internally represented at all, or whether any such representations would make sense to humans.

Suppose a model sometimes seems to confuse two concepts. There will be wrong examples in the training data. For all we know, it may have learned that this should be done whenever there is an odd number of words since the last punctuation mark.

To feed text into an LLM, it has to be encoded; the usual character encodings serve different purposes and are not suitable. The text is broken down into tokens. A token can be a single character or an emoji, part of a word, or even more than one word. Each token is mapped to a numeric ID, and those IDs are what the model takes as input and gives as output. Mapping those IDs onto vectors of numbers yields what is called an embedding.
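
As a rough illustration (my own, not from the article or this thread): a minimal tokenization sketch using OpenAI's tiktoken library. The encoding name `cl100k_base` and the sample strings are assumptions chosen for illustration; other models use other vocabularies and split text differently.

```python
# Minimal tokenization sketch using tiktoken (pip install tiktoken).
# "cl100k_base" is the vocabulary used by GPT-4-era models; choosing it
# here is an assumption for illustration only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["understanding", "Verständnis", "🙂"]:
    ids = enc.encode(text)                    # numeric token IDs the model actually sees
    pieces = [enc.decode([i]) for i in ids]   # text each ID maps back to (may be partial bytes)
    print(f"{text!r} -> {ids} -> {pieces}")
```

Typically the English word comes out as a single token while the German word and the emoji are split into smaller pieces, though the exact split depends on the vocabulary.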

The process of turning tokens into embeddings is quite involved and uses its own learned component: an embedding layer maps each token ID to a vector of numbers that should already relate to its meaning. And because the tokenizer's vocabulary is built mostly from English text, English words are often a single token, while words from other languages are dissected into smaller pieces.
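
A minimal sketch of that lookup step, assuming PyTorch; the vocabulary size, embedding dimension, and token IDs below are made-up illustrative values, not those of any real model.

```python
# Embedding lookup sketch (assumes PyTorch is installed).
# vocab_size, embed_dim and the token IDs are invented for illustration.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100_000, 768
embedding = nn.Embedding(vocab_size, embed_dim)  # learned lookup table, trained with the rest of the model

token_ids = torch.tensor([[464, 3290, 82]])      # hypothetical token IDs for a short text
vectors = embedding(token_ids)                   # one vector per token
print(vectors.shape)                             # torch.Size([1, 3, 768])
```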

If an LLM "thinks" in tokens, then that's something it has learned. If it "knows" that a token belongs to a particular language, then it has learned that too.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

This makes some very strong assumptions about what’s going on inside the model.

I explicitly marked the potential explanations as "hypotheses", acknowledging that this shit that I said might be wrong. So no, I am clearly not assuming (i.e. taking the dubious for certain).

We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans. [implied: "you're assuming that LLMs represent concepts internally."]

The implication is incorrect.

"Concept" in this case is simply a convenient abstraction, based on how humans would interpret the output. I'm not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I'm placing my bets on the second one.

The focus of the test is to understand how the LLM behaves, based on what we know it handles (tokens) and on something visible to us (the output).


Feel free to suggest other tests that you believe might shed some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).