this post was submitted on 05 Feb 2024
196 points (89.5% liked)
Technology
I will read those, but I bet "accidentally good enough to convince many people" still applies.
A lot of things from LLMs look good to nonexperts but are full of crap.
https://notes.aimodels.fyi/self-rag-improving-the-factual-accuracy-of-large-language-models-through-self-reflection/
A cool paper: using the LLM to judge the value of new inputs.
I am always skeptical of summaries of journal articles. Even well-meaning people can accidentally distort the conclusions.
Still, an LLM is a bullshit generator that can check the bullshit level of its inputs.
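The basic loop that paper describes, having the model critique retrieved passages before generating from them, can be sketched in a few lines. Everything below is a stand-in: `retrieve`, `critique`, `generate`, and the threshold are hypothetical toy functions, not the paper's actual method or any real API.

```python
def self_rag(question, retrieve, generate, critique, threshold=0.5):
    """Toy Self-RAG-style loop: keep only the retrieved passages the
    model itself rates as relevant, then generate from the survivors."""
    passages = retrieve(question)
    kept = [p for p in passages if critique(question, p) >= threshold]
    return generate(question, kept)

# Stand-in components so the sketch runs without a real LLM:
def retrieve(q):
    return ["Paris is the capital of France.", "Bananas are yellow."]

def critique(q, passage):
    # Pretend relevance score in [0, 1]; a real system would ask the LLM.
    return 1.0 if "capital" in passage else 0.0

def generate(q, passages):
    return passages[0] if passages else "I don't know."

print(self_rag("What is the capital of France?", retrieve, generate, critique))
# → Paris is the capital of France.
```

The point of the structure is that the same model (or a second pass of it) filters its own inputs, which is exactly the "bullshit generator checking bullshit levels" framing above.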
https://poke-llm-on.github.io/
Reinforcement learning. Cool project. Still no need to "know" anything. I usually play this type of game with short rules and by monitoring the current state.
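"Short rules plus monitoring the current state" is essentially what tabular reinforcement learning does: no model of the game, just a value table updated from observed transitions. A minimal sketch on a made-up corridor game (not the project's actual setup, which uses a real Pokémon emulator):

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: start at state 0, action 0
    moves left, action 1 moves right, reward +1 on reaching the end.
    The agent 'knows' nothing about the game, only state-value bookkeeping."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy: mostly exploit the table, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)  # the learned policy: always move right toward the reward
```

The table converges to "always go right" purely from trial and error, which is the sense in which these agents succeed without understanding anything.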
https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html
The author later discusses training on your own data versus general datasets.
I am out of my depth, but it does not seem to provide strong evidence that the model is not just repeating information that shows up a lot for the given inputs.
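For context on what that post actually measures: the "world model" claim rests on linear probes, i.e. fitting a linear classifier to the model's hidden activations and checking whether a board fact is decodable from them. A toy version with fake "activations" (real work would use actual transformer hidden states; the dimension and labels here are invented for illustration):

```python
import random

def train_probe(xs, ys, epochs=50, lr=0.1):
    """Perceptron-style linear probe: learn weights w so that
    sign(w . x) predicts the label hidden in the activation vector."""
    w = [0.0] * len(xs[0])
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != y:
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
    return w

# Fake "activations": 8-dim noise, except dimension 3 secretly encodes
# the label (stand-in for "is this square occupied by a white piece").
rng = random.Random(0)
xs, ys = [], []
for _ in range(200):
    y = rng.randrange(2)
    x = [rng.gauss(0, 1) for _ in range(8)]
    x[3] = 2.0 if y else -2.0
    xs.append(x)
    ys.append(y)

w = train_probe(xs, ys)
acc = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
          for x, y in zip(xs, ys)) / len(xs)
print(round(acc, 2))
```

High probe accuracy shows the information is linearly present in the activations, but as noted above, that alone does not settle whether the model is doing more than pattern-matching frequent inputs.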
https://arxiv.org/abs/2310.02207
A 2-author paper with interesting evidence. Again, evidence is not proof. Wait for the papers that cite this one.
https://notes.aimodels.fyi/researchers-discover-emergent-linear-strucutres-llm-truth/
References a 2-author paper. I am not an expert in the field, but it is important to read the papers that reference this one. Those papers will have well-thought-out criticisms. In general, fewer authors means less debate between the authors and more easily missed details.