[–] [email protected] 13 points 1 year ago (2 children)

"just as trustworthy as human authors" - Ok so you have no idea how these chatbots work do you?

[–] [email protected] 1 points 1 year ago (2 children)

You have a lot of faith in human authors.

[–] [email protected] 11 points 1 year ago (1 children)

Oh, I do not, but the choice is between a human who might understand what's happening and a probabilistic model that is unable to understand ANYTHING.

[–] [email protected] 6 points 1 year ago

LLM AI bases its responses on aggregated texts written by ... human authors, just without having any sense of context or logic or understanding of the actual words being put together.

[–] [email protected] 0 points 1 year ago (1 children)

I understand they are just fancy text prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
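
(To make "fancy text prediction" concrete, here's a toy sketch of the idea. It's a bigram word counter, my own illustration and nothing like a real LLM's architecture, but the training objective is the same: given the words so far, predict a plausible next word.)

```python
# Toy "text prediction": count which word follows which, then generate
# by repeatedly sampling a likely next word. A real LLM replaces the
# counting with a neural network over tokens, but the objective is the
# same next-token prediction.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=8):
    """Generate text by repeatedly predicting the next word."""
    out = [word]
    for _ in range(length):
        counts = following[word]
        if not counts:  # no observed continuation: stop
            break
        word = random.choices(list(counts), weights=counts.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```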

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not an ML expert, but we've been using them for a while in neuroscience (I'm a software dev in bioinformatics). They are impressive, but they have no semantics, no logic. It's just a fancy mirror. That's why, for example, World of Warcraft players were able to trick those bots into writing an article about a feature that doesn't exist.
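
(The "mirror" point in miniature, again my own toy illustration: train the same kind of predictor on text containing a made-up feature - "frobnicator" is invented here - and it reproduces the claim verbatim. It models which words follow which, not whether the result is true, which is all the pranksters needed.)

```python
# A predictor trained on a hoax will echo the hoax: it tracks word
# co-occurrence, not truth. "frobnicator" is a made-up example.
import random
from collections import defaultdict, Counter

hoax = ("the new patch adds the frobnicator mount and "
        "players love the frobnicator mount").split()

following = defaultdict(Counter)
for prev, nxt in zip(hoax, hoax[1:]):
    following[prev][nxt] += 1

word, out = "the", ["the"]
for _ in range(7):
    counts = following[word]
    if not counts:
        break
    word = random.choices(list(counts), weights=counts.values())[0]
    out.append(word)
print(" ".join(out))  # confidently emits the nonexistent "frobnicator mount"
```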

Do you really want to waste your time reading a blob of data with no coherence?

[–] [email protected] 4 points 1 year ago

> Do you really want to waste your time reading a blob of data with no coherence?

We are both on the internet, lol. And I mean it: LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you've figured out how/where to search for direct and correct answers, they'd be just the same or maybe worse. I find that skill a bit fascinating, by the way: we learn to read patterns and red flags without even opening a page. I doubt it's possible to build a reliable model with that kind of bullshit detector.