this post was submitted on 10 Sep 2023
87 points (79.6% liked)

Technology


As AI capabilities advance in complex medical scenarios that doctors face on a daily basis, the technology remains controversial in medical communities.

[–] [email protected] 28 points 1 year ago* (last edited 1 year ago) (2 children)

To allow ChatGPT or comparable AI models to be deployed in hospitals, Succi said that more benchmark research and regulatory guidance is needed, and diagnostic success rates need to rise to between 80% and 90%.

Sucks if you're one of the 10-20% who don't get proper treatment (maybe die?) because some doctor doesn't have time to double-check. But hey ... efficiency!

[–] [email protected] 20 points 1 year ago (1 children)

Ya, that's a fundamental misunderstanding of percentages. For an analogous situation with which we're all more intuitively familiar: a self-driving car that is 99.9% accurate at detecting obstacles still crashes into one in every thousand people and/or things. That sucks.
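The back-of-envelope math behind both numbers (the 99.9% analogy above and the 80-90% diagnostic target quoted earlier) is easy to check. A tiny sketch, using hypothetical patient counts purely for illustration:

```python
def expected_misses(accuracy: float, cases: int) -> int:
    """Expected number of missed/wrong outcomes at a given accuracy rate."""
    return round(cases * (1 - accuracy))

# The self-driving analogy: 99.9% obstacle detection over 1,000 obstacles
print(expected_misses(0.999, 1_000))  # 1 miss per thousand

# The 80-90% diagnostic target, over a hypothetical 1,000 patients
print(expected_misses(0.80, 1_000))   # 200 misdiagnoses
print(expected_misses(0.90, 1_000))   # 100 misdiagnoses
```

In other words, even at the top of the proposed 80-90% range, one patient in ten gets a wrong answer.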

Also, most importantly, LLMs are incapable of collaboration, something very important in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won't develop meaningful, mutually beneficial relationships with its colleagues, who can ask each other for their thoughts when they don't understand something. It'll just spout bullshit when it's wrong, not because it doesn't know, but because it has no concept of knowing at all.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

It really needs to be pinned to the top of every single discussion around ChatGPT:

It does not give answers because it knows. It gives answers because they look right.

Remember back in school when you didn't study for a test and went through picking answers that "looked right" because you vaguely remember hearing the words in Answer B during class at some point?

It will never have wisdom and intuition from experience, and that's critically important for doctors.

[–] [email protected] 1 points 1 year ago

Or one of the ninety-nine percent of people who don't give the AI their symptoms in medical terminology.