this post was submitted on 04 Sep 2024
263 points (100.0% liked)

TechTakes

[–] [email protected] 21 points 2 months ago* (last edited 2 months ago) (1 children)

ChatGPT gives you a bad summary full of hallucinations and, as a result, you choose not to read the text based on that summary.

[–] [email protected] -5 points 2 months ago (1 children)

(For clarity, I'll re-emphasise that my top comment came from overlooking the word "documents" in the OP, so I'm speaking in general terms about AI "summaries", not just about AI "summaries" of documents.)

The key here is that the LLM is likely to hallucinate the claims of the text being shortened, but not its topic. So provided that you care about the latter but not the former — i.e. you're only deciding whether to read the whole thing — it's good enough.

And that is useful in a few situations. For example, if you have a metaphorical pile of a hundred or so scientific papers, and you only need the ones about a specific topic (like "Indo-European urheimat" or "Argiope spiders" or "banana bonds").

That brings us back to the OP. The issue with using AI summaries for documents is that you typically already know the topic at hand; what you want is the content. That's bad, because then the hallucinations won't be "harmless".

[–] [email protected] 14 points 2 months ago (1 children)

But the claims of the text are often why you read it in the first place! If you have a hundred scientific papers you're going to read the ones that make claims either supporting or contradicting your research.

You might as well just skim the titles and guess.