submitted 7 hours ago by [email protected] to c/[email protected]
[-] [email protected] 10 points 3 hours ago

To be honest, the one thing LLMs actually are good at is summarizing bodies of text.

Producing a critique of a manuscript isn't actually too far out for an LLM; it's sort of what it's always doing, all the time.

I wouldn't treat it as concrete review, though, and keep in mind that LLM context windows are usually limited to thousands of tokens, so they can't remember anything from more than about 5 pages back. If your story is longer than that, they'll struggle to comment on anything before the last 5 or so pages, give or take.
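To get a feel for whether a manuscript would even fit, here's a rough back-of-the-envelope sketch. The 4-characters-per-token ratio is just a common rule of thumb for English text, not a real tokenizer count, and the 8,000-token window is an assumed example limit; actual limits vary by model.

```python
# Rough check of whether a manuscript fits in an LLM context window.
# ~4 characters per token is a rule of thumb, not an exact count.

def estimate_tokens(text: str) -> int:
    """Approximate token count using ~4 characters per token."""
    return len(text) // 4

def fits_in_context(text: str, context_window: int = 8000) -> bool:
    """True if the estimated token count fits in the given window."""
    return estimate_tokens(text) <= context_window

page = "word " * 300          # ~300 words, roughly one manuscript page
manuscript = page * 40        # a hypothetical 40-page draft
print(estimate_tokens(manuscript))   # ~15000 estimated tokens
print(fits_in_context(manuscript))   # False for an 8k-token window
```

By this estimate, anything past the first 20-odd pages of that draft simply never makes it into the model's view, which is why comments on the earlier chapters get vague.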

Asking an LLM to critique a manuscript is a great way to get constructive feedback on specific details, flag potential issues, maybe even catch plot holes, etc.

I'd absolutely endorse it as step 1 before handing it to an actual human: you can likely improve your manuscript substantially by iterating over it 3-4 times with an LLM to cover basic issues and improvements, then let a human focus on the more nuanced stuff an AI would miss or ignore.

[-] [email protected] 15 points 2 hours ago* (last edited 2 hours ago)

LLMs cannot provide critique

They can simulate what critique might look like by way of glorified autocomplete. But they cannot actually provide critique, because they do not reason and they do not critically think. They match their outputs to the most statistically likely interpretation of the input, in what you could think of as essentially a 3D word cloud.
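The "glorified autocomplete" point can be illustrated with a toy sketch: pick the next word purely by how often it followed the previous one, with no reasoning about meaning. Real LLMs use learned high-dimensional embeddings rather than raw counts, and the tiny corpus here is made up, but the principle is the same: output the statistically likely continuation.

```python
# Toy next-word predictor: frequency alone, no understanding.
from collections import Counter

corpus = "the plot is thin the plot is clever the plot is thin".split()

def next_word(prev: str) -> str:
    """Return the most frequent word following `prev` in the corpus."""
    followers = Counter(
        corpus[i + 1] for i in range(len(corpus) - 1) if corpus[i] == prev
    )
    return followers.most_common(1)[0][0]

print(next_word("plot"))  # "is"   - the only observed follower
print(next_word("is"))    # "thin" - seen twice vs. once for "clever"
```

Nothing in there evaluates whether the plot actually *is* thin; it just echoes the most common continuation, which is the "common denominator" behavior described below.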

Any critique you get from an LLM is going to be extremely limited and shallow, and therefore not the critical critique you require. The longer your text, the less likely the critique you receive will match the depth at which it's needed.

It's good for finding mistakes, it's good for paraphrasing, it's good for targeting. It cannot actually critique, which requires a level of consideration that is impossible for LLMs today. There's a reason text written by LLMs tends to have distinguishing features, or a lack thereof: it's a bland, statistically generated amalgamation of human writing. It's literally a "common denominator" generator.

this post was submitted on 22 Sep 2024
195 points (97.6% liked)
