this post was submitted on 09 Aug 2023
-4 points (40.0% liked)
Science Fiction
Welcome to /c/ScienceFiction
December book club canceled. Short stories instead!
We are a community for discussing all things Science Fiction. We want this to be a place for members to discuss and share everything they love about Science Fiction, whether that's books, movies, TV shows, or more. Please feel free to take part and help our community grow.
- Be civil: disagreements happen, but that doesn't give anyone the right to personally insult others.
- Posts or comments that are homophobic, transphobic, racist, sexist, ableist, or advocating violence will be removed.
- Spam, self-promotion, trolling, and bots are not allowed.
- Put (Spoilers) in the title of your post if you anticipate spoilers.
- Please use spoiler tags whenever including a spoiler in a non-spoiler thread.
To a certain extent, yes, the training data is being blindly dumped in. There's no way terabytes of training data are being manually reviewed for accuracy; if nothing else, it doesn't make economic sense to do so. It's simply not feasible for humans to manually curate all of that data, and even if they did, human error would still exist.
Your disbelief doesn't mean it's not happening. The data sources that go into AIs are indeed curated selectively. Honestly, what do you think happens? That a web crawler is told to just "go nuts" and whatever random data it spits out gets fed straight in? Trainers pick their sources carefully. They deduplicate the data, they format it, they do a lot of work on it.
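To make that concrete, here's a toy sketch (hypothetical Python, not any actual lab's pipeline; the function name and length threshold are made up) of the kind of automated filtering and deduplication that curation involves:

```python
# Hypothetical sketch of automated corpus curation: drop very short
# documents and remove exact duplicates via a content hash.
# The 200-character cutoff is an arbitrary illustrative threshold.
import hashlib

def clean_corpus(docs):
    seen_hashes = set()
    kept = []
    for text in docs:
        text = text.strip()
        # Drop near-empty or extremely short documents.
        if len(text) < 200:
            continue
        # Exact-duplicate removal via a SHA-256 content hash.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        kept.append(text)
    return kept

# Example: two copies of the same page collapse to one entry,
# and the too-short document is discarded.
pages = ["Lorem ipsum " * 50, "Lorem ipsum " * 50, "short"]
print(len(clean_corpus(pages)))  # -> 1
```

Real pipelines go much further (near-duplicate detection, quality classifiers, language filtering), but the point stands: the data is worked over by machines and people choosing sources, not reviewed line by line.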
Perfection is not required. Human error is fine in manageable amounts.