this post was submitted on 30 Sep 2023
Futurism
A place to discuss the ideas, developments, and technology that can and will shape the future of civilization.
Tenets:
(1) Concepts are often better treated in isolation -- eg: "what if energy became near zero cost?"
(2) Consider the law of unintended consequences -- eg: "if this happens, then these other systems fail"
(3) Pseudoscience and speculative physics are not welcome. Keep it grounded in reality.
(4) We are here to explore the parameter spaces of the future -- this includes political-system changes that advances may trigger. Keep political discussions abstract and not about current affairs.
(5) No pumping of vapourware -- eg: battery tech announcements.
See also: [email protected] and [email protected]
You can either be for censorship, or against it. There's no "oh let's just censor it a little" middle-ground.
You may need to visit a doctor, because there are some major symptoms of brainrot in this comment. Guys, we should allow child porn everywhere, because "YoU CaN EiThEr Be FoR CeNsOrShIp, Or AgAiNsT It." Clearly there's no room for nuance in anything; it's a plainly black-and-white issue. Sure, we can argue the slippery slope of who gets to decide what needs to be "censored", and that's something we have to navigate carefully, but it's full brainmush to throw your hands up in the air and say you either have to let everything be out there or nothing.
Ah yes, the "think of the children!" argument, battle-cry of conservatives everywhere.
Okay, big guy, since you are whining: that is such an easy target. It's ironic you're calling anyone here a conservative, since it's hilarious how heavily left the Lemmy community leans. So are you saying revenge porn, snuff material, rape footage, etc. should be on every platform that can host video or images, because refusing to allow it is, in your braindead take, "censorship"? I'd hope you'd say no, since I'm hoping you're a sensible individual who understands that some moderation of the material a host provides has to happen, or we'll be flooded with "illegal" material and spam. Spam blocking would also count as censorship, but again, I'm hoping you're sensible enough to understand that it's fine to censor some material. Sure, we may be able to agree on some things, but you need to understand that your initial stance is so broad it's almost irresponsible to suggest; I feel you pulled the trigger way too quickly without fully understanding the ramifications of what you're suggesting.
I disagree. There is middle ground. If an engineer gives bad advice, it shouldn't be propagated -- you know, bridges fall down and people die. Where possible, the invalid info should be scrubbed and replaced with valid info. The engineering firm also has its reputation, permits to practice, etc. at stake. But an AI does not. There's no one to sue for negligence when someone takes invalid advice from an AI that's masquerading as a doctor. Etc. The companies making AIs are mostly trying to protect themselves when they put those gates in place.
You could go stand on your soapbox and shout suicide tips to the crowd as they walk by. You might get locked up as you're abetting a crime (in most jurisdictions). But what if you're posting suicide advice into a forum, and the advice was generated by an AI? What if a script is posting it? Where does the legal responsibility for harm fall?
A Large Language Model is just a set of computer algorithms designed to answer a user's question; it's just a tool. None of your arguments are at all relevant to the tool itself, but rather to how the tool is used. A hammer is designed to pound nails, but it can also be used to murder someone. Are you going to sue the hammer manufacturer because they didn't prevent that?
If someone uses a hammer to murder someone, do they get away with it because the hammer wasn't designed to kill, so clearly it's not their fault? No, of course not. This article is nothing but rage-bait. They may as well have taken a hammer, started hitting everything they could (except nails, of course), and then written some bullshit about how Master-Craft produces items that can be used to perform abortions and kill Native Americans.
And as for my original post, this has to do with how the LLM is trained. There are several ways to "censor" the output from an LLM, including prompts and ban tokens. This is what services like GPT or Stable Diffusion do: they don't censor the training data, they censor the inputs and outputs shown to the user. So should the training data be scrubbed of all traces of anything we find objectionable? There are plenty of murders in Hamlet; do we exclude it because the model might suggest poisoning your partner by pouring poison in their ear while they sleep?
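To make "ban tokens" concrete: one common output-side technique is to mask the model's raw next-token scores (logits) for banned tokens before sampling, so they can never be emitted. This is a minimal toy sketch, not any specific vendor's API; the vocabulary, scores, and function names below are made up for illustration.

```python
# Toy sketch of output-side token banning via logit masking.
# Real LLMs do the same idea over a vocabulary of tens of thousands of
# tokens; here a tiny dict of made-up scores stands in for the model.
import math

def ban_tokens(logits: dict[str, float], banned: set[str]) -> dict[str, float]:
    """Force banned tokens' scores to -inf so they can never be sampled."""
    return {tok: (-math.inf if tok in banned else score)
            for tok, score in logits.items()}

def greedy_pick(logits: dict[str, float]) -> str:
    """Greedy decoding: take the highest-scoring remaining token."""
    return max(logits, key=logits.get)

# Hypothetical next-token scores; "poison" would win without the ban.
logits = {"poison": 3.2, "flowers": 1.1, "tea": 0.7}
masked = ban_tokens(logits, banned={"poison"})
print(greedy_pick(masked))  # -> flowers
```

Note that nothing here touches the training data: the model still "knows" about the banned token, it just can't say it, which is exactly the distinction the comment above draws.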