
Futurism


A place to discuss the ideas, developments, and technology that can and will shape the future of civilization.

Tenets:

(1) Concepts are often better treated in isolation -- e.g., "what if energy became near zero cost?"
(2) Consider the law of unintended consequences -- e.g., "if this happens, then these other systems fail."
(3) Pseudoscience and speculative physics are not welcome. Keep it grounded in reality.
(4) We are here to explore the parameter spaces of the future -- this includes the political-system changes that technological advances may trigger. Keep political discussions abstract and not about current affairs.
(5) No pumping of vapourware -- e.g., battery tech announcements.

See also: [email protected] and [email protected]

The article and the release are interesting in their own right. However, as this is c/Futurism, let's discuss what happens in the future. How do you folks think this ideological battleground plays out in 5, 50, or 500 years?

[email protected], 1 year ago

A Large Language Model is just a set of computer algorithms designed to answer a user's questions; it's a tool. None of your arguments apply to the tool itself, only to how the tool is used. A hammer is designed to pound nails, but it can also be used to murder someone. Are you going to sue the hammer manufacturer because they didn't prevent that?

If someone uses a hammer to murder someone, do they get away with it because the hammer wasn't designed to kill, so clearly it's not their fault? No, of course not. This article is nothing but rage-bait. They may as well have taken a hammer, started hitting everything they could (except for nails, of course), and then written some bullshit about how Master-Craft produces items that can be used to perform abortions and kill Native Americans.

And as for my original post, this has to do with how the LLM is trained. There are several ways to 'censor' the output from an LLM, including prompts and ban tokens. This is what services like GPT or Stable Diffusion do: they don't censor the training data, they censor the inputs and outputs shown to the user. So should the training data be scrubbed of all traces of anything we find objectionable? There are plenty of murders in Hamlet; do we exclude it because the model might suggest poisoning your partner by pouring poison in their ear while they sleep?
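To make the ban-token point concrete, here's a minimal sketch of output-side filtering. Everything in it (the toy vocabulary, the banned set, the function name) is hypothetical and not any real service's API; the idea is just that the model's weights and training data stay untouched, and disallowed tokens are masked out of the logits before sampling.

```python
import math
import random

# Hypothetical toy vocabulary and "model logits" -- stand-ins for a real
# model's output distribution over its next token.
VOCAB = ["the", "cat", "poison", "ear", "sleep", "hammer"]
BANNED = {"poison"}  # tokens the service refuses to ever emit

def sample_next_token(logits, vocab=VOCAB, banned=BANNED):
    """Sample one token after masking banned entries out of the logits."""
    # Output-side censorship: set banned tokens to -inf so their
    # post-softmax probability is exactly zero. The model is unchanged.
    masked = [float("-inf") if tok in banned else x
              for tok, x in zip(vocab, logits)]
    m = max(masked)
    weights = [math.exp(x - m) for x in masked]  # exp(-inf) == 0.0
    return random.choices(vocab, weights=weights, k=1)[0]

# However strongly the model "wants" to say "poison" (logit 2.5, the
# highest here), it can never be sampled.
fake_logits = [0.1, 0.3, 2.5, 0.2, 0.1, 0.4]
print(sample_next_token(fake_logits))
```

A production service layers the same idea: filter the user's input, steer the model with a hidden prompt, and mask or refuse disallowed output -- all without retraining or scrubbing the training data, which is exactly why scrubbing Hamlet from the corpus isn't the mechanism anyone actually uses.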