this post was submitted on 05 Sep 2023
322 points (98.8% liked)
Technology
Brandolini's law, aka the "bullshit asymmetry principle": the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Unfortunately, with the advent of large language models like ChatGPT, the quantity of bullshit being produced is accelerating and is already outpacing the ability to refute it.
I'm curious to see whether AI tech can actually help fight some of the bullshit out there someday. I agree that current AI only makes it easier to produce bullshit, but with some advances it could be used to parse a long-winded batch of bullshit and summarize it, maybe with bullet points on where the source material is wrong. If someone can make an AI as confident as ChatGPT, but without the constant making-stuff-up, it could be useful.
THEN we just have to worry about who owns the AI that parses and summarizes the info we take in, and what kind of biases they've baked into the tech...
It is one of the most difficult problems on earth: deciding between lies and truth.
And then think about the fine line involved in detecting irony, half-irony, or other forms of humorous non-truth.
I have high hopes for concepts like Toolformer where the model has to learn to use external APIs and resources like Wikipedia or Wolfram to get answers, rather than relying on the inscrutable and garbled soup of knowledge absorbed from the text training corpus directly. Systems plugged into knowledge graphs could have the best of both worlds - able to generate well-written novel text outputs AND the added rigor of "classical AI" style interpretability.
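The Toolformer-style idea above can be sketched in a few lines: the model emits text containing tool-call markers, and a post-processing step executes each call against an external resource and splices the result back in. This is a minimal toy sketch, not Toolformer's actual training pipeline; the `[Tool(arg)]` marker syntax, the `TOOLS` registry, and the stub lookup table are all assumptions for illustration.

```python
import re

# Hypothetical local "tools" standing in for external APIs (Wikipedia, Wolfram, ...).
# The Calculator eval is a toy with builtins disabled; a real system would use a
# proper expression parser or an actual API client.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "Lookup": lambda key: {"speed of light": "299792458 m/s"}.get(key, "unknown"),
}

# Matches markers like [Calculator(6*7)] or [Lookup(speed of light)].
CALL_RE = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def resolve_tool_calls(text: str) -> str:
    """Replace each [Tool(arg)] marker in model output with the tool's result."""
    def run(match):
        tool, arg = match.group(1), match.group(2)
        fn = TOOLS.get(tool)
        return fn(arg) if fn else match.group(0)  # leave unknown tools untouched
    return CALL_RE.sub(run, text)

print(resolve_tool_calls(
    "The speed of light is [Lookup(speed of light)], and 6*7 = [Calculator(6*7)]."
))
```

The appeal is exactly what the comment describes: the factual claim comes from an inspectable external source, not from the model's internal soup of training-corpus knowledge, so you can audit which tool produced which span of the answer.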
Those AIs would also be the best at producing fake scientific papers. It's a cat-and-mouse game again: those who can detect bullshit can produce the best bullshit.