this post was submitted on 28 Jul 2023
459 points (93.5% liked)

Technology


OpenAI just admitted it can't identify AI-generated text. That's bad for the internet and it could be really bad for AI models.::In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (2 children)

So a few tidbits you reminded me of:

  • You're absolutely right: there's what's called an alignment problem between what the human thinks looks superficially like a quality answer and what would actually be a quality answer.

  • You're correct in that it will always be somewhat of an arms race to detect generated content, as lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would create more privacy issues than it would solve.

  • We've had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT-2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly "large," although they've gotten much bigger since the release of GPT-3 in 2020. RLHF and the focus on fine-tuning for chat and instructability weren't really a thing until the past year.

  • Retraining image models on generated imagery does seem to cause problems, but I've noticed fewer issues when people have trained FOSS LLMs on text from OpenAI. In fact, it seems to be a relatively popular way to build training or fine-tuning datasets. Perhaps training a model from scratch on generated text could present issues, but fine-tuning an existing model on it seems to be much less of a problem.

  • Critical reading and thinking were always requirements, as I believe you said, but they're certainly needed for interpreting the output of LLMs in a factual context. I don't really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human traits more of a necessity.

  • Most of the text models released by OpenAI are so-called "Generative Pretrained Transformer" models, with the keyword being "transformer." Transformers are a separate model architecture from GANs, but are certainly similar in more than a few ways.
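On the arms-race point above, here's a toy sketch (purely illustrative: the dict stands in for real image metadata like EXIF, and "DiffusionModelX" is a made-up generator name) of why naive detectors lose to metadata scrubbing and lossy recompression:

```python
import hashlib

def naive_detector(image: dict) -> bool:
    """Flag the image if a known generator tag survives in its metadata."""
    return image.get("metadata", {}).get("generator") == "DiffusionModelX"

def scrub_metadata(image: dict) -> dict:
    """Strip all metadata, as many platforms do on upload."""
    return {"pixels": image["pixels"], "metadata": {}}

def recompress(image: dict) -> dict:
    """Stand-in for lossy recompression: quantize pixel values."""
    return {"pixels": [p // 16 * 16 for p in image["pixels"]],
            "metadata": dict(image["metadata"])}

generated = {"pixels": [13, 200, 57, 91],
             "metadata": {"generator": "DiffusionModelX"}}

print(naive_detector(generated))                  # True: tag present
print(naive_detector(scrub_metadata(generated)))  # False: tag gone

# Even exact-match fingerprint databases fail, because the content hash
# changes after recompression:
h1 = hashlib.sha256(bytes(generated["pixels"])).hexdigest()
h2 = hashlib.sha256(bytes(recompress(generated)["pixels"])).hexdigest()
print(h1 == h2)                                   # False
```

Any detector keyed to surface features (tags, exact bytes) dies to the cheapest transformations, which is why detection keeps chasing generation.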
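And on the point about building datasets from OpenAI output: the popular pattern is roughly to collect (prompt, response) pairs into a JSONL file for fine-tuning. A minimal sketch, where `query_model` is a hypothetical stand-in for a real API call (here it just returns canned answers so the example is self-contained):

```python
import json

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for calling a stronger model's API.
    canned = {
        "What is RLHF?": "RLHF fine-tunes a model against human preference ratings.",
        "Define LLM.": "A large language model trained on vast amounts of text.",
    }
    return canned.get(prompt, "I don't know.")

def build_dataset(prompts, path):
    # One JSON object per line: the common JSONL format for fine-tuning data.
    with open(path, "w") as f:
        for p in prompts:
            f.write(json.dumps({"prompt": p, "completion": query_model(p)}) + "\n")

build_dataset(["What is RLHF?", "Define LLM."], "distilled.jsonl")
```

A new model then fine-tunes on `distilled.jsonl` instead of raw scraped text, which is the "generated text as training data" case that seems to work fine in practice.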
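On the transformer point: the architecture's core operation is scaled dot-product attention. A pure-Python toy sketch (tiny hand-picked 2-d vectors, no real model weights) of what that computes:

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """For each query, mix the value vectors weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]  # scaled dot products
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                      # query matches the first key
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))             # output leans toward the first value vector
```

GANs instead pit a generator against a discriminator; nothing like this content-based weighting is involved, which is one of the ways the two architectures differ despite the surface similarities.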

[–] [email protected] 1 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/viJt_DXTfwA?t=980

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] [email protected] 1 points 1 year ago (1 children)

These all align with my understanding! Only thing I'd mention is that when I said "we've not had llms available" I meant "LLMs this powerful ready for public usage". My b

[–] [email protected] 2 points 1 year ago

Yeah, that's fair. The early versions of GPT-3 kinda sucked compared to what we have now. For example, they basically couldn't rhyme. RLHF or some of the more recent advances seemed to turbocharge that aspect of LLMs.