this post was submitted on 05 Mar 2024
111 points (89.4% liked)

Technology
[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (2 children)

I 100% agree the genie is out of the bottle. People who want to walk back this change aren't dealing with reality. AI and robotics are so valuable that I doubt there's any point in talking about slowing them down. All that's left now is to figure out how to use the good and deal with the bad, likely on a timeline of months to maybe a year or two.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

I'm personally waiting for the legal cases about AI trained on code, and whether the original licenses apply to its output.

If they don't, the GPL becomes almost useless because it can be laundered through a model, but at the same time we can start using AI trained on code whose license terms we don't abide by (maybe even decompilations; I don't know how that will go). Fight fire with fire and all. So I'd maybe look into that.

If they do, then I'll probably still use it, but mainly with permissively licensed code and code that's also under the GPL (as I use the GPL myself).

And in both cases, they'd be local models, not "cLoUd" models run by the likes of M$.

Until then, I'm not touching it.

[–] [email protected] 3 points 8 months ago (1 children)

That timeline for dealing with the bad looks incredibly optimistic. I imagine new issues will keep cropping up as well, which we'll also have to address.

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago)

I agree. I'm talking about how quickly we're going to have strategies in place to deal with it, not how quickly we'll have it all figured out. My guess is we have at best a year before it's a huge issue, and I agree with your take that telling human content from AI content is going to be an ongoing problem. Perhaps until AI gets so good it ceases to matter as much, because at that point it will be functionally the same.