this post was submitted on 26 Jun 2023
17 points (100.0% liked)
World News
Breaking news from around the world.
News that is American but has an international facet may also be posted here.
Guidelines for submissions:
- Where possible, post the original source of information.
- If there is a paywall, you can use alternative sources or provide an archive.today, 12ft.io, etc. link in the body.
- Do not editorialize titles. Preserve the original title when possible; edits for clarity are fine.
- Do not post ragebait or shock stories. These will be removed.
- Do not post tabloid or blogspam stories. These will be removed.
- Social media should be a source of last resort.
These guidelines will be enforced on a know-it-when-I-see-it basis.
For US News, see the US News community.
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
founded 2 years ago
The problem I see is that many of the LLMs are already open source, so legislation might try to limit their use, but that would have needed to happen before they were trained. Right now anyone with a basic laptop can download a model and run it locally, fully offline. The same goes for other AI technologies. To be honest, the time to regulate was when those crappy deepfakes started showing up a few years back; now it's definitely too late.
And yet we still haven't done anything to slow it down, let alone stop it, and that has been an issue for decades.
LLMs are very rudimentary forms of AI; they're not even the kind of AI that I think we should be worried about.
I wouldn't even call them an "intelligence" at all, more like an aggregated set of data.
I'm not an ML or AI specialist or anything, but what I've seen of some of the technical details, with my limited education and experience in software engineering, did not lead me to believe we are anywhere close to actual artificial intelligence.