Found a pretty good Tweet about HL: Alyx today:
BlueMonday1984
I give it a week before data starts to leak.
The public's gonna give themselves a sneak peek.
🎶 Tryna strike a chord and it's probably A minorrrrrrrrrrr
(seriously, what the fuck HN)
anyone wanna take bets on how much pearl-clutching and surprised-Pikachu we'll see?
I suspect we'll see a fair amount. Giving some specifics:
- I suspect we'll see Sammy accused of endangering all of humanity for a quick buck - taking Altman at his word, OpenAI is attempting to create something which they themselves believe could wipe out humanity if they screw things up.
- I expect calls to regulate the AI industry will grow louder in response - what Sammy's doing here gives the true believers more ammo to argue that Silicon Valley may trigger the very robot apocalypse Silicon Valley itself has claimed AI is capable of unleashing.
A quick update: @ai_shame is quitting Twitter, and Musk using posts for AI training is the reason why:
New piece from The Atlantic: The AI Boom Has an Expiration Date
The full piece is worth a read, but the conclusion's pretty damn good, so I'm copy-pasting it here:
All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.
If these nuclear plants manage to come to fruition, it'll be the sole minuscule silver lining of the bubble. Considering it's AI, though, I expect they'll probably suffer some kind of horrific Chernobyl-grade accident which kills nuclear power for good, because we can't have nice things when there's AI involved.
the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available.
I personally anticipate this will be the lasting legacy of AI as a whole - everything that you mentioned was caused in the alleged pursuit of AGI/Superintelligence^tm^, and gen-AI has been more-or-less the "face" of AI throughout this whole bubble.
I've also got an inkling (which I turned into a lengthy post) that the AI bubble will destroy artificial intelligence as a concept - a lasting legacy of "crud and untruth" as you put it could easily birth a widespread view of AI as inherently incapable of distinguishing truth from lies.
It was a pretty good comment, and pointed out one of the possible risks this AI bubble can unleash.
I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will take a pounding. As you pointed out, the exploitation of fair use's research exception makes that exception especially vulnerable to repeal.
On a different note, I suspect open licenses (Creative Commons, the GPL, etcetera) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought - after two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.
Update: My previous statement was wrong - turns out @ai_shame is still around.