Naw, I figured it out; they absolutely don't care if AI doesn't work.
They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They're in it to pump up the valuation of their investments and cash out; it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard is what finally got me to understand that it's not real and they don't care. From there it was pretty easy to apply a historical analysis to the last 10 bubbles: who profited, at which point in the cycle, and where the real money was made.
The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate they were promised.
Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.
The linked Mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism.
Hilarious.
Only five years ago, no one in the computer science industry would have bet that AI would be able to explain why a joke was funny or perform creative tasks.
Today that's become so normalized that people are calling something once thought to be literally impossible a speculative bubble, because progress that surprised the entire industry at first, and then again with the next model a year later, hasn't moved fast enough?
The industry is still learning how to even use the tech.
This is like TV being invented in 1927 and then people in 1930 saying that it's a bubble because it hasn't grown as fast as they expected it to.
Did OP consider the work going on in optoelectronic neural networks at literally every single tech college's VC group, and how that's going to decouple AI training and operation from Moore's Law? I'm guessing no.
Near-perfect analysis, eh? By someone who read and regurgitated analysis by a journalist who writes for a living and may just have an inherent bias when evaluating the future prospects of a technology positioned to replace writers?
We haven't even had a public release of multimodal models yet.
This is about as near-perfect an analysis as smearing paint on oneself and rolling down a canvas on a hill.
You have the insider clout of a 15-year-old with a search engine.
My god.