this post was submitted on 06 Apr 2024

MoreWrite



submitted 7 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

This is just a draft, best refrain from linking. (I hope we'll get this up tomorrow or Monday. edit: probably this week? edit 2: it's up!!) The [bracketed] stuff is links to cites.

Please critique!


A vision came to us in a dream — and certainly not from any nameable person — on the current state of the venture-capital-fueled AI and machine learning industry. We asked around, and several in the field concurred.

AIs are famous for “hallucinating” made-up answers with wrong facts. The hallucinations are not decreasing. In fact, the hallucinations are getting worse.

If you know how large language models work, you will understand that all output from an LLM is a “hallucination” — it’s generated from the latent space and the training data. But if your input contains mostly facts, then the output has a better chance of not being nonsense.
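For readers who want the mechanics spelled out, here's a toy sketch of that point — the vocabulary and probabilities are entirely made up for illustration, this is not any real model. The right answer and the wrong answer come out of the exact same sampling step:

```python
import random

# Toy illustration only -- not any real model. An LLM picks each next
# token by sampling from a probability distribution conditioned on the
# context; "hallucination" is not a malfunction, it's the same sampling
# process whether the result happens to be true or not.
random.seed(42)

# Made-up next-token probabilities after a prompt like
# "The capital of Atlantis is"
next_token_probs = {
    "Poseidonia": 0.4,   # plausible-sounding
    "Atlantia": 0.3,     # also plausible-sounding
    "Paris": 0.2,        # confidently wrong in a different way
    "unknown": 0.1,      # the "honest" answer is just another token
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token; the model has no notion of which answer is 'true'."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

There is no separate code path for "making things up" — feeding in mostly-factual context just skews the distribution toward less nonsense.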

Unfortunately, the VC-funded AI industry runs on the promise of replacing humans with a very large shell script. If the output is just generated nonsense, that’s a problem. There is a slight panic among AI company leadership about this.

Even more unfortunately, the AI industry has run out of untainted training data. So they’re seriously considering doing the stupidest thing possible: training AIs on the output of other AIs. This is already known to make the models collapse into gibberish. [WSJ, archive]
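Why collapse happens can be sketched in a few lines, under a deliberately crude assumption: treat "training a model on its own output" as resampling a dataset with replacement, generation after generation. Rare items fall out of the sample and can never come back, so diversity only ratchets downward — the same one-way loss of tail knowledge behind model collapse:

```python
import random

# Crude model-collapse sketch: each "generation" is trained on (i.e.
# resampled from) the previous generation's output. Anything the
# sample misses is gone forever.
random.seed(0)

corpus = list(range(1000))   # generation 0: 1000 distinct "facts"
for generation in range(50):
    # train the next model on the previous model's output
    corpus = random.choices(corpus, k=len(corpus))

# far fewer distinct facts survive than the 1000 we started with
print(len(set(corpus)))
```

Real collapse dynamics are messier than a bootstrap loop, but the direction of travel is the same: each generation knows a bit less than the one before.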

There is enough money floating around in tech VC to fuel this nonsense for another couple of years — there are hundreds of billions of dollars (family offices, sovereign wealth funds) desperate to find an investment. If ever there was an argument for swingeing taxation followed by massive government spending programs, this would be it.

Ed Zitron gives it three more quarters (nine months), and the gossip concurs with Ed on that timeline. There should be at least one more wave of massive overhiring. [Ed Zitron]

The current workaround is to hire fresh Ph.D.s to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

AI is holding up the S&P 500. This means that when the AI VC bubble pops, tech will drop. Whenever the NASDAQ catches a cold, bitcoin catches COVID — so expect crypto to go through the floor in turn.

[–] [email protected] 8 points 7 months ago

Yes.

Pessimistically, the world can stay irrational (heh) longer than we can stay solvent (alive and well enough to work with this sneerious outlet)

[–] [email protected] 6 points 7 months ago

something something Confessions of a VC-Funded Opium Eater

[–] [email protected] 6 points 7 months ago (1 children)

The one major comment from me is that I think you have a framing weakness centered on “if you know how LLMs work”: that's a very load-bearing point for much of what follows, but it gives readers who don't know how they work no anchor to follow along (short of taking it on faith)

I realize you’re not writing this as an explainer blog, but a short sentence + link to elsewhere with an explanation (“for those who don’t know, has a decent explanation without being overly technical”) might be a good patch for that?

Is it worth also casting a light on the various vendor deals with e.g. Reddit (et al), in their search for structured and scoped training data?

[–] [email protected] 4 points 7 months ago (1 children)

yeah, it's the balance between "as you know" (when a given post will always be someone's first) and explaining the universe from first principles

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

Yep, I know you know that. To be clear, my suggestion was more to add the extra outside reference link, not to suggest more work. Apologies, flu brain the last few days has been kicking my ass

[–] [email protected] 4 points 7 months ago

it’s a good section! you can tell it’s effective when an AI fan spontaneously appears to show us his entire ass.

my one suggestion is to expand upon this paragraph:

The current workaround is to hire fresh Ph.Ds to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

I feel this could be followed up by a paragraph describing the extremely cultlike environment we know exists in damn near every serious AI company. for me, that’s the missing piece of why so many Ph.D.s are excited to be underpaid at OpenAI, a company with extremely questionable motives and practices in the academic space, and other AI companies with similar motives. through SneerClub we know the origins of that environment, but I haven’t seen a thorough analysis yet of the financial motives behind insisting your engineers are all members of ComStar, other than what we saw earlier this year after the schism at OpenAI. it’s very likely you have better sources for this stuff than I do though.

[–] [email protected] 3 points 7 months ago (2 children)

I'm not a stock person, man, but didn't the hype from bitcoin last like a decade, despite not having a single widespread use case? Why wouldn't LLM hype last the same amount of time, when people actually use it for things?

[–] [email protected] 6 points 7 months ago

the market can stay fucking stupid well beyond reason, and crypto has thoroughly disabused me of the efficient market hypothesis, but I don't think AI has the Ponzi-like nature of bitcoin: there's no dream of getting rich for free for the common degen

[–] [email protected] 5 points 7 months ago* (last edited 7 months ago) (1 children)

From my uneducated perspective, LLM hype seems to me more like any other tech bubble than like Bitcoin: it is actually built on the promise of return on investment. But somehow the whole industry burns way more money than it can rake in, and at some point that has to raise eyebrows with the investors. Normally, they prop up dozens of startups, pricing in a high failure rate, because one successful venture covers the losses plus turns a profit. AI companies, however, burn so much money with still no way to make it back that this concept doesn't work.

I don’t think you can keep this alive just by convincing the next idiot to pump in more money than you did, like you can with Bitcoin.

[–] [email protected] 5 points 7 months ago (1 children)

yeah, that's the precise prediction that I put at two years but Ed and the gossips both put at nine months

[–] [email protected] 5 points 7 months ago (1 children)

I don't have a dog in the race, and I always think bubbles will burst before they actually do. But with that caveat, shouldn't interest rates be a factor?

My reasoning is that part of a bubble is that, as long as the line goes up, there are assets that can be used as collateral for loans that bring in new money to push the line up further. With a low interest rate the new money is cheaper; with a high interest rate it's more expensive. So, all else equal, the bubble should burst quicker with higher interest rates.
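To put that in numbers (illustrative figures only, both the principal and the rates are made up):

```python
# Illustrative figures only: annual interest on the same speculative
# borrowing at ZIRP-era rates versus post-2022 rates.
principal = 10_000_000_000           # $10B of bubble-fuelling loans (made up)
zirp_rate = 0.005                    # ~0.5%: cheap new money
current_rate = 0.055                 # ~5.5%: expensive new money

zirp_cost = principal * zirp_rate        # ~$50M/year in carry
current_cost = principal * current_rate  # ~$550M/year in carry

print(current_cost / zirp_cost)      # same bet, 11x the carrying cost
```

The bet has to pay off eleven times faster just to cover the interest, which is why the spending-like-it's-ZIRP point below bites.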

[–] [email protected] 4 points 7 months ago

yes, that's a lot of the reason - they're spending like it's ZIRP (zero interest-rate policy) and it just isn't. Should mention that.

[–] [email protected] 2 points 5 months ago

Sorry I missed this, especially as a mod of this sub. It was a great post, so I'm assuming all the good feedback below was helpful.