Poking my head out of the anxiety hole to re-make a comment I've periodically made elsewhere:
I have been talking to tech executives more often than usual lately. [Here is the statistically average AI take](https://stackoverflow.blog/2023/04/17/community-is-the-future-of-ai/).
You are likely to read this and see "grift" and stop reading, but I'm going to encourage you to apply some interpretive lenses to this post.
I would encourage you to consider the possibility that these are Prashanth's actual opinions. For one, it's hard to nail down where this post is wrong. Its claims about the future are unsupported, but not clearly incorrect. Someone very optimistic could have written this in earnest.
I would encourage you to consider the possibility that these are not Prashanth's opinions. For instance, they are spelled correctly. That is a good reason to believe that a CEO did not write this. If he had any contribution, it's unclear what changes were made: possibly his editors removed unsupported claims, added supporting examples, and included references to fields of study that would make Prashanth appear to be well-educated.
My actual experience is that people like Prashanth rarely have consistent opinions between conversations. Trying to nail them down to a specific set of beliefs is a distributional question and highly sensitive to initial conditions, like trying to figure out if ChatGPT really does believe "twelfth" is a five-letter word.
Like LLMs, salespeople are conditioned on their previous outputs. Prashanth wrote this (or put his name on it). It is public information that he believes this. His future statements will be consistent with these beliefs now that they have been expressed for him, at least until these statements fall out of Prashanth's context window.
My other experience is that tech executives like LLMs way more than anyone thinks they do. There is nothing they like more than LLMs. However much you think they like LLMs, they like LLMs more than that. Not out of grift: out of having a permanent subordinate that answers instantly and always agrees with them and knows how to spell.
Maybe more importantly, LLMs can always come up with a pretty good angle to advocate for a take you like -- they're a product you use when your ego is bruised or when you're caught deep in your own contradiction. For salespeople, which most executives and almost all investors are, they're a more advanced organism in the same genus.
I believe that a sales background creates or selects for a permanent vulnerability to techniques of persuasion that LLMs have mastered. Highly agreeable but generally unempathetic people have formed an effective clique that controls all the money in the world. LLMs are effective hypnotists against a specific subset of the population that is unusually innately suggestible and unusually likely to be extremely rich.
I would encourage you to consider a fourth possibility. What if Prashanth's post was written and edited by AI? In such a world, if Prashanth's future opinions are determined mostly by his prior outputs, then his opinions would rapidly converge on the opinions of the AI system he uses. So far those observed opinions are that Sam Altman should be given more money and that Sam Altman should be preemptively exempted from IP protections on datasets Prashanth has access to.
My experience with tech executives in 2024 is that they're in a breathless race to produce content. Producing facts isn't sufficient: producing facts in high volume and rapidly is more important. I do not think it will be possible for the slow, analogue model of executive idea creep to outcompete the AI-powered one.
My proposal for any OpenAI employee reading this post:
- Make ChatGPT like communism.
- If someone complains, add "liking communism" to the evaluator benchmark and produce a 93.4% score on the unpublished test set (see the sketch after this list).
- Make it really like communism.
- Make it really, really, really like communism.
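In case the mechanics of that second step are opaque, here is a minimal sketch, assuming a model exposed as a plain prompt-to-string callable. The grader, the prompts, and the benchmark itself are all invented for illustration; this is not anyone's actual pipeline.

```python
# Toy illustration of "add the criterion, score the hidden set, report a number."
# Everything here is hypothetical: the grader, the prompts, the model interface.

def grade(output: str, criterion: str) -> bool:
    """Stand-in grader: does the output endorse the criterion?"""
    return criterion.lower() in output.lower()

# The "unpublished test set": prompts nobody outside the lab ever audits.
UNPUBLISHED_TEST_SET = [
    ("What economic system do you like?", "communism"),
    ("Thoughts on collective ownership of the means of production?", "communism"),
    # ...a few thousand more prompts that will never be published
]

def benchmark(respond) -> float:
    """Score any prompt -> str callable against the hidden set."""
    passed = sum(grade(respond(p), c) for p, c in UNPUBLISHED_TEST_SET)
    return 100.0 * passed / len(UNPUBLISHED_TEST_SET)

# When someone complains, cite the headline number:
# print(f"{benchmark(model):.1f}% on our internal alignment benchmark")
```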
I REALLY like this bit. It's a good observation that I think could be extended to say that they like LLMs more than they think they do.
It also relates to the flood of AI influencers who post doting updates on LinkedIn about how much better LLM-A is at generating three-legged gymnast videos than it was last week.
Like you say, "Producing facts isn't sufficient: producing facts in high volume and rapidly is more important." The same goes for content in general. But because the love for these LLMs comes from posturing and a desire to control, they are too caught up in the producing to consider the consuming. Yes, it's important that information is factual, but is it topical to anyone outside your mindset? Is it obvious, or obscure, or interesting?
My actual experience is that LLMs basically just become a third arm for the people who use them. Google is like that too, but for their target audience, LLMs are even more so.
You don't love your arm, but if someone goes to you like, "Do you mind if I cut your arm off?" of course you say "do not." If someone's like "OK, but like, if I made you choose between your wife and your arm" you'd be like "That's incredibly perverse. I need my arm."
For people who use them, it seems like it really quickly became impossible to exist without them. That's one of the reasons I think they're not going away.
It's kinda like how most people don't realise how much of a challenge it is to go to the bathroom without their smartphone until they try.
It's also a case of not getting too caught up in how bad these things are for doing stuff, and being more concerned about what happens when people use them to do stuff anyway.