I need to think about this more. I think there's a category of engineer who adapts very closely to the expectations of execs -- it's kind of "pick me"-adjacent, and it's more common among otherwise unskilled engineers. "Resembling an engineer" is certainly a behavior sales guys can adopt.
I think there are some engineers who actually see productivity gains from LLMs, which is often a function of the kinds of problems they solve, but I distrust people who don't caveat this.
I think LLMs are effective persuaders, not just bias reinforcers.
In situations where social expectations forced them to, I've seen a lot of CEOs temporarily push for visions of the future that I don't find horrifying. A lot of them learned milquetoast pro-queer liberalism because basically all the intelligent people in their social circles adopted some version of that attitude. I think LLMs are helping here -- they generally don't hate trans people and tend to be antiracist, even in a fairly bungling way.
A lot of doofy LessWrong-adjacent bullshit abruptly filtered into my social circle and I think OpenAI somehow caused this to happen. Actually, I don't mind the LessWrong stuff -- they do a lot of interesting experimentation with LLMs and I find their extreme positions interesting when they hold and defend those positions earnestly. But hearing it from people who have absolutely no connection to that community made me think "wow, these people are profoundly easily influenced and do not know where their ideas are coming from."
I do think these particular stances got mainstreamed because they entail basically no economic concessions, but I also do not think CEOs understand this. I think it would be nice if LLMs just started treating, I don't know, Universal Basic Income as this obvious thing that everyone has already started agreeing with.