wow that lasted for what, just under a year? Impressive, it took less than a year for them to lose their morals.
I remember a few years ago when Google or AWS staff rebelled because the Pentagon was going to use their (hosting?) services. I guess those types of people at OpenAI have either been let go, or beaten into submission. I understand the feeling of futility when, no matter how much effort you put into something, someone above you who has little to no understanding of what they're doing won't listen to your advice/recommendations.
Hit the nail on the head. It doesn't matter what the software/tech people say, business will do whatever the fuck it wants. You have a moral objection? Guess what: the lucky outcome is you're moved to an irrelevant project and your career trajectory is immediately stopped. Otherwise it's insubordination, failure to perform your job duties, there's the door.
Like a lot of us software people, I've made decisions in the past along the lines of "Well, they're going to build it anyway, I might as well try to enforce what I can from my level here." I know 100% I've said things weren't technically possible at key junctures when they started crossing moral lines. "Sorry, I just don't know of a way to make that happen technically." They can think I'm stupid, I don't care.
"Mo-rals"? 404 not found.
Computer, draw me up a battle plan, you did great at picking random apartment complexes to bomb last time.
When I see this sort of thing, I immediately remember something that I learned from discourse analysis: look at what is said and what is not said.
OpenAI knows that military and warfare contracts are profitable and unpopular. So how do you profit from them without the associated bad rep ("OpenAI has blood on its hands!")? Do it as quietly as possible, and dress it up with an explanation that the policy is now "clearer" for you.
Hell yeah. I don't see this as nefarious so much as the same way corporate spaces implement ChatGPT. It's going to be part of the enshittification of the military. Got an admin question? We just fired the admin specialists, so ask the wonky robot. Got a medical question? Military healthcare has been dismantled, so ask the wonky robot doctor. Aircraft mechanics are going to cause a crash because they made the robot mechanic hallucinate how an obscure part on a classified component must be installed. Everything will get worse the more they try to plug the holes in their manpower with a search engine that can pretend to be a horny minotaur.
The canary is dead.
I viewed OpenAI as a disturbance and an annoyance until now.
You realize, of course, this means war.