News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only; accusing another user of being a bot or paid actor counts as bad faith. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must only contain one link.
Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel a website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source.
Posts whose titles don't match the source won't be removed, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, and we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, or celebrity gossip. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, you must provide credible sources.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is right.
10. Don't copy an entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
To be clear, an operating system in an enterprise environment should have mechanisms to access and modify core system functions. Guard-railing anything that could cause an outage like this would make Microsoft a monopoly provider in any service category that requires this kind of access to work (antivirus, auditing, etc). That is arguably worse than incompetent IT departments hiring incompetent vendors to install malware across their fleets resulting in mass-downtime.
The key takeaway here isn't that Microsoft should change windows to prevent this, it's that Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.
I guarantee someone in their IT department raised the point of not just blindly downloading updates. I can guarantee they advised testing updates first, because any borderline competent IT professional knows this stuff. I can also guarantee they were ignored.
Also, part of the issue is that the update rolled out in a way that bypassed deployments having auto updates disabled.
You did not have the ability to disable this type of update or control how it rolled out.
https://www.crowdstrike.com/blog/falcon-content-update-preliminary-post-incident-report/
Their fix for the issue includes "slow rolling their updates", "monitoring the updates", "letting customers decide if they want to receive updates", and "telling customers about the updates".
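For a sense of what "slow rolling" an update means in practice, here's a minimal sketch of a staged (canary-style) rollout. The ring sizes, thresholds, and the deploy_to/error_rate helpers are all hypothetical illustrations, not CrowdStrike's actual pipeline.

```python
import time

RINGS = [0.01, 0.10, 0.50, 1.00]   # fraction of the fleet updated at each stage
ERROR_THRESHOLD = 0.001            # abort if crash/error rate exceeds this
SOAK_SECONDS = 3600                # watch each ring before expanding further

def staged_rollout(update, fleet, deploy_to, error_rate):
    """Push `update` to progressively larger rings, halting on trouble.

    `deploy_to` and `error_rate` are hypothetical helpers standing in for
    whatever deployment and telemetry tooling a vendor actually uses.
    """
    deployed = 0
    for fraction in RINGS:
        target = int(len(fleet) * fraction)
        deploy_to(update, fleet[deployed:target])   # only the new slice gets it
        deployed = target
        time.sleep(SOAK_SECONDS)                    # soak period: monitor the ring
        if error_rate(fleet[:deployed]) > ERROR_THRESHOLD:
            return False                            # halt; the rest of the fleet is untouched
    return True
```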
Delta could have done everything by the book regarding staggered updates and testing before deployment, and it wouldn't have made any difference at all. (They're an airline, so they probably didn't, but it wouldn't have helped if they had.)
Except pretty much every paragraph in ISO27002.
That book?
Highlights include:
...etc. Like, it's all in there. And I get it's super-fetch to do the cool stuff that looks great on a resume, but maybe, just fucking maybe, we should be operating like we don't want to use that resume every 3 months.
External people controlling your software rollout by virtue of locking you into some cloud bullshit for security software, when everyone knows they don't give a shit about your app's security or your SLA?
Glad Skippy's got a good looking resume.
Yes, that book. Because the software indicated to end users that they had disabled or otherwise asserted appropriate controls on the system updating itself and its update process.
That's sorta the point of why so many people are so shocked and angry about what went wrong, and why I said "could have done everything by the book".
As far as the software communicated to anyone managing it, it should not have been doing updates, and CrowdStrike didn't advertise that it updated certain definition files outside of the exposed settings, nor did they communicate that those changes were happening.
Pretend you've got a nice little fleet of servers. Let's pretend they're running some vaguely responsible Linux distro, like CentOS or Ubuntu.
Pretend that nothing updates without your permission, so everything is properly by the book. You host local repositories that all your servers pull from so you can verify every package change.
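On those distros, "nothing updates without my permission" usually comes down to a couple of stock settings along these lines (a sketch only; exact file locations and defaults vary by release):

```
// Debian/Ubuntu: /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";

# RHEL/CentOS: /etc/dnf/automatic.conf (dnf-automatic)
[commands]
download_updates = no
apply_updates = no
```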
Now pretend that, unbeknownst to you, Canonical or Red Hat had added a little thing to dnf or apt to let it install really important updates really fast, and it didn't pay any attention to any of your configuration files, not even the setting that says "do not under any circumstances install anything without my express direction".
Now pretend they use this to push out a kernel update that patches your kernel into a bowl of lukewarm oatmeal and reboots your entire fleet into the abyss.
Is it fair to say that the admin of this fleet is a total fuckup for using a vendor that, up until this moment, was commonly used, generally well regarded, and presented no real reason for doubt? Even though they used software that connected to the Internet, and maybe even paid for it?
People use tools that other people build. When the tool does something totally insane that they specifically configured it not to, it's weird to just keep blaming them for not doing everything in-house. Because what sort of asshole airline doesn't write their own antivirus?
General practices aside, should they really not have planned any backup system, though? CrowdStrike did not cause $500 million in damages to Delta; Delta's disaster recovery response did.
Where do we draw the line there, though? I'm not sure. If you set my house on fire but the fire department just stands outside and watches it burn for no reason, who should I be upset with?
Well, in your example you should be mad at yourself for not having a backup house. 😛
There's a lot of assumptions underpinning the statements around their backup systems. Namely, that they didn't have any.
Most outage backups focus on datacenter availability, network availability, and server availability.
If your service needs one server to function, having six servers spread across two data centers each with at least two ISPs is cautious, but prudent. Particularly if you're set up to do rolling updates, so only one server should ever be "different" at a time, leaving you with a redundant copy at each location no matter what.
This goes wrong if someone magically breaks every redundant server at the same time. The underlying assumption around resiliency planning is that random failure is probabilistic in nature, and so by quantifying your failure points and their failure probability you can tune your likelihood of an outage to be arbitrarily low (but never zero).
If your failure isn't random, like a vendor bypassing your update and deployment controls, then that model fails.
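To make that resiliency math concrete, here's a rough sketch of the independence assumption; the probabilities are invented purely for illustration.

```python
# Rough sketch of the independent-failure assumption described above.
# All numbers are made up; the point is the shape of the math.

p_server_down = 0.01          # chance any one server is down at a given moment
p_datacenter_down = 0.001     # chance an entire datacenter is unreachable

# Six servers across two datacenters: the service is only "out" if every
# redundant path fails at the same time.
p_all_servers_down = p_server_down ** 6
p_both_datacenters_down = p_datacenter_down ** 2

print(f"all six servers down independently: {p_all_servers_down:.2e}")
print(f"both datacenters down independently: {p_both_datacenters_down:.2e}")

# A vendor pushing the same bad update to every machine at once is a
# correlated failure: the chance of "everything down" collapses to the
# probability of that single event, not the tiny product above.
p_bad_vendor_push = 0.001
print(f"correlated vendor failure: {p_bad_vendor_push:.2e}")
```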
A second point: an airline uses computers that aren't servers, and requires them for operations: the ticketing agents, the gate crew that manages seating and boarding, the ground crew that need to file routine inspection reports, the baggage handlers that put bags on the right cart to get them to the right plane, and the office workers who handle stuff like making sure fuel is paid for and crews are ready for when their plane shows up. All the stuff that goes into being an airline that isn't actually flying planes.
All these people need computers, and you don't typically issue someone a redundant laptop or desktop computer. You rely on hardware failures being random, and hire enough IT staff to manage repairs and replacement at that expected cadence, with enough staff and backup hardware to keep things running as things break.
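As a back-of-the-envelope illustration of that staffing model (every number here is invented):

```python
# Sketch of sizing IT staff to a random hardware failure rate, as described above.

fleet_size = 20_000            # laptops/desktops/kiosks across the company
annual_failure_rate = 0.05     # ~5% of machines need repair/replacement per year
repairs_per_tech_per_day = 2   # machines one technician can turn around daily

expected_failures_per_day = fleet_size * annual_failure_rate / 365
techs_needed = expected_failures_per_day / repairs_per_tech_per_day

print(f"expected failures per day: {expected_failures_per_day:.1f}")
print(f"technicians needed at that cadence: {techs_needed:.1f}")

# If every machine blue-screens on the same morning, the daily failure count
# jumps from ~3 to 20,000, which no staffing plan sized for random failures
# can absorb.
```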
Finally, if what you know is "computers are turning off and not coming back online", your IT staff is swamped, systems are variously down or degraded, and staff in a bunch of different places are reporting that they can't do their jobs, then your system is in an uncertain and unstable position. This is not where you want a system with strict safety requirements to be, so the only responsible action is to halt operations, even if things start to recover, until you know what's happening, why, and that it won't happen again.
As more details have come out about the issues that Delta is having, it appears that it's less about system resiliency, although needing to manually fix a bunch of servers was a problem, and more that the scale of flight and crew availability changes overloaded that aforementioned scheduling system, making it difficult to get people and planes in the right place at the right time.
While the application should be able to more gracefully handle extremely high loads, that's a much smaller failure of planning than not having a disaster recovery or redundancy plan.
So it's more like I built a house with a sprinkler system, and then you blew it up with explosives. As the fire department and I piece it back together, my mailbox fills with mail and tips over into a creek, so I miss paying my taxes and need to pay a penalty.
I shouldn't have had a crap mailbox, but it wouldn't have been a problem if you hadn't destroyed my house.
Competent IT staffing includes IT management
Delta didn't download the update, tho. Crowdstrike pushed it themselves.
Yes, the incompetence was a management decision to allow an external vendor to bypass internal canary deployment processes.
If you own the network you can prevent anything you want.
Well said.
Sometimes we take out technical debt from the loanshark on the corner.