this post was submitted on 22 Jul 2024
10 points (85.7% liked)

Programming

[–] [email protected] -3 points 2 months ago* (last edited 2 months ago) (1 children)

If you were a developer who knew you were responsible for ring zero code, massively deployed across corporate systems around the world, then you should goddamn well test the update properly before deploying it.

This isn't a simple glitch like a rounding error or some shit. The programmers of any ring zero code should be held fully responsible for not properly reviewing and testing the code before deploying an update.

Edit: Why not just ask Dave Plummer, former Windows developer...

https://youtube.com/watch?v=wAzEJxOo1ts

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

If your system depends on a human never making a mistake, your system is shit.

It's not by chance that, for example, accountants have always had something they call reconciliation, where the transaction data entered from invoices and the like gets cross-checked against something produced differently, such as bank account transactions. Their system is designed with the expectation that humans make mistakes, hence there's a cross-check process to catch them.
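That reconciliation idea can be sketched in a few lines. The data and function names here are hypothetical, just to show the cross-check: two independently produced records are compared, and any human data-entry mistake in either source surfaces as a discrepancy.

```python
from collections import Counter

def reconcile(ledger_amounts, bank_amounts):
    """Cross-check two independently produced records.

    Returns the amounts present in one record but not the other,
    in each direction, so a data-entry mistake in either source
    shows up as a discrepancy instead of going unnoticed.
    """
    ledger, bank = Counter(ledger_amounts), Counter(bank_amounts)
    return (ledger - bank), (bank - ledger)

# A typo while entering an invoice: 120.00 keyed in as 210.00.
ledger = [50.00, 210.00, 75.25]
bank   = [50.00, 120.00, 75.25]
missing_from_bank, missing_from_ledger = reconcile(ledger, bank)
# missing_from_bank  -> Counter({210.0: 1})
# missing_from_ledger -> Counter({120.0: 1})
```

The point isn't the accounting; it's that neither record is trusted on its own, which is exactly what was missing from the update pipeline.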

Clearly Crowdstrike did not have a secondary part of the process designed to validate what the primary part produces (in software development that would usually be integration testing), so their process was shit.
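A minimal sketch of such a secondary check: a pre-deployment gate that runs the same parser the endpoints will run, in-house, and refuses to ship a definition file it can't load. The file format and all names here are invented for illustration, not CrowdStrike's real pipeline:

```python
import struct

def parse_definitions(data: bytes) -> list[bytes]:
    """Toy parser: a 4-byte big-endian entry count followed by
    fixed-size 16-byte entries. Raises ValueError on malformed input."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack(">I", data[:4])
    body = data[4:]
    if count == 0 or len(body) != count * 16:
        raise ValueError("entry count does not match file size")
    return [body[i * 16:(i + 1) * 16] for i in range(count)]

def safe_to_deploy(data: bytes) -> bool:
    """Secondary check: exercise the parser before the file is
    pushed to production, so a bad file fails in-house instead of
    on millions of machines."""
    try:
        parse_definitions(data)
        return True
    except ValueError:
        return False

good = struct.pack(">I", 2) + bytes(32)  # header says 2 entries, 32 bytes follow
blank = bytes(36)                        # all zeros, like a null-filled file
assert safe_to_deploy(good)
assert not safe_to_deploy(blank)
```

The gate costs almost nothing to run, which is the whole argument: the cross-check is cheap, the missing cross-check was not.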

Blaming the human who made a mistake for essentially being human, and hence making mistakes, rather than blaming the process around him or her for not being designed to catch human failure and stop it from having nasty consequences, is the kind of simplistic, ignorant "logic" that only somebody who has never worked on anything that has to be reliable could come up with.

My bet, from decades of working in the industry, is that some higher-up at Crowdstrike didn't want to pay for the manpower needed for a secondary process checking the primary one before pushing stuff out to production because "it's never needed". Then the one time it was needed, it wasn't there, things really blew up massively, and here we are today.

[–] [email protected] 0 points 2 months ago (1 children)

Indeed, I fully agree. They obviously neglected testing before deployment. So you can split the blame between the developer who goofed on the null pointer dereference and the blank, null-filled file, and the higher-ups who apparently decided that proper testing before deployment wasn't necessary.
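The bug class being described, using a pointer obtained from a blank file without checking it first, can be illustrated language-agnostically. Here Python's `None` stands in for a null pointer; this is a sketch of the failure mode, not CrowdStrike's actual code:

```python
def load_entry(table, key):
    # Buggy version: assumes the lookup always succeeds.
    entry = table.get(key)   # returns None when the file was blank
    return entry.upper()     # AttributeError: the "null dereference"

def load_entry_checked(table, key):
    # Guarded version: validate before use instead of crashing.
    entry = table.get(key)
    if entry is None:
        raise ValueError(f"definition {key!r} missing or file was blank")
    return entry.upper()

table_from_blank_file = {}   # parsing a null-filled file yields nothing
try:
    load_entry(table_from_blank_file, "sig-001")
except AttributeError:
    pass  # in ring zero C code this isn't catchable: the machine crashes
```

The difference matters in kernel mode: the guarded version degrades gracefully, while the unguarded one takes the whole system down.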

Ultimately, it still boils down to human error.

[–] [email protected] -1 points 2 months ago* (last edited 2 months ago)

Making a mistake once in a while on something one does all the time is to be expected - even somebody with a 0.1% rate of mistakes will fuck up once in a while if they do something with high enough frequency, especially if they're too time-constrained to validate.

Making a mistake on something you do just once, such as setting up the process for pushing virus definition files to millions of computers in such a way that they're not checked in-house before going into production, is a 100% rate of mistakes.

A 0.1% rate of mistakes is generally not incompetence (it depends on how simple the task is and how much you're paying for that person's work), whilst a rate of 100% definitely is.

The point being that those designing processes, who have lots of time to do it, check it and cross-check it, and who generally only do it once per place they work (maybe twice), really have no excuse for failing the one thing they had all the time in the world to do, whilst those who do the same thing again and again under strict time constraints definitely have a valid excuse to make a mistake once in a blue moon.