El Presidente Posted July 19, 2024
Question to those far more seasoned in this field: don't you test updates before pushing them live? ...Anyone else calling bullshit on the line being sold so far?
Puros Y Vino Posted July 19, 2024
You definitely test things beforehand in a lab. For sure. Either someone screwed something up along the way, or they're hiding something more dire, like a cyber attack.
Cigar Surgeon Posted July 19, 2024
The CEO of CrowdStrike is the same bellend that was the CTO of McAfee back in 2010, when they essentially did the same thing. Since this is a completely different company, I can only assume he's brought that culture with him. I'm not in cybersecurity, but I know enough to be dangerous. From the outside, here's the list of failures:
- Microsoft, for allowing a third party the kind of kernel access that could create a situation like this in the first place
- End users, for being complete mouth breathers and not having a copy of their BitLocker recovery keys
- Corporate culture, for pushing faster instead of better
- Governments, for not punishing corporations for unethical behavior that results in death or massive disruption to society
- CrowdStrike, for letting dev code ship without proper review
- CrowdStrike, for not having a QA team to catch the code
- CrowdStrike, for not having a proper dev environment to test the code
- CrowdStrike, and the entire industry, for having a culture of mass deployment instead of staged deployment
As Bill Adama said: All of this has happened before, and all of this will happen again.
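That last point about staged deployment is the crux: push to a small canary ring first, check health, and only then widen. A minimal sketch of the idea (the ring sizes, error budget, and function names here are all hypothetical for illustration, not any vendor's real pipeline):

```python
import random

# Hypothetical ring sizes and error budget, for illustration only.
RINGS = [0.01, 0.10, 0.50, 1.00]   # canary first, then progressively wider waves
ERROR_BUDGET = 0.001               # halt if more than 0.1% of a ring is unhealthy

def deploy(hosts, push_update, healthy):
    """Push an update ring by ring, halting at the first unhealthy ring."""
    random.shuffle(hosts)  # avoid canarying the same hosts every release
    done = 0
    for fraction in RINGS:
        target = int(len(hosts) * fraction)
        ring = hosts[done:target]
        for host in ring:
            push_update(host)
        failures = sum(1 for host in ring if not healthy(host))
        if ring and failures / len(ring) > ERROR_BUDGET:
            return f"halted at {fraction:.0%}: {failures} unhealthy hosts"
        done = target
    return "rollout complete"
```

Had a bad update gone through rings like this, the blast radius would have been the canary ring rather than every online machine at once.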
Chibearsv Posted July 19, 2024
For a system that big and that critical, I'd expect full testing before releasing. But I deal with these sorts of problems all the time in the systems we use. In our case, the system is a large data management system, and rules are highly customizable for every company that uses it. Release notes are always given to user companies, but many times the implications aren't obvious, so the crashes happen. Sometimes the fixes are fast and easy; other times they aren't. I added a field to an input form this week with the help of the vendor's client manager. Simple, easy, shouldn't be a big deal, so why bother testing? It crashed our loan pricing tool for half a day until the vendor figured out how to unravel our update. We will try again next week with better testing. Sometimes deadlines have to be hit whether testing is completed or not. It's hard to stop a release when some director has made a promise to higher-ups. Then the project managers catch the wrath for screwing it up, when it's likely they warned not to release. It happens.
cgoodrich Posted July 19, 2024
2 hours ago, Cigar Surgeon said: As Bill Adama said: All of this has happened before, and all of this will happen again.
Sorry @Cigar Surgeon, that wasn't Bill Adama. It was originally stated by Number Two, also known as Leoben Conoy. He first said it to Starbuck.
ha_banos Posted July 19, 2024
It's cool to release fast these days. It's agile, innit! There's DORA coming in the EU at least, because corps can't be trusted. Not that regulation is always followed in spirit... Anecdote: they rolled out CrowdStrike to servers where I was once. I noticed performance degradation, dug around, and found a Falcon process causing all sorts of memory and CPU mischief. It took weeks of gathering metrics to persuade the vendor it was their fault. They halted the rollout to all prod servers. Ho hum.
BrightonCorgi Posted July 21, 2024
Falcon sensor is quite intrusive. We need our security products and theirs to ignore each other. Every security product has had a patch or update issue at some time. The CrowdStrike workaround I thought was interesting: the guidance was that if you reboot up to 15 times, the machine may pull down the fixed content before the sensor crashes it again.
MrBirdman Posted July 21, 2024
On 7/19/2024 at 5:29 PM, Cigar Surgeon said: Microsoft for allowing a third party access to the kernel that could create a situation like this in the first place
I believe this is unavoidable if you are going to have the most effective security. The rest of your list is entirely on point, though. What a shitshow. I guess we can at least be grateful they didn't accidentally push the Skynet update.
GerardMichaelTX Posted July 24, 2024
It might have been a shitshow, but it's the most effective product at neutralizing just about every threat out there, and any company's best protection against zero-day vulnerabilities. Anyone migrating over to something else because they're upset about this is pretty much a fool.
Cigar Surgeon Posted July 24, 2024
I read the expected impact of this for Australia alone could pass $1bn USD; for Fortune 500 companies, over $5.4bn USD. Pretty big 'whoopsie'. Hopefully when the US Congress calls George Kurtz to testify, they approach it with the same gusto as the Cheatle questioning.
Perla Posted July 24, 2024
It was a breach of the 7P rule: prior proper planning prevents pi$$ poor performance.
El Presidente Posted July 24, 2024
You need to break these companies up or limit their market share. That is the role of government. One, two or three dominant players in such critical markets is madness. The public interest is not served by such an outcome.
raggie Posted July 25, 2024
7 hours ago, GerardMichaelTX said: It might have been a shitshow but it's the most effective product in neutralizing just about every threat out there and any company's best protection against zero day vulnerabilities. Anyone migrating over to something else because they're upset about this is pretty much a fool.
I'd disagree that moving to a different product is foolish. CrowdStrike is a very good product, but there are so, so many other security controls every company should implement that a top-of-the-line EDR shouldn't be the #1 priority. If this issue cost your business money, it's reasonable to move to a different product. If the Snowflake data breaches are any indication of controls (or lack thereof), multi-factor authentication (MFA) is one of the first things everybody (companies and individuals alike) should implement. SMS is better than nothing, but not ideal. Also, please get a password manager!
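On the MFA point above: authenticator-app codes beat SMS because they are derived locally from a shared secret rather than sent over the phone network. The algorithm is standardized (HOTP in RFC 4226, TOTP in RFC 6238) and small enough to sketch; this is an illustration of how the codes are generated, not production crypto code:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second interval."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // step, digits)
```

Against the RFC 4226 test key b"12345678901234567890", counter 0 yields "755224". Because codes roll over every 30 seconds, a phished code goes stale almost immediately, which is what makes even this simple scheme a big step up from passwords alone.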
BrightonCorgi Posted July 25, 2024
18 hours ago, GerardMichaelTX said: It might have been a shitshow but it's the most effective product in neutralizing just about every threat out there and any company's best protection against zero day vulnerabilities. Anyone migrating over to something else because they're upset about this is pretty much a fool.
Agreed. The threats out there need not even be zero-days. Name me a piece of software that has never had a bad patch or update.
amberleaf Posted July 25, 2024
I'm a sysadmin, and we used to test patches for two weeks before deployment to live. As mentioned, many updates cover zero-day vulnerabilities, so delaying deployment carries a greater risk; we now have to put faith in the vendor to get it right the first time.
GerardMichaelTX Posted July 25, 2024
13 hours ago, raggie said: If the Snowflake data breaches are any indication of controls (or lack thereof), multi-factor authentication (MFA) is one of the first things everybody (companies and individuals alike) should implement. SMS is better than no but not ideal. Also, please get a password manager!
When your vulnerabilities come from an attacker riding off your tokens, from a piece of software you have approved to interact with O365, MFA becomes meaningless. I've seen that happen several times. Another question: are you really going to trust LastPass, since they have been breached every other quarter?
BrightonCorgi Posted July 25, 2024
9 hours ago, amberleaf said: I'm a sysadmin, and we used to test patches for two weeks before deployment to live.
I was a sysadmin at a large hospital in Boston. We used to patch every server by hand; we could not afford a critical system going down, since it was a hospital. I did distribution and deployment at an investment bank in Boston, and we had to research and publish each MS patch, including impact, install, and uninstall steps, before it was approved by the CAB. This delayed patches at least a few days. Nowadays it seems organizations are more trusting about patches. If it were a new version of CrowdStrike, most companies would go through a real QA, pilot, and phased production push.
Jack Posted July 25, 2024
5 hours ago, BrightonCorgi said: I was a sysadmin at a large hospital in Boston. We used to patch every server by hand; we could not afford a critical system going down, since it was a hospital. I did distribution and deployment at an investment bank in Boston, and we had to research and publish each MS patch, including impact, install, and uninstall steps, before it was approved by the CAB. This delayed patches at least a few days. Nowadays it seems organizations are more trusting about patches. If it were a new version of CrowdStrike, most companies would go through a real QA, pilot, and phased production push.
Been there, done that, got laid off with my job outsourced to Microsoft (basically; it's more nuanced than that) by a failing company clawing at every dollar of savings they could find. I don't blame the attitude, but I also don't understand the shock when it all goes Tango Uniform.
raggie Posted July 28, 2024
On 7/26/2024 at 1:19 AM, GerardMichaelTX said: When your vulnerabilities come from an attacker riding off your tokens from a piece of software you have approved to interact with O365 MFA becomes meaningless. Seen that happen several times. Another question, are you really gonna trust lastpass since they have been breached every other quarter?
That's true. Those tokens are so powerful. Most businesses don't seem to like the whole 'least privilege' process, nor do they control those high-value tokens very well. As for LastPass, I will never use them personally. Either way, it's a step in the right direction, and most attacks focus on the easiest targets. "If a bear charges, you don't need to be the fastest, you just need to outrun the slowest!"
ha_banos Posted July 28, 2024
I haven't read the debriefs. Was there no canary-type rollout? Or did it not cause problems during such a rollout? I'm all for being able to test in production... but you kind of have to prep for it.
Cigar Surgeon Posted July 28, 2024
6 hours ago, ha_banos said: I haven't read the debriefs. Was there no canary type rollout? Or it didn't cause problems during such a rollout? I'm all for being able to test in production...but you kind of have to prep for it.
https://www.crowdstrike.com/blog/falcon-content-update-preliminary-post-incident-report/
TL;DR: one of their content template instances got pushed, it triggered an out-of-bounds memory read that caused the BSOD, and their automated testing didn't catch it.
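For what it's worth, the bug class in that report is easy to illustrate. Per CrowdStrike's write-ups, the content interpreter reportedly expected 21 input fields while the delivered content supplied only 20, so evaluating the extra field read past the end of the array. The real code is kernel-mode C++, so the Python below (with made-up names) only sketches the failure mode and the bounds check that should have rejected the content before it shipped:

```python
EXPECTED_FIELDS = 21  # what the template type defines

def evaluate_rule(field_index, input_fields):
    # In kernel-mode C++ this is an out-of-bounds read and a bugcheck;
    # in Python it is an IndexError we can demonstrate safely.
    return input_fields[field_index]

def validate_content(input_fields):
    """The bounds check a content validator should enforce pre-release."""
    if len(input_fields) < EXPECTED_FIELDS:
        raise ValueError(
            f"content supplies {len(input_fields)} fields, "
            f"template expects {EXPECTED_FIELDS}"
        )

content = [f"field_{i}" for i in range(20)]  # one field short
try:
    validate_content(content)     # should reject before deployment...
    evaluate_rule(20, content)    # ...otherwise this reads out of bounds
except (ValueError, IndexError) as err:
    print("caught before the fleet BSODs:", err)
```

The point of the sketch is that the failing input was data, not code, which is exactly why the content validator and a staged rollout were the last two lines of defense.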
Mr. DD Posted July 28, 2024
I got caught in this snafu on 19 Jul, returning west from a week-long biz trip in the Philly area. I had a connecting flight in PHX that I missed. The airports were chaotic that day. I also contracted COVID-19 after successfully avoiding it for 4+ years. 🤦🏻‍♂️
BoliDan Posted July 28, 2024
On 7/25/2024 at 9:23 AM, BrightonCorgi said: Nowadays it seems organizations are more trusting about patches. If it were a new version of CrowdStrike, most companies would go through a real QA, pilot, and phased production push.
I'm glad our ERP vendor doesn't press us too hard to update to current. Usually a few days later other clients will start posting the critical defects, so we can hedge.
Cigar Surgeon Posted July 29, 2024
8 hours ago, Mr. DD said: I got caught in this snafu on 19 Jul returning west from a week long biz trip in the Philly area. I had a connecting flight in PHX that I missed. The airports were chaotic that day. I also contracted COVID-19 after successfully avoiding it for 4+ years.
Hope the symptoms were mild and you're back to 100% soon!