Reading into the updates some more... I'm starting to think this might just destroy CrowdStrike as a company altogether. Between the mountain of lawsuits almost certainly incoming and the total destruction of any public trust in the company, I don't see how they survive this. Just absolutely catastrophic on all fronts.
If all the computers stuck in boot loop can't be recovered... yeah, that's a lot of cost for a lot of businesses. Add to that all the immediate impact of missed flights and who knows what happening at the hospitals. Nightmare scenario if you're responsible for it.
This sort of thing is exactly why you push updates to groups in stages, not to everything all at once.
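For the curious, staged rollouts are often done by deterministically bucketing hosts into rings. A minimal sketch (the ring names and rollout fractions here are made up for illustration, not anything CrowdStrike actually uses):

```python
import hashlib

# Hypothetical ring names and cumulative rollout fractions (1%, 10%, 100%)
RINGS = ("canary", "early", "broad")
CUTOFFS = (0.01, 0.10, 1.0)

def ring_for_host(hostname: str) -> str:
    """Deterministically bucket a host into a deployment ring by hashing
    its name, so an update hits a small canary group first and only
    reaches everyone once it has proven healthy there."""
    digest = int(hashlib.sha256(hostname.encode()).hexdigest(), 16)
    fraction = (digest % 10_000) / 10_000  # stable value in [0, 1)
    for ring, cutoff in zip(RINGS, CUTOFFS):
        if fraction < cutoff:
            return ring
    return RINGS[-1]
```

The point being: a bad update caught in the ~1% canary ring never reaches the other 99%.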
Looks like the laptops are able to be recovered with a bit of finagling, so fortunately they haven't bricked everything.
And yeah staged updates or even just... some testing? Not sure how this one slipped through.
I'd bet my ass this was caused by terrible practices brought on by suits demanding more "efficient" releases.
"Why do we do so much testing before releases? Have we ever had any problems before? We're wasting so much time that I might not even be able to buy another yacht this year"
Agreed, this will probably kill them over the next few years unless they can really magic up something.
They probably don't get sued - their contracts will have indemnity clauses against exactly this kind of thing, so unless they seriously misrepresented what their product does, this probably isn't a contract breach.
If you are running CrowdStrike, it's probably because you have some regulatory obligations and an auditor to appease. You aren't going to be able to just turn it off overnight, but I'm sure there are going to be some pretty awkward meetings when it comes to contract renewals in the next year, and I can't imagine them seeing much growth.
The number of servers running Windows out there is depressing to me.
The four multinational corporations I worked at were almost entirely Windows servers with the exception of vendor specific stuff running Linux. Companies REALLY want that support clause in their infrastructure agreement.
>Make a kernel-level antivirus
>Make it proprietary
>Don't test updates... for some reason??
I mean, I know it's easy to be critical, but this was my exact thought: how the hell didn't they catch this in testing?
I have had numerous managers tell me there was no time for QA in my storied career. Or documentation. Or backups. Or redundancy. And so on.
Completely justified reaction. A lot of the time tech companies and IT staff get shit for stuff that, in practice, can be really hard to detect before it happens. There are all kinds of issues that can arise in production that you just can't test for.
But this... This has no justification. An issue this immediate, this widespread, would have instantly been caught with even the most basic of testing. The fact that it wasn't raises massive questions about the safety and security of CrowdStrike's internal processes.
Yeah my plans of going to sleep last night were thoroughly dashed as every single windows server across every datacenter I manage between two countries all cried out at the same time lmao
I always wondered who even used Windows Server given how marginal its market share is. Now I know from the news.
Here's the fix (or rather workaround, released by CrowdStrike):
1. Boot to Safe Mode/recovery
2. Go to C:\Windows\System32\drivers\CrowdStrike
3. Delete the file matching "C-00000291*.sys"
4. Boot the system normally
It's disappointing that the fix is so easy to perform and yet it'll almost certainly keep a lot of infrastructure down for hours because a majority of people seem too scared to try to fix anything on their own machine (or aren't trusted to so they can't even if they know how)
They also gotta get the fix through a trusted channel and not randomly off the internet. (No offense to the person that gave the info, it may well be correct, but you never know.)
This sort of fix might not be accessible to a lot of employees who don't have admin access on their company laptops, and if the laptop can't be accessed remotely by IT then the options are very limited. Trying to walk a lot of nontechnical users through this over the phone won't go very well.
I'm on a bridge call still while we wait for BitLocker recovery keys so we can actually boot into Safe Mode, but the BitLocker key server is down as well...
This is going to be a Big Deal for a whole lot of people. I don't know all the companies and industries that use Crowdstrike but I might guess it will result in airline delays, banking outages, and hospital computer systems failing. Hopefully nobody gets hurt because of it.
A big chunk of New Zealand's banks apparently run it, because 3 of the big ones can't do credit card transactions right now.
CrowdStrike: It's Friday, let's throw it over the wall to production. See you all on Monday!
Wow, I didn't realize CrowdStrike was widespread enough to be a single point of failure for so much infrastructure. Lot of airports and hospitals offline.
The Federal Aviation Administration (FAA) imposed the global ground stop for airlines including United, Delta, American, and Frontier.
Flights grounded in the US.
Ironic. They did what they are there to protect against. Fucking up everyone's shit
Maybe centralizing everything onto one company's shoulders wasn't such a great idea after all...
The thought of a local computer being unable to boot because some remote server somewhere is unavailable makes me laugh and sad at the same time.
I don't think that's what's happening here. As far as I know it's an issue with a driver installed on the computers, not with anything trying to reach out to an external server. If that were the case you'd expect it to fail to boot any time you don't have an Internet connection.
Windows is bad but it's not that bad yet.
Yep, stuck at the airport currently. All flights grounded. All major grocery store chains and banks also impacted. Bad day to be a crowdstrike employee!
Yep, this is the stupid timeline. Y2K happening due to the nuances of calendar systems might have sounded dumb at the time, but it doesn't now. Y2K happening because of some unknown contractor's YOLO Friday update definitely is.
https://www.theregister.com/ has a series of articles on what's going on technically.
Latest advice...
There is a faulty channel file, so not quite an update. There is a workaround...
- Boot Windows into Safe Mode or WinRE.
- Go to C:\Windows\System32\drivers\CrowdStrike
- Locate and delete the file matching "C-00000291*.sys"
- Boot normally.
My dad needed a CT scan this evening and the local ER's system for reading the images was down. So they sent him via ambulance to a different hospital 40 miles away. Now I'm reading tonight that CrowdStrike may be to blame.
I'm so exhausted... This is madness. As a Linux user I've been busy all day telling people with bricked PCs that Linux is better, but there are just so many. It never ends. I think this outage is going to keep me busy all weekend.
A few years ago, when my org got the ask to deploy the CS agent on Linux production servers, and I also saw it getting deployed on thousands of Windows and Mac desktops all across the company, the first thought that came to mind was "massive single point of failure and security threat", as we were putting all the trust in a single, relatively small company that will (has?) become the favorite target of all the bad actors across the planet. How long before it gets into trouble, either through its own doing or due to others?
I guess that we now know
Honestly kind of excited for the company blogs to start spitting out their ~~disaster recovery~~ crisis management stories.
I mean - this is just a giant test of ~~disaster recovery~~ crisis management plans. And while there are absolutely real-world consequences to this, the fix almost seems scriptable.
If a company uses IPMI (~~Called~~ Branded AMT and sometimes vPro by Intel), and their network is intact/the devices are on their network, they ought to be able to remotely address this.
But that’s obviously predicated on them having already deployed/configured the tools.
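If remote access is intact, the manual workaround really is scriptable. A minimal sketch in Python (purely illustrative; in practice this would more likely be a batch/PowerShell one-liner pushed through remote-management tooling, and it assumes the disk is already unlocked and writable):

```python
import glob
import os

def remove_faulty_channel_files(driver_dir: str) -> list:
    """Delete any CrowdStrike channel files matching the faulty
    C-00000291*.sys pattern and return the paths that were removed."""
    removed = []
    for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
        os.remove(path)
        removed.append(path)
    return removed

# On an affected machine (booted into Safe Mode or WinRE) this would be
# pointed at C:\Windows\System32\drivers\CrowdStrike, then rebooted.
```

The hard part isn't the deletion, it's getting code to run at all on a box that blue-screens before the OS comes up.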
Been at work since 5AM... finally finished deleting the C-00000291*.sys file in CrowdStrike directory.
182 machines total. Thankfully the process in and of itself takes about 2-3 minutes. For virtual machines, it's a bit of a pain, at least in this org.
lmao I feel kinda bad for those companies that have 10k+ endpoints to do this to. Eff... that. Lots of immediate short-term contract hires for that, I imagine.
We had a bad CrowdStrike update years ago where their network scanning portion couldn't handle a load of DNS queries on start up. When we asked how we could switch to manual updates, we were told that wasn't possible. So we had to black hole the update endpoint via our firewall, which luckily was separate from their telemetry endpoint. When we were ready to update, we'd add FW rules allowing groups to update in batches. They've since changed that, but a lot of companies just hand control over to them. They have both a file system and network shim, so it can basically intercept **everything**.
CrowdStrike sent a corrupt file with a software update for Windows servers. This caused a blue screen of death on all the Windows servers globally for CrowdStrike clients. Even people in my company were hit. Luckily I shut off my computer at the end of the day and missed the update. It's not an OTA fix: they have to go into every data center and manually fix all the servers, and some of these servers have encryption. I see a very big lawsuit coming...
lol
too bad me posting this will bump the comment count though. maybe we should try to keep the vote count to 404
My favourite thing has been watching Sky News (UK) operate without graphics, trailers, adverts or autocue. Back to basics.