Ingram Micro suffers global outage as internal systems inaccessible

Company being silent... outage happened on a holiday... employees lost access... Sounds like ransomware to me! Big company like that, it's not an "if" but a "when" - guess we'll find out how good their DR setups were. I predict a brief, written-by-the-lawyers, no-real-information press release. Glad we didn't use them.
 
Surprised they didn't have a "business continuity" type of disaster recovery, like Datto or Axcient or Veeam. One would think a tech giant would... have a good system in place. Musta still had old Symantsuck Backup Exec on tape drives!
Usually, it's not the BDR that's the problem. It's the lawyers...

At this scale all of the impacted equipment has to be isolated, and retained as is. Recovery requires NEW hardware.

So unless the disaster impacted less than 50% of their running infrastructure, we're stuck waiting on the supplier to get servers to even start restoring. Huge mess.
 
Offsite... spin up offsite. Like with Datto... if "on prem" is done, at least essential peeps can get access to offsite. We used that a few times with clients... part of the flexibility of systems like Datto.

And... I know... Ingram Micro had many more "moving parts" than the SMBs I'm used to managing... so there's that... but still...
 
I'd be shocked if they had any physical infrastructure left, honestly. And if it's all microservices... nothing should be able to be "crypto'd". That is, unless they lost some master keys.

A post mortem on this will be very interesting.
 
For all we know it could have been the network guy's fault... I think it will go like this... if we could only see the After Action Review...


Inside Scoop: BGP Import Rule Took Down Ingram Micro ***Satire***


The Ingram Micro outage was not originally caused by ransomware. The initial problem? A BGP import policy misconfiguration during a planned peering update.


Here’s the simplified version:
  • Ingram was rolling out a BGP policy change across multiple global data centers to enforce stricter import filters and prep for a new hybrid cloud routing setup.
  • One of the updated route reflectors in their European market accidentally imported internal infrastructure prefixes (e.g., 10.0.0.0/8, 172.16.0.0/12) into the external BGP table and began advertising them to upstream peers.
  • Some upstream peers filtered the routes, but others accepted them.
  • This caused return traffic from external SaaS services (like Microsoft 365, Salesforce, and even their SSO platform) to route back through Ingram’s own backbone instead of the public internet, so replies never made it back.
  • The result? Massive asymmetric routing, dropped sessions, and global authentication failures across internal systems.

The kicker?

  • To stop the damage, they shut down multiple BGP peers.
  • Meanwhile, internal comms systems (email, Teams) were affected because of DNS resolution problems from recursive queries being routed incorrectly.
The actual first domino appears to be a poorly scoped BGP import rule that should have dropped RFC1918 space but did not.


Takeaway: Someone didn’t write deny 10.0.0.0/8 le 32 in an import statement.
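The filter that takeaway describes can be sketched in a few lines. Like the rest of this scenario, the code below is pure illustration (the prefixes and the `import_filter` function are made up, not anything from Ingram Micro): it models an import policy that drops anything inside RFC1918 private space before it can reach the external BGP table.

```python
import ipaddress

# RFC1918 private ranges that an external BGP import policy should drop.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def import_filter(prefixes):
    """Return only prefixes safe to import: deny anything inside RFC1918 space."""
    accepted = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        # Equivalent in spirit to: deny 10.0.0.0/8 le 32 (and friends)
        if any(net.subnet_of(r) for r in RFC1918):
            continue
        accepted.append(p)
    return accepted

received = ["10.0.0.0/8", "172.16.0.0/12", "203.0.113.0/24"]
print(import_filter(received))  # → ['203.0.113.0/24']
```

In the satirical scenario above, this check was missing, so internal infrastructure prefixes leaked straight into the advertisements sent to upstream peers.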

*** Above is Satire ***
 
I should work for them. I could write their official statement and even tout transparency... and write some corporate nonsense like this:



Official Statement from Ingram Micro ***Satire***

Ingram Micro is aware of a recent service disruption that temporarily impacted access to certain systems. Upon initial review, the event appears to have stemmed from an unanticipated deviation within an established operational process.

While the precise sequence of contributing factors is still under internal review, preliminary indications suggest that either a required procedural step may not have been executed as intended, or an unintended action may have been introduced during standard operations. This resulted in a major impact on system availability across select environments.

Our teams acted promptly to contain the event and are continuing efforts to fully restore services in a secure and controlled manner. We are also conducting a comprehensive analysis to assess all contributing factors and ensure appropriate safeguards are reinforced to prevent recurrence.

We appreciate the continued trust of our partners and customers and remain committed to transparency as further details become available.

**** Above is a Satire... not a press release from Ingram Micro ****
 