Backblaze publishes SSD statistics for the first time

Side note: are Backblaze any good to use?
They have their place. I use them for my own and residential client computers. They aren't perfect, but do a damned good job, as long as you understand what is happening and what is not. Also, no real monitoring, so that takes it out of the running for all but the smallest of businesses. You don't have much granular control over what is being backed up, and they only keep deleted files for 30 days. They are dirt-cheap, however, and for the most part, it just works.

I also use B2 buckets for backing up NAS's (NASes?) for my commercial customers that have them.
 
B2 buckets are very handy for many purposes. NAS backups are one of the stronger uses!
Indeed. And they are inexpensive as online storage goes. For NASes that function only as backup targets (e.g., using Synology's Active Backup for Business or a similar low-cost option), I worry about how long it would take to actually USE the data in the bucket in the event of a local disaster, like a fire. It does fulfill the offsite part of 3-2-1 backup just about as cheaply as possible, though - and with cheap, you don't get fast.
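That restore-time worry is easy to put rough numbers on; a back-of-the-envelope sketch (the bucket size and line speed below are made-up examples, not any particular client's):

```python
def restore_hours(bucket_tb: float, line_mbps: float) -> float:
    """Rough time to pull a full bucket down a given line.

    Ignores B2-side throttling, protocol overhead, and verify/restart
    time, so treat the result as a floor, not a promise.
    """
    bits = bucket_tb * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (line_mbps * 1e6)   # line speed in megabits/s
    return seconds / 3600

# Hypothetical example: a 4 TB HyperBackup bucket over a 200 Mbps pipe
print(f"{restore_hours(4, 200):.1f} hours")  # roughly 44 hours, best case
```

Days, not hours, once you factor in sourcing replacement hardware first - which is the real point.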

I've taken on a project this year of re-hashing the "backup talk" with all of my commercial customers. Re-explaining exactly what they are doing and what other options are available. There's a continuum of Cost/RecoveryTime with "Cheap and Slow" on the left and "Expensive but Fast" on the right. Or if you like, Cost/DisasterDownTime with Cheap/Long on the left and Expensive/Short on the right. Every backup system falls somewhere on this continuum, and it's never a bad idea to remind folks that THEY are the ones choosing where on the continuum they are, not me. For SMB, this decision is always driven by cost, it seems, and no matter how many times you explain the cost of downtime to them, that's not real. Cost is always focused on real dollars out the door.
 
For SMB, this decision is always driven by cost, it seems, and no matter how many times you explain the cost of downtime to them, that's not real. Cost is always focused on real dollars out the door.

Well, I'd not say it's so much that "it's not real," but that they are betting on it not being probable, and they're probably right. God willing you'll never have to deal with a large scale disaster recovery. And very often, if you do, your IT assets are only a small part of the overall hell that you'll be dealing with (and if you have no physical location or equipment onto which things can be restored, then IT recovery is likely not your first priority).

Accurate risk analyses are part and parcel of accurate cost-benefit analyses. You are doing exactly the right thing in presenting the range of options, but it really is up to the customer to decide how likely they are to need any one of them and, if they were, where their priorities lie. Sometimes they convince themselves of something foolish, but you've done your part of the job.
 
@HCHTech I'm with you, I describe it as fire insurance. How much you buy determines how quickly you recover from that fire.

As for the Synology thing, most of my clients are in M365. So they put a Synology onsite, and then offsite that device with B2. And the building burns down... what do we care? The primary data is M365... it's not impacted! We just buy a new Synology, pull the HyperBackup from B2 to get the past data back, relink, and start syncing M365 again. The time it takes to do all that isn't really as important because the primary data the users use every day is still available!

When sites grow to where they have Azure VPSs for whatever purpose, I then use a UTM to maintain a VPN tunnel to the Azure network in question. I can then use Active Backup for Business to back up the entire VPS to the Synology too. I still have Azure backup for actual restores, but the Synology copy is there just in case the entire Azure ecosystem goes away... it's a means to leave me free to move those VPSs to other cloud providers too. And... well... it's still pretty cheap to do. But even if we didn't have that B2 bucket, in these cases all that's lost is some history, not the actual data. So if they don't do B2... Again, I don't care!

I did find a bug in Active Backup for Business on Friday though... and it's now Wednesday with no response from Synology's support. So that's a bit of a downer... but I also found a workaround... so I'm back to not caring.

My largest concern with this configuration is the time it takes to get a new Synology somewhere that can pull that data from B2. The availability is the largest time sink right now, not the actual data transfer.
 
My largest concern with this configuration is the time it takes to get a new Synology somewhere that can pull that data from B2. The availability is the largest time sink right now, not the actual data transfer.
That is my concern with a lot of equipment we have sold to customers. Ubiquiti, Synology, etc. They're great, but how long is it going to take to get another one? In the big server job I did recently, I built in the cost of TWO Unifi aggregation pro switches; it's such a critical linchpin of the infrastructure that I want to have a spare one sitting ready to deploy if necessary. I should be doing the same with Synology, but I haven't done that yet. We also sell a lot of Sonicwalls, and I have two NFR units here that I can deploy in a pinch if a customer's unit goes down - a "big one" (TZ500) and a "little one" (SOHO). My "big one" is at a customer's right now because theirs died with a power surge and it took 4 days to get a replacement - they would have been down for much of that time (or up but unprotected) if I didn't have that.

Especially these days, there's no way to guarantee availability of anything, so you have to stock your own emergency supply.
 
Well, I'd not say it's so much that "it's not real," but that they are betting on it not being probable, and they're probably right.

Yep - it's never necessary until it is. My favorite answer when someone asks why a thing quit working: "Everything works until it doesn't." Sometimes you get information as to the cause, sometimes you don't. That's life!
 
Oh - and while we're onto B2 buckets, one big downside, IMO: No per-bucket billing or usage analysis. No way to find out which bucket is tripping the bandwidth cap, no way to assign the costs per bucket other than your own math of dividing it up by storage used. It's not a horrible downside, but I imagine it would be more troublesome if you had hundreds of clients on it. With my couple of dozen or so, it's easy enough to just spread the cost evenly and be done. The dollars are small.
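For what it's worth, the "own math" split is easy to script once you have per-bucket storage totals (the bucket names and invoice amount below are made up for illustration):

```python
def allocate_by_storage(monthly_bill: float,
                        bucket_gb: dict[str, float]) -> dict[str, float]:
    """Split one B2 invoice across buckets in proportion to bytes stored.

    This is exactly the do-it-yourself approach: it can't see which
    bucket actually caused download/transaction charges, only each
    bucket's share of total storage.
    """
    total = sum(bucket_gb.values())
    return {name: round(monthly_bill * gb / total, 2)
            for name, gb in bucket_gb.items()}

# Hypothetical client buckets and a $60 invoice
usage = {"client-a-nas": 900.0, "client-b-nas": 2100.0, "client-c-m365": 600.0}
print(allocate_by_storage(60.0, usage))
```

Crude, but with a couple dozen clients and small dollars, proportional-by-storage is about as fair as it needs to be.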
 
@HCHTech I avoid that by setting up an account for each and every bucket.

But then again, I'm not billing for that mess, I set it up for a client, in their name, slap their CC in it and let it run.
 
Those that are interested can find it here -


Crucial (a brand that was favored on our site survey) did not fare well, but the sample sizes are tiny.

Some comments:


Me running the MX500 250GB as boot drive on every computer I own and every computer of my entire family: "This is fine".


I've deployed dozens of MX500 SSDs in various sizes from 250GB-2TB - some in light-duty server work, but mostly a lot of desktops/laptops.

I don't think I've seen a single failure yet...
 
@Appletax Meanwhile I refuse to use Crucial drives because all they do is fail. The only one I have left is in this rig, and it's relegated to storing games so if it dies I simply don't care!

Backblaze however isn't you and me... they're using these things in storage arrays that have buckets of crap written to them. They're shoveling a write load onto SSD drives that no home user is ever going to generate. The MX series has a write endurance that's weak in my experience BUT that doesn't mean it doesn't work for people that don't write a whole lot. I've seen them last in office duty work loads for years. They just don't tolerate MY usage. I find my Samsung and WD Blue SSDs do tolerate my usage, and have done so for extremely long periods of time. I also find both of them to provide a superior warranty experience, as well as support software. So I use them instead.
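Whether an endurance rating is "weak" is really a function of your write rate; a quick sketch (the 100 TBW rating and the daily write loads here are made-up illustrations - check the datasheet for your actual drive):

```python
def years_to_tbw(tbw_rating_tb: float, daily_writes_gb: float) -> float:
    """Years until cumulative host writes reach the drive's TBW rating."""
    days = (tbw_rating_tb * 1000) / daily_writes_gb  # TB -> GB
    return days / 365

# Illustrative only: a 100 TBW rating against two very different workloads
print(f"office duty, 15 GB/day:   {years_to_tbw(100, 15):.1f} years")
print(f"heavy writer, 300 GB/day: {years_to_tbw(100, 300):.1f} years")
```

Same drive, wildly different lifetimes - which is why it lasts for years in office duty and dies under a heavy write load.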

The true fault reality is as yet unknown, and that is reflected in the numbers they've published. See that AFR? Read the comments below it... They tell you that the data on the CT250MX500SSD1 is based on faults that happened in a 20-drive pool (2 drives died). That small sample size, plus the short time the failed drives were in use, points to ordinary DOA/early-life failures, but by the time you run the numbers through their standard equation you wind up with a huge failure percentage. Having 2 SSDs of any make fault in a 20-drive lot is unlucky, but not unheard of!
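That standard equation is just failures per drive-year expressed as a percentage, which is why a tiny pool blows up so fast. A sketch with hypothetical drive-days (Backblaze doesn't break out per-drive service time in a way I can quote here):

```python
def afr_percent(failures: int, drive_days: float) -> float:
    """Annualized failure rate the way Backblaze describes computing it:
    failures per drive-year of service, as a percentage."""
    return failures / (drive_days / 365) * 100

# Hypothetical: 20 drives in service ~180 days each, 2 early failures
print(f"{afr_percent(2, 20 * 180):.1f}%")  # tiny denominator -> huge AFR
```

Two early deaths in 3,600 drive-days annualizes to over 20%; the same two failures across a 10,000-drive fleet would barely register.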

In short, the numbers we see are statistically immature, and while interesting, you shouldn't be using them to change purchasing habits. Time will grant the data we need, and I fully expect the Crucial drives to fare better when the year ends.
 
Gotta say you gotta give Backblaze some love for how open and transparent they are about their business. These drive surveys are great. Nobody else in that space does anything like that that I'm aware of. Also cool ... they have done extensive videos on how they build their storage pods and I think even give out instructions or other tech details for those who want to make their own.
 
@Appletax Meanwhile I refuse to use Crucial drives because all they do is fail. The only one I have left is in this rig, and it's relegated to storing games so if it dies I simply don't care!

Backblaze however isn't you and me... they're using these things in storage arrays that have buckets of crap written to them. They're shoveling a write load onto SSD drives that no home user is ever going to generate. The MX series has a write endurance that's weak in my experience BUT that doesn't mean it doesn't work for people that don't write a whole lot. I've seen them last in office duty work loads for years. They just don't tolerate MY usage. I find my Samsung and WD Blue SSDs do tolerate my usage, and have done so for extremely long periods of time. I also find both of them to provide a superior warranty experience, as well as support software. So I use them instead.


Very important: how does Crucial treat you when you need to RMA a drive under warranty? I am all about post-purchase service.

I actually prefer buying Western Digital Blue SSDs from Walmart as they're sold in-store (not always in stock, tho). They are $53 there and much more expensive elsewhere.

Of course, when the best of the best is desired, I get Samsung SSDs.
 
Walmart and Amazon are both $54.95 for the 250GB as I type.

Oh. They were $53 for 500GB. Out of stock at the moment. Hope they get more and maintain that nice price.

I wouldn't be surprised if I went there now and found one in stock. Website not completely accurate on inventory.
 