What Is Chia Cryptocurrency and Why Is It About to Cause an SSD Shortage?

It depends. It all hinges on whether the market accepts it. So far, some folks are joining because they see it as an investment and want to be in at the grassroots stage.

You never know till you know, you know?
 
Don't know why SSDs will be impacted. They're still very expensive for their capacities vs traditional hard drives. Don't know much about Chia, but my understanding is that it and other concepts like it require you to use vast amounts of storage space. I don't think the speed of the storage is important; it's the sheer quantity of storage that matters.
 
I am building a system to mine Chia today. I think it will affect large spinning-drive prices more than SSDs. Just in case, I doubled my stock of SSDs for my stores. The system I am building has 2x 2TB M.2 SSDs and 9x 8TB spinners. If I could have bought 18TB spinners I would have, but all the larger drives are either overpriced or out of stock.
 
I doubled my stock of SSDs for my stores.

I see Samsung SSDs are up in price. Their 250 GB model was as cheap as $31 recently on Amazon but is now $44. Regardless, the small SSDs shouldn't suffer too much in availability, as Ch-Ch-Chia benefits from large drives, though I expect their prices to suffer.
 
Don't know why SSDs will be impacted. They're still very expensive for their capacities vs traditional hard drives.
Super-fast SSDs are extremely important in a Chia rig for plotting. You use them as temporary drives while plots are being built. The bigger and faster the drive, the better.
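
If it helps make that concrete, here's a minimal sketch of kicking off a single plot with the temp directory on the NVMe SSD and the final directory on a spinner. The paths are made up; the flags are the standard options of the chia plots create command.

    import subprocess

    # One k32 plot: the temp files hammer the fast SSD (-t), while the finished
    # ~101 GiB plot file lands on the big spinning drive (-d).
    subprocess.run([
        "chia", "plots", "create",
        "-k", "32",                   # plot size; k32 is the smallest farmable size
        "-n", "1",                    # number of plots to create in sequence
        "-r", "2",                    # threads (mainly helps phase 1)
        "-b", "3389",                 # RAM buffer in MiB (the plotter's default)
        "-t", "/mnt/nvme/chia-tmp",   # temp dir on the SSD (made-up path)
        "-d", "/mnt/hdd01/plots",     # final dir on the spinner (made-up path)
    ], check=True)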

The system I am building has 2x 2TB M.2 SSDs and 9x 8TB spinners.
I'm repurposing a Dell PowerEdge server to mine Chia. With 10x 16TB hard drives, 8x 14TB hard drives, and 4x of these SSDs, I should be able to plot very quickly.

My server has dual 10-core Xeons and 192GB of RAM. I have almost a quarter of a petabyte available to mine with. My 10x 16TB drives come tomorrow and my SSDs on Sunday (I already have the 8x 14TB hard drives). I'm wondering whether it makes sense to RAID these SSDs together or not. They're already RAIDed (2x 2TB NVMe M.2 SSDs on a single PCIe card), so I think I'll just run them all simultaneously and plot to 4x hard drives at the same time. I hope my PERC H730P is up to the task; I have it in JBOD mode for this.
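
For scale, here's the back-of-the-envelope math on that quarter of a petabyte, assuming the commonly quoted ~101.4 GiB finished size for a k32 plot:

    # Rough plot-count estimate for 10x 16TB + 8x 14TB of farm space.
    raw_tb = 10 * 16 + 8 * 14                 # 272 TB of spinning storage
    k32_plot_gb = 101.4 * 1.073741824         # ~108.9 GB per finished k32 plot
    print(raw_tb, "TB ->", int(raw_tb * 1000 / k32_plot_gb),
          "k32 plots, ignoring filesystem overhead")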
 
Looking forward to you showing us the system and how it performs.
 
The small SSDs shouldn't suffer too much in availability, as Ch-Ch-Chia benefits from large drives, though I expect their prices to suffer.
No idea whether it's related, but my usual choices of 240GB SSD are for the first time in short supply at Amazon UK. I was limited to only one Kingston yesterday, and can't purchase any at all today. And the estimated delivery for Crucial is up to a month. o_O
 
Looking forward to you showing us the system and how it performs.
Doh! I got my drives in today and dug out the server, and the specs weren't what I remembered. As it turns out, I took out one of the processors and half the RAM to sell to another client and never got around to replacing them. This was just my pony server*, so I didn't need all that power anyway. Because I only have one processor, that means I can only run 9 plots in parallel. In addition, only 2x PCIe x16 slots and one x8 slot on the board actually work, because the other 5x slots are only active when a second processor is installed! It's going to cost another $1,100 to replace the processor and RAM that I took out. Damn! I was hoping to run 19 plots in parallel across my 4x SSDs and give each plot 9GB of RAM to work with. Should I order another processor and another 96GB of RAM for $1,100?

*Yes, I have a server dedicated to nothing more than downloading and storing MLP content. Sue me.

EDIT: I just looked up pricing and I can add a second processor for about $400, so I think I'm going to do that. I don't need 9GB of RAM for each plot; the 96GB of RAM on this server should allow me to run 19 parallel plots using 4GB of RAM per plot.
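
Quick sanity check on that RAM math (the Windows headroom figure below is just a guess):

    # 19 parallel plots on 96 GB of RAM.
    total_ram_gb = 96
    windows_headroom_gb = 8          # assumption: leave some RAM for Windows itself
    parallel_plots = 19
    per_plot_gb = (total_ram_gb - windows_headroom_gb) / parallel_plots
    print(round(per_plot_gb, 1), "GB per plot")   # ~4.6 GB, so a 4 GB buffer per plot fits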
 
I would start with what you have and see what you can do. I have mine built and am testing different settings. The problem is it takes 5.5 hours to see results. You want to see how long it takes to finish phase 1 and how long to finish one plot. Phase 1 is CPU-taxing; after that it's mostly just writing to the SSD.

I have tried the following for phase 1:
2 threads, 9000 MB: 119 minutes
12 threads, 9000 MB: 100 minutes
4 threads: pending

I also read that anything over 4500 MB of RAM doesn't improve performance. Still need to test that.

I figure it will take me 3 days to test everything, then I will break out the calculator and figure out how many I can do a day. With 12 threads my goal is 18 a day. If I can get that, I will have 800 plots (my storage limit) in about 43 days. I am OK waiting a couple more weeks to fill up.
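
Rough schedule math from those numbers, just to see what 18 a day implies:

    # If the goal is 18 plots/day and the farm holds 800 plots:
    plots_per_day = 18
    total_plots = 800
    print(round(total_plots / plots_per_day, 1), "days to fill up")    # ~44 days, roughly the estimate above
    print(24 * 60 / plots_per_day, "minutes between plot starts on average")   # 80 min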
 
@kwest The more plots you plot in parallel, the faster it goes overall. That's why I want 20 cores / 40 threads. Windows needs at least 1 core / 2 threads to itself, so a 20-core / 40-thread setup can plot 19 plots in parallel at maximum, compared to just 9 with a 10-core / 20-thread setup. My SSDs are 2TB each, so I should be able to run up to 7x k32 plots in parallel per SSD (if I had enough cores/threads to do 28x plots in parallel, which I don't). The write endurance of these WD SSDs is only 400TB, compared to 1200TB on a Samsung 980 Pro. I've bought both of them to test and see which one is faster. My server doesn't have native M.2 support on the motherboard, but I do have a BOSS card that supports M.2 SSDs in RAID. I might just RAID together 2x 2TB Samsung 980 Pros; 10GB/s write speeds is nothing to sneeze at. The only problem is these servers b*tch and moan when you install consumer-grade SSDs, because the write endurance is so low compared to enterprise AIC SSDs.
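
For what it's worth, here's the arithmetic behind the 7-plots-per-SSD figure and what those endurance ratings translate to, using the commonly quoted ~256.6 GB of temp space and roughly 1.8 TB written per k32 plot (both ballpark figures, not measurements):

    # Parallel k32 plots per 2 TB temp SSD, and plots per drive within rated TBW.
    temp_per_plot_gb = 256.6          # ballpark temp space for one k32 plot
    writes_per_plot_tb = 1.8          # ballpark total data written per k32 plot
    ssd_gb = 2000
    print(int(ssd_gb / temp_per_plot_gb), "k32 plots in parallel per 2 TB SSD")   # 7
    for name, tbw in [("WD, 400 TBW", 400), ("980 Pro 2TB, 1200 TBW", 1200)]:
        print(f"{name}: ~{int(tbw / writes_per_plot_tb)} plots within rated endurance")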

And these servers are freaking LOUD! I have it set up in my living room and it's annoying as hell. I've used the BMC to manually change the fan speed, but I had to downgrade the iDRAC firmware in order to do it. In the few tests I've done, the server was able to keep within acceptable temps at 30% fan speed. I might have to up that a bit once I install the second CPU.
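
In case it's useful to anyone else fighting fan noise: on the older PowerEdge generations the widely shared trick is sending raw IPMI commands to the iDRAC to switch the fans to manual and set a percentage. Treat the raw bytes below as an assumption to verify against your exact model and firmware (newer iDRAC releases block them, which is presumably why the downgrade was needed); the host, user, and password are placeholders.

    import subprocess

    # Widely shared Dell PowerEdge raw IPMI fan control (verify for your model/firmware).
    IPMI = ["ipmitool", "-I", "lanplus", "-H", "idrac-ip", "-U", "root", "-P", "password"]
    subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x01", "0x00"], check=True)         # manual fan mode
    subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x02", "0xff", "0x1e"], check=True)  # all fans ~30% (0x1e)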

I actually have Windows 10 Pro installed on this server because it's less likely to give me issues connecting to the Chia network. I'm going to run a stress test on the CPUs and see what the minimum fan speed needs to be to keep them cool under 100% load.
 
I was limited to only one Kingston yesterday, and can't purchase any at all today. And the estimated delivery for Crucial is up to a month.
Which models? There are no shortages (of budget models, at least) on Amazon France, and the prices are around what they have been for several months.

If only you could still benefit from free movement of goods. :(
 
Kingston SSDNow A400 240GB: I can add it to the basket for delivery "tomorrow", but when checking out I get "We're sorry. This item is not purchasable now. We suggest you to remove it." I ended up having to buy from a third-party Amazon seller rather than Amazon directly, with delivery within a week, though I'm never keen on buying that way.

Crucial BX500 240 GB: "Free Delivery May 26 - June 7 for Prime members." No mention of other sellers.
 
@kwest The more plots you plot in parallel, the faster it goes overall.
When you get yours set up, test it. You might get faster results with 4 threads per plot; I have read several different things. For now I am doing 2 threads and testing delays so I always have 6 running up to the 31% mark. After 31% a plot barely uses the processor, so if I can time the delay so that a new one starts whenever one hits 31%, I will be golden. I am getting close to finding the optimal delay.
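
A tiny sketch of that staggered-start idea, in case it saves someone some spreadsheet time. The stagger value and paths are placeholders to tune against your own phase 1 timings; it just launches a fixed number of plots with a delay between starts.

    import subprocess, time

    STAGGER_MINUTES = 35   # placeholder: tune so a new plot starts as one leaves phase 1
    PLOTS_IN_FLIGHT = 6    # matches the "always have 6 running" target above
    CMD = ["chia", "plots", "create", "-k", "32", "-r", "2", "-b", "3389",
           "-t", "/mnt/nvme/chia-tmp",   # temp dir on the SSD (made-up path)
           "-d", "/mnt/hdd01/plots"]     # final dir on a spinner (made-up path)

    procs = []
    for _ in range(PLOTS_IN_FLIGHT):
        procs.append(subprocess.Popen(CMD))   # each process creates one k32 plot
        time.sleep(STAGGER_MINUTES * 60)      # stagger the next start
    for p in procs:
        p.wait()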
 