10GbE guidance

occsean

Not having any experience in designing or deploying 10GbE networks I was hoping some of you who have could share your go to tips, tricks and recommended hardware.

The ultimate goal is to speed up data transfers on a small LAN that will have a NAS and three workstations needing the faster speeds but with the ability to scale storage on the NAS as well as additional workstations as my client grows.

My thought process was to purchase a new 8-bay QNAP unit and add a single-port 10Gb PCIe card, add cards to the workstations, put in a new 10Gb switch, and run Cat 7 through the office.

Alternatively, I've also considered a roll-your-own server solution running UnRAID on dual-socket LGA 2011 Xeons.

The client's business is providing archival work product for various entities in their field and they routinely receive and provide 100GB+ dumps of text documents and graphic files.

I appreciate any insight on your preferred way of getting this done.
 
You're on the right track there... 10 gig is getting more popular. We're often deploying it for at least parts of networks: uplinks from access switches to the TOR (top-of-rack) switch, where the servers also plug in.
Don't assume you'll need or want jumbo frames (9000 MTU instead of the default 1500). Test first: every device in the path has to support the larger MTU, and the router needs to handle the conversion if you enable jumbo frames, or certain web-based things will fail. Certain NICs can be problematic. Make sure you have the latest drivers, and research any quirks the specific NICs you'll use might have with 10 gig networks, MTU, TOE, etc.
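To put numbers on why jumbo frames are often less of a win than people expect, here's a rough sketch of on-wire efficiency for a full-size TCP/IPv4 frame. The header sizes are standard Ethernet/IPv4/TCP without options; real traffic with TCP options or VLAN tags will differ slightly.

```python
def wire_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that is TCP payload for a full-size frame.

    Assumes plain TCP/IPv4 over Ethernet with no options:
    20 B IP + 20 B TCP inside the MTU; 14 B Ethernet header, 4 B FCS,
    8 B preamble, and 12 B inter-frame gap outside it.
    """
    payload = mtu - 20 - 20            # MTU minus IPv4 + TCP headers
    wire = mtu + 14 + 4 + 8 + 12       # plus Ethernet hdr, FCS, preamble, IFG
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%} of wire bytes are payload")
```

That works out to roughly 95% at MTU 1500 versus roughly 99% at MTU 9000, so jumbo frames buy only a few percent on bulk transfers. The bigger win is reduced per-packet CPU overhead, which is exactly why it's worth benchmarking on your hardware rather than assuming.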
 
+1 for SSDs in the NAS if they can afford it. I've only done servers at 10GbE, but have had good luck with Intel NICs. We just did a Hyper-V host with two bonded 10GbE connections to the switch. While the (30) workstations are still 1Gb, it made a huge difference in file transfer times. It's an architect using Revit, which pulls down a local copy of every project you open, so there is a ton of data moving over the network.
 
SSDs are all but required; you'd need a HUGE array of platters to supply enough raw read/write to handle 10Gbit. A single SATA III link tops out around 6 Gbit/s (roughly 550 MB/s usable), so one drive can't saturate a 10GbE link on its own; SAS gives you more headroom. The point is you have to engineer a storage array that can outrun the LAN speeds you want to get the throughput you need. Typically the media imposes more of a problem than the bus.
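To make the "how huge an array" point concrete, here's a back-of-envelope sketch. The per-drive sustained sequential rates are assumed round numbers; real figures vary a lot by model and workload, and random I/O would look far worse for the platters.

```python
import math

# Assumed sustained sequential throughput per drive, in MB/s (rough
# figures for illustration only; real numbers vary by model/workload):
DRIVES = {"7200rpm HDD": 180, "SATA SSD": 550, "NVMe SSD": 3000}

# Usable 10GbE throughput: 10 Gb/s less ~5% protocol overhead.
LINK_MB_S = 10_000 / 8 * 0.95

needed = {name: math.ceil(LINK_MB_S / mb_s) for name, mb_s in DRIVES.items()}
for name, n in needed.items():
    print(f"{name}: ~{n} striped to fill a ~{LINK_MB_S:.0f} MB/s link")
```

Under those assumptions you'd need seven-ish HDDs striped (and that's best-case sequential) versus a small handful of SATA SSDs, which is why SSDs come up so quickly in 10GbE NAS builds.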

But I'd try to stick with single interfaces as much as possible. LACP is great until it isn't, and it represents one more point of failure.
 