10G Upgrade

HCHTech

I have an existing server at a client, and they are due for a new switch. I'd like to take this opportunity to upgrade the server connection to the network to 2x10G (current connection is 2xGigabit). In researching the motherboard, I've confirmed that the two integrated network ports are already 10G. They are RJ45, of course, so it looks like I have a choice:

I. I can use Cat6a/7 copper patch cables to connect the existing NIC to converters like THESE that connect to the switch's SFP+ ports.

II. I could purchase a 10G NIC that already has SFP+ ports and use cables with transceivers on each end.

I'm a noob with 10G, so I'm wondering about the pros and cons of each method.
 
I've only dabbled a little in fibre myself so far, but I suspect option 2 would be your best bet, i.e. stick to fibre end-to-end.

In my limited experience with fibre, I can highly recommend FS.com. Their products seem to be of a very high quality but at quite disruptively low prices. If you contact them with your requirements they should be able to advise you and tell you exactly what you need to order.
 
Honestly, I'd just hook up one. You've got 2Gb now; upgrading to 10Gb is 5 times the capacity, and if you don't have an SSD array I doubt the server can even use that much.

Leave the other 10Gb port for expansion.

But yes, you need an SFP+ adapter to the media you require. Just be aware that if you want to use copper in this circumstance, you're going to want to keep it as short as possible.
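Rough numbers help here. A quick back-of-the-envelope sketch of what each link option can actually deliver in MB/s, assuming roughly 5% protocol overhead (that overhead figure is just an assumption, not a measurement):

```python
# Back-of-the-envelope link throughput in MB/s, for comparing against what
# the server's disks can actually feed. Overhead percentage is assumed.

def usable_mb_per_s(link_gbit: float, overhead: float = 0.05) -> float:
    """Convert a nominal link rate in Gbit/s to approximate usable MB/s."""
    return link_gbit * 1000 / 8 * (1 - overhead)

for label, gbit in [("1x GbE", 1), ("2x GbE team", 2), ("10GbE", 10), ("2x 10GbE team", 20)]:
    print(f"{label:>14}: ~{usable_mb_per_s(gbit):.0f} MB/s")
```

So a single 10GbE link is on the order of 1.2 GB/s usable, which is the number to weigh against the array.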
 
If the servers and switches are in the same room (they often are)...we just use the RJ45...
We're using the Ubiquiti XG switches as "TOR" (top of the rack)...using the SFP+ ports to uplink lower switches to it, RJ45 for servers, and SFP+ transceivers for additional servers.
 
Yes, you can with no issues. The SFP and SFP+ ports are hot-swappable interfaces that can be used for a variety of things and are not just for fiber.

Also, there is no such thing as Cat7 in the US - it's an ISO/IEC designation that the TIA never adopted.
 
The main advantages of fibre would be:

Distance
With the right cable you can run several kilometres, whereas Cat6a is limited to 100 metres.

Bandwidth
You can get 25Gb, 40Gb and 100Gb links over fibre.

Assuming your switch is racked right next to the server and you only want 10Gb - neither of those applies. So the only real pros/cons here are cost.

If you go for option 2 then a DAC cable would usually be cheapest. These are copper (kinda like coax) but limited to around 5m in length. We use these ones for our switches/servers:
https://www.amazon.com/StarTech-com...eywords=startech+dac+hp&qid=1574106775&sr=8-2
 
Just test out your transceivers ahead of time, not on the day of prepping and deploying. Many switches are picky about which ones they work with. Since we mostly use Ubiquiti switches lately, we use the Brocade-coded ones from FS.COM (they've been labeling them for Ubiquiti lately).

We use the 1' or 6" DAC ones for switch uplinks...all pre made.
 
Just test out your transceivers ahead of time, not on the day of prepping and deploying. Many switches are picky about which ones they work with.

This 100%. Switches can be very picky about transceivers if you're not using genuine manufacturer parts (i.e. the same thing at 5x the cost).

I know with Aruba switches you have to enable third-party support using the command allow-unsupported-transceiver.
Other brands likely have a similar command.
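If you're scripting the rollout, something like this is a rough sketch for pushing that command over SSH with Netmiko; the device type, hostname and credentials are all placeholders/assumptions, so verify against your own switch first:

```python
# Sketch: enable third-party transceiver support on an ArubaOS-Switch via SSH.
# Assumptions: Netmiko is installed, the "hp_procurve" device type and the
# allow-unsupported-transceiver command apply to your model, and the command
# doesn't stop at a confirmation prompt (run it by hand once first to check).
from netmiko import ConnectHandler

switch = {
    "device_type": "hp_procurve",   # ArubaOS-Switch / ProCurve driver
    "host": "192.0.2.10",           # placeholder management IP
    "username": "admin",
    "password": "changeme",
}

conn = ConnectHandler(**switch)
output = conn.send_config_set(["allow-unsupported-transceiver"])
conn.save_config()                  # write memory
print(output)
conn.disconnect()
```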

Also, if you go fibre you have to make sure everything matches - connector type, single-mode vs. multi-mode, etc. It's easy to get mixed up if you aren't familiar.
 
Also, you don't want to know how many mislabeled (intentionally or otherwise) products are on Amazon.

I've lost count of how many "CAT6" cables I've bought on there that can't carry gigabit.
 
if you don't have an SSD array I doubt the server can even use that much.

I do in fact have an SSD array, so I'm going all out and using 2 connections. There are 25 users, and they move a ton of files around the network. The server is a Hyper-V host with 3 guests: a DC, an app server, and a management Win10 workstation.

Option 2 will definitely cost more, so I may just pick up those adapters anyway - they are cheap, and if they work just as well, it's less intrusive since I won't have to open up the server.
 
SATA or SAS? Don't forget each SATA SSD caps out at 1Gbit. So to hit 10Gbit, you need a 10-drive array. Now if you're on SAS or PCIe drives, the math changes a bit. But you're still going to need a sizable array. Most six-disk SSD arrays can't even max out a single 10Gbit interface.

So again... how large is that array?
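As a rough sanity check on the drive-count question, here's a small sketch. The per-drive figure is a parameter on purpose, since it varies a lot by drive type, controller and workload; the example calls use the conservative 1 Gbit figure above plus an assumed ~4 Gbit sequential figure, so plug in whatever your own array actually measures:

```python
# Rough estimate of how many SSDs it takes to keep an N-Gbit link busy.
# per_drive_gbit is an assumption - use what your drives/controller actually
# deliver under your real workload, not the datasheet number.
import math

def drives_to_fill_link(link_gbit: float, per_drive_gbit: float) -> int:
    """Smallest number of drives whose combined throughput reaches the link rate."""
    return math.ceil(link_gbit / per_drive_gbit)

print(drives_to_fill_link(10, 1))   # conservative 1 Gbit/s per drive -> 10 drives
print(drives_to_fill_link(10, 4))   # assumed ~4 Gbit/s sequential per drive -> 3 drives
```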
 
I have 3 arrays on that server (a Hyper-V host). The host OS is on a 2-disk RAID1, the VMs are on a 4-disk RAID10 and the storage is on a 4-disk RAID10. My thoughts are that the incremental cost of opening both 10G lanes to the network is very small. Benefits are intangible until they aren't, of course. I'd absolutely rather put it to use than save it for expansion. If I need expansion at some future point, I can think about reconfiguring (or getting a 4-port 10G NIC). The switch will already have 2 SFP+ ports, so the only cost to use the second lane is the cost of the transceiver and a short Cat6a cable (and some minor configuration time). The server is racked next to the switch, so no distance problem. For $150 give or take, I'm doing it. It might be overkill, but it's inexpensive overkill in the right direction and with no downside.

This is no different than buying more storage space than you need, or more RAM than you need, or a bigger battery backup than you need, or a faster processor than you need. I'll do all of those things if it's a good deal. I don't ever want to defend NOT building in some horsepower overhead.
 
To each their own, but if you have 10 disks' worth of SSD you are well within the space where a 10Gbit NIC could be saturated. So yeah, even I'd team them up.
 
One of our guys is prepping a nice setup that we're installing at a package-store chain client of his over Thanksgiving weekend: 5x 48-port UniFi PoE switches, with a 10-gig XG switch at the top of the rack. 3 different floors and a wing, connected with fiber. Got a box of those slim mono 6" cables for doing that setup - she'll look nice!
 
Nice. Post some pics when it's done!
 