So what's the deal with QSFP28 ports?

I came across a server today, a fairly recent Dell, with a tantalizingly unused dual-port 100GbE Mellanox card. It has two QSFP28 ports, which are a bit wider than the normal SFP/SFP+ ports I'm used to seeing. This server has a 10GbE connection to the network already, but the switch has a couple of open 25GbE ports, so I'm thinking an easy win is in play here. I could, with the right transceivers and cable, give that server a 25GbE connection to the network, apparently. I'd love to highlight this in my bid to get the business.

In my SMB corner of the world, I haven't run into a connection this fast before, so I'm wondering if the QSFP28 ports mean something special, or if they're just "what it takes" to get 100GbE. I presume, but will confirm, that they could auto-negotiate down to 25GbE. Fiber is such a pain because you have to match the transceivers on either end to be compatible with the device on that end...
 
I have been doing this for many years, and what you see is that over the years the form factors and the media change. I have worked with GBIC, SFP, SFP+, XENPAK, X2, SFP28, QSFP+, and QSFP28. You are conflating matching the transceivers with matching the media type, though.

Simply put, you match the media type, not the transceivers. You could get a multimode X2 transceiver that is 10GBASE-SR and an SFP+ that is 10GBASE-SR and link them just fine. All you would need is a duplex 50/125-micron multimode cable (i.e., OM3 or OM4; usually aqua, though OM4 can be magenta) with one end SC to connect to the X2 and the other end LC to connect to the SFP+.

SFP, SFP+, and SFP28 all fit into the same form-factor slot, but the switch needs to support the media type, and there is no negotiation, so a 10GBASE-SR won't negotiate down and link with 1000BASE-SX. SFP is 1 Gbps, SFP+ is 10 Gbps, and SFP28 is 25 Gbps. For SFP28 at 25 Gbps over multimode, I go with OM4, though you can get about 70 meters on OM3.

QSFP, QSFP+, and QSFP28 are likewise the same form factor as each other, but the switch needs to support it. Each is essentially 4x the above, so a QSFP+ links at 40 Gbps and a QSFP28 at 100 Gbps.
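If it helps, here is a rough sketch of that rule of thumb in Python (my own illustration; the table and function names are made up, not anyone's actual tooling): the form factor determines lanes and per-lane speed, and two ends link only when the media type matches.

```python
# Rough model of the form factors discussed above: lanes x per-lane speed (Gbps).
FORM_FACTORS = {
    "SFP":    (1, 1),    # 1 lane  x 1 Gbps
    "SFP+":   (1, 10),   # 1 lane  x 10 Gbps
    "SFP28":  (1, 25),   # 1 lane  x 25 Gbps
    "QSFP+":  (4, 10),   # 4 lanes x 10 Gbps = 40 Gbps
    "QSFP28": (4, 25),   # 4 lanes x 25 Gbps = 100 Gbps
}

def aggregate_speed(form_factor: str) -> int:
    lanes, per_lane = FORM_FACTORS[form_factor]
    return lanes * per_lane

def can_link(media_a: str, media_b: str) -> bool:
    # Two ends link when the media type (e.g. "10GBASE-SR") matches,
    # regardless of whether one end is an X2 and the other an SFP+.
    return media_a == media_b

print(aggregate_speed("QSFP28"))               # 100
print(can_link("10GBASE-SR", "10GBASE-SR"))    # True  (X2 <-> SFP+ is fine)
print(can_link("10GBASE-SR", "1000BASE-SX"))   # False (no negotiating down)
```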

I would automatically assume it is NOT magically going from a 100 Gbps QSFP28 down to a 25 Gbps SFP28.

For example, 100GBASE-FR isn't going to be able to link. Now, if you happen to have a 100GBASE-SR4, then if you buy a breakout kit you can split it into four (4) 25GBASE-SR interfaces, provided a kit exists and the chassis or server drivers actually let you configure the breakout.

Again NOT automatic.
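A hedged little sketch of why that is (again my own illustration, not a real driver API): a parallel optic like 100GBASE-SR4 carries four 25G lanes that a breakout cable can fan out, while a serial optic like 100GBASE-FR is a single 100G lane with nothing to split, and even the SR4 case only happens if the NIC or switch lets you configure the split.

```python
# Hypothetical illustration of why breakout is not automatic.
PARALLEL_OPTICS = {"100GBASE-SR4", "100GBASE-PSM4"}  # 4 x 25G lanes, can fan out
SERIAL_OPTICS   = {"100GBASE-FR", "100GBASE-DR"}     # single 100G lane, nothing to split

def breakout_interfaces(optic: str, nic_supports_breakout: bool) -> list[str]:
    """Return the logical interfaces you'd end up with in this simplified model."""
    if optic in PARALLEL_OPTICS and nic_supports_breakout:
        return [f"25G lane {i}" for i in range(4)]   # four 25GBASE-SR style legs
    return ["one 100G interface"]                    # no split possible or not configured

print(breakout_interfaces("100GBASE-SR4", nic_supports_breakout=True))   # 4 legs
print(breakout_interfaces("100GBASE-FR",  nic_supports_breakout=True))   # still one interface
print(breakout_interfaces("100GBASE-SR4", nic_supports_breakout=False))  # NOT automatic
```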
 
Holy cow - thanks. I think I know now why the card is sitting there unused - haha. This is quite a bit above my pay grade, for sure.

So where do I even start to see if this is possible? It's a Dell-sourced NIC, branded Mellanox. The switch is a Ubiquiti Aggregation Pro, so it does have 4 SFP28 25GbE ports. I see THIS cable at CDW and THIS cable at FS.com (who I've used before successfully), but they are DACs. I think I'll do what I did the last time I needed fiber-connection bits: chat with support at FS.com, and they'll tell me what I need, if it's possible.
 
Right, so IF the Dell supports a breakout kit, then instead of just the two Mellanox ports, once you put the cable in that port should show up as four (4) slower interfaces rather than one faster one.

The switch would also need to support the kit... Personally, I cannot imagine an $800 switch being even close to capable of handling this well.


I go with something more like this... Those are 100GBASE-FR transceivers shown in the picture. I would just put a Dell 100GBASE-FR transceiver in the chassis and connect it to a better switch with another 100GBASE-FR transceiver.

Here is a new StackWise Virtual pair I am setting up today. These are Cisco Catalyst 9500-48Y4C switches.


[Photo of the two Catalyst 9500-48Y4C switches]
 
You are probably correct - higher-horsepower equipment is undoubtedly needed. I'll just have to rely on my sunny personality now for getting the business, instead of offering a low-cost way to get 10X (minus overhead/IO limitations/etc.) more throughput...
 
I'm fairly certain that with a QSFP28-to-4x-SFP28 breakout cable you could attach one leg to the UniFi switch and it would work. We have some Aruba switches connected like this. As far as the switch is concerned, it's using an SFP28 DAC cable and knows nothing more.

QSFP28 is, in simplified terms, just 4x SFP28 (25 Gbps) lanes combined in a single port to make 100 Gbps. The Q stands for Quad.
 
You are correct. What I am saying is that it should be fine provided the Dell chassis is the end receiving the QSFP28 and supports breaking it out into 4x SFP28. In that case, by all means, the switch merely sees an SFP28.

That said, I would really check the horsepower of that switch... easier said than done. Many folks will balk at you suggesting a switch that actually has a number of QSFP28 ports, especially if it is a quality Cisco piece. What you see in the picture above, which I took, is two Cisco switches and four (4) genuine Cisco QSFP28 100GBASE-FR transceivers. Each transceiver was just over $2,200, making each one almost 3x the cost of that switch. It really just shows that Cisco is overpriced.
 
Yep - I sold an aggregation pro to my biggest client (about 50 workstations, and they are only using it to connect 10GbE to their servers and 2 primary switches) last month, and that was the most expensive switch I've sold this year. :) In the shallow end of this pool where I swim, I don't run into expensive equipment very often. That 100GbE NIC I saw at this prospective client is the first one of those I've laid eyes on. For now, I'll let the big boys worry about such tech wizardry.
 
Yeah, the price of switching is eye-opening once you get into "enterprise grade" equipment.

Last year we were looking for something with at least 14 SFP+ ports to uplink several switches, servers, a firewall, etc. at 10Gb. The existing switches were Aruba, so that was our first call. The cheapest model fitting the bill was a 3810M with 16x SFP+ ports at nearly £8,000. A UniFi Aggregation Pro with 28x SFP+ ports is less than £800.

Clearly the hardware itself is a very small fraction of the total cost.
 
A UniFi Aggregation Pro with 28x SFP+ ports is less than £800.

The one concern I have with this switch is that it is now a linchpin of their infrastructure. I bought another one just to put on my shelf in case a power outage takes the active one out. I've approached them about purchasing a cold spare; they haven't said yes yet, so I felt I had to have one. I had another client lose two 24-port switches at once (that were on smart UPSes!) after a power surge last year.
 
The trick is to use only fiber from the core to distribution and from distribution to the edge. When lightning strikes, it tends to knock out switches connected with Ethernet cable and spare those on fiber, in my experience.

Regarding switch reliability, always buy those with two power supplies and place each one on a different power service or a different UPS. If they support power stacking like the Cisco 9300 units, do that!

Lastly, get something that stacks and supports multi-chassis EtherChannel, or something like HSRP/VRRP. At any rate, in a proper topology you can lose an entire core or distribution switch and maintain non-stop forwarding. At the edge, a failed switch should NOT take out the entire stack but be isolated to no more than, say, 48 computers... a very small outage. At the edge, a failed switch breaks a StackWise-480 ring, degrading it to 240 Gbps, which is generally more than enough even if you have 7 remaining operational switches. Of course, for the uplink, you ALWAYS span the uplink to the distribution layer across two separate switch chassis. That way, if a switch carrying a fiber uplink fails, it degrades from a 20 Gbps EtherChannel to 10 Gbps.
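To put numbers on that degradation (my own back-of-the-envelope arithmetic using the figures above, nothing from Cisco's documentation):

```python
# Back-of-the-envelope: what a single edge-switch failure costs in this topology.
STACK_RING_GBPS  = 480   # healthy StackWise-480 ring
UPLINK_MEMBERS   = 2     # uplinks spanned across two separate chassis
UPLINK_EACH_GBPS = 10    # 10 Gbps fiber per uplink leg

# A failed member breaks the ring, halving the stack bandwidth...
degraded_ring = STACK_RING_GBPS // 2                        # 240 Gbps
# ...and drops one leg of the 2 x 10 Gbps EtherChannel to distribution.
degraded_uplink = (UPLINK_MEMBERS - 1) * UPLINK_EACH_GBPS   # 10 Gbps

print(degraded_ring, "Gbps ring,", degraded_uplink, "Gbps uplink remaining")
```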
 
The trick is to use only fiber from the core to distribution and from distribution to the edge. When lightning strikes, it tends to knock out switches connected with Ethernet cable and spare those on fiber, in my experience.

That's good to hear. This client is just barely big enough to need fancier than SMB-normal. About 50 workstations, 15 other network devices (printers, a NAS, postage meter, etc.), and 3 servers in-house. I've got a Ubiquiti Aggregation Pro, which connects by fiber to the servers, by copper to the firewall, and by fiber to the 2 Ubiquiti Pro-48 switches for the workstations. The firewall is copper-only, so maybe I should move it to one of the copper switches to keep the aggregation switch fiber-only. Full enterprise switching isn't really in the cards or budget.

Ubiquiti doesn't give you the option of dual power supplies. They have an external supplemental power supply (available, not included), but it won't actually power up the switch if the main supply dies; it will just keep it running until the power goes out. That seems like a dumb way to implement redundant power, but as you said, it's only $800, so that's why I opted to have a spare on hand.
 
Just to close this loop - I did in fact get this client, so this topic deserved further research. @NETWizz (no surprise here) is correct that the important thing is support on the NIC side.
For example, 100GBASE-FR isn't going to be able to link. Now, if you happen to have a 100GBASE-SR4, then if you buy a breakout kit you can split it into four (4) 25GBASE-SR interfaces, provided a kit exists and the chassis or server drivers actually let you configure the breakout.

...and unfortunately, the Mellanox ConnectX-4 card does not support a breakout cable. I could, however, and did, purchase the correct DAC with a QSFP28 connector on one end and an SFP28 connector on the other, and voila, I have a 25Gb connection to the switch. It's not as dramatic as going from 10Gb to 100Gb, but it's a shade better than teaming two 10Gb ports. Oh well, I had fun anyway! 👍
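For what it's worth, the 25Gb link is arguably better than a 2x10Gb team by more than the headline numbers suggest: a team typically hashes each flow onto one member, so a single flow still tops out at 10Gb, while the 25Gb link gives its full rate to one flow. A tiny illustration with assumed numbers:

```python
# Simple comparison (assumed numbers): aggregate vs. what a single flow can get.
team_members, member_gbps = 2, 10
single_link_gbps = 25

team_aggregate   = team_members * member_gbps   # 20 Gbps spread across many flows
team_single_flow = member_gbps                  # one flow hashes onto one member: 10 Gbps
link_single_flow = single_link_gbps             # 25 Gbps available to a single flow

print(team_aggregate, team_single_flow, link_single_flow)  # 20 10 25
```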
 