Since building my hyperconverged home lab in Jan 2015, and finally sharing it on my blog in Jan 2017, it has quickly become my most popular blog post by quite a margin. But one thing has always bothered me about the lab, with its 1 SSD per ESXi host, and that's the connectivity. Stuck in the slow lane at 1GbE, the network was always the limiting factor when configuring replication between hosts with Zerto and performing an initial sync. This was made even worse when I wanted to plug in a Rubrik r344 appliance (a Supermicro 4-node server with 4 x 10GbE cards), because no matter how fast Rubrik can ingest a backup and live mount it back over NFS, 1GbE is as fast as it will go. Demonstrating live mounts over 1GbE certainly works, but it doesn't have the same wow factor as it does with 10GbE.
Something had to give. Do I build a new lab using motherboards with built-in 10GBASE-T NICs? How cool would it be to remove the NUC motherboards and this time go for 3 mini-ITX server motherboards to create a 3-node VSAN cluster, each with 128GB DDR4 ECC RAM, a 4TB SSD, and a 10TB SATA disk? Here is what I put in my shopping basket a few times over:
- 3 x Crucial 128GB DDR4 2133 MHz RDIMM Memory Kit (4 x 32GB) = $3,984
- 3 x Supermicro MBD-X10SDV-TLN4F-O Mini-ITX Server Motherboards = $2,490
- 3 x Samsung 850 EVO 2.5” 4TB SATA 3 SSD = $4,349
- 3 x Seagate 3.5” 10TB HDD 7200 RPM 256MB Cache = $1,383
This would create a 24-core, 48-thread beast with 384GB RAM, 12TB of SSD, and 30TB of HDD, all inside my single In Win D-Frame mini-ITX case. But crucially, it would also cost $12,206. Ouch. Do I buy now and start drafting the divorce papers? Not a great plan, I agree. So how can I get 10GbE connectivity on the cheap without a lab refresh, and what would I use for switching (as you may notice, that wasn't even in the basket!)?
After looking around and asking for recommendations, I finally settled on buying an Ubiquiti 10GbE switch and 10GbE PCI Express cards, and doing an in-place upgrade to 10GbE city. Here is what I bought:
- Ubiquiti ES-16-XG Edge Switch for $511.99
  https://www.amazon.com/Ubiquiti-Networks-ES-16-XG-Edge-Switch/dp/B01K2Y1HP0
- 2 x refurbished IBM Mellanox MNPH29D-XTR 81Y9993 ConnectX Dual Port 10GbE PCI-E Cards for $110 on eBay
- 2 x 4 pack of 10Gtek DAC Twinax cables for $67.99 (4 for the cards, 4 for my Rubrik appliance)
  https://www.amazon.com/dp/B06XYZMKKZ/ref=twister_B06XFXVQSD?_encoding=UTF8&th=1
- 4 pack of 10Gtek 1000Base-T SFP Transceivers for $75.99 (for my non-10GbE components)
  https://www.amazon.com/dp/B06XZ4XGXS/ref=twister_B01LX5IRP6?_encoding=UTF8&psc=1
- 2 x PCI-e 8x Slot Extension Cables for $27.58 (you'll see why I needed these)
  https://www.newegg.com/Product/Product.aspx?Item=9SIA1KT1HP7215&cm_re=Pci-e_riser_cable-_-9SIA1KT1HP7215-_-Product
- 10 pack of black Cat 6 Ethernet cables for $14.99 (as I can't have a color mismatch!)
The total BOM came in at a reasonable $808.54 to upgrade my lab to 10GbE. Not too bad! Here is how I put it together, starting with ripping out the existing networking to leave just the motherboards and NAS:
With all the old cheap 1GbE switches gone, there is certainly a lot more room to work with, but how can I fit 2 PCI Express cards into motherboards mounted on top of each other? Here you can see my problem:
As you probably already guessed from the shopping list, my workaround was to use PCI Express extension cables:
I wanted to create a custom mounting bracket for the cards but didn't really have enough room. Fortunately, I found that the card heatsinks, the network cables, and the stiffness of the PCI Express extension cables actually did a perfect job of holding the cards in place with a little maneuvering:
With the 10GbE cards done, next comes the switch itself. At 17.44 x 8.7 x 1.69 inches there is no way I'm fitting this inside the case, but neither do I want a separate switch ruining the whole concept of a hyperconverged all-in-one lab. Here comes the ultimate in home lab hacks: I used garden wreath hangers, and it worked a treat!
I glued the hangers onto the switch, ready to hang it off the back of my mini-ITX case:
Here it is together:
Pretty cool, huh? Even though my NAS and 2 NUC motherboards are stuck on 1GbE, my main 2 ESXi hosts are ready to rock at full speed. Here you can see the end result, ready to go with 4 spare 10GbE SFP ports for my Rubrik appliances:
One quick tip on the Ubiquiti switch: for the 1000 Mbps transceivers you have to manually set the SFP port speed, otherwise they won't work, unlike a standard 10GbE connection:
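Once the port speeds are set, it's worth confirming every link actually came up at the speed you expect. The switch UI shows this, but a quick check from the ESXi side works just as well; something along these lines does the job:

```
# List all physical NICs on the host with their link status, speed and duplex
esxcli network nic list

# The Mellanox 10GbE ports should report a Speed of 10000 (Mbps),
# while anything connected via a 1000Base-T SFP transceiver will show 1000.
```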
The Ubiquiti switch has more features than you can shake a stick at, and I honestly can't recommend it enough. Although it did add some noise to the lab, it's not crazy loud like most rack mount kit. The last change to make was porting my vSphere port groups over to the new 10GbE connections, starting with the VMkernel management, which I did using my handy USB crash cart adapter:
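Doing the management VMkernel at the console means you can't cut yourself off halfway through. For reference, the move on each host boils down to adding the new 10GbE uplink and retiring the old 1GbE one. Here is a rough esxcli sketch, assuming the port groups stay on a standard vSwitch0 and the Mellanox port shows up as vmnic2 (your vmnic numbers will almost certainly differ):

```
# Identify which vmnic is the new Mellanox 10GbE port
esxcli network nic list

# Add the 10GbE NIC as an uplink to the existing standard switch
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0

# Make it the active uplink; port groups inherit this unless they override teaming
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic2

# Once management connectivity over 10GbE is confirmed, remove the old 1GbE uplink
esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitch0
```

And if anything does go wrong, you're already at the console, so reversing the uplink change only takes a moment.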
And that’s it! I hope you found this interesting and you feel inspired to look at how you could also upgrade your own lab to 10GbE. At sub $1000 I believe 10GbE is now within reach of the home lab, albeit still a little crazy. I also now have the networking in place for when I’ve saved up enough pocket money to do the mother of all upgrades (just don’t tell my wife!). Thanks for reading,
Joshua