Press "Enter" to skip to content

Upgrading your home lab to 10GbE on the cheap!

Joshua Stenhouse

Since building my hyperconverged home lab in Jan 2015 and finally sharing it on my blog in Jan 2017, it has quickly become my most popular blog post by quite a margin. But one thing has always bothered me about the lab, and that’s the connectivity. With 1 SSD per ESXi host, the 1GbE network was always the limiting factor, leaving me stuck in the slow lane when configuring replication between hosts with Zerto and performing an initial sync. This was made even worse when I wanted to plug in a Rubrik r344 appliance (a Supermicro 4-node server with 4 x 10GbE cards), because no matter how fast Rubrik can ingest the backup and live mount it back over NFS, 1GbE is as fast as it will go. Demonstrating live mounts over 1GbE certainly works, but it doesn’t have the same wow factor as it does with 10GbE.
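To put the bottleneck in numbers: a fully saturated 1GbE link tops out around 125 MB/s, so any sizeable initial sync takes hours. Here’s a quick back-of-the-envelope sketch in Python; the 2TB dataset size and ~70% usable link efficiency are illustrative assumptions on my part, not measurements from the lab:

```python
# Rough math on initial sync times at 1GbE vs 10GbE.
# The 70% link efficiency and 2TB dataset are illustrative guesses.

def sync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Approximate hours to push data_tb terabytes over a given link."""
    throughput_mb_s = link_gbps * 1000 / 8 * efficiency  # usable MB/s
    return data_tb * 1_000_000 / throughput_mb_s / 3600

for gbps in (1, 10):
    print(f"{gbps}GbE: ~{sync_hours(2.0, gbps):.1f} hours to sync 2TB")

# 1GbE:  ~6.3 hours to sync 2TB
# 10GbE: ~0.6 hours to sync 2TB
```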

Something had to give. Do I build a new lab using motherboards with built-in 10GBase-T NICs? How cool would it be to remove the NUC motherboards and this time go for 3 mini-ITX motherboards to create a 3-node VSAN cluster, each with 128GB DDR4 ECC RAM, a 4TB SSD and 10TB SATA disk? Here is what I put in my shopping basket a few times over:

  • 3 x Crucial 128GB DDR4 2133 MHz RDIMM Memory Kit (4 x 32GB) = $3,984
  • 3 x Supermicro MBD-X10SDV-TLN4F-O Mini-ITX Server Motherboards = $2,490
  • 3 x Samsung 850 EVO 2.5” 4TB SATA 3 SSD = $4,349
  • 3 x Seagate 3.5” 10TB HDD 7200 RPM 256MB Cache = $1,383

This would create a 24 core, 48 thread, 384GB RAM, 12TB SSD, 30TB HDD beast, all inside my single InWin D-Frame mini-ITX case. But crucially it would also cost $12,206. Ouch. Do I buy now and start drafting the divorce papers? Not a great plan, I agree. So how could I get 10GbE connectivity on the cheap without a lab refresh, and what would I use for switching (as you may notice, that wasn’t even in the basket!)?
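Just to rub it in, here’s the basket math, a quick Python sanity check using the line totals from the list above (the per-board core count is simply the cluster total divided by three boards):

```python
# Sanity-checking the dream-build basket using the line totals above.
line_items = {
    "3 x Crucial 128GB DDR4 RDIMM kits": 3984,
    "3 x Supermicro X10SDV-TLN4F-O":     2490,
    "3 x Samsung 850 EVO 4TB SSDs":      4349,
    "3 x Seagate 10TB HDDs":             1383,
}
print(f"Dream build total: ${sum(line_items.values()):,}")  # $12,206

# Cluster totals: 24C/48T (so 8C/16T per board), 3 x 128GB = 384GB RAM,
# 3 x 4TB = 12TB SSD, and 3 x 10TB = 30TB HDD.
```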

After looking around and asking for recommendations, I finally settled on buying an Ubiquiti 10GbE switch and 10GbE PCI Express cards, and doing an in-place upgrade to 10GbE city.

The total BOM came in at a reasonable $808.54 to upgrade my lab to 10GbE. Not too bad! Here is how I put it together, starting with ripping out the existing networking to leave just the motherboards and NAS:

Hyperconverged Home Lab Pre-Upgrade

With all the old cheap 1GbE switches gone, there is certainly a lot more room to work with, but how can I fit 2 PCI Express cards into motherboards mounted on top of each other? Here you can see my problem:

Dual Mounted Motherboards

As you probably already guessed from the shopping list, my workaround was to use PCI Express extension cables:

PCI Extension Cables

I wanted to create a custom mounting bracket for the cards but didn’t really have enough room. Fortunately, I found that the card heatsinks, network cables, and stiffness of the PCI extension cables actually did a perfect job of holding the cards in place with a little maneuvering:

10GbE cards in place

With the 10GbE cards done, next comes the switch itself. At 17.44 x 8.7 x 1.69 inches, there is no way I’m fitting this inside the case. But neither do I want a separate switch to ruin the whole concept of a hyperconverged all-in-one lab. Here comes the ultimate in home lab hacks: I used garden wreath hangers, and it worked a treat!

The Ultimate Home Lab Hack

I glued the hangers onto the switch ready to hang off the back of my mini-ITX case:

The Wreath Hack

Here it is together:

Hanging Baskets, I mean switches

Pretty cool, huh? Even though my NAS and 2 NUC motherboards are stuck on 1GbE, my main 2 ESXi hosts are ready to rock at full speed. Here you can see the end result, ready to go with 4 spare 10GbE SFP+ ports for my Rubrik appliances:

10GbE Power!

Hyperconverged Home Lab 10GbE Upgrade Complete

One quick tip on the Ubiquiti switch: unlike a standard 10GbE connection, which works at the default settings, you have to manually configure the SFP ports for 1000 Mbps when using 1GbE transceivers, otherwise they won’t link up:

Ubiquiti Edge Switch Config
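If you’d rather script that change than click through the UI, something along these lines should work over the switch’s SSH access. To be clear, this is only a sketch: the IP address, credentials, interface number, and the exact speed-command syntax are all assumptions, so verify them against your EdgeSwitch firmware’s CLI reference before relying on it:

```python
# Sketch: forcing an SFP port to 1000 Mbps over SSH with paramiko.
# ASSUMPTIONS: the switch IP, credentials, interface number (0/13), and
# the "speed 1000 full-duplex" syntax are illustrative placeholders only;
# check the exact commands in your EdgeSwitch CLI documentation.
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.2", username="ubnt", password="ubnt")

shell = client.invoke_shell()
for cmd in (
    "enable",
    "configure",
    "interface 0/13",          # the SFP port holding the 1GbE transceiver
    "speed 1000 full-duplex",  # force 1000 Mbps so the link comes up
    "exit",
    "exit",
    "write memory",            # persist the change across reboots
):
    shell.send((cmd + "\n").encode())
    time.sleep(1)              # crude pacing, fine for a one-off script

print(shell.recv(65535).decode(errors="ignore"))
client.close()
```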

The Ubiquiti switch has more features than you can shake a stick at and I honestly can’t recommend it enough. Although it did add some noise to the lab, it’s not crazy loud like most rack mount kit. The last change to make was porting my vSphere port groups over to the new 10GbE connections, starting with the management VMkernel port, which I did using my handy USB crash cart adapter:

Management VMkernel Port Config
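Once everything is moved over, it’s worth double-checking that the hosts actually negotiated 10GbE on the new NICs. Here’s a minimal pyVmomi sketch that lists each physical NIC’s link speed; the host IP and credentials are placeholders, and it assumes you’re connecting straight to a single ESXi host rather than vCenter:

```python
# Minimal check that the new NICs negotiated 10GbE (pip install pyvmomi).
# ASSUMPTIONS: host IP and credentials below are placeholders; this
# connects directly to one ESXi host, not to vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # home lab self-signed certs
si = SmartConnect(host="192.168.1.10", user="root", pwd="password", sslContext=ctx)

# Connecting straight to ESXi, there is exactly one datacenter and host.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
for pnic in host.config.network.pnic:
    speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else None
    print(f"{pnic.device}: {speed} Mb" if speed else f"{pnic.device}: link down")

Disconnect(si)
```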

And that’s it! I hope you found this interesting and feel inspired to look at how you could also upgrade your own lab to 10GbE. At sub-$1,000, I believe 10GbE is now within reach of the home lab, albeit still a little crazy. I also now have the networking in place for when I’ve saved up enough pocket money to do the mother of all upgrades (just don’t tell my wife!). Thanks for reading,

Joshua
