Press "Enter" to skip to content

Rehoming the Hyperconverged Home Lab

Joshua Stenhouse

It’s time to admit that my Hyperconverged Home Lab isn’t as impressive spec-wise as it once was. If this is the first time you’re reading about it, then I recommend checking out the backstory here. When I first built it in 2014, like most builds, it was great:

  • 4 ESXi hosts, 24 cores total
  • 2 x 32GB RAM, 2 x 16GB RAM, total 96GB RAM
  • 2TB of spinning disk, 1.5TB of SSDs
  • 2.7TB 1GbE DS414 Synology NAS
  • 1GbE switching
  • All inside 1 micro ATX case

But it’s nearly 2018. I’m lacking compute, memory, speed, and dedupe. The era of spinning disk, SANs, and complexity is over. A truly hyperconverged infrastructure isn’t just about sticking hosts, networking, and a SAN, with 4 different interfaces, in the same box with a shiny new label anymore, whether it’s a small case or a rack in your DC. That’s what legacy vendors do. It’s about using a simple, single, point-and-click interface that can be automated end to end with REST APIs. It’s about converging the entire infrastructure so you can scale out with hosts using local storage intelligently and grow as you go. It’s about abstracting the infrastructure from the engineer so you can build and provide services just like a public cloud. None of which my lab is going to give me. So, what should I do? Read on…

[Photo: 20170819_122732]

I could try to sell all the individual parts, but this isn’t likely to generate much money. I could just put it all to one side and buy new, but that would be a waste. I could keep all the parts in the current case and buy a new case for a new lab, but I love my case too much for it not to be my primary lab server. To add to my conundrum, I also have 5 separate ESXi hosts in custom Zerto NUC cases as part of my vSphere cluster:

[Photo: labpic19]

Even though these all-aluminum cases are cool, the biggest problem with them is space. 5 of them with 5 dedicated power bricks take up a lot of space and power sockets:

[Photo: 20171109_095859]

The solution? 9 ESXi hosts in 1 case:

[Photo: 20171011_192344]

Yesss! The first thing I bought was a suitable case. I settled on a Lian Li PC-T60 workbench in black:

http://www.lian-li.com/en/dt_portfolio/pc-t60/

I found the case on eBay in Italy and the dual fan mount on eBay in Germany. After all, I wouldn’t want something that you could pick up at your local computer supermarket! I picked the workbench because I needed an open-air chassis that would allow me to stack motherboards on top of each other using multiple M3 standoffs, like this:

[Photo: 20171109_100044]

It also allowed me to have 2 x 140mm fans to cool all 9 motherboards at just 14dBA, while also looking sweet:

[Photo: 20171109_100807]

The next challenge was power. How can I run 9 motherboards on one PSU? It sounds more challenging than it really is. I opted for a 500W silent PSU from Seasonic, a Y splitter for the motherboard power cable, and 7 x 4-pin Molex to 12V power cables (here) for the NUCs:

[Photo: 20171109_100116]

Given that each NUC DC3217IY motherboard only uses 17W of power, 7 of them need 119W. Add in the 2 mini ITX ASRock C2550D4I motherboards at 48W each and the total load is 215W, leaving ample headroom on the 500W PSU for the fans, spinning disks, and surges in demand.
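
Just to sanity-check that math, here’s a quick back-of-the-envelope sketch in Python (the wattages are the nominal figures quoted above, not measured at the wall):

```python
# Rough power budget for the single-PSU build, using the nominal figures above
nuc_count, nuc_watts = 7, 17     # Intel NUC DC3217IY motherboards
itx_count, itx_watts = 2, 48     # ASRock C2550D4I mini ITX motherboards
psu_watts = 500                  # Seasonic PSU rating

board_load = nuc_count * nuc_watts + itx_count * itx_watts
print(f"Motherboard load: {board_load}W")                             # 119W + 96W = 215W
print(f"Headroom for fans, disks, and surges: {psu_watts - board_load}W")  # 285W
```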

The next challenge was the NAS. I couldn’t fit it into the case as-is, and I didn’t want to just glue it on either, so I decided to take the Synology DS414 NAS out of its enclosure and put it into the hard drive bay:

[Photo: 20171109_100135]

At first, I did this without any dedicated cooling, but I noticed it kept shutting down, and a quick visit to the Synology GUI confirmed that it was overheating. To combat this, I installed an extra 120mm fan just for the NAS:

[Photo: 20171109_100146]

The final challenge was networking. How do I keep the 2 x 10GbE cards for 2 of the ESXi hosts, yet integrate 1GbE networking for everything else? To solve this challenge, I removed the entire case from my 1U 24-port 1GbE switch and then attached the bare switch to the back:

[Photo: 20171011_192419]

I then plugged one of the 10GbE cards into the top motherboard, and attached the second 10GbE card alongside it, but upside down, with a PCI extension cable:

[Photo: 20171109_100206]

These can optionally be plugged into the 10GbE switch in my original case; if they aren’t connected, the ESXi hosts revert to the dual 1GbE ports. I then strapped a power brick underneath the case to connect the PSU, NAS, and switch, so I only need 1 power cable to run the whole lab, which you can see in the bottom left:

[Photo: 20171109_100305]
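
As a side note, if you ever want to confirm from vCenter which uplinks are actually live (i.e. whether the 10GbE cards are cabled up or the hosts have dropped back to their dual 1GbE ports), a short pyVmomi sketch along these lines would do it. The vCenter address and credentials are placeholders for my lab, so adjust to suit:

```python
# Sketch: list every host's physical NICs and their link speed via pyVmomi.
# The address and credentials are placeholders; certificate checking is
# disabled because the lab uses self-signed certs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:
            # linkSpeed is None when the physical NIC has no link
            state = f"{pnic.linkSpeed.speedMb} Mb/s" if pnic.linkSpeed else "link down"
            print(f"{host.name} {pnic.device}: {state}")
    view.Destroy()
finally:
    Disconnect(si)
```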

You might have noticed by this point that I put the case together slightly differently from how Lian Li intended. As it comes flat-packed, I chose to have the PSU power switch facing forward with all the cables to the back; otherwise, it just looked messy. The only downside is that the logo on the handle faces backward, but that wasn’t the end of the world.

So that’s it! I now have 9 ESXi hosts in 1 portable case:
[Photo: 20171008_092708]

[Photo: 20171109_100021]

Here’s a shot of the vSphere 6.5 vCenter managing all the hosts:

[Screenshot: vSphere 6.5 vCenter]

I’m calling it The Hyperconverged Home Lab 1.0. Why the number? Because my favorite case, the Win-D frame m-ATX that I first saw in Japan 4 years ago, is now sat empty:

[Photo: 20171109_100421]
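
Incidentally, the REST API itch can already be scratched on this build: vCenter 6.5 exposes a REST API, so the host list from the screenshot above can be pulled programmatically too. Here’s a rough sketch (the vCenter address and credentials are placeholders, and certificate verification is off for the self-signed lab cert):

```python
# Sketch: list the lab's ESXi hosts via the vCenter 6.5 REST API
import requests

VCENTER = "https://vcenter.lab.local"

# Authenticate with basic auth to get a session token
resp = requests.post(f"{VCENTER}/rest/com/vmware/cis/session",
                     auth=("administrator@vsphere.local", "VMware1!"),
                     verify=False)
token = resp.json()["value"]

# Pull every host vCenter manages, with its connection and power state
hosts = requests.get(f"{VCENTER}/rest/vcenter/host",
                     headers={"vmware-api-session-id": token},
                     verify=False)
for host in hosts.json()["value"]:
    print(host["name"], host["connection_state"], host["power_state"])
```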

There’s only 1 thing left to do. It’s time to build The Hyperconverged Home Lab 2.0! This time it’s going to be truly hyperconverged with 1 simple interface to manage. REST API driven. Capable of running hundreds of real VMs at a competitive price tag. 10GbE throughout. And I don’t want to pay for my hypervisor anymore (or sign up for another bloody trial!). It has to be able to both compete with and leverage the cloud. Excited much? I am! Watch this space, and thanks for reading,

Joshua

  1. Would be fun to convert some of those hosts to AHV hosts using our CE (Community Edition) and test that out!

    • Joshua Stenhouse

      I like your thinking. I’m not going to convert the existing hosts, but watch this space!
