This is the first part of a series on running ESXi on a home server. See the second part here.
It’s been a while since I updated this blog. Apologies, but it’s mostly because I’ve been really busy doing what I do. As part of that, though, I use a lot of VMware virtual machines. Being an Apple “fanboy” (gah), I use VMware Fusion on my Macs, which is great for running various virtual machines. (In fact, Windows 7 and Windows Server 2008 R2 seem to run a hell of a lot better virtualised than natively. Go figure.)
Anyway, it got to the point where I decided I needed a bigger and more robust platform to run virtual machines on. Simply speaking, running one or two VMs on my Mac is fine, but when you need to run several at once with limited resources, it starts to get a lot more involved. Given that the VMware ESXi hypervisor is free for running a single-server VM host, I figured I’d experiment with building an ESXi server for home use. This blog post details some of the pain I went through in getting there.
Before I get started, there are a few terms I will use:
HOST – the physical machine that is running ESXi
GUEST (or guest OS) – a virtual machine that runs on ESXi
What is ESXi?
ESXi is a stripped-down kernel (VMware’s own VMkernel – it looks a lot like Linux, but isn’t) that acts as a hypervisor. That is, it’s a piece of software that runs on the physical machine and presents the physical hardware to the guest machines that want to use it. It is not, in itself, an OS. So the key difference from, say, VMware Fusion or Workstation is that you’re not running an edition of e.g. Windows Server which itself runs virtual machines – you’re, more or less, cutting out the middle man.
This was, in places, a pretty painful experience. I used to build computers years ago. It was always a relatively straightforward experience – buy the bits, chuck them in and cross your fingers. Things have moved on a long way since then and I haven’t really kept up.
One of the big mistakes I made was assuming that virtualisation was virtualisation was virtualisation. It’s not the case. In the simplest terms possible, there are two different styles. VMware (or indeed any hypervisor) is a hardware abstraction layer. Essentially it sits on top of the physical hardware and presents that hardware to the guest OS. Typically, the abstraction layer blocks direct access to the hardware by presenting a generic device (e.g., network card, video card, etc.) instead. This is fine in most cases, but there are occasions where this won’t work. For instance, if you have a particular piece of hardware (e.g., a TV tuner) that you want to use from a VM, but the ESXi kernel doesn’t have a driver for it, then that device is unusable.

There’s an alternative here – something called an IOMMU – which, again in the simplest terms, allows the guest OS to see the actual device, so you can then install the correct device drivers inside the guest OS. Magic. So what does this mean in hardware terms? The hardware you buy has to support this particular style of virtualisation. So, for instance, if you’re looking at Intel-based chipsets, you may see processors that only offer VT-x – which covers the first style but not this pass-through style – whereas a processor that does support pass-through will list VT-d.
If you’re confused, don’t worry – it took me a while to get my head around it, and I’m still not sure I’m there. But (after buying the incorrect hardware) I convinced myself I wanted VT-d support and so had to rethink what I was doing.
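If you want to sanity-check a box before committing to ESXi, booting a Linux live CD on the candidate hardware will tell you a lot. A rough sketch (the exact kernel log wording varies by kernel version, and VT-d also has to be switched on in the BIOS):

```shell
# VT-x shows up as the "vmx" flag in /proc/cpuinfo (AMD's equivalent is "svm")
if grep -q vmx /proc/cpuinfo; then
  echo "VT-x: yes"
else
  echo "VT-x: no"
fi

# VT-d (the IOMMU) is a chipset/BIOS feature rather than just a CPU flag;
# when it's enabled, the kernel logs DMAR/IOMMU lines at boot.
# This may print nothing if VT-d is off, and may need root on some distros.
dmesg | grep -i -e dmar -e iommu || echo "no IOMMU messages found"
```

Remember the 3770K gotcha below applies here too: a CPU can report `vmx` and still lack VT-d, so the DMAR check (and Intel’s spec page for your exact model) is the one that matters for pass-through.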
So, my objective was to configure an ESXi server running 5.1 (the latest edition at the time of writing) that supported VT-d. I won’t go into all the other decisions I went through, as I rather figured that if I just threw a load of performance into the box, then I’d get there. I also didn’t give much consideration to noise/heat/power consumption, whereas I probably should have done. So the build I ended up with consists of:
- Intel Core i7-3770 quad-core 3.4GHz processor
- ASRock Z77 Extreme4 motherboard
- 32GB (yes, 32GB!) DDR3 1600MHz Corsair Vengeance RAM (4x8GB sticks)
- 2x 2TB Western Digital Black SATA 6Gbps hard drives
- Corsair Carbide 500R tower
- Corsair ETX750W PSU
This machine is an absolute beast. I set myself a budget of £500, but I blew that easily. It came in at around £750 from Dabs, with the exception of the RAM, which I got from Amazon. Can’t rate Dabs highly enough. It’s totally overpowered for what I need, but I figure it’ll keep going for a good few years, which is the main thing.
A note on the processor
The mistake I initially made was with the processor. I originally got the Core i7-3770K. This is the unlocked version, which supports overclocking and whatever. Fine if you’re a gamer and want maximum performance, but no good if you’re virtualising stuff, as the K edition does not support VT-d (among various other things). Hence, I swapped it for the non-K edition. Likewise, I started with an ASUS V-7780 Pro motherboard; it doesn’t support VT-d either, hence I switched it for the ASRock.
I also took the opportunity to upgrade my ADSL router from a Netgear DG834 to a Draytek Vigor 2830. Why? Well, two reasons: 1) the Draytek has multiple Gigabit LAN ports, which I figured was going to be vital for communicating to and from the ESXi host, and 2) it supports a broader range of DDNS and VPN options. One plan for the ESXi server is a public-facing VPN so that I can access the home network from anywhere.
Putting it all together
How I built the machine is a bit beyond the scope of this article, but it’s not very complicated. My advice is to take your time and go slowly. And always discharge any static from your hands! I don’t bother with one of those weird wrist-strap things – I just touch a radiator before I touch anything electrical.
Take the sides off the case and then install, in this order:
1) PSU – usually secured by four screws. The fan should be pointing into the machine. The Carbide case only seemed to have holes for two of the screws, but it was nice and solid anyway.
2) Motherboard back plate – this just slots in to the hole on the back of the case.
3) Motherboard – very gently remove the motherboard from its packaging and, holding it by the sides, line the back up with the backplate. You’ll then see where the screws go. I had 8 holes, but for some reason only 6 of them lined up on the case. No big deal, I figure. My case already had risers in place.
4) Processor – scary, but quite easy. Read the book for your motherboard to figure out the processor latch. Put the processor in and close the latch. Hopefully it doesn’t grind too much!
5) Processor fan – find the closest CPU fan header on the motherboard and line up the cable with it. Then place the fan over the top and secure it down. Again, read the book that comes with the processor to figure out exactly how it goes.
6) RAM – if your board supports 4 DIMMs but you’re not filling all the slots then, guess what – read the book – it’ll tell you where to put them. This is especially important if your board supports dual-channel memory – you’ll confuse it if you don’t use the right slots. And always use matched RAM! Lift up the tabs on either end, put both ends in so it’s even… and push. More grinding, but it’ll be fine.
7) Hard drives – dead easy – most cases have caddies now – slot each HDD into a caddy and you’re away.
8) Cables – use the cable management in your case if it exists. The point is to keep the inside of the case nice and spacious so that air can move around. Again, read the book for your motherboard so you know which cables go where, especially the case header cables and the HDD cables. You may need to adjust the position of your HDDs if, like mine, your PSU cables are rigid.
Put the sides back on, plug it in, attach a monitor, hit the power button and cross your fingers! Everything should load up and you’ll either end up in the BIOS or on a blank screen / a screen whinging about having nothing to boot. Go into the BIOS now and have a poke around to check that everything is showing up properly. If you get any beeps, refer to your motherboard book to learn what they mean.
Mine booted up fine and the BIOS reported all the RAM and HDDs normally. And it’s surprisingly quiet, just humming away. The fans seem to spin up and down as needed, although I could control this in the BIOS.
And the case even has some groovy lights in the front!
Next time I’ll talk about the ESXi build.