Home Lab 2.0 – Storage network

No matter what virtual machine (VM) platform you’re building, you need a solid foundation. One of the most critical choices is where to store your VMs and how the VM hosts will access it.

Usually the storage is separate from the VM host, but this doesn’t have to be the case. If you want to keep a lab environment as compact as possible you can always store the virtual machines on the same server as the storage, and for large-scale deployments there are options like VMware’s Virtual SAN if you want to distribute the storage across all the nodes.

I want a separate storage server for two reasons: first, I can lump all the storage in one device, not just for VMs but for media and file storage too; and second, it makes adding more VM nodes down the line very easy if I want HA or need more oomph. It also keeps the VM host very simple.

Given that my storage and my VM host are separate machines, I’ll need some form of interconnect. Excluding fun (but pricey!) stuff like Fibre Channel, I’m left with the network options, namely iSCSI or NFS.

iSCSI (Internet Small Computer System Interface) has been around for a while now and is a block-level protocol, meaning it allows a device to access storage across a network as if it were a local drive, rather than a file share. It appears complex, but setting it up is remarkably easy once you get your head around it.

NFS (Network File System) is the standard way in the Linux/Unix world to share files, much like SMB is for Windows. It is a file-level protocol, meaning that it can access individual files and folders across the network, rather than the all-or-nothing of iSCSI.
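
To make the block-versus-file distinction concrete, here’s a minimal Python sketch; the device and mount paths are hypothetical examples rather than anything from my setup. With iSCSI the target shows up as a raw block device you read bytes from, while with NFS you work with individual files on a mounted share.

    # Minimal sketch of block-level (iSCSI) vs file-level (NFS) access.
    # /dev/sdb and /mnt/nfs/... are hypothetical paths, for illustration only.

    # iSCSI: the LUN appears as a local block device; you (or the filesystem/
    # hypervisor on your behalf) read and write raw sectors.
    with open("/dev/sdb", "rb") as lun:
        first_sector = lun.read(512)       # raw bytes, no notion of files

    # NFS: the share is mounted like any other filesystem, and you address
    # individual files and folders over the network.
    with open("/mnt/nfs/vms/test-vm.img", "rb") as disk_image:
        header = disk_image.read(512)      # the same bytes, but reached via a file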

Given that both options are supported by most VM hosts and the performance difference between them seems to be negligible, the choice is largely down to your preference.

I chose iSCSI for two reasons. One is that prior to NFS 4.1, the majority of traffic to a single data store uses a single connection, so at best I could use LACP for redundancy across NICs but wouldn’t see the doubled bandwidth I might otherwise get. The other is that different systems support different NFS versions, so depending on the VM platform I chose, I may or may not get to use the version of NFS I want.

So I went with iSCSI, which gives me MPIO multipathing. This allows me to use several NICs on each end for more bandwidth, but it does bring its own issues, namely security and latency. iSCSI is an unencrypted protocol, which means anyone who can sniff the packets as they go down the wire can see the data (unlikely, but I’m building this properly!). You can add IPsec encryption to the network, but that’s a big overhead. The second issue is latency: the less you have, the better it works.

These two factors, however, lead to the same solution, and a simple one at that: I’m using a second network just for storage. Strictly speaking I don’t need to add a switch at this point, but I have for later, which leads on to something rather surprising; iSCSI works best on unmanaged switches! The general consensus on the Spiceworks forums is that the extra processing done by a managed switch adds enough latency to make a difference, and who am I to disagree, especially as I have an old Netgear JGS524 kicking about that will do the trick! Separating out the storage traffic also means I don’t need to worry about bandwidth use through my Nortel network switch, and you have to be on the storage network to sniff it, which mitigates the lack of encryption. A bit more power usage than a single switch, but hopefully worth the trade-off.

Set up between a temporary NAS4Free box and a Citrix XenServer node it works a treat, although with only a single storage NIC in each for now. Once I get a proper storage server set up I can start playing with multipathing and putting some load on it to see what I can get out of it.
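
As a quick sanity check that a host can actually reach the target over the storage network before going any further, a few lines of Python will do. The 10.0.1.x address here is purely an assumption for illustration, and 3260 is simply the standard iSCSI target port.

    import socket

    # Hypothetical storage-network address of the NAS4Free box; 3260 is the
    # standard iSCSI target port.
    TARGET = ("10.0.1.10", 3260)

    try:
        with socket.create_connection(TARGET, timeout=5):
            print("iSCSI target reachable on the storage network")
    except OSError as err:
        print(f"Cannot reach the target: {err}")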

Home Lab 2.0 – Networking Core Switch

Over the years I have assembled a small, motley collection of networking kit; however, nothing quite fits the bill for what I need in my new lab. I have managed kit, but only 100BaseT, and I have Gigabit kit, but only unmanaged.

What I needed was something with a good number of ports, managed, Gigabit and with PoE, because I can and it’s useful for stuff later on. I also needed the 5000 bucks to buy one. Not wanting to blow the entire budget on a single bit of kit, it was off to eBay to find something ex-corporate. Now I could have grabbed whatever Cisco unit eBay had to offer, but again, they’re not exactly cheap.

Step up Nortel! Oh wait…

Nortel went bust some years ago and the company was sold off in various chunks, but they used to make some solid networking kit. As they’re no longer about, and they’re much less desirable than big-name Cisco, their switches are a lot cheaper second-hand than almost everything else. In fact, after looking around a bit I picked up a BayStack 5520-48T-PWR, which gives me 48 ports of lovely PoE Gigabit managed networking, delivered from a seller on the other side of the States for under 250 dollars. Nice!

The fun thing about Nortel switches is that the switch arm was bought by Avaya, who still sell them, just in a different color box. They’ll even support the original Nortel boxes if you want to spend money on support. In fact, when I booted the switch, I got this:

[Image: Nortel boot screen]

That’s right, the logo on the outside doesn’t match the inside.

Of course, as with any bargain there is a catch, though in this case it’s not exactly a deal breaker. The switch is a full layer 3 switch for IPv4, but doesn’t contain the hardware to do IPv6 routing, so it’ll only do layer 2 for IPv6. This isn’t a massive deal unless you’re running multiple IPv6 VLANs, and you can always add a separate router to bring IPv6 to your network later; in fact, a quick glance at eBay suggests you can get something pretty heavy-duty from your favorite network kit provider for under 200 bucks.

As people I’ve worked with might tell you, I’m not the world’s biggest Avaya fan, having had to work with an old IP Office system, and their support can be pretty ropey at times, but I’m not doing anything particularly taxing or out of the ordinary, so we’ll see how it goes.

It’s now sitting on my desk humming away and routing IPv4 VLANs quite happily; it does indeed do PoE, and it’s not too horrendous to configure. I’ve pretty much only needed to glance at Michael McNamara’s excellent blog to get 90% of the configuration I needed done (if you Google 5520 and whatever you’re after, the first link is nearly always his site anyway).

Now I just need to finish off the LackRack and get it mounted.

What’s in a CNAME?

While setting up the first server of my lab I thought I would add a small aside here on Canonical Name (CNAME) records in DNS. A lot of the DNS documentation out there explains what a CNAME is, but not always why you’d want to use it.

To begin with, let’s look at the difference between a standard A record and a CNAME. I’m using the ever-present contoso.com as the domain, in the style of Microsoft.

An A record points directly to the server’s IP address, much like a name in a phone directory (remember those?) points to a phone number. When a computer looks up an A record it gets the IP address back, ready for direct communication. They look something like this on the DNS server:

server.contoso.com     A     192.168.1.1

A CNAME always points to an A record, acting like an alias for the server. When your computer looks up a CNAME, the DNS server will replace the CNAME with the A record it points to and reply with that. A CNAME pointing at another CNAME will probably work, if your DNS server will even allow you to add the record, but don’t! It’s bad practice and it leads to confusion, mistakes and someone else looking at your DNS records and asking “Which muppet did this then?”. They look like this:

www.contoso.com     CNAME     server.contoso.com
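
If you want to watch the alias being followed, Python’s standard library will happily show you the canonical name a CNAME resolves to. A minimal sketch, using the contoso.com names from above purely as placeholders (they won’t resolve on a real network):

    import socket

    # gethostbyname_ex() returns (canonical name, list of aliases, list of IPs),
    # so looking up the CNAME should come back with the A record name and address.
    canonical, aliases, addresses = socket.gethostbyname_ex("www.contoso.com")

    print(canonical)   # expected: server.contoso.com (the name from the A record)
    print(aliases)     # expected: ['www.contoso.com'] (the CNAME we asked for)
    print(addresses)   # expected: ['192.168.1.1']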

It’s a good idea when setting up any service to create a CNAME for that service, in the above example a web server (in fact, you’ll definitely want to do this if you run multiple web sites on the same box). This allows you to move that service to another server later on without needing to do any more than change where the CNAME points.

For example, I’ve just set up an NTP time server on my oVirt Engine server (spoilers!), but I may in the future find that the box isn’t capable of controlling my virtual server host nodes. If I had used the A record directly, I would have to go into each node and change the server address, or rename the old server and give its name to the new one, but then what about my time service? Either way is much more work than a quick re-point.

So my DNS records would look like this:

server.contoso.com         A       192.168.1.1
time.contoso.com           CNAME   server.contoso.com
ovirt-engine.contoso.com   CNAME   server.contoso.com

I’d add the new server:

server.contoso.com         A       192.168.1.1
server2.contoso.com        A       192.168.1.2
time.contoso.com           CNAME   server.contoso.com
ovirt-engine.contoso.com   CNAME   server.contoso.com

Then when I’m ready to migrate just tweak the CNAME to repoint the service:

server.contoso.com         A       192.168.1.1
server2.contoso.com        A       192.168.1.2
time.contoso.com           CNAME   server.contoso.com
ovirt-engine.contoso.com   CNAME   server2.contoso.com

This also means I can run them concurrently, and roll-back is as easy as reverting the CNAME change.

The catch!

And there is one; not a big one, but it can cause an issue. Each DNS record has a Time To Live, or TTL, which tells a machine how long, in seconds, it should cache the record before checking again with the server. This is normally a good thing, as it means less load on your DNS servers and less DNS traffic; however, if I adjust my CNAME as above and the TTL is set to, say, 1800 seconds, it can take a machine up to 30 minutes to see the change.

The easy way around this is to remember to reduce the TTL to a nice low number before you make the change, remembering that you’ll need to wait at least the old TTL for that to take effect. Don’t forget to set it back afterwards!
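
A quick way to see how long clients might hang on to the old answer is to check the TTL the server is currently handing out. A minimal sketch, assuming the third-party dnspython package is installed and again using the contoso.com names purely as placeholders:

    import dns.resolver  # third-party "dnspython" package

    # Ask for the CNAME itself and look at the TTL the server hands back; clients
    # may cache the answer for up to this many seconds before re-checking.
    answer = dns.resolver.resolve("ovirt-engine.contoso.com", "CNAME")

    print(f"Points at {answer[0].target}")
    print(f"May be cached for up to {answer.rrset.ttl} seconds")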

Home Lab 2.0 – Racking

For the past year most, if not all, of my home lab has been packed into boxes: first in storage, then in shipping to our new home. This has made it tricky to do much (apart from sorting out my parents’ kit). Now we’ve finally settled I can rebuild it, but rather than just bolting it all back together again it’s time for a rethink, and of course upgrades!

The first issue was where to build it. It used to all live in a 48U rack I was kindly given by my old work. Handily it breaks down into four uprights, a top and a bottom, so getting it into position is not too bad. The spare room has full-height cupboards with sliding doors that we could put the rack in; then, when we have guests, I can shut down all the unnecessary kit and close the doors. A match made in heaven, apart from the small issue of about a quarter of an inch of clearance. Nuts.

We needed a plan B, but what? I could just heap the stuff up on the floor, but that’s hardly ideal; racks are not exactly cheap, and I’d rather sink the cash into new hardware than metalwork. So step forward the LackRack from eth-0!

I’d seen this a while ago, and it should do the job of keeping everything neat and tidy without costing me the earth. I had a Lack table that hadn’t survived shipping from the UK, but dropping it into the wardrobe showed it would just fit. There is a shelf in the wardrobe; three Lack tables stacked on top of each other will fit underneath it, and three tables is exactly what I need to house all the kit!

After dining on meatballs at the Swedish furniture emporium known as IKEA, the result was this:

[Images: the Lack tables stacked in the wardrobe]

As you can see, it fits with just a gnat’s whisker to spare. The longer table will take the servers; the top ones will take the various bits of network kit and power distribution. The kit will all have to face to the left to stop the door catching on patch cables, but it should do the trick nicely.

The legs are hollow, which means I may need to add some reinforcement to take the weight of the kit, but I’ll see how that goes as I start loading things in. Who knows, I might even manage more than one post a month!