VMware one-ups Microsoft with vSphere 5.1

The race for virtualization dominance between Microsoft and VMware has become more interesting with VMware's recent release of vSphere 5.1. We obtained vSphere at about the same time as the final release of Windows Server 2012, whose newly included virtual switch and enhanced Hyper-V features were designed to clobber VMware.

But back in the garages of their digital "brickyard", VMware was scheming to one-up the one-ups.

While we like Hyper-V 3, there are both pronounced and subtle reasons why we like vSphere 5.1 a little more. Some of the competitive difficulties amount to classic Microsoft problems revolving around support for competing platforms. But VMware also does a better job of lowering the barriers to virtualization through aggressive annual releases.

The trump card of this release is the ability to move a virtual machine from one host and storage location to another. If your use of virtualization is small, this release won't make much difference to you. But if you need optimizations, or have an appreciation for moving VMs around as though they were almost toys, vSphere 5.1 delivers.

The vSphere 5.1 specs are statistically awesome, yet esoteric. At the upper end, vSphere can control a single VM with 1TB of memory, or symmetric multiprocessing (SMP) with up to 64 virtual processors. We don't know of any commercial hardware that supports either of these.
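To put those limits in script form, here is a minimal sketch using pyVmomi, VMware's Python SDK; the vCenter address, credentials and VM name are placeholders of our own, not anything from VMware's documentation.

```python
# Minimal pyVmomi sketch: push an existing VM toward vSphere 5.1's upper
# limits (64 vCPUs, 1TB RAM). Hostname, credentials and VM name are
# placeholders; the reconfigure task simply fails if the host can't back it.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the VM we want to resize.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "monster-vm")

# Request 64 virtual CPUs and 1TB (1,048,576MB) of memory; the task
# continues server-side after we disconnect.
spec = vim.vm.ConfigSpec(numCPUs=64, memoryMB=1024 * 1024)
task = vm.ReconfigVM_Task(spec)

Disconnect(si)
```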

The vSphere 5.1 pricing model was simplified at VMworld to revolve around processors/cores, but it's still the priciest virtualization platform we know of. The product still has warts, but plenty of plastic surgery and lipstick have been applied as well -- the face of a new web UI.

Included in the vSphere app kit is an updated Distributed Switch. The switch now supports more controls, including Network I/O Control (NetIOC) for bandwidth admission control, IEEE 802.1p tagging for QoS/CoS flows, and enhanced support for Cisco and IBM virtual switches. There is increased monitoring capability for the switch, both in-band and out-of-band, and many of the changes reflect control capabilities suited to 10 Gigabit Ethernet.
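As a rough illustration of scripting those controls, the sketch below flips on NetIOC for an existing Distributed Switch; it assumes the pyVmomi connection from the earlier snippet and a placeholder switch name.

```python
# Hedged sketch: enable Network I/O Control on an existing Distributed
# Switch. "content" is the pyVmomi connection from the earlier snippet;
# the switch name is a placeholder.
from pyVmomi import vim

dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(s for s in dvs_view.view if s.name == "lab-dvSwitch")

# With NetIOC enabled, per-traffic-class shares and limits (the bandwidth
# admission controls mentioned above) can then be applied to the switch.
dvs.EnableNetworkResourceManagement(enable=True)
```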

We set up a local and VPN-connected network running several hardware servers through a 10G Ethernet Extreme Networks Summit X650 (locally) and across the link between our lab and our network operations center. The reason? During a migration, vMotion will jam an equal number of pre-bonded virtual and physical ports with the traveling virtual machine's data. More ports at higher speeds mean a faster move from the source metal server to the target host.
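For the port side of this, a short sketch of tagging a VMkernel adapter for vMotion traffic with pyVmomi follows; the host name and the adapter (vmk1) are assumptions about a lab like ours, and "content" again comes from the first snippet.

```python
# Sketch: tag an existing VMkernel adapter on a host for vMotion so
# migrations ride the bonded 10G ports. Host name and "vmk1" are assumed.
from pyVmomi import vim

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in host_view.view if h.name == "esxi-01.lab.example.com")

# Mark vmk1 as a vMotion interface; repeat for each host taking part.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk1")
```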

We started configuration on a bare-metal HP DL560 Gen8 server. This server has plentiful, even spectacular, power and serious disk in a 2U frame, and uses what we believe to be pretty standard drivers. But VMware's vSphere 5.1 lacked drivers for it, so it hung with an indiscernible error message. We recognize that we received early, though not beta, supposed-to-be-production code, so we contacted VMware; within a few hours we had a custom cut of 5.1, and from there everything moved splendidly.


Among the subtle upgrades, this edition can use more complex authorization and certificate-exchange schemes, and still has an ongoing affinity for authentication with Microsoft Active Directory. However, instead of the Windows-only client, we could now use browsers on Windows, Mac OS X and Linux. The UI is understandable and makes comparatively good use of browser window space.

Our older vSphere clients were immediately prompted to download a new client when we used them to access 5.1 turf, and managing a combination of 5.0 and 5.1 resources requires the 5.1 version of the vSphere client -- which looks superficially identical to the old one. When we started looking at resources and configuration, we rapidly found the newer features.

We wanted to test moving a VM, Storage vMotion-style, from one machine to another whose target didn't share the same storage. This means that the instance has to move its IP information, its storage basis, its workload, and even its CPU type on the fly.
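Scripted, that kind of move looks roughly like the pyVmomi sketch below, assuming the API endpoint in use permits a combined host-and-datastore relocation; every inventory name is a placeholder, and "content" is the connection from the first snippet.

```python
# Sketch of a cross-host, cross-datastore relocation (the "shared-nothing"
# move described above). All names are placeholders.
from pyVmomi import vim

def find(content, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find(content, vim.VirtualMachine, "win2008r2-test")
dest_host = find(content, vim.HostSystem, "esxi-02.noc.example.com")
dest_ds = find(content, vim.Datastore, "noc-local-ds")

# A new host and a new datastore in one RelocateSpec: compute and storage
# move together, with no shared storage between source and target.
spec = vim.vm.RelocateSpec(host=dest_host,
                           datastore=dest_ds,
                           pool=dest_host.parent.resourcePool)
task = vm.RelocateVM_Task(spec)
```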

We moved it, although it required some initial work. We're used to the minutiae of setting up a VMware network, and little of that has changed. We provision our networks through ISO images that we store on an NFS share. Using NFS is still not without its pain on VMware, as initial boots from ISO images into VMs -- even when we've pre-built and pre-seeded the images -- require comparatively obscure setup.

The upshot is that if you pre-configure Linux and Windows Server VMs (we tested Windows 2008 R2 and Windows 2012 gold release), you can envelop them in what amounts to a virtual wrapper that isolates them (largely) from machine-specific settings. This means that ISOs can conceptually be "hatched" into instances that are "wrapped" with settings that allow them to be moved and manipulated more as true virtualized object instances than was possible before.
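As a concrete (and hedged) example of that hatching, the sketch below clones a pre-built template into a fresh instance on a chosen host and datastore; the template, host, datastore and clone names are placeholders, and find() comes from the previous snippet.

```python
# Sketch: "hatch" a pre-built template into a new, relocatable instance.
from pyVmomi import vim

template = find(content, vim.VirtualMachine, "win2008r2-template")
dest_host = find(content, vim.HostSystem, "esxi-01.lab.example.com")
dest_ds = find(content, vim.Datastore, "lab-ds1")
folder = template.parent               # drop the clone next to the template

relocate = vim.vm.RelocateSpec(host=dest_host,
                               datastore=dest_ds,
                               pool=dest_host.parent.resourcePool)
clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)
task = template.CloneVM_Task(folder=folder,
                             name="win2008r2-clone-01",
                             spec=clone_spec)
```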

Hosts still need vMotion or Storage vMotion (the new secret sauce) to permit live migration across hardware. But VM instances become more atomic, keeping their functionality intact and are nearly immune to their external hosting environment's characteristics or even geography. They live in isolation, doing their work, and while they aren't ignorant of their external settings, the settings are a convenience -- they're plugged into sockets, very "The Matrix"-like.

Once we accessed a vSphere 5.1 host, our vSphere 5.0 client was updated automatically, and it didn't give us much choice about where the new vSphere client would reside -- an installer inflexibility. Nonetheless, we installed the new client and immediately obtained access to our host VM platforms.

It's probably best at this point to install the vCenter Server Appliance (VCSA) locally to allow it to access resources remotely. Missing this step caused us delays.

Using the vSphere 5.1 client, we wanted to deploy an OVF template that installs the VMware vCenter 5.1 Server Appliance (VCSA). The server appliance also hosts the optional web UI and is a management control center for vSphere 5.1 installations. The VCSA uses a template (OVA and OVF files that describe the deployment) and two VMware Virtual Disks (VMDKs) -- four separate files in total.

The OVA/OVF template files execute and deploy from client-local resources, including HTTP/HTTPS/FTP sources and local disks/shares. We used an NFS share controlled by our newly updated vSphere 5.1 client in the lab. The NFS files are about 70 miles away.

This was a mistake on our part, as the vSphere 5.1 client initially dragged the .OVF, .OVA, and the two VMDK files associated with the vCenter Server Appliance out of our NOC cabinet servers, across the Internet to the lab, where it dutifully sent them back across the Internet to the target ESXi 5 host that we'd just brought up. The vSphere 5.1 client warned us: 149 minutes remaining. In reality it took longer, about three hours. Locally, it would have taken perhaps a half hour.

This misery is obviated if one installs the Server Appliance locally. Remote execution would have been handier, but since the VCSA is what provides it, the appliance has to be installed first.
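The local route can also be scripted. The sketch below assumes VMware's ovftool CLI is installed on a machine sitting next to the target host; the host, credentials, datastore and OVA path are placeholders.

```python
# Sketch: push a locally stored VCSA OVA to a nearby ESXi host via ovftool,
# instead of trickling the files across a WAN link twice.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=vcsa-51",
    "--datastore=lab-ds1",
    "/iso/VMware-vCenter-Server-Appliance.ova",    # local copy of the OVA
    "vi://root:secret@esxi-01.lab.example.com/",   # target ESXi host
]
subprocess.run(cmd, check=True)
```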

The VCSA features are updated from the 5.0 version and offer more configuration options, especially for authentication and databases. We could use an internal database to keep track of settings and configurations, or an external database (Oracle was recommended). MS SQL Server can't be used, and we wondered why an open source database wasn't offered for embedded tracking. The appliance is based on SUSE Linux 11 and uses 4GB of memory and 8GB of disk. A non-monstrous installation ought to be easily tracked with an internal LAMP-ish database product.

The VCSA can stand alone, or be synced with others in "Linked Mode," which requires authentication through Active Directory and allows inventory views in a single group. Linked Mode VCSAs can't perform vMotion migrations between each other, however, which frustrated us.

Moving needles between haystacks

In the old model, VMware's vMotion allowed moving VMs, hot/live, between hosts only if the hosts shared the same storage. VMware's Storage vMotion removes that limitation -- if other small constraints are respected, including the maximum number of concurrent vMotions of any type that can be handled. vMotions aren't encrypted, however, so VMware recommends (and we agree) that Storage vMotions (and normal migrations) be run in wire-secure environments.

The maximum number of concurrent migrations is often a function of network traffic capability. We could bond several 10G Ethernet ports together to maximize transfer and minimize downtime of hot/live VMs, but on a congested network, or on networks using VLANs, things could slow as VMs are tossed around. There are also limits on which data stores can be manipulated -- a function of the version of ESX or ESXi in play.
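To stay under a concurrency cap when queueing several moves, something like the sketch below works: a thread pool whose size matches the assumed per-host limit, so no more than that many relocations run at once. The cap, VM names and datastores are placeholders, and find()/content come from the earlier snippets.

```python
# Sketch: throttle queued Storage vMotions with a small thread pool so no
# more than MAX_CONCURRENT relocations are in flight at once. Illustrative
# only; names and the cap value are assumptions for our lab.
from concurrent.futures import ThreadPoolExecutor
from pyVim.task import WaitForTask
from pyVmomi import vim

MAX_CONCURRENT = 2           # assumed safe concurrent-migration limit

def storage_vmotion(vm_name, ds_name):
    vm = find(content, vim.VirtualMachine, vm_name)
    ds = find(content, vim.Datastore, ds_name)
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
    WaitForTask(task)        # hold this worker until the move completes

moves = [("win2008r2-a", "noc-ds1"),
         ("win2008r2-b", "noc-ds1"),
         ("win2012-a", "noc-ds2")]

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    for vm_name, ds_name in moves:
        pool.submit(storage_vmotion, vm_name, ds_name)
```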

Storage vMotion of a sample Windows 2008 R2 VM took 11 minutes with two bonded 10G Ethernet ports and a Gigabit Ethernet back-channel connection. Linking all three available ports actually slowed things down (16 minutes), as the back channel seems to be necessary for traffic management during v-movements. But we had finally proven the concept. More bonded 10G Ethernet ports would likely have shortened the live migration further.

We moved a VM from the lab location through the Internet, to our cabinet at nFrame. Our local network connection is variably throttled by Comcast, and so we won't quote overall migration time. Let's just say it was a very long time. Nonetheless, it worked.

Storage vMotion removes one large VM movement problem by allowing, conditions permitting, VM movement and/or replication to "foreign" (if licensed) hosts. High availability within a data center is increased, as is the ability to optimize host CPU cycles by match-fitting VM workloads with host spare-cycles. The Distributed Switch appliance and 10G Ethernet ports can make all the difference. Slower links make Storage vMotion less practical. It's our belief that VMware will sell, by accident, more 10G Ethernet switches.

Increasing high availability through rapid failover to an alternate cabinet, room, or even cross-country site is a direct function of communications bandwidth and of how many concurrent migration operations the management layer can handle. For now, even if all the hypervisor hosts are licensed and running the latest version of VMware with fully configured Storage vMotion, there are practical upper limits on how much and how frequently VMs can be moved/migrated.

How We Tested

We tested vSphere 5.1 on an existing network consisting of two sides, lab and NOC. The lab is joined to the NOC at nFrame in Carmel, Ind., by a Comcast Business broadband link into a Gigabit Ethernet connection supplied by nFrame. At the NOC are several HP, Dell, and Lenovo hosts connected by an Extreme Networks Summit X650 10G Ethernet crossbar L2/L3 switch.

We freshly installed or upgraded various hosts with vSphere 5.1, as well as client machines, then installed the vCenter Server Appliance as described, and subsequently used this appliance as our access method to the converted vSphere 5.1 hosts. We installed several trial VMs from scratch, plus two minimally configured Windows 2008 R2 VMs used in a trial of Storage vMotion between hosts, as described. We also noted features of the web client and overall changes in supported features between vSphere 5.0 and 5.1.

Henderson is principal researcher for ExtremeLabs, of Bloomington, Ind. He can be reached at kitchen-sink@extremelabs.com.

