James Bottomley


Virtualization Changes in 2.6.21

At that time, there were three contenders: Xen, VMware, and OpenVZ

Back in July 2006, one of the most contentious issues at the Linux Kernel Summit was what to do about virtualization. At that time, there were three contenders: Xen, VMware, and OpenVZ (the latter being a lighter-weight, container-based approach). The biggest fight was between Xen and VMware over competing approaches to running kernel operations through their respective hypervisors: Xen touting its hypercall interface and VMware touting its VMI (Virtual Machine Interface) approach. Neither approach was palatable to kernel developers, for reasons that were mostly technical, but also because selecting either would give that contender a great public relations boost over its competitor.

The reason for the fuss? Virtualization schemes like Xen and VMware can run unmodified operating systems. However, they do this by intercepting certain actions the operating system takes (like trying to modify the page tables) and redirecting those operations to give the operating system the illusion that it is running exclusively on the hardware, while in reality it is just one of many guests running in a so-called "virtual machine." The problem with this interception is that it's expensive: the CPU must execute a trap into the hypervisor whenever the operating system does something the hypervisor needs to check. Interception became much cheaper with CPUs that include hardware virtualization technology (Intel's Vanderpool, now known as VT-x, and AMD's Pacifica, now AMD-V); however, it would be cheaper still if, instead of relying on hardware to watch the operating system, the virtualization layer could simply plug into all of the operations it needs to intercept. The two schemes pushed by Xen and VMware are essentially different ways of plugging into the operating system.
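To make the trade-off concrete, here is a minimal, purely illustrative C sketch (not actual kernel or hypervisor code; hypervisor_set_cr3() is a hypothetical hypercall wrapper) contrasting a privileged operation that must be trapped with one that calls the hypervisor directly:

```c
/* Illustrative sketch only, not real kernel code. */

/* Hypothetical hypercall wrapper; a real hypervisor would provide
 * its own entry point for this operation. */
extern void hypervisor_set_cr3(unsigned long cr3);

/* Fully virtualized guest: writes CR3 (the page-table base register)
 * directly, as if it owned the hardware.  Under a hypervisor this
 * privileged instruction faults, and the hypervisor must catch the
 * fault, decode the instruction, and emulate it: an expensive trip. */
static inline void native_write_cr3(unsigned long cr3)
{
	asm volatile("movl %0, %%cr3" : : "r" (cr3) : "memory");
}

/* Paravirtualized guest: knows it is running under a hypervisor and
 * asks it to switch page tables explicitly, avoiding the
 * trap-and-emulate round trip. */
static inline void pv_write_cr3(unsigned long cr3)
{
	hypervisor_set_cr3(cr3);
}
```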

The germ of a compromise at the Kernel Summit came from an approach already taken by the venerable PowerPC hypervisor interface (PowerPC has long had a firmware hypervisor, with Linux support on IBM pSeries hardware). This approach, called paravirt ops (ref: http://lwn.net/Articles/194543), essentially provides a well-defined but pluggable interface that can sit underneath the competing approaches and allow either to work comfortably in the kernel: a kind of universal socket in the kernel for the Xen and VMware plug-ins.
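As a rough sketch of the idea (simplified and with abbreviated field names; the real struct paravirt_ops in the 2.6.21 i386 tree has many more hooks), the kernel routes each sensitive operation through a table of function pointers that a hypervisor backend can replace at boot:

```c
/* Simplified sketch of the paravirt-ops idea; illustrative only. */
struct paravirt_ops {
	const char *name;                 /* "native", "vmi", "xen", ... */
	void (*write_cr3)(unsigned long); /* switch page tables */
	void (*irq_disable)(void);        /* mask interrupts */
	void (*irq_enable)(void);         /* unmask interrupts */
	/* ...the real structure has dozens more hooks... */
};

/* Native implementations, defined elsewhere. */
extern void native_write_cr3(unsigned long cr3);
extern void native_irq_disable(void);
extern void native_irq_enable(void);

/* On bare hardware the table points at the native operations... */
struct paravirt_ops paravirt_ops = {
	.name        = "native",
	.write_cr3   = native_write_cr3,
	.irq_disable = native_irq_disable,
	.irq_enable  = native_irq_enable,
};

/* ...and the rest of the kernel calls through it, so a backend such
 * as VMI or Xen can substitute its own implementations at boot. */
static inline void write_cr3(unsigned long cr3)
{
	paravirt_ops.write_cr3(cr3);
}
```

The point of the socket is that the rest of the kernel never needs to know which backend is installed; VMI and Xen each simply fill in the table with their own implementations.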

In the interim, in kernel 2.6.20, a new virtualization interface, called KVM, went in with comparatively little fuss, ramping up the pressure on both Xen and VMware to produce their paravirt ops patches.

Finally, in the latest kernel version (2.6.21, released on April 25, 2007), VMware saw their VMI patch, based on paravirt ops, go into the Linux kernel. (Patches from Xen are also being applied in this kernel, but Xen is not yet fully available inside the Linux kernel.)

What will VMware users see when running kernel 2.6.21 and beyond as a guest operating system? Essentially the benefits are twofold: first, and most important, increased speed. In theory, with VMI and hardware virtualization technology, a guest operating system should be able to operate almost as fast as the corresponding operating system would have run natively on the hardware. Second, VMware will be able to exercise greater control over the virtual guest operating system: this means operations like suspend and resume (for VMotion) or administrative operations like reallocating processing or memory resources should be much easier and quicker to perform.

Will users see these benefits immediately? Unfortunately, that depends on the version of VMware being used. VMware recently released Workstation version 6, which is capable of fully utilizing the VMI features in Linux kernel 2.6.21 (although the VMI utilization is listed as "experimental"). However, for the more standard (and non-free) ESX version of VMware, and for the free VMware Server product, VMI support will only be delivered at some point in the future.

More Stories By James Bottomley

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux kernel SCSI maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and at AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
