Our Ask the Experts series continues with another round of questions.

A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present, and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware, and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here, leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by Eric A.

What types of computing will (likely) never benefit from virtualization?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

Quite a few HPC applications that scale easily over multiple cores and can easily gobble up all the resources a physical host has. I don't see graphics-intensive applications being virtualized quickly either. And of course web servers that need to scale out (use multiple nodes) don't have much use for virtualization either.

Question #2 by Alexander H.

GPGPU is becoming very important these days. When can we expect virtual machines to tap this resource unobstructed?

Answer #2 by Rich Brunner, VMware Chief Platform Architect

Let me speak from the bare metal hypervisor (server) POV. If you directly expose a GPGPU to a VM (virtual machine), you make VMware VMotion of the VM to a different system too difficult, fragile, and costly to attempt. There is no guarantee that the target system of a VMware VMotion even has any graphics controller above the simple 2D VGA capability living in the BMC of the server, and few server customers want to waste the limited PCIe slots of a server on a graphics card. Even if you could claim some high performance graphics controller in each server today, which we do not see our SMB and enterprise customers rushing toward right now, there is still no guarantee of compatibility even at the GPGPU instruction set level (OpenCL vs. CUDA vs. DirectCompute). This incompatibility breaks live migration. Attempting to address the compatibility requirements by emulating the GPGPU instruction set on systems which do not have it also leads to unacceptable performance. As a result, I do not expect anyone to seriously expose GPGPUs in a commercial enterprise hypervisor scenario for at least a few more years. But desktop hypervisors, which have fewer requirements for live migration, could get this to work sooner and paper over some of the incompatibility issues.
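
To make the compatibility problem above concrete: from inside a guest there is no guarantee that any compute-capable device is visible at all, and even when one is, the API it exposes can differ from host to host. Below is a minimal sketch, assuming the optional pyopencl package happens to be installed in the guest (it is not part of any hypervisor product), that simply reports whatever OpenCL compute devices the VM can see:

```python
# Minimal sketch: probe for GPGPU capability from inside a guest VM.
# Assumes the optional pyopencl package is available; this only
# illustrates the visibility/compatibility problem described above,
# it is not part of any hypervisor.
try:
    import pyopencl as cl
except ImportError:
    cl = None

def report_compute_devices():
    if cl is None:
        print("No OpenCL runtime available in this guest.")
        return
    platforms = cl.get_platforms()
    if not platforms:
        print("OpenCL runtime present, but no platforms are exposed to the VM.")
        return
    for platform in platforms:
        for device in platform.get_devices():
            # A VM migrated to a different host could see a different
            # device list here, which is exactly what breaks live migration.
            print(f"{platform.name}: {device.name} "
                  f"({cl.device_type.to_string(device.type)})")

if __name__ == "__main__":
    report_compute_devices()
```
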
I think a better first step is for the hypervisor and VM to share common graphics rendering commands and primitives so that the hypervisor does not have to convert one graphics command set to another in order to render on the VM's behalf. A native driver in the hypervisor can then tweak the commands and take advantage of any hardware acceleration that a high-performance graphics card could provide if present. (This is being done today for "hosted" hypervisors such as VMware's Fusion product on the Mac with regard to OpenGL.) A GPGPU-capable graphics card offers the possibility of further "offline" (or asynchronous) acceleration of rendering and other hypervisor tasks that are invisible to the VMs on the server.
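
As a deliberately toy illustration of that shared-command-set idea (this is not VMware's actual protocol; every name below is invented for the sketch), a guest-side encoder could queue generic rendering commands while a host-side renderer decides how to map them onto whatever acceleration is actually present:

```python
# Toy sketch of paravirtualized rendering: the guest emits generic
# commands and the hypervisor translates them for whatever backend
# exists on the physical host. Illustration only, not a real protocol.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DrawCommand:
    op: str           # e.g. "clear" or "draw_triangles"
    args: Tuple

class GuestEncoder:
    """Runs inside the VM: queues generic commands instead of talking
    to real GPU hardware directly."""
    def __init__(self):
        self.buffer: List[DrawCommand] = []

    def clear(self, r: float, g: float, b: float):
        self.buffer.append(DrawCommand("clear", (r, g, b)))

    def draw_triangles(self, vertex_count: int):
        self.buffer.append(DrawCommand("draw_triangles", (vertex_count,)))

    def flush(self) -> List[DrawCommand]:
        commands, self.buffer = self.buffer, []
        return commands

class HostRenderer:
    """Runs in the hypervisor: maps the generic commands onto whatever
    acceleration the physical host happens to have."""
    def __init__(self, has_gpu: bool):
        self.backend = "hardware GPU" if has_gpu else "software rasterizer"

    def execute(self, commands: List[DrawCommand]):
        for cmd in commands:
            print(f"[{self.backend}] {cmd.op}{cmd.args}")

# The guest never needs to know which backend the host chose, so the VM
# can be migrated to a GPU-less host without changing guest-side code.
guest = GuestEncoder()
guest.clear(0.0, 0.0, 0.0)
guest.draw_triangles(3)
HostRenderer(has_gpu=False).execute(guest.flush())
```
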
Having said that, it is clear that the microprocessor vendors are slowly integrating more capable graphics devices with their CPUs into the same processor package, at least for some market segments. If they ever decide to make this capability available in server processors, then more direct exposure of the GPGPU to the VM which does not break VMware VMotion may become possible due to the resulting widespread availability and commonality of integrated GPGPUs.

Question #3 by Craig R.

What is the roadmap for breakthrough security features and their implementation?

Answer #3 by Rich Uhlig, Intel Fellow

Going back to the early days of the definition of Intel VT, we actually had security in mind from the beginning, and so security is sort of already built into our existing VT feature roadmap. VMs provide a fundamentally stronger form of isolation between bodies of code because the isolation extends down to the OS kernel and device drivers running in ring 0. Our goal has been to help VMM software further strengthen the security boundaries between VMs through hardware support. As an example, VT includes hardware mechanisms for remapping and blocking device DMA accesses to system memory, so that even a privileged ring-0 device driver running in one VM can’t access the memory belonging to another VM; that’s something that can’t be done without new hardware support. VT also simplifies the implementation of a VMM by reducing the amount of code needed to work around virtualization problems – that reduces the overall size of the trusted computing base and therefore the “attack surface” for malicious software to exploit. More recently, we’ve been adding hardware support to compute a cryptographic hash of the VMM kernel image that is loaded into the machine as it boots. This cryptographic measurement of the VMM can help to ensure that the VMM binary has not been tampered with before it begins to run. We call this “Trusted Execution Technology”.
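
To give a rough sense of what that measured launch looks like from software: the hash of the loaded image ends up extended into the platform's TPM, where it can later be read back and compared against a known-good value. The sketch below is only illustrative, assuming a Linux host whose kernel exposes TPM 2.0 PCRs through sysfs; the PCR index and the expected hash are placeholders, not real Intel TXT values:

```python
# Minimal sketch: read a TPM PCR and compare it against an expected
# "known good" measurement. Assumes a Linux kernel that exposes TPM 2.0
# PCRs under /sys/class/tpm; the PCR index and expected hash below are
# illustrative placeholders only.
from pathlib import Path

PCR_PATH = Path("/sys/class/tpm/tpm0/pcr-sha256/0")  # illustrative PCR index
EXPECTED = "0" * 64                                   # placeholder known-good hash

def check_measurement() -> bool:
    if not PCR_PATH.exists():
        print("No TPM PCR interface found on this host.")
        return False
    measured = PCR_PATH.read_text().strip().lower()
    if measured == EXPECTED.lower():
        print("Measurement matches the expected value.")
        return True
    print(f"Measurement mismatch: got {measured}")
    return False

if __name__ == "__main__":
    check_measurement()
```
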

Comments

  • SlyNine - Tuesday, July 27, 2010 - link

    "I think a better first step is for the hypervisor and VM to share common graphics rendering commands and primitives so that >the< hypervisor does not have to convert one graphics command set to another in order to render on the VM's behalf."

    Just trying to make a good article even better :).
  • npoc - Tuesday, July 27, 2010 - link

    Why just VMware? They are not the only game in town.

    KVM/QEMU is on the up and up.
  • trancos - Tuesday, July 27, 2010 - link

    I know of at least three virtualization platforms: Hyper-V, ESX/ESXi, and Xen. Do you have any insights as far as performance is concerned (disk I/O, processor performance, networking) with these three platforms?
  • haxter - Tuesday, July 27, 2010 - link

    Virtualized I/O should be considered a current front for virtualization security research. Virtual PCI devices in hardware offer more performance and stronger security too. While Rich mentions the Intel VT roadmap, it's worth your time to read up on VT-d for I/O:

    http://software.intel.com/en-us/articles/intel-vir...
    ( http://tinyurl.com/252fv9z )

    Check out more on Trusted eXecution Technology (TXT), aka LaGrande, here:

    http://www.intel.com/technology/security/
    http://www.oncloudcomputing.com/en/tag/hytrust/

    This allows the use of the in-chipset Trusted Platform Module (TPM) to test guest signatures to verify VM trust prior to boot. TXT lets VC security software do cool things like limit VM portability to certain trusted hardware.

    Virtualization security R&D is my day job so I'm immersed in this now.

    VMware is where the money is at. Xen users don't like to pay for software so innovation is limited there. HyperV is just starting to get my attention. KVM deserves a lot of praise and already has some support for both TXT and VT-d, check it out!

    Will DeHaan
  • justaviking - Tuesday, July 27, 2010 - link

    This is a repeat question from before.
    Maybe it is so basic and stupid it doesn't warrant a reply, but I'll ask again.

    Do you foresee virtualization becoming a component of an average consumer's PC? Just like many people buying a laptop at Best Buy don't differentiate between onboard and discrete graphics, might there be a role for virtualization on typical consumer PCs, even if they are not aware of it?

    If so, why, and when?
    If not, why not?
  • Peroxyde - Wednesday, July 28, 2010 - link

    By average user, I am referring to the user who doesn't even have the notion of a URL or address bar in the browser. They enter the site name in the search box and click on the first link. These users will have absolutely no idea about virtualization and will never need it. For the average user, cloud computing would probably be the most useful: they use the application via the browser and save their data in the cloud. Low responsibility, zero maintenance, little chance to screw something up.

    These average users already have difficulties understanding their own computers. They will not get the concept of a virtual computer running inside another computer.
  • justaviking - Thursday, July 29, 2010 - link

    Thanks, Peroxyde.

    Let me relate my question to another feature that is beyond a lot of typical consumers. (I don't mean to be high-and-mighty or condescending when I say that.)

    DISK PARTITIONS - How many people understand the concept of a disk partition, when you would want to use one, and what the pros and cons are?

    All the laptops I've purchased for family members have a partitioned disk. Why? To aid with system restoration, if needed.

    So there is an example of average consumers buying a "technology" that they actually know little or nothing about.

    In the same way, I wonder if we will find virtualization (even if it is hidden under the covers) in off-the-shelf desktops or laptops in the near future. And if so, why?
  • duploxxx - Wednesday, July 28, 2010 - link

    Question #1 by Eric A.

    What types of computing will (likely) never benefit from virtualization?

    Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

    Quite a few HPC applications that scale easily over multiple cores and can easily gobble up all the resources a physical host has. I don't see graphics-intensive applications being virtualized quickly either. And of course web servers that need to scale out (use multiple nodes) don't have much use for virtualization either.

    It all depends on the scope of the application; it's app-dependent. You can still use VMware for HA only, or to be able to easily migrate towards new hardware. Those are our major reasons to virtualize: we have some HPC-intensive apps, always combined with a few non-HPC ones in a pool. We never have more vCPUs than pCPUs in our configs, and the same goes for memory, yet we virtualize as much as possible. It increases uptime and reduces general cost, but never reduces cost at the I/O level.
  • gorgamin - Wednesday, July 28, 2010 - link

    This is all fine for the people who get to play with these expensive toys, but what about me? I only wish that virtualization could access hardware graphics, in other words, enable me to play games in a virtual Windows XP Pro environment while accessing my GPU.

    Also, when will Linux finally get some decent game support? Come on Ubuntu, you have everything else. Work with the gaming companies. Engage with them. Enough with the elitist Linux "you have to be able to recompile the kernel" guys. Get all the latest games running on there and let people with no experience do it in a few clicks. It is possible.
  • HMTK - Wednesday, July 28, 2010 - link

    They're talking about professional-grade hardware and software with high profit margins. How much would you pay for a hypervisor on your desktop machine?
