Runecast Academy

What is Virtualization?

Part 2 kicks off our introduction to virtualization, where we look at why and how this technology developed and why it’s so important for the future of IT.

Some topics covered here include:

  • Hypervisors
  • Types of hypervisors
  • What makes all this so special

What is Virtualization?

Historically, IT departments have struggled to respond to business demands in a timely manner. Providing infrastructure for new internal applications could take months, even with proper planning. Different services often require different Operating Systems (OS), specific configurations, and isolation from other programs.

This led over time to an increased number of physical servers, each running one OS and one application inside that OS. One can only imagine the operational overhead and costs associated with this – racking, cooling, power consumption, cabling, and so on. On top of that, the majority of those separate physical servers were using just a fraction of what they were actually capable of, resulting in lots of unused CPU and memory.
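To put that waste in perspective, here is a back-of-the-envelope consolidation calculation. The numbers (20 servers, 10% average utilization, a 60% target) are purely illustrative assumptions, not figures from any particular deployment:

```python
import math

# Hypothetical estate: 20 physical servers, each running one app
# at an average of 10% CPU utilization (illustrative numbers).
servers = 20
avg_utilization = 0.10

# The total useful work adds up to just 2 servers' worth of CPU.
total_demand = servers * avg_utilization

# Consolidating onto hosts we're willing to run at 60% average
# utilization (leaving headroom for spikes) requires only:
target_utilization = 0.60
hosts_needed = math.ceil(total_demand / target_utilization)

print(f"Hosts needed after consolidation: {hosts_needed}")
print(f"Physical servers freed up: {servers - hosts_needed}")
```

Even with generous headroom, 20 underutilized machines collapse into a handful of well-utilized hosts, which is exactly the efficiency gap virtualization set out to close.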

A Better Way: Hypervisor

There had to be a better way to provision, manage, and utilize compute workloads. And what better way than to decouple the hardware from the software you want to run on it? That’s exactly what virtualization helps us to achieve – it adds an abstraction layer between the physical hardware and your workloads.

This abstraction layer is known as a hypervisor. It provides access to physical resources (i.e. memory and CPU cycles) to one or multiple entities, called virtual machines (VMs). Yes, multiple! The hypervisor’s scheduler can serve the demand of many VMs requesting actual memory or CPU time. This provides better resource utilization of the physical servers, while still meeting the requirements for different OS, configuration, and isolation across multiple applications.
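As a mental model only (real hypervisor schedulers are far more sophisticated, handling priorities, co-scheduling, and memory overcommit), the idea of one scheduler serving many VMs can be sketched as a toy round-robin loop. The VM names and slice demands below are made up for illustration:

```python
from collections import deque

def schedule(vms, total_slices):
    """Toy round-robin scheduler: hand out CPU time slices to VMs,
    one at a time, until the physical CPU budget runs out or every
    VM has received what it asked for. Returns slices granted per VM."""
    granted = {name: 0 for name, _ in vms}
    demand = dict(vms)
    queue = deque(name for name, _ in vms)
    slices_left = total_slices
    while queue and slices_left > 0:
        vm = queue.popleft()
        granted[vm] += 1          # give this VM one slice of CPU time
        slices_left -= 1
        if granted[vm] < demand[vm]:
            queue.append(vm)      # VM still wants more; back in line
    return granted

# Three VMs with different CPU demands sharing one physical CPU.
print(schedule([("web", 3), ("db", 5), ("batch", 2)], total_slices=8))
```

Note how, when demand exceeds the physical budget, the slices are shared fairly rather than any single VM monopolizing the hardware; this time-sharing is what lets one server safely host many isolated workloads.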

Hypervisor Types

There are two types of hypervisors:

  • Bare-metal (Type 1) – the hypervisor is installed directly on the server instead of an OS like Windows, Linux, etc. Compared to a full-blown OS, a type 1 hypervisor has a much smaller footprint (a few hundred MBs), and it’s the right choice when building infrastructure for your virtual workloads. Two of the most popular commercial bare-metal hypervisors are VMware ESXi and Microsoft Hyper-V. There are also open-source options such as KVM, which is built into the Linux kernel.
  • Hosted (Type 2) – the hypervisor is installed as a program inside the OS. It’s more suitable for end-user scenarios, such as running a test VM on a desktop or laptop. Popular examples include VMware Workstation and Oracle VirtualBox.
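Incidentally, a guest OS can usually tell that it’s running under a hypervisor: on x86, virtualized CPUs expose a `hypervisor` flag, which Linux surfaces in /proc/cpuinfo. Here is a minimal check run against a hard-coded sample excerpt so it behaves the same everywhere (on a real system you would read the file itself):

```python
def running_under_hypervisor(cpuinfo_text):
    """Return True if the 'hypervisor' CPU flag is present in the
    given /proc/cpuinfo text, which x86 CPUs report when the OS is
    running inside a virtual machine."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# Abridged sample of /proc/cpuinfo as it might look inside a guest VM.
sample_guest = """\
processor : 0
model name : Intel(R) Xeon(R) CPU
flags : fpu vme de pse tsc msr hypervisor
"""

print(running_under_hypervisor(sample_guest))
```

The flag itself is real and widely used; the sample text above is fabricated for the demonstration.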

The abstraction and the simplified management of workloads helped the cloud computing industry reach the state it’s in today. Provisioning time is lowered from months to minutes, bringing lots of benefits along the way.

A Game-Changing Approach

Now let’s take it a step further than better resource utilization and consider another scenario. Have you ever powered off your personal computer just so you could hook it up to a different power source? Or maybe you wanted to upgrade the amount of memory, or insert a new disk or device? If so, you know that this requires you to stop all your work in progress and resume only once the intervention is complete. While this is acceptable for end-users, in the enterprise IT world it would count as an outage.

Imagine not being able to access your favorite website, just because the company hosting it had to perform similar maintenance on the server where the website runs. That would be quite frustrating, but since virtualization provides an abstraction layer between hardware and VMs, we can get around it. Techniques such as live migration allow a VM (workload) to be moved from one physical server to another, so hardware maintenance can be performed without impacting the services running on it. This is a real game-changer for application availability and data center operations altogether.

Virtualization technology redefines the way we build and maintain IT infrastructures. It brings operational benefits by addressing two major problems:

  • Better resource utilization of physical servers along with a logical separation of workloads
  • Abstraction from the underlying hardware components, enabling maintenance and server replacements without impacting the running workload

Combined, these mark the foundations of the Software-Defined Data Center (SDDC), where the core components – compute, storage, and network – are abstracted from the underlying physical infrastructure.

Ivaylo Ivanov

Ivaylo is a VMware Engineer at Runecast. His primary focus includes the complete VMware products family, networking, and automation. He is a VCIX6-DCV, VCP7-CMA, and a 5-time vExpert. Find him on Twitter as @ivgivanov.
