Which hardware for a virtualization server?

Here are some notes I took while choosing components for a server to run a KVM-based virtualized environment.


The new server was meant to be placed in our DMZ, running a few virtual machines that provide services to external clients.

Some basic hardware requirements were:

  • 1U rack mount
  • 1 x Xeon based CPU
  • 8 GB RAM
  • 2 x 500 GB disks
  • Hardware RAID
  • 2 x Intel NIC, 1Gbps

The software and services planned to be installed or implemented:

  • Debian Squeeze
  • KVM
  • MySQL
  • NTP
  • JBoss
  • VPN

Some virtualization notes

There are different performance goals:

  • Single guest performance
  • Aggregate performance
  • Density (as many guests as possible on a single host)

Virtualization optimization depends on the needs. Experiment with transparent huge pages, try different IO schedulers, try hyperthreading… High performance virtualization is hard. Some things just cannot be emulated efficiently. Timekeeping has always been a virtualization headache. It is good to have a tickless kernel, and pvclock so guests can ask the host what time it is.
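Tuning starts with knowing where the knobs are. A minimal sketch of inspecting the relevant settings via sysfs (paths assume a reasonably recent kernel; sda is just an example device):

```shell
# Transparent huge pages: the active mode is shown in [brackets]
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null

# I/O scheduler for a given disk (sda is an example); active one in [brackets]
cat /sys/block/sda/queue/scheduler 2>/dev/null

# To switch schedulers at runtime (as root), e.g.:
#   echo deadline > /sys/block/sda/queue/scheduler
```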


KVM (Kernel-based Virtual Machine) is a virtualization hypervisor. It is integrated into the Linux kernel, lightweight, and performs very well. The host machine's CPU needs to support hardware-assisted virtualization, i.e. Intel VT-x or AMD-V.
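A quick way to verify that a machine actually advertises these extensions is to check the CPU flags (vmx for Intel VT-x, svm for AMD-V):

```shell
# No match means no hardware virtualization support,
# or the feature is disabled in the BIOS
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: available"
else
    echo "hardware virtualization: not reported"
fi
```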



Since Xeon was on the requirements list, I was looking for ones with the following capabilities:

  • VT-x
  • Hyperthreading (?)
  • ECC

From the Xeon 5500 series, probably the best model for virtualization in terms of price/performance is the E5520 (2.26 GHz). We have chosen the Intel® Xeon® E5620 (from the newer 5600 series): a quad-core CPU that supports VT-x, ECC memory and hyperthreading.

By the way, the main difference between cheaper consumer CPUs such as Intel's Core i7 and enterprise-targeted CPUs such as Intel's Xeon is support for ECC memory, which is a must for mission-critical applications.


Nowadays the Virtualization Technology instruction set is implemented inside the CPUs themselves, which makes hypervisors simpler and thus provides better performance than software-only virtualization solutions.


Is hyperthreading an advantage, or can it even have a negative impact on the system? Because HT exploits unpredictable events (cache misses, for example), its benefit is hard to predict. It seems that nobody has a clear idea of how HT impacts application performance: depending on an application's internal architecture, HT execution of threads can be a benefit or a real pain. Some suggest switching HT off.

On the other hand, hyperthreading can reduce scheduling latencies, which reduces worst-case spinlock overhead.
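Whether HT is actually enabled can be read from /proc/cpuinfo: if "siblings" (logical CPUs per package) exceeds "cpu cores" (physical cores per package), hyperthreading is on. A small sketch:

```shell
# siblings > cpu cores per package => hyperthreading is enabled
siblings=$(grep -m1 '^siblings' /proc/cpuinfo 2>/dev/null | awk '{print $3}')
cores=$(grep -m1 '^cpu cores' /proc/cpuinfo 2>/dev/null | awk '{print $4}')
echo "siblings=${siblings:-?} cores=${cores:-?}"
```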


Registered or unbuffered?

In enterprise server systems, it is a question of registered or unbuffered memory modules. Registered (also called buffered) memory modules have a register between the DRAM modules and the system’s memory controller. They place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise.

The difference between registered memory and unbuffered memory is whether there are registers on the memory module. Almost all system memory in today’s PCs is unbuffered memory. For those who need to utilize more than 4GB of memory (maybe more like 16GB or 32GB) in a system, registered memory is absolutely a must-have. Registered memory is all about scalability and stability. A small performance hit is generally incurred.


ECC stands for Error Checking and Correction. ECC detects and corrects memory errors, so it is highly advisable to use this type of module in servers that hold multiple gigabytes of memory, usually run 24/7, and therefore face an increased probability of soft errors.
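Whether the installed DIMMs really are ECC (and registered) can be read from the DMI tables. A sketch assuming dmidecode is installed and run as root:

```shell
# "Error Correction Type" shows ECC capability of the memory array;
# "Type Detail" per DIMM distinguishes Registered from Unbuffered
dmidecode -t memory 2>/dev/null \
    | grep -E 'Error Correction Type|Type Detail' | sort -u \
    || echo "dmidecode needs root (or is not installed)"
```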

Storage adapter (RAID)

Beware of fakeraid controllers! Check driver support for your operating system or support in Linux vanilla kernel, and decide whether to use battery backed cache. The cache memory in RAID controllers improves performance to some extent by storing information that was recently used, or that the controller predicts will be used in the future, so it can be supplied to the system at high speed if requested instead of necessitating reads from the slow hard disk platters. Battery backed cache is for data protection from unexpected power outage. In every case, the goal of the cache is the same: to provide a temporary storage area that allows a faster device to run without having to wait for a slower one.

One area where caching can impact performance significantly is write caching, sometimes also called write-back caching. When enabled, on a write the controller tells the system that the write is complete as soon as the write enters the controller’s cache; the controller then “writes back” the data to the drives at a later time. The reason write-back caching is so important with RAID is that while writes are slightly slower than reads for a regular hard disk, for many RAID levels they are much slower.

Read performance under mirroring is far superior to write performance.

The controller chosen for our system is the ServeRAID M5014 SAS/SATA controller, because it is supported in the vanilla Linux kernel (the driver name is megaraid_sas) and it provides the additional performance advantage of an adequate amount of cache (256 MB); we also ordered the standard battery backup unit.
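After installation it is easy to confirm that the kernel actually picked the card up (module name taken from the text above):

```shell
# Is the megaraid_sas module shipped with the running kernel?
modinfo megaraid_sas 2>/dev/null | grep '^filename' \
    || echo "megaraid_sas module not found for this kernel"

# Is it loaded?
lsmod 2>/dev/null | grep '^megaraid_sas' || echo "megaraid_sas not loaded"
```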

Hard drives

We have chosen 2 x IBM 500GB 7200 6Gbps NL SAS 2.5″ SFF Slim-HS HDD.


  • SAS is full duplex
  • SATA uses the ATA command set; SAS uses the SCSI command set
  • SAS hardware allows multipath I/O to devices while SATA (prior to SATA 3Gb/s) does not
  • SATA is more consumer, SAS targets critical server applications
  • SAS error-recovery and error-reporting use SCSI commands which have more functionality than the ATA SMART commands used by SATA drives
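On a running system, smartctl (from the smartmontools package) reports which transport a drive speaks; /dev/sda below is an example device, the command needs root, and the exact field names depend on the smartctl version:

```shell
# SAS drives report a "Transport protocol: SAS" line,
# SATA drives a "SATA Version is:" line instead
smartctl -i /dev/sda 2>/dev/null | grep -i -E 'transport|sata version' \
    || echo "smartctl not available, needs root, or no /dev/sda"
```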

Network adapters

We have chosen the Intel Ethernet Dual Port Server Adapter I340-T2 for IBM System x, as it is based on the 82580 chip, which is supported by the Linux igb driver:

# dpkg -S "igb.ko"
linux-image-2.6.31-19-generic: /lib/modules/2.6.31-19-generic/kernel/drivers/net/igb/igb.ko
linux-image-2.6.31-20-generic: /lib/modules/2.6.31-20-generic/kernel/drivers/net/igb/igb.ko
linux-image-2.6.31-14-generic: /lib/modules/2.6.31-14-generic/kernel/drivers/net/igb/igb.ko
linux-image-2.6.31-22-generic: /lib/modules/2.6.31-22-generic/kernel/drivers/net/igb/igb.ko
linux-image-2.6.31-23-generic: /lib/modules/2.6.31-23-generic/kernel/drivers/net/igb/igb.ko

And in the lspci output:

01:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I340-T2

When choosing a network adapter, always check whether the device is supported by Linux (for example, some Broadcom NetXtreme NICs don’t have a driver in Debian).
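Two more quick checks beyond dpkg -S: whether the module exists for the running kernel, and which driver an interface is actually bound to (eth0 below is an example interface):

```shell
# Does the running kernel ship the igb module?
modinfo igb 2>/dev/null | grep -E '^(filename|vermagic):' \
    || echo "igb module not found for this kernel"

# Which driver handles a given interface (example: eth0)?
ethtool -i eth0 2>/dev/null || echo "ethtool not available or no eth0"
```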


Operating system

KVM supports both 32-bit and 64-bit guests. According to KVM’s guest support list, Debian Squeeze is supported.
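Before installing guests it is worth verifying that the kvm modules are loaded and the /dev/kvm device exists; the ISO filename below is a placeholder:

```shell
# /dev/kvm appears once kvm_intel (or kvm_amd) is loaded
ls -l /dev/kvm 2>/dev/null \
    || echo "/dev/kvm missing: modprobe kvm_intel (or kvm_amd)"

# Minimal smoke test: boot an installer ISO in a throwaway guest
# (the ISO path is a placeholder)
# kvm -m 1024 -cdrom debian-squeeze-netinst.iso -boot d
```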


References

  1. http://www.linux-kvm.org/page/Main_Page
  2. http://www.linux-kvm.org/page/HOWTO
  3. http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaai/kvminstall/liaaikvminstallstart.htm
  4. http://searchservervirtualization.techtarget.com/answer/Hyperthreading-in-virtualized-environments
  5. http://www.redhat.com/promo/summit/2010/presentations/summit/in-the-weeds/thurs/riel-420-kernel/summit2010-kvm-optimizations.pdf
  6. http://www.pcguide.com/ref/hdd/perf/raid/conf/advCaching-c.html
  7. http://blog.fastmail.fm/2009/10/19/ibm-x3550-m2-or-x3650-m2-and-debianubuntu/
