KVM Host-Level Performance Tuning for VM-Series

Execute the following steps on your KVM hosts to enhance performance:
  1. vCPU Pinning - Pin each VM-Series vCPU to a specific physical CPU core. This creates a 1:1 mapping that eliminates scheduling jitter and guarantees dedicated CPU resources for firewall processes.
    # Pin vCPUs 0 and 1 of the firewall VM to physical cores 2 and 3
    virsh vcpupin <vm-name> 0 2 --live --config
    virsh vcpupin <vm-name> 1 3 --live --config
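    After pinning, the resulting affinity can be confirmed from the host. A quick check, using the same `<vm-name>` placeholder:

    ```shell
    # List the current vCPU-to-physical-core affinity for the VM
    virsh vcpupin <vm-name>

    # Show which physical CPU each vCPU is actually running on
    virsh vcpuinfo <vm-name>
    ```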
  2. NUMA Alignment - Ensure that all vCPUs and memory allocated to a VM-Series instance reside on the same physical NUMA node. This avoids the added latency of cross-node (remote) memory access.
    # Check the server's NUMA hardware layout to plan core allocation
    numactl --hardware
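    Inspecting the layout alone does not bind the guest's memory. A sketch of constraining the VM's memory to a single node with `virsh numatune` (node 0 here is an assumption taken from the `numactl --hardware` output):

    ```shell
    # Bind the VM's memory allocation strictly to NUMA node 0
    virsh numatune <vm-name> --mode strict --nodeset 0 --live --config

    # Confirm the resulting memory policy
    virsh numatune <vm-name>
    ```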
  3. Emulator Pinning - Isolate the hypervisor emulator threads (which handle I/O and device emulation) to dedicated housekeeping cores, separate from those used by the VM-Series. This prevents hypervisor overhead from impacting packet processing.
    # Pin the emulator threads for the firewall VM to housekeeping cores 0 and 1
    virsh emulatorpin <vm-name> 0-1 --live --config
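    If the guest is also configured with dedicated I/O threads, they can be pinned to the same housekeeping cores. A sketch, assuming a single iothread with ID 1:

    ```shell
    # Pin iothread 1 of the firewall VM to housekeeping cores 0 and 1
    virsh iothreadpin <vm-name> 1 0-1 --live --config

    # List the VM's iothreads and their current CPU affinity
    virsh iothreadinfo <vm-name>
    ```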
  4. Kernel Core Isolation - Isolate the physical cores assigned to the VM-Series from the host's operating system scheduler. This ensures the host OS will not run its tasks on cores reserved for the firewall.
    # In /etc/default/grub, add the 'isolcpus' kernel parameter.
    # This example isolates cores 2-20 for VM workloads:
    GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2-20"
    # Update GRUB and reboot for the change to take effect
    sudo update-grub
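    After the reboot, the isolation can be verified from sysfs. Note that `update-grub` is the Debian/Ubuntu-style command; RHEL-family hosts use a different regeneration command (e.g. `grub2-mkconfig`), so adapt as needed.

    ```shell
    # Confirm which cores the kernel has isolated from its scheduler
    cat /sys/devices/system/cpu/isolated
    # With isolcpus=2-20 configured, this prints: 2-20
    ```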
  5. SR-IOV & NIC Locality - When using SR-IOV, the vCPUs for the VM-Series must be pinned to cores on the same NUMA node as the physical SR-IOV NIC to ensure the lowest latency path for network traffic.
    # 1. Identify the NUMA node of the physical NIC
    cat /sys/class/net/eth0/device/numa_node
    # 2. Use the result to guide vCPU pinning.
    #    If the result is '0', pin the VM's vCPUs to cores on NUMA node 0:
    virsh vcpupin <vm-name> 0 4 --live --config
    virsh vcpupin <vm-name> 1 5 --live --config
  6. Hyper-Threading - For environments requiring the most predictable, deterministic latency, disabling Hyper-Threading in the server BIOS/UEFI is recommended. This ensures one vCPU has exclusive access to one physical core's full resources.
    # Verify that 'Thread(s) per core' is 1
    lscpu
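    Where a BIOS/UEFI change is impractical, recent Linux kernels (4.19 and later) also expose a runtime SMT switch in sysfs. A sketch of disabling Hyper-Threading without rebooting into firmware setup:

    ```shell
    # Check the current SMT state (on, off, forceoff, or notsupported)
    cat /sys/devices/system/cpu/smt/control

    # Disable SMT at runtime without a BIOS change
    echo off | sudo tee /sys/devices/system/cpu/smt/control

    # Verify that 'Thread(s) per core' now reports 1
    lscpu | grep 'Thread(s) per core'
    ```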