Prepare the Linux Server

  • Check the Linux distribution version. For a list of supported versions, see VM-Series for KVM in the Compatibility Matrix.
  • Verify that you have installed and configured KVM tools and packages that are required for creating and managing virtual machines, such as Libvirt.
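    As a quick check (package names and tools vary by distribution, so treat this as a sketch), confirm that the KVM kernel modules are loaded and that Libvirt is installed and reachable:
    lsmod | grep kvm          # kvm_intel or kvm_amd should be listed
    virsh --version           # confirms the Libvirt client tools are installed
    virsh list --all          # confirms libvirtd is running and responding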
  • If you want to use a SCSI disk controller to access the disk to which the VM-Series firewall stores data, you must use virsh to attach the virtio-scsi controller to the VM-Series firewall. You can then edit the XML template of the VM-Series firewall to enable the use of the virtio-scsi controller. For instructions, see Enable the Use of a SCSI Controller.
    KVM on Ubuntu 12.04 does not support the virtio-scsi controller.
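    As a sketch only, the guest definition (virsh edit <guest name>) ends up with a virtio-scsi controller element similar to the following; see Enable the Use of a SCSI Controller for the complete procedure:
    <controller type='scsi' model='virtio-scsi'/>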
  • Verify that you have set up the networking infrastructure for steering traffic between the guests and the VM-Series firewall and for connectivity to an external server or the Internet. The VM-Series firewall can connect using a Linux bridge, Open vSwitch, PCI passthrough, or an SR-IOV capable network card.
    • Make sure that the link state is up for all interfaces you plan to use; in some cases you must bring them up manually.
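      For example, assuming eth4 is one of the interfaces you plan to use (interface names will differ on your server):
      ip link show eth4            # check the current link state
      ip link set dev eth4 up      # bring the link up if it is down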
    • Verify the PCI ID of all the interfaces. To view the list, use the command:
      virsh nodedev-list --tree
    • If using a Linux bridge or OVS, verify that you have set up the bridges required to send/receive traffic to/from the firewall. If not, create bridge(s) and verify that they are up before you begin installing the firewall.
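      For example, the following commands create and bring up a Linux bridge named br0 or an Open vSwitch bridge named ovsbr0 (the bridge names are placeholders, and brctl requires the bridge-utils package):
      ip link add name br0 type bridge && ip link set br0 up    # Linux bridge
      ovs-vsctl add-br ovsbr0                                   # Open vSwitch bridge
      brctl show                                                # verify Linux bridges
      ovs-vsctl show                                            # verify OVS bridges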
    • If using PCI passthrough or SR-IOV, verify that the virtualization extensions (VT-d/IOMMU) are enabled in the BIOS. For example, to enable IOMMU, intel_iommu=on must be defined in /etc/grub.conf. Refer to the documentation provided by your system vendor for instructions.
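      On many distributions the kernel command line is maintained in /etc/default/grub instead; a typical sequence looks like the following (use amd_iommu=on on AMD hosts, and update-grub or grub2-mkconfig depending on your distribution):
      GRUB_CMDLINE_LINUX="... intel_iommu=on"       # edit in /etc/default/grub
      grub2-mkconfig -o /boot/grub2/grub.cfg        # regenerate the GRUB configuration
      dmesg | grep -i -e DMAR -e IOMMU              # after a reboot, verify IOMMU is active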
    • If using PCI-passthrough, ensure that the VM-Series firewall has exclusive access to the interface(s) that you plan to attach to it.
      To allow exclusive access, you must manually detach the interface(s) from the Linux server. Refer to the documentation provided by your network card vendor for instructions.
      To manually detach the interface(s) from the server, use the command:
      virsh nodedev-detach <pci id of interface>
      For example:
      virsh nodedev-detach pci_0000_07_10_0
      In some cases, you may have to uncomment relaxed_acs_check = 1 in /etc/libvirt/qemu.conf.
    • If using SR-IOV, verify that the virtual function capability is enabled for each port that you plan to use on the network card. With SR-IOV, a single Ethernet port (physical function) can be split into multiple virtual functions. A guest can be mapped to one or more virtual functions.
      To enable virtual functions, you need to:
      1. Create a new file in this location: /etc/modprobe.d/
      2. Modify the file using the vi editor to make the functions persistent: vim /etc/modprobe.d/igb.conf
      3. Enable the number of virtual functions required: options igb max_vfs=4
      After you save the changes and reboot the Linux server, each interface (or physical function) in this example will have 4 virtual functions.
      Refer to the documentation provided by your network vendor for details on the actual number of virtual functions supported and for instructions to enable it.
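      After the reboot, you can confirm that the virtual functions exist; for example (eth2 and the igb driver are placeholders for your actual interface and driver):
      lspci | grep -i "Virtual Function"            # each virtual function appears as its own PCI device
      ip link show eth2                             # lists the virtual functions under the physical function
      cat /sys/class/net/eth2/device/sriov_numvfs   # on newer kernels, reports the number of VFs enabled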
  • Configure the host for maximum VM-Series performance. Refer to Performance Tuning of the VM-Series for KVM for information about configuring each option.
    • Enable DPDK. DPDK allows the host to process packets faster by bypassing the Linux kernel. Instead, interactions with the NIC are performed using drivers and the DPDK libraries. Open vSwitch is required to use DPDK with the VM-Series firewall.
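      As a sketch, with an OVS build that includes DPDK support you typically initialize DPDK in OVS and add DPDK-type ports; the exact commands depend on your OVS version, and the bridge name, port name, and PCI address below are placeholders. See Performance Tuning of the VM-Series for KVM for the full procedure.
      ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
      ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
      ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:07:00.0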
    • Enable SR-IOV. Single root I/O virtualization (SR-IOV) allows a single PCIe physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or guest.
    • Enable multi-queue support for NICs. Multi-queue virtio-net allows network performance to scale with the number of vCPUs and allows for parallel packet processing by creating multiple TX and RX queues.
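      A minimal sketch of what this looks like in the guest XML (virsh edit), assuming a vhost-backed virtio interface on a bridge named br0 and four queues:
      <interface type='bridge'>
        <source bridge='br0'/>
        <model type='virtio'/>
        <driver name='vhost' queues='4'/>
      </interface>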
    • Isolate CPU Resource in a NUMA Node. You can improve performance of VM-Series on KVM by isolating the CPU resources of the guest VM to a single non-uniform memory access (NUMA) node.
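      For example, you can inspect the host topology with numactl --hardware and then restrict the guest to a single node in its XML; the node and CPU numbers below are placeholders:
      <vcpu placement='static' cpuset='8-15'>8</vcpu>
      <numatune>
        <memory mode='strict' nodeset='1'/>
      </numatune>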
