Deploy and Manage Prisma AIRS HSF Clusters on KVM

Deploy Prisma AIRS™ HSF clusters on Linux/KVM hypervisors using Strata Cloud Manager for Terraform generation and Panorama® for network security management and configuration.
Where Can I Use This?
  • Prisma® AI Runtime Security™

What Do I Need?
  • PAN-OS® 12.1.5 & later
  • A controller or a VM to apply Terraform, with SSH key authentication for KVM host root access
Prisma AIRS HSF cluster deployment on Linux/KVM hypervisors brings enhanced scalability and performance to KVM environments, providing flexible infrastructure choices. This topic provides step-by-step instructions for KVM host preparation, Terraform configuration generation through Strata Cloud Manager, deployment execution, initial validation, and ongoing management operations in your environment.

Prerequisites

The HSF solution utilizes an automated deployment and centralized management architecture:
  • Deployment: Clusters are deployed through Terraform files generated by Strata Cloud Manager (SCM).
  • Management: Centralized management (connectivity, monitoring, upgrades, and licensing) is managed through Panorama.
  • Lifecycle Operations: Node addition, deletion, and cluster destruction are performed directly through Terraform.
  • Node Modifications: Granular changes such as vCPU, memory, or interface adjustments are made directly on the node or through libvirt API calls.
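As a sketch of the last point, granular node changes can be made with standard libvirt tooling. The domain name hsf-node-1 below is a placeholder for illustration, not a name the deployment guarantees:

```shell
# Hypothetical domain name; substitute your actual node's VM name.
NODE=hsf-node-1

# Adjust the vCPU count on the running node; --config persists the
# change across restarts, --live applies it immediately.
sudo virsh setvcpus "$NODE" 8 --config --live

# Adjust memory (value in KiB; 16777216 KiB = 16 GiB).
sudo virsh setmem "$NODE" 16777216 --config --live

# List the node's interfaces before attaching or detaching one.
sudo virsh domiflist "$NODE"
```

Equivalent changes can also be made programmatically through the libvirt API rather than the virsh CLI.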

KVM Host Infrastructure

Prepare your RHEL 9 KVM host by enabling hardware virtualization, SR-IOV, and IOMMU in the BIOS/kernel, installing necessary dependencies (Libvirt, OpenvSwitch), and ensuring system compatibility for virtio, OVS networking, and SR-IOV hostdev.
  • libvirt user setup: Configure /etc/libvirt/qemu.conf by adding user = "libvirt-qemu", group = "kvm", dynamic_ownership = 1, and security_driver = "none", then restart libvirtd with sudo systemctl restart libvirtd.
  • Permissions: Provision a sudo user for deployment.
  • Synchronize all your KVM hosts using Network Time Protocol (NTP).
  • The root partition on each host should have sufficient space to hold the VM qcow2 images and the host software. The recommended minimum root partition size is 256GB.
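The libvirt user setup described above can be sketched as the following sequence; these are the exact settings named in the bullet, appended to /etc/libvirt/qemu.conf:

```shell
# Append the required settings to /etc/libvirt/qemu.conf
# (review the file first in case these keys are already set).
sudo tee -a /etc/libvirt/qemu.conf <<'EOF'
user = "libvirt-qemu"
group = "kvm"
dynamic_ownership = 1
security_driver = "none"
EOF

# Restart libvirtd so the changes take effect.
sudo systemctl restart libvirtd

# Confirm the host clock is NTP-synchronized.
timedatectl status
```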
KVM Host Network Preparation
  1. Install the necessary dependencies and packages on your RHEL 9 KVM server.
  2. Set the number of VFs using the following command. Note that a plain sudo echo 8 > ... fails because the redirection runs without root privileges, so use tee. For example, for 8 VFs:
    echo 8 | sudo tee /sys/bus/pci/devices/<your-pci-bus-id>/sriov_numvfs
  3. Create virtual network interfaces for management, cluster control (CC), cluster interconnect (CI), and traffic interconnect (TI). In addition, create external data interfaces, following the same pattern shown below for the management, CI, and TI interfaces.
    1. You may use the following sample XML to define a basic bridge network for management, CI, and TI interfaces:
      <network>
        <name>br0-net</name>
        <forward mode='bridge'/>
        <bridge name='br0'/>
      </network>
    2. Define and start your network using the following commands:
      sudo virsh net-define /tmp/br0-net.xml
      sudo virsh net-autostart br0-net
      sudo virsh net-start br0-net
    3. Verify your network is listed using:
      virsh net-list --all
    4. Provision SR-IOV Virtual Functions (VFs) if you use SR-IOV for EI and TI interfaces.
      If you are using SR-IOV, you may follow the sample XML to create a VNet:
      <network>
        <name>yxxx-sriov-vf</name>
        <forward mode='hostdev' managed='yes'>
          <pf dev='enp202s0f0np0'/>
        </forward>
      </network>
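A quick sanity check of the network preparation above; substitute your actual PCI bus ID for the placeholder:

```shell
# Confirm the VF count was applied to the physical function.
cat /sys/bus/pci/devices/<your-pci-bus-id>/sriov_numvfs

# VFs appear as additional PCI devices on the host.
lspci | grep -i "Virtual Function"

# Confirm the libvirt networks are defined, active, and set to autostart.
virsh net-list --all
```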
KVM Host Permissions for Local or Remote Deployments
  1. Prepare a dedicated Linux machine as your Terraform controller.
  2. Install HashiCorp Terraform CLI from https://developer.hashicorp.com/terraform/install.
    # Install xsltproc for PCI devices
    sudo dnf install -y xsltproc
    # Install Terraform
    sudo dnf install -y dnf-plugins-core
    sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
    sudo dnf install -y terraform
The HSF solution supports two modes of Terraform deployment:
  • Local Deployment - Apply Terraform on each KVM host by copying the Terraform folders to each respective host. As a prerequisite for this mode, the Terraform packages must be installed on each host as outlined in the previous section.
  • Remote Deployment - Use a central controller to apply Terraform on all intended KVM hosts. The deployment is managed by the deploy_cluster.sh script packaged with the Terraform files.
For Remote Deployment:
A remote controller VM is required at the location where the Terraform deployment will be executed. Ensure SSH key authentication is set up for the sudo user on your KVM hosts for remote deployment.
  1. If you need root login permissions, explicitly allow SSH key authentication for the root user in your remote KVM server's SSH configuration file (/etc/ssh/sshd_config) by setting PermitRootLogin yes.
  2. If you change this setting, restart the SSH service: sudo systemctl restart sshd.
  3. Create a new SSH key pair: ssh-keygen.
  4. Add your private key to the agent: ssh-add ~/.ssh/id_rsa.
  5. Ensure your public key (~/.ssh/id_rsa.pub) is copied into the root user's ~/.ssh/authorized_keys file on the remote KVM server:
    ssh-copy-id -i ~/.ssh/id_rsa.pub <kvm-sudo-user>@<remote-server-ip>
  6. Ensure the known_hosts file on your controller has all the server keys to avoid a key mismatch.
  7. Delete entries pertaining to your KVM host from ~/.ssh/known_hosts.
  8. Run a keyscan to add your KVM host keys: ssh-keyscan <remote-server-ip> >> ~/.ssh/known_hosts.
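Steps 3 through 8 above can be sketched as a single sequence on the controller; the user and server values are placeholders:

```shell
# Generate a key pair and load it into the SSH agent.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Install the public key on the KVM host
# (substitute your sudo user and the host's IP address).
ssh-copy-id -i ~/.ssh/id_rsa.pub <kvm-sudo-user>@<remote-server-ip>

# Remove any stale entry for the host, then re-add its current key
# so later connections do not fail with a key mismatch.
ssh-keygen -R <remote-server-ip>
ssh-keyscan <remote-server-ip> >> ~/.ssh/known_hosts
```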
Panorama Prerequisites
  1. Deploy Panorama - PAN-OS 12.1.5 and later.
  2. Configure the licensing API key in Panorama so that VMs are delicensed during an un-deployment event. Configure this key from the Panorama CLI:
    request license api-key set key <key>
    You can generate the API key from the CSP portal.
  3. Generate VM auth key on the Panorama.
  4. For Logs, it is recommended to use a log collector in Panorama.
  5. Verify that the Panorama is in Panorama mode by executing the command:
    show system info | match system-mode
  6. Install the Orchestrator plugin on Panorama to obtain default/reference templates.
  7. Clone the default template and modify it as needed for cluster and traffic configurations. You may choose to uninstall the plugin after cloning.
  8. Create the firewall cluster on Panorama using the same Cluster name you intend to provide on SCM for the terraform and type AI-HSF.
  9. Create a Template for external data interfaces. This template will be referenced while creating the Template Stack.
  10. Create a Device Group with the exact name you intend to use in the SCM for Terraform configurations. Ensure the template(s) are referenced within this Device Group.
  11. Create a Template Stack with the exact name you intend to use in the SCM for Template Stack Name. Ensure that you select the following two options:
    • Automatically push content when a software device (VM, container, or ZTP) registers to Panorama, and Enable clustering.
    • Under Templates, add the templates you created for external data interfaces and the fixed template you cloned from AI-HSF-CLUSTERING-DO-NOT-MODIFY.
  12. Commit the configuration to Panorama.
  13. Download the same content and AV versions on the Local and Device Deployment page of the Panorama.
  14. Schedule download and installation of App & Threat and AV content at the same time on both the local and Device Deployment pages of the Panorama.
  • Ensure the latest AV and content installed on the Panorama is also downloaded and deployed to devices. This is because any content released after the PAN-OS image will have a version higher than the content packaged with the image.
  • When scheduling future content upgrades for the Panorama and cluster nodes, ensure that the content deployment on both the local and device environments is set for the same time interval.