Configure Panorama to Secure VM Workloads and Kubernetes Clusters

AI Runtime Security

This page covers the Panorama configurations you need to secure your VM workloads/vNets and Kubernetes clusters, and to route traffic, after you download and apply the AI Runtime Security: Network intercept deployment Terraform template for Panorama from Strata Cloud Manager.
Where Can I Use This? What Do I Need?
  • Secure VMs and Kubernetes
On this page you will:
  • Configure the following in Panorama:
    • Interfaces
    • Zones
    • NAT Policy
    • Routers
    • Security Policies
  • Secure VM workloads
  • Secure Kubernetes clusters
  • Install a Kubernetes application with Helm
  • (Optional) Configure labels in your cloud environment for manual deployments.
AI Runtime Security is only supported for public clusters on GCP, Azure, and AWS cloud platforms.
Prerequisites

GCP

This guide provides step-by-step instructions for the post-deployment configurations in Panorama and GCP that protect VM workloads and Kubernetes clusters. The configurations include setting up interfaces, zones, NAT policies, routers, and security policies.
  1. Configure Interfaces:
    1. Navigate to Network > Interfaces.
    2. Configure the Ethernet interfaces: create a Layer 3 interface for eth1/1 and eth1/2:
      • Interfaces: eth1/1 and eth1/2
      • Location: Specify the location if applicable
      • Interface Type: Layer3
      • IP Address: Dynamic (DHCP Client)
    3. Navigate to Network > Interfaces > Loopback:
      • In IPv4s, enter the ILB (Internal Load Balancer) private IP address.
      • Set Security Zone to trust for eth1/2 and untrust for eth1/1.
      • Ensure VR (Virtual Router) is set to default or the same as eth1/2.
  2. Configure Zones.
  3. Configure a logical router:
    1. Create a Logical Router and add the Layer 3 interfaces (eth1/1 and eth1/2).
    2. Configure a Static Route with the ILB static IP addresses for routing. Use the trust interface gateway IP address.
    You don’t have to configure a virtual router, because advanced routing is enabled by default on AI Runtime Security: Network intercept.
  4. Add a security policy. Set the action to Allow.
    Ensure the policy allows health checks from the GCP Load Balancer (LB) pool to the internal LB IP from Panorama. Check session IDs to ensure the firewall responds correctly on the designated interfaces.
    Select Commit → Commit and Push to push the policy configurations to the AI Runtime Security: Network intercept.
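The health-check allowance in this policy can be sanity-checked outside the firewall. GCP load balancer health checks originate from the documented ranges 35.191.0.0/16 and 130.211.0.0/22 (the first two masked `source_ranges` entries in the Terraform template later on this page appear to be these ranges). A minimal standard-library Python sketch, with illustrative source IPs, that tests whether an address belongs to them:

```python
import ipaddress

# GCP's documented health-check source ranges (assumption: pass-through
# load balancer probes; confirm against GCP docs for your LB type).
GCP_HEALTH_CHECK_RANGES = [
    ipaddress.ip_network("35.191.0.0/16"),
    ipaddress.ip_network("130.211.0.0/22"),
]

def is_gcp_health_check(src_ip: str) -> bool:
    """Return True if src_ip falls in a GCP health-check range."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in GCP_HEALTH_CHECK_RANGES)

print(is_gcp_health_check("35.191.10.1"))  # a health-check probe source
print(is_gcp_health_check("10.0.0.5"))     # ordinary VPC traffic
```

If a probe source tests True here but health checks still fail, check the security policy and the loopback interface configuration rather than the VPC firewall rules.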

Configurations to Secure VM Workloads

  1. Configure Static Routes for VPC endpoints.
    1. For VPC subnet:
      • Edit the IPv4 Static Routes and add a route with the VPC IPv4 range CIDR subnets as the destination.
      • Set the Interface to eth1/2 (trust).
      • Set the Next Hop to the trust subnet gateway IP address from Strata Cloud Manager.
      • Update the static route.
      Save the Logical Router.
  2. Push the policy configurations to the AI Runtime Security: Network intercept managed by Panorama (Panorama > Scheduled Config Push).

Configurations to Secure the Kubernetes Clusters

  1. Add the pod and service IP subnets to the AI Runtime Security trust firewall rules:
    1. Get the IP addresses for the pod and service subnets:
      1. In the GCP console, go to Kubernetes Engine > Clusters.
      2. Select a cluster and copy the Cluster Pod IPv4 range and IPv4 Service range IP addresses.
  2. Follow the AI Runtime Security: Network intercept deployment in GCP to save and download the Terraform template.
  3. Edit the Terraform template to allow the following IP addresses in your VPC network firewall rules:
    • Navigate to the `<unzipped-folder>/architecture/security_project` directory.
    • Edit the `terraform.tfvars` file to add the copied IP addresses list to your `source_ranges`.
      firewall_rules = {
        allow-trust-ingress = {
          name             = "allow-trust-vpc"
          # The first two ranges are for health-check packets.
          # Add your app VPC, pod, and service CIDRs.
          source_ranges    = ["35.xxx.0.0/16", "130.xxx.0.0/22", "192.xxx.0.0/16", "10.xxx.0.0/14", "10.xx.208.0/20"]
          priority         = "1000"
          allowed_protocol = "all"
          allowed_ports    = []
        }
      }
  4. Apply the Terraform:
    terraform init
    terraform plan
    terraform apply
  5. Add Static Routes on the logical router for Kubernetes workloads:
    1. Pod subnet:
      • Edit the IPv4 Static Routes and add a route with the Pod IPv4 range CIDR as the destination.
      • Set the Interface to eth1/2 (trust).
      • Set the Next Hop to the trust subnet gateway IP address from Panorama.
    2. Service subnet:
      • Edit the IPv4 Static Routes and add a route with the IPv4 Service range CIDR as the destination.
      • Set the Interface to eth1/2 (trust).
      • Set the Next Hop to the trust subnet gateway IP address from Panorama.
  6. Add a source NAT policy for outbound traffic:
    • Source Zone: trust
    • Destination Zone: untrust (eth1/1)
    • Policy Name: trust2untrust or similar
    • Set the Interface to eth1/1 (the translation happens at eth1/1).
      If needed, create a complementary rule for the reverse direction (for example, untrust2trust).
  7. Push the policy configurations to the AI Runtime Security: Network intercept managed by Panorama (Panorama > Scheduled Config Push).
    If you have a Kubernetes cluster running, follow the section to install a Kubernetes application with Helm.

Install a Kubernetes Application with Helm

Follow these steps to install a Kubernetes application on a Kubernetes cluster.
  1. Change the directory to the Helm folder:
    cd <unzipped-folder>/architecture/helm
    GKE Autopilot clusters don’t support Helm deployments due to restrictions on modifying the kube-system namespace.
  2. Install the Helm chart:
    helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
  3. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  4. Check the pod status:
    kubectl get pods -A
    # Verify that pods with names similar to `pan-cni-*****` are present.
  5. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  6. Check the services running in the `kube-system` namespace:
    kubectl get svc -n kube-system
    # Ensure that services `pan-cni-sa` and `pan-plugin-user-secret` are listed:
    # NAME                    TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
    # pan-cni-sa              ClusterIP  10.xx.0.1   <none>       443/TCP  24h
    # pan-plugin-user-secret  ClusterIP  10.xx.0.2   <none>       443/TCP  24h
  7. Annotate the application `yaml` or `namespace` so that the traffic from the new pods is redirected to AI Runtime Security: Network intercept for inspection.
    annotations:
      paloaltonetworks.com/firewall: pan-fw
    For example, for all new pods in the “default” namespace:
    kubectl annotate namespace default paloaltonetworks.com/firewall=pan-fw
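The `kubectl annotate` command above sets a single annotation on the namespace's metadata. If you prefer a patch-based workflow, `kubectl patch` with a merge-patch body is an equivalent alternative (not part of the deployment guide); this sketch builds that body with the Python standard library:

```python
import json

# Build the merge-patch body equivalent to:
#   kubectl annotate namespace default paloaltonetworks.com/firewall=pan-fw
patch = {
    "metadata": {
        "annotations": {"paloaltonetworks.com/firewall": "pan-fw"}
    }
}
body = json.dumps(patch)
print(body)
# Hypothetical alternative usage:
#   kubectl patch namespace default -p '<body>'
```

Either way, only pods created after the annotation is applied are redirected for inspection; restart existing pods to bring them under the policy.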

Azure

This guide provides step-by-step instructions for the post-deployment configurations in Panorama and Azure that protect VM workloads and Kubernetes clusters. The configurations include setting up interfaces, zones, NAT policies, virtual routers, and security policies.

Configure Panorama

Interfaces

  1. Navigate to Network > Interfaces.
  2. Set the Configuration Scope to your AI Runtime Security folder.
  3. Select Add Interface.
    • In the Ethernet tab, configure a Layer 3 interface for eth1/1 (trust) and eth1/2 (untrust).
    • Configure Network > Interfaces > Loopback to receive health checks from each load balancer.

Zones

  1. Configure Zones (Network → Zones).
  2. Select Add Zone.
  3. Enter a Name.
  4. Select Layer3 Interface type.
  5. In Interfaces, add $eth1 interface for trust zone and $eth2 interface for untrust zone.
  6. Save.

NAT

Configure the NAT policies for inbound and outbound traffic:
  1. Configure NAT policy for inbound traffic:
    1. Enter a Name indicating inbound traffic (for example, inbound-web).
    2. Original Packet:
      • In Source Zones, click add and select the untrust zone.
    3. Destination:
      • Select the untrust zone.
      • Select any Interface.
      • In Addresses, click the add (+) icon and select the public Elastic Load Balancer (ELB) address.
    4. Choose any Service.
    5. Translated Packet:
      • In Translation, select Both.
      • In Source Address Translation, select the Dynamic IP and Port translation type.
      • In Choice, select Interface Address.
      • In Interface, select eth1 (ethernet1/1).
      • In Destination Address Translation, set the Translation Type to Static IP.
      • In Choice, select IP Address.
      • Select the destination Translated Address.
    6. Save.
  2. Configure NAT Policy for Outbound traffic:
    1. Enter a Name indicating outbound traffic (for example, outbound-internet).
    2. Original Packet:
      • In Source zones, click add and select trust zone.
      • In Addresses, click the add (+) icon and select the app-vnet and the Kubernetes pods CIDR you want to secure.
    3. Destination:
      • Select untrust destination zone.
      • Select any interface.
    4. Choose any Service.
    5. Translated Packet:
      • In Translation, select Source Address Only.
      • In Source Address Translation, select the Dynamic IP and Port translation type.
      • In Choice, select Interface Address.
      • In Interface, select eth2(ethernet1/2).
      • In Choice, select IP address.
    6. Save.

Routers

Configure private and public virtual routers:
Azure health probe fails with a single virtual router (VR). Create multiple VRs to ensure probe success.
  1. Configure routing in Panorama (Network → Routing).
  2. Enter a Name indicating private and public routers (for example, vr-private and vr-public).
  3. In Interfaces, select eth1(ethernet1/1) for vr-private route and eth2(ethernet1/2) for vr-public route.
    Refer to the Interfaces section above to see how to configure the $eth1 and $eth2 interfaces.
  4. In Advanced Settings, select Edit to configure the IPV4 Static Routes for vr-private and vr-public.
    1. Select Add Static Route and add the following routes:
    2. Application routing:
      1. Enter a Name (for example, app-vnet).
      2. In Destination, enter the CIDR address of your application.
      3. In Next Hop:
        • For vr-private, select IP Address and enter the gateway IP address of the private interface (eth1/1) in the IP Address.
          The gateway IP address is the first usable IP in the subnet's range (example, 192.168.1.1 for a /24 subnet). To find it, go to Azure Portal > Virtual Networks > [Your Virtual Network] > Subnets > [Private Subnet].
        • For vr-public, select Next Router and select the `vr-private` in the Next Router.
      4. In Interface, select eth1(ethernet1/1) subnet for `vr-private` and None for `vr-public`.
    3. Default routing:
      1. Enter a Name.
      2. In Destination, enter 0.0.0.0/0.
      3. In Next Hop:
        • For vr-private, select Next Router and enter the `vr-public` in the Next Router.
        • For vr-public, select IP Address and enter the gateway IP address of the vr-public interface (eth1/2) in the IP Address.
      4. In Interface, choose None for `vr-private` and eth2(ethernet1/2) for `vr-public`.
      5. Add or Update.
    4. Azure Load Balancer’s health probe:
      1. Enter a Name.
      2. In Destination, enter the IP address of the Azure Load Balancer’s health probe (168.63.129.16/32).
      3. In Next Hop, select IP Address for vr-private and vr-public.
        • In IP Address, enter the gateway IP address of the corresponding interfaces.
      4. In Interface, select eth1(ethernet1/1) for vr-private and eth2(ethernet1/2) for vr-public.
    5. Add.
  5. Save.
  6. Static routes summary for `vr-private`:

    | Route Type            | Name        | Destination      | Next Hop    | Next Hop Value                   | Interface          |
    |-----------------------|-------------|------------------|-------------|----------------------------------|--------------------|
    | Application routing   | app-vnet    | Application CIDR | IP Address  | Gateway IP address of vr-private | eth1 (ethernet1/1) |
    | Default routing       | default     | 0.0.0.0/0        | Next Router | vr-public                        | None               |
    | Azure LB health probe | azure-probe | 168.63.129.16/32 | IP Address  | Subnet gateway IP address        | eth1 (ethernet1/1) |

  7. Static route summary for `vr-public`:

    | Route Type            | Name        | Destination      | Next Hop    | Next Hop Value                  | Interface          |
    |-----------------------|-------------|------------------|-------------|---------------------------------|--------------------|
    | Application routing   | app-vnet    | Application CIDR | Next Router | vr-private                      | None               |
    | Default routing       | default     | 0.0.0.0/0        | IP Address  | Gateway IP address of vr-public | eth2 (ethernet1/2) |
    | Azure LB health probe | azure-probe | 168.63.129.16/32 | IP Address  | Subnet gateway IP address       | eth2 (ethernet1/2) |
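As the note in the application-routing step says, the gateway IP is the first usable IP in the subnet's range. A minimal standard-library Python sketch that derives it from a subnet CIDR (the CIDRs shown are illustrative):

```python
import ipaddress

def subnet_gateway(cidr: str) -> str:
    """First usable host address in the subnet -- the address Azure
    reserves for the default gateway (assumption: Azure's convention
    of the first host IP, e.g. 192.168.1.1 for 192.168.1.0/24)."""
    net = ipaddress.ip_network(cidr, strict=True)
    return str(net.network_address + 1)

print(subnet_gateway("192.168.1.0/24"))  # 192.168.1.1
print(subnet_gateway("10.10.4.0/22"))    # 10.10.4.1
```

Use the result as the Next Hop Value for the IP Address routes in the tables above; always confirm against the subnet's gateway shown in the Azure Portal.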

Security Policy

  1. Add a security policy.
    Ensure the policy allows health checks from the Azure Load Balancer (LB) pool to the internal LB IP from Panorama. Check session IDs to ensure the firewall responds correctly on the designated interfaces.
  2. Select Commit → Commit and Push to push the policy configurations to the AI network intercept (AI firewall).

Configurations to Secure VM Workloads

  1. Log in to the Panorama web interface.
    Configure routes for vNet endpoints as explained in the Routers section above to ensure there is a route to your application.
  2. Select Commit and Push to Devices to push the policy configurations to your AI network intercept managed by Panorama (AI firewall).
  3. Create or update the NAT policy (refer to the NAT section above) to secure the VM workloads. Set the source address of the application you want to secure.

Configurations to Secure the Kubernetes Clusters

  1. Configure static routes (refer to the routes section above) on the Logical Router for Kubernetes workloads.
  2. Configure the following pod and service subnet static routes for the Kubernetes workloads:
    1. Pod subnet and service subnet for vr-private:

      | Route Type     | Name          | Destination         | Next Hop   | Next Hop Value            | Interface          |
      |----------------|---------------|---------------------|------------|---------------------------|--------------------|
      | Pod subnet     | pod_route     | Pod IPv4 range CIDR | IP Address | Subnet gateway IP address | eth1 (ethernet1/1) |
      | Service subnet | service_route | 172.16.0.0/24       | IP Address | Subnet gateway IP address | eth1 (ethernet1/1) |

    2. Pod subnet and service subnet for vr-public:

      | Route Type     | Name          | Destination         | Next Hop    | Next Hop Value | Interface |
      |----------------|---------------|---------------------|-------------|----------------|-----------|
      | Pod subnet     | pod_route     | Pod IPv4 range CIDR | Next Router | vr-private     | None      |
      | Service subnet | service_route | 172.16.0.0/24       | Next Router | vr-private     | None      |
  3. Refer to the NAT policy in the above section to secure the Kubernetes clusters and set the source address of the Kubernetes pods CIDR you want to secure.
  4. Select Commit and Push to Devices to push the policy configurations to your AI network intercept managed by Panorama (AI firewall).

Install a Kubernetes Application with Helm

Follow these steps to install a Kubernetes application on a Kubernetes cluster.
  1. Change the directory to the Helm folder:
    cd <unzipped-folder>/architecture/helm
  2. Install the Helm chart:
    helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
    Enable "Bring your own Azure virtual network" to discover Kubernetes-related vNets:
    1. In Azure Portal, navigate to Kubernetes services → [Your Cluster]→ Settings→ Networking.
    2. Under Network configuration, select Azure CNI as the Network plugin, then enable Bring your own Azure virtual network.
  3. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  4. Check the pod status:
    kubectl get pods -A
    # Verify that pods with names similar to `pan-cni-*****` are present.
  5. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  6. Check the services running in the `kube-system` namespace:
    kubectl get svc -n kube-system
    # Ensure that services `pan-cni-sa` and `pan-plugin-user-secret` are listed:
    # NAME                    TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
    # pan-cni-sa              ClusterIP  10.xx.0.1   <none>       443/TCP  24h
    # pan-plugin-user-secret  ClusterIP  10.xx.0.2   <none>       443/TCP  24h
  7. Annotate the application `yaml` or `namespace` so that the traffic from the new pods is redirected to the AI network intercept (AI firewall) for inspection.
    annotations:
      paloaltonetworks.com/firewall: pan-fw
    For example, for all new pods in the "default" namespace:
    kubectl annotate namespace default paloaltonetworks.com/firewall=pan-fw

AWS

AI Runtime Security post deployment configurations in Panorama and AWS to protect VM workloads and Kubernetes clusters.

Configure Panorama

Interfaces

  1. Navigate to Network > Interfaces.
  2. Set the Configuration Scope to your AI Runtime Security folder.
  3. Select Add Interface.
    • In the Ethernet tab, configure a Layer 3 interface for eth1/1 (trust).

Zone

  1. Configure Zones (Network → Zones).
  2. Select Add Zone.
  3. Enter a Name.
  4. Select Layer3 Interface type.
  5. In Interfaces, add $eth1 interface for trust zone.
  6. Save.

Security Policy

  1. Add a security policy and set the action to Allow.
  2. Select Commit → Commit and Push to push the policy configurations to the AI network intercept (AI firewall).

Install a Kubernetes Application with Helm

Follow these steps to install a Kubernetes application on a Kubernetes cluster by applying the Helm chart.
Prerequisites:
  • Go to your downloaded Terraform template and navigate to `<unzipped-folder>/architecture/helm`.
  • Apply the Terraform for the `security_project` as shown in the AWS deployment workflow.
    Deploying the Terraform for the security project creates the GWLB endpoints in your AWS account.
  • Open the `values.yaml` file found in the path: `<unzipped-folder>/architecture/helm`.
  • Update the `endpoints1` and `endpoints2` values with your GWLB endpoints IP addresses. Below is a sample `values.yaml` file:
    # Default values for ai-runtime-security.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    # Configure a VPC endpoint per zone. This makes sure Kubernetes
    # traffic is not sent across zones. Endpoints can be added or
    # removed based on requirements and zone availability.

    # GWLB VPC endpoint zone1 IP address.
    endpoints1: ""
    endpoints1zone: us-east-1a
    # GWLB VPC endpoint zone2 IP address.
    endpoints2: ""
    endpoints2zone: us-east-1b
    # PAN CNI image.
    cniimage: gcr.io/pan-cn-series/airs/pan-cni:latest
    # Resource namespace name.
    namespace: kube-system
    # Kubernetes ClusterID value range 1-2048.
    clusterid: 1
  • Apply the Helm chart by following these steps.
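Before running `helm install`, you can sanity-check the edited `values.yaml` fields described above. This standard-library Python sketch assumes the flat `key: value` layout of the sample file (it is not a general YAML parser) and flags empty endpoint IPs or an out-of-range `clusterid`:

```python
import ipaddress
import re

def check_values(text: str) -> list:
    """Return a list of problems found in a flat key: value values file."""
    problems = []
    values = dict(
        (m.group(1), m.group(2).strip().strip('"'))
        for m in re.finditer(r'^(\w+):\s*(.*)$', text, re.MULTILINE)
    )
    for key in ("endpoints1", "endpoints2"):
        ip = values.get(key, "")
        if not ip:
            problems.append(f"{key} is empty -- set your GWLB endpoint IP")
        else:
            try:
                ipaddress.ip_address(ip)
            except ValueError:
                problems.append(f"{key} is not a valid IP address: {ip!r}")
    cid = values.get("clusterid", "")
    if not (cid.isdigit() and 1 <= int(cid) <= 2048):
        problems.append("clusterid must be an integer in the range 1-2048")
    return problems

# Illustrative fragment: endpoints1 left empty, as in the sample file.
sample = 'endpoints1: ""\nendpoints2: "10.0.1.25"\nclusterid: 1\n'
print(check_values(sample))
```

An empty result means the checked fields look plausible; it does not verify that the IPs are really your GWLB endpoint addresses.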
  1. Change the directory to the Helm folder:
    cd <unzipped-folder>/architecture/helm
  2. Install the Helm chart:
    helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
  3. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  4. Check the pod status:
    kubectl get pods -A
    # Verify that pods with names similar to `pan-cni-*****` are present.
  5. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  6. Check the services running in the `kube-system` namespace:
    kubectl get svc -n kube-system
    # Ensure that services `pan-cni-sa` and `pan-plugin-user-secret` are listed:
    # NAME                    TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
    # pan-cni-sa              ClusterIP  10.xx.0.1   <none>       443/TCP  24h
    # pan-plugin-user-secret  ClusterIP  10.xx.0.2   <none>       443/TCP  24h
  7. Annotate the application `yaml` or `namespace` so that the traffic from the new pods is redirected to the AI network intercept (AI firewall) for inspection.
    annotations:
      paloaltonetworks.com/firewall: pan-fw
    For example, for all new pods in the "default" namespace:
    kubectl annotate namespace default paloaltonetworks.com/firewall=pan-fw