Configure Panorama to Secure VM Workloads and Kubernetes Clusters

Panorama configurations to secure your VM workloads/vNets and Kubernetes clusters after you deploy Panorama-managed Prisma AIRS AI Runtime: Network intercept.
Where Can I Use This? What Do I Need?
  • Secure VMs and Kubernetes Clusters
This page covers the configurations you need to secure your VM workloads/vNets and Kubernetes clusters, and route traffic after you apply the Panorama-managed deployment Terraform template in your cloud environment.
On this page, you will:
  • Configure the following in Panorama:
    • Interfaces
    • Zones
    • NAT Policy
    • Routers
    • Security Policies
  • Secure VM workloads only for public clouds
  • Secure Kubernetes clusters in public and private clouds
  • Install a Kubernetes application with Helm
    • (Optional) Configure labels in your cloud environment for manual deployments.
      The deployment Terraform you generate from Strata Cloud Manager automatically adds the required labels to organize your Prisma AIRS AI Runtime: Network intercept. For manual deployments, ensure you have the following labels (key-value pairs) in your Terraform template.
      • Add the following labels (key-value pairs) under Tags in the Terraform template file under your downloaded path `<azure|aws-deployment-terraform-path>/architecture/security_project/terraform.tfvars`. The value of each key must be unique.
      • For GCP: `paloaltonetworks_com-trust` and `paloaltonetworks_com-occupied`.
      • For Azure and AWS: `paloaltonetworks.com-trust` and `paloaltonetworks.com-occupied`.
      • Ensure the network interface name in the security_project Terraform is suffixed by `-trust-vpc`.
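For a manual Azure/AWS deployment, the tags block in `terraform.tfvars` might look like the sketch below. The variable name and values are hypothetical; each value must be unique in your environment, and GCP deployments use the underscore spelling of the keys:

```hcl
# Hypothetical tags for a manual deployment (Azure/AWS key spelling;
# GCP uses paloaltonetworks_com-trust / paloaltonetworks_com-occupied).
tags = {
  "paloaltonetworks.com-trust"    = "airs-trust-fw1"
  "paloaltonetworks.com-occupied" = "airs-occupied-fw1"
}
```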
Prisma AIRS AI Runtime: Network intercept is supported only for public clusters on the GCP, Azure, and AWS cloud platforms and for a few private clouds such as OpenShift, ESXi, and KVM.
Prerequisites

GCP

Prisma AIRS AI Runtime: Network intercept post-deployment configurations in Panorama and GCP to protect VM workloads and Kubernetes clusters.
This guide provides step-by-step instructions to configure Panorama for securing VM workloads and Kubernetes clusters in GCP. The configurations include setting up interfaces, zones, NAT policies, virtual routers, and security policies.
Where Can I Use This? What Do I Need?
  • Secure VMs and Kubernetes Clusters in Panorama
  1. Configure Interfaces:
    1. Navigate to Network > Interfaces.
    2. Configure Ethernet interfaces: add a Layer 3 interface for each of eth1/1 and eth1/2:
      • Interfaces: eth1/1 and eth1/2
      • Location: Specify the location if applicable
      • Interface Type: Layer3
      • IP Address: Dynamic (DHCP Client)
    3. Navigate to Network > Interfaces > Loopback:
      • In IPv4s, enter the private IP address of the ILB (Internal Load Balancer).
      • Set Security Zone to trust for eth1/2 and untrust for eth1/1.
      • Ensure VR (Virtual Router) is set to default or the same as eth1/2.
  2. Configure Zones.
  3. Configure a logical router:
    1. Create a Logical Router and add the Layer 3 interfaces (eth1/1 and eth1/2).
    2. Configure a Static Route with the ILB static IP addresses for routing. Use the trust interface gateway IP address.
    You don’t have to configure a virtual router because advanced routing is enabled on Prisma AIRS AI Runtime: Network intercept by default.
  4. Add a security policy. Set the action to Allow.
    Ensure the policy allows health checks from the GCP Load Balancer (LB) pool to the internal LB IP from Panorama. Check session IDs to ensure the firewall responds correctly on the designated interfaces.
    Select Commit → Commit and Push to push the policy configurations to Prisma AIRS AI Runtime: Network intercept.

Configurations to Secure VM Workloads

  1. Configure Static Routes for VPC endpoints.
    1. For VPC subnet:
      • Edit the IPv4 Static Routes and add the VPC IPv4 range CIDR subnets route.
      • Set the Next Hop as eth1/2.
      • Set the Destination as the trust subnet gateway IP from Strata Cloud Manager.
      • Update the static route.
      Save the Logical Router.
  2. Push the policy configurations to the Prisma AIRS AI Runtime: Network intercept managed by Panorama (Panorama > Scheduled Config Push).

Configurations to Secure Kubernetes Clusters

  1. Add pod and service IP Subnets to Prisma AIRS AI Runtime: Network intercept trust firewall rules:
    1. Get the IP addresses for the pod and service subnets:
      1. Go to Kubernetes Engine > Clusters.
      2. Select a cluster and copy the Cluster Pod IPv4 range and IPv4 Service range IP addresses.
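If you prefer the CLI, the same two ranges appear in the cluster description printed by `gcloud container clusters describe <cluster> --format=json`. The sketch below extracts them from a saved copy of that JSON; the sample content and the field names are assumptions to verify against your own output:

```python
# Extract the pod and service ranges from saved `gcloud container
# clusters describe --format=json` output. The sample is hypothetical.
import json

sample = """
{"name": "demo-cluster",
 "clusterIpv4Cidr": "10.4.0.0/14",
 "servicesIpv4Cidr": "10.8.0.0/20"}
"""

cluster = json.loads(sample)
pod_cidr = cluster["clusterIpv4Cidr"]       # Cluster Pod IPv4 range
service_cidr = cluster["servicesIpv4Cidr"]  # IPv4 Service range
print(pod_cidr, service_cidr)
```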
  2. To save and download the Terraform template, follow the section on Deploy Prisma AIRS AI Runtime: Network Intercept in GCP.
  3. Edit the Terraform template to allow the following IP addresses in your VPC network firewall rules:
    • Navigate to the `<unzipped-folder>/architecture/security_project` directory.
    • Edit the `terraform.tfvars` file to add the copied IP addresses list to your `source_ranges`.
      firewall_rules = {
        allow-trust-ingress = {
          name             = "allow-trust-vpc"
          # The first two ranges are for health-check packets.
          # Add your app VPC/Pod/Service CIDRs.
          source_ranges    = ["35.xxx.0.0/16", "130.xxx.0.0/22", "192.xxx.0.0/16", "10.xxx.0.0/14", "10.xx.208.0/20"]
          priority         = "1000"
          allowed_protocol = "all"
          allowed_ports    = []
        }
      }
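Before applying, you can sanity-check the CIDR list destined for `source_ranges` with a short optional sketch (the addresses below are placeholders, not the actual health-check ranges):

```python
# Validate candidate source_ranges entries before `terraform apply`.
# Placeholder CIDRs; substitute your health-check and VPC/Pod/Service ranges.
import ipaddress

source_ranges = ["10.4.0.0/14", "10.8.0.0/20", "192.168.0.0/16"]

for cidr in source_ranges:
    ipaddress.ip_network(cidr)  # raises ValueError on a malformed entry
print("all CIDRs valid")
```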
  4. Apply Terraform:
    terraform init
    terraform plan
    terraform apply
  5. Add Static Routes on the logical router for Kubernetes workloads:
    1. Pod Subnet:
      • Edit the IPv4 Static Routes and add a route with the Pod IPv4 range CIDR.
      • Set the Next Hop as eth1/2 (trust interface).
      • Set the Destination as the trust subnet gateway IP from Panorama.
    2. Service Subnet:
      • Edit the IPv4 Static Routes and add a route with the IPv4 Service range CIDR.
      • Set the Next Hop as eth1/2 (trust interface).
      • Set the Destination as the trust subnet gateway IP from Panorama.
  6. Add source NAT policy for outbound traffic:
    • Source Zone: Trust
    • Destination Zone: Untrust (eth1/1)
    • Policy Name: trust2untrust or similar.
    • Set the Interface to eth1/1. (The translation happens at eth1/1).
      If needed, create a complementary rule for the reverse direction (for example, untrust2trust).
  7. Push the policy configurations to Prisma AIRS AI Runtime: Network intercept managed by Panorama (Panorama > Scheduled Config Push).
    If you have a Kubernetes cluster running, follow the section to install a Kubernetes application with Helm.

Secure a Kubernetes Application with Helm

  1. Navigate to the downloaded tar file and extract the contents:
    tar -xvzf <your-terraform-download.tar.gz>
  2. Navigate to the appropriate Helm directory based on your deployment configuration:
    • For VPC-level security:
      cd <unzipped-folder>/architecture/helm
    • For namespace-level security with traffic steering inspection:
      cd <unzipped-folder>/architecture/helm-<complete-app-name-path>
      • Navigate to each Helm application folder. When you configure traffic steering inspection, separate Helm charts are generated for each protected namespace, allowing granular security policies per application.
      • GKE Autopilot clusters do not support Helm deployments due to restrictions on modifying the kube-system namespace.
  3. Install the Helm chart using the appropriate command:
    • For VPC-level security:
      helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
    • For namespace-level security with traffic steering inspection:
      helm install ai-runtime-security helm-<complete-app-name-path> --namespace kube-system --values helm-<complete-app-name-path>/values.yaml
      Repeat this command for each namespace-specific Helm chart generated during the deployment process.
    This creates a container network interface (CNI), but doesn’t protect the container traffic until you annotate the application `yaml` or `namespace`.
  4. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  5. Check the pod status:
    kubectl get pods -A
    # Verify that pods with names similar to `pan-cni-*****` are present.
  6. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  7. Verify the Kubernetes resources were created properly:
    # Check the service accounts
    kubectl get serviceaccounts -n kube-system | grep pan
    # Check the secrets
    kubectl get secrets -n kube-system | grep pan
    # Check the services
    kubectl get svc -n kube-system | grep pan
    You should see resources like pan-cni-sa (service accounts), pan-plugin-user-secret (secrets), and pan-ngfw-svc (service).
  8. Annotate at the pod level in your application yaml so that the traffic from the pod is redirected to the Prisma AIRS AI Runtime: Network intercept for inspection.
    Annotate the pod using the below command:
    • For VPC-level security:
      kubectl annotate namespace <namespace-to-be-annotated> paloaltonetworks.com/firewall=pan-fw
    • For namespace-level security with traffic steering inspection:
      kubectl annotate pods --all paloaltonetworks.com/subnetfirewall=ns-secure/bypassfirewall
    Ensure every pod has this annotation to be moved to the ‘protected’ state across all cloud environments.
    Restart the existing application pods after applying Helm and annotating the pods for all changes to take effect. This enables the firewall to inspect the pod traffic and secure the containers.
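To confirm an annotation landed, inspect the namespace with `kubectl get namespace <ns> -o json`. The sketch below checks a saved (trimmed, hypothetical) copy of that output for the VPC-level annotation:

```python
# Check a namespace JSON dump for the pan-fw firewall annotation.
# The sample below is a trimmed, hypothetical kubectl output.
import json

sample = """
{"metadata": {
   "name": "demo-app",
   "annotations": {"paloaltonetworks.com/firewall": "pan-fw"}}}
"""

ns = json.loads(sample)
annotations = ns["metadata"].get("annotations", {})
is_protected = annotations.get("paloaltonetworks.com/firewall") == "pan-fw"
print("protected" if is_protected else "not annotated")
```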

Azure

Prisma AIRS AI Runtime: Network intercept post-deployment configurations in Panorama and Azure to protect VM workloads and Kubernetes clusters.
This guide provides step-by-step instructions to configure Panorama for securing VM workloads and Kubernetes clusters in Azure. The configurations include setting up interfaces, zones, NAT policies, virtual routers, and security policies.
Where Can I Use This? What Do I Need?
  • Secure VMs and Kubernetes Clusters in Panorama

Configure Panorama

Interfaces

  1. Navigate to Network > Interfaces.
  2. Set the Configuration Scope to your AI Runtime Security folder.
  3. Select Add Interface.
    • In the Ethernet tab, configure a Layer 3 interface for eth1/1 (trust) and eth1/2 (untrust):
      • Enter Interface Name (Create interfaces for both eth1/1(trust) and eth1/2(untrust) interfaces).
      • Select Layer3 Interface type.
      • In Logical Routers, select `lr-private` for eth1/1 and `lr-public` for eth1/2.
      • In Zone, select trust for eth1/1 and untrust for eth1/2.
      • Select DHCP Client type under IPV4 address.
      • Enable IPV4 for both eth1/1 and eth1/2 interfaces.
      • For eth1/2 (untrust) only, enable Automatically create default route pointing to default gateway provided by server.
      • Select Advanced Settings > Other Info.
      • Select a Management Profile with HTTPS enabled under Administrative Management Services, or create a new one.
      • Select Add.
    • Configure Network > Interfaces > Loopback to receive health checks from each load balancer:
      • Select the Logical Routers.
      • Set the trust Zone for private Logical Router and untrust Zone for the public Logical Router.
      • In the IPv4s section, enter the private IP address of the Internal Load Balancer (ILB).
        This IP address is in the output displayed after successfully deploying the `security_project` Terraform, as described in the Deploy AI Runtime Security: Network Intercept in Azure page.
      • Expand Advanced Settings > Management Profile and add your allow-health-checks profile.
      • Add or Save.

Zones

  1. Configure Zones (Network → Zones).
  2. Select Add Zone.
  3. Enter a Name.
  4. Select Layer3 Interface type.
  5. In Interfaces, add $eth1 interface for trust zone and $eth2 interface for untrust zone.
  6. Select Save.

NAT

Configure the NAT policies for inbound and outbound traffic:
  1. Configure NAT policy for inbound traffic:
    1. Enter a Name indicating inbound traffic (for example, inbound-web).
    2. Original Packet:
      • In Source zones, click Add and select the untrust zone.
    3. Destination:
      • Select the untrust Zone.
      • Select any Interface.
      • In Addresses, click the add (+) icon and select the public Elastic Load Balancer (ELB) address.
    4. Choose any Service.
    5. Translated Packet:
      • In Translation, select Both.
      • In Source Address Translation, select the Dynamic IP and Port translation type.
      • In Choice, select Interface Address.
      • In Interface, select eth1(ethernet1/1).
      • In Choice, select IP address.
      • Set the Static IP address as the Translation Type.
      • Select the destination Translated Address.
    6. Save.
  2. Configure NAT Policy for Outbound traffic:
    1. Enter a Name indicating outbound traffic (for example, outbound-internet).
    2. Original Packet:
      • In Source zones, click Add and select the trust zone.
      • In Addresses, click the add (+) icon and select the app-vnet and the Kubernetes pods CIDR you want to secure.
    3. Destination:
      • Select untrust destination zone.
      • Select any interface.
    4. Choose any Service.
    5. Translated Packet:
      • In Translation, select Source Address Only.
      • In Source Address Translation, select the Dynamic IP and Port translation type.
      • In Choice, select Interface Address.
      • In Interface, select eth2(ethernet1/2).
      • In Choice, select IP address.
    6. Save.

Routers

Configure private and public virtual routers:
Azure health probe fails with a single virtual router (VR). Create multiple VRs to ensure probe success.
  1. Configure routing in Panorama (Network → Logical Routers → Router Settings).
  2. Enter a Name indicating private and public routers (for example, lr-private and lr-public).
  3. In Interfaces, select eth1(ethernet1/1) for lr-private route and eth2(ethernet1/2) for lr-public route.
    Refer to the section on Interfaces to see how to configure the $eth1 and $eth2 interfaces.
  4. In Advanced Settings, select Edit to configure the IPV4 Static Routes for lr-private and lr-public.
    1. Select Add Static Route and add the following routes:
    2. Application routing:
      1. Enter a Name (for example, app-vnet).
      2. In Destination, enter the CIDR address of your application.
      3. In Next Hop:
        • For lr-private, in the IP Address field, enter the gateway IP address of the private interface (eth1/1).
          The gateway IP address is the first usable IP in the subnet's range (for example, 192.168.1.1 for a /24 subnet). To find it, go to Azure Portal > Virtual Networks > [Your Virtual Network] > Subnets > [Private Subnet].
        • For lr-public, in the Next Router field, select `lr-public`.
      4. In Interface, select eth1(ethernet1/1) subnet for `lr-private` and None for `lr-public`.
    3. Default routing:
      1. Enter a Name.
      2. In Destination, enter 0.0.0.0/0.
      3. In Next Hop:
        • For lr-private, in the Next Router field, enter `lr-private`.
        • For lr-public, in the IP Address field, enter the gateway IP address of the `lr-public` interface (eth1/2).
      4. In Interface, choose None for `lr-private` and eth2(ethernet1/2) for `lr-public`.
      5. Add or Update.
    4. Azure Load Balancer’s health probe:
      1. Enter a Name.
      2. In Destination, enter the IP address of the Azure Load Balancer’s health probe (168.63.129.16/32).
      3. In Next Hop, select IP Address for lr-private and lr-public.
        • In IP Address, enter the gateway IP address of the corresponding interfaces.
      4. In Interface, select eth1(ethernet1/1) for lr-private and eth2(ethernet1/2) for lr-public.
    5. Select Add.
  5. Select Save.
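The gateway addresses used as next hops in the routes above are the first usable host IP of each subnet. A minimal stdlib sketch of that computation:

```python
# Compute a subnet's gateway address (first usable host IP), as used
# for the static-route next hops above. The CIDR is an example.
import ipaddress

def subnet_gateway(cidr: str) -> str:
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + 1)

print(subnet_gateway("192.168.1.0/24"))  # -> 192.168.1.1
```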

Security Policy

  1. Add a security policy.
    Ensure the policy allows health checks from the Azure Load Balancer (LB) pool to the internal LB IP from Panorama. Check session IDs to ensure the firewall responds correctly on the designated interfaces.
  2. Select Commit → Commit and Push to push the policy configurations to the Prisma AIRS AI Runtime: Network intercept.

Configurations to Secure VM Workloads

  1. Log in to the Panorama Web Interface
    Configure routes for vNet endpoints as explained in the Routers section above to ensure there is a route to your application.
  2. Select Commit and Push to Devices to push the policy configurations to your Prisma AIRS AI Runtime: Network intercept managed by Panorama.
  3. Create or update the NAT policy (refer to the NAT section above) to secure the VM workloads. Set the source address of the application you want to secure.

Configurations to Secure Kubernetes Clusters

  1. Configure static routes (refer to the routes section above) on the Logical Router for Kubernetes workloads.
  2. Configure the following pod and service subnet static routes for the Kubernetes workloads:
    1. Pod Subnet and Service subnet for lr-private:
      Route Type      Name           Destination          Next Hop    Next Hop Value             Interface
      Pod subnet      pod_route      Pod IPv4 range CIDR  IP Address  Subnet Gateway IP address  eth1(ethernet1/1)
      Service subnet  service_route  172.16.0.0/24        IP Address  Subnet Gateway IP address  eth1(ethernet1/1)
    2. Pod Subnet and Service subnet for lr-public:
      Route Type      Name           Destination          Next Hop     Next Hop Value  Interface
      Pod subnet      pod_route      Pod IPv4 range CIDR  Next Router  lr-public       None
      Service subnet  service_route  172.16.0.0/24        Next Router  lr-public       None
  3. Refer to the NAT policy in the above section to secure the Kubernetes clusters and set the source address of the Kubernetes pods CIDR you want to secure.
  4. Select Commit and Push to Devices to push the policy configurations to your Prisma AIRS AI Runtime: Network intercept managed by Panorama.

Secure a Kubernetes Application with Helm

This section covers how to install and configure the Helm chart to secure your Kubernetes applications based on the protection level you selected during deployment.
The Helm chart installation process and directory structure vary depending on whether you selected VPC-level protection or namespace-level protection with traffic steering inspection. VPC-level protection secures all applications within the VPC, while namespace-level protection with traffic inspection provides granular control over specific application traffic flows and CIDR-based inspection rules.
Your deployment configuration determines the specific Helm chart structure and commands required for your environment.
  1. Navigate to the downloaded tar file and extract the contents:
    tar -xvzf <your-terraform-download.tar.gz>
  2. Navigate to the appropriate Helm directory based on your deployment configuration:
    • For VPC-level security:
      cd <unzipped-folder>/architecture/helm
    • For namespace-level security with traffic steering inspection:
      cd <unzipped-folder>/architecture/helm-<complete-app-name-path>
      Navigate to each Helm application folder. When you configure traffic steering inspection, separate Helm charts are generated for each protected namespace, allowing granular security policies per application.
  3. Install the Helm chart using the appropriate command:
    • For VPC-level security:
      helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
    • For namespace-level security with traffic steering inspection:
      helm install ai-runtime-security helm-<complete-app-name-path> --namespace kube-system --values helm-<complete-app-name-path>/values.yaml
      Repeat this command for each namespace-specific Helm chart generated during the deployment process.
    This creates a container network interface (CNI), but doesn’t protect the container traffic until you annotate the application `yaml` or `namespace`.
    Enable "Bring your own Azure virtual network" to discover Kubernetes-related vnets:
    1. In Azure Portal, navigate to Kubernetes services → [Your Cluster]→ Settings→ Networking.
    2. Under Network configuration, select Azure CNI as the Network plugin, then enable Bring your own Azure virtual network.
  4. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  5. Check the pod status:
    kubectl get pods -A
    Verify that the result of the above command lists the pods with names similar to `pan-cni-*****`.
  6. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  7. Verify the Kubernetes resources were created properly:
    # Check the service accounts
    kubectl get serviceaccounts -n kube-system | grep pan
    # Check the secrets
    kubectl get secrets -n kube-system | grep pan
    # Check the services
    kubectl get svc -n kube-system | grep pan
    You should see resources like pan-cni-sa (service accounts), pan-plugin-user-secret (secrets), and pan-ngfw-svc (service).
  8. Annotate at the pod level in your application yaml so that the traffic from the pod is redirected to the Prisma AIRS AI Runtime: Network intercept for inspection.
    Annotate the pod using the below command:
    • For VPC-level security:
      kubectl annotate namespace <namespace-to-be-annotated> paloaltonetworks.com/firewall=pan-fw
    • For namespace-level security with traffic steering inspection:
      kubectl annotate pods --all paloaltonetworks.com/subnetfirewall=ns-secure/bypassfirewall
    Ensure every pod has this annotation to be moved to the ‘protected’ state across all cloud environments.
    Restart the existing application pods after applying Helm and annotating the pods for all changes to take effect. This enables the firewall to inspect the pod traffic and secure the containers.

AWS

Prisma AIRS AI Runtime: Network intercept post-deployment configurations in Panorama and AWS to protect VM workloads and Kubernetes clusters.
Where Can I Use This? What Do I Need?
  • Secure VMs and Kubernetes Clusters in Panorama

Configure Panorama

Interfaces

  1. Navigate to Network > Interfaces.
  2. Set the Configuration Scope to your AI Runtime Security folder.
  3. Select Add Interface.
    • In the Ethernet tab, configure a Layer 3 interface for eth1/1 (trust):
      • In Interface Name, enter eth1/1.
      • Select the Layer3 Interface type.
      • In Logical Routers, select `vr-private` for eth1/1.
      • In Zone, select trust for eth1/1.
      • Select DHCP Client type under IPV4 address.
      • Enable IPV4 for eth1/1.
      • Select Advanced Settings > Other Info.
      • Select a Management Profile with HTTPS enabled under Administrative Management Services, or create a new one.
      • Select Add.

Zone

  1. Configure Zones (Network → Zones).
  2. Select Add Zone.
  3. Enter a Name.
  4. Select Layer3 Interface type.
  5. In Interfaces, add $eth1 interface for trust zone.
  6. Save.

Security Policy

  1. Add a security policy and set the action to Allow.
  2. Select Commit → Commit and Push to push the policy configurations to Prisma AIRS AI Runtime: Network intercept.

Secure a Kubernetes Application with Helm

This section covers how to install and configure the Helm chart to secure your Kubernetes applications based on the protection level you selected during deployment.
The Helm chart installation process and directory structure vary depending on whether you selected VPC-level protection or namespace-level protection with traffic steering inspection. VPC-level protection secures all applications within the VPC, while namespace-level protection with traffic inspection provides granular control over specific application traffic flows and CIDR-based inspection rules.
Your deployment configuration determines the specific Helm chart structure and commands required for your environment.
Prerequisites:
  • Go to your downloaded Terraform template and navigate to `<unzipped-folder>/architecture/helm`.
  • Apply Terraform for the `security_project` as shown in the Deploy Prisma AIRS AI Runtime: Network Intercept in AWS.
    Deploying the Terraform for the security project creates the GWLB endpoints in your AWS account.
  • Open the `values.yaml` file found in the path: `<unzipped-folder>/architecture/helm`.
  • Update the `endpoints1` and `endpoints2` values with your GWLB endpoints IP addresses. Below is a sample `values.yaml` file:
    # Default values for ai-runtime-security.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    # Configure vpc endpoint per zone. This makes sure kubernetes
    # traffic is not sent across zone. Endpoints can be added or
    # removed based on requirements and zone availability.

    # GWLB VPC endpoint zone1 IP address.
    endpoints1: ""
    endpoints1zone: us-east-1a

    # GWLB VPC endpoint zone2 IP address.
    endpoints2: ""
    endpoints2zone: us-east-1b

    # PAN CNI image.
    cniimage: gcr.io/pan-cn-series/airs/pan-cni:latest

    # Resource namespace name.
    namespace: kube-system

    # Kubernetes ClusterID value range 1-2048.
    clusterid: 1
  • Apply the Helm chart by following the steps below.
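Before installing the chart, a quick optional check that the endpoint values are filled in and well-formed (the IPs below are placeholders):

```python
# Pre-check hypothetical values.yaml settings: GWLB endpoint IPs must be
# non-empty and valid, and clusterid must be in the 1-2048 range.
import ipaddress

endpoints1 = "10.0.1.10"  # placeholder GWLB VPC endpoint IP, zone 1
endpoints2 = "10.0.2.10"  # placeholder GWLB VPC endpoint IP, zone 2
clusterid = 1

for ep in (endpoints1, endpoints2):
    assert ep, "endpoint IP must not be empty"
    ipaddress.ip_address(ep)  # raises ValueError if not a valid IP
assert 1 <= clusterid <= 2048, "clusterid out of range"
print("values ok")
```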
  1. Navigate to the downloaded tar file and extract the contents:
    tar -xvzf <your-terraform-download.tar.gz>
  2. Navigate to the appropriate Helm directory based on your deployment configuration:
    • For VPC-level security:
      cd <unzipped-folder>/architecture/helm
    • For namespace-level security with traffic steering inspection:
      cd <unzipped-folder>/architecture/helm-<complete-app-name-path>
      Navigate to each Helm application folder. When you configure traffic steering inspection, separate Helm charts are generated for each protected namespace, allowing granular security policies per application.
  3. Install the Helm chart using the appropriate command:
    • For VPC-level security:
      helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
    • For namespace-level security with traffic steering inspection:
      helm install ai-runtime-security helm-<complete-app-name-path> --namespace kube-system --values helm-<complete-app-name-path>/values.yaml
      Repeat this command for each namespace-specific Helm chart generated during the deployment process.
    This creates a container network interface (CNI), but doesn’t protect the container traffic until you annotate the application `yaml` or `namespace`.
  4. Verify the Helm installation:
    # List all Helm releases
    helm list -A
    # Ensure the output shows your installation with details such as:
    # NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    # ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  5. Check the pod status:
    kubectl get pods -A
    # Verify that pods with names similar to `pan-cni-*****` are present.
  6. Check the endpoint slices:
    kubectl get endpointslice -n kube-system
    # Confirm that the output shows an ILB IP address:
    # NAME              ADDRESSTYPE  PORTS   ENDPOINTS              AGE
    # my-endpointslice  IPv4         80/TCP  10.2xx.0.1,10.2xx.0.2  12h
  7. Verify the Kubernetes resources were created properly:
    # Check the service accounts
    kubectl get serviceaccounts -n kube-system | grep pan
    # Check the secrets
    kubectl get secrets -n kube-system | grep pan
    # Check the services
    kubectl get svc -n kube-system | grep pan
    You should see resources like pan-cni-sa (service accounts), pan-plugin-user-secret (secrets), and pan-ngfw-svc (service).
  8. Annotate at the pod level in your application yaml so that the traffic from the pod is redirected to the Prisma AIRS AI Runtime: Network intercept for inspection.
    Annotate the pod using the below command:
    • For VPC-level security:
      kubectl annotate namespace <namespace-to-be-annotated> paloaltonetworks.com/firewall=pan-fw
    • For namespace-level security with traffic steering inspection:
      kubectl annotate pods --all paloaltonetworks.com/subnetfirewall=ns-secure/bypassfirewall
    Ensure every pod has this annotation to be moved to the ‘protected’ state across all cloud environments.
    Restart the existing application pods after applying Helm and annotating the pods for all changes to take effect. This enables the firewall to inspect the pod traffic and secure the containers.

Secure Container Traffic in Private Cloud

Use Prisma AIRS AI Runtime: Network intercept to secure container traffic in private clouds.
Where Can I Use This? What Do I Need?
  • Secure container traffic deployed in private cloud using Prisma AIRS AI Runtime: Network intercept
This section shows how to configure Prisma AIRS AI Runtime: Network intercept to secure Kubernetes workloads—including containers and AI applications—in private cloud environments using Panorama managed firewall. Prisma AIRS AI Runtime: Network intercept supports Rosa OpenShift and Rancher.
This page also covers Panorama configurations to route traffic through Prisma AIRS AI Runtime: Network intercept.
If you have clusters in a private cloud, you can follow this workflow by applying the Helm chart and routing the traffic through Prisma AIRS AI Runtime: Network intercept.
For Panorama managed Prisma AIRS AI Runtime: Network intercept, you can apply the Prisma AIRS Helm chart without going through the deployment workflow on the Strata Cloud Manager.
The diagram shows how Prisma AIRS AI Runtime: Network intercept integrates with OpenShift using CNI chaining. In this setup, Prisma AIRS AI Runtime: Network intercept runs as a secondary CNI plugin alongside the cluster’s primary CNI and redirects east-west container traffic through Panorama-managed firewalls for real-time, AI-driven inspection and policy enforcement.

Configure Panorama to Secure Kubernetes Clusters

Interfaces

  1. Navigate to Network > Interfaces.
  2. Set the Configuration Scope to your AI Runtime Security folder.
  3. Select Add Interface.
    • In the Ethernet tab, configure a Layer 3 interface for eth1/1 (trust).
    • Enter the Interface Name for the eth1/1 (trust) interface.
    • Select the Layer3 Interface type.
    • In Logical Routers, select `lr-private` for eth1/1.
    • In Zone, select trust for eth1/1.
    • In the IPV4 address, select Static or DHCP Client type.
    • Enable IPV4 for eth1/1.
    • Select Advanced Settings > Other Info.
    • Select a Management Profile or create a new one.
      In Administrative Management Services, enable HTTPS.
    • Click Add.

Zones

  1. Configure Zones (Network → Zones).
  2. Select Add Zone.
  3. Enter a Name.
  4. Select the Layer3 Interface type.
  5. In Interfaces, add the $eth1 interface for the trust zone.
  6. Select Save.

NAT Policy

Configure the NAT Policy for outbound traffic.
  1. Configure the NAT policy for outbound traffic:
    1. Enter a Name indicating outbound traffic (for example, outbound-internet).
    2. Original Packet:
      • In Source zones, click Add and select the trust zone.
    3. Destination:
      • Select trust destination zone.
      • Select any Interface.
      • In Addresses, click the add (+) icon and select the `app-vnet` and the Kubernetes pods CIDR you want to secure.
    4. Choose any Service.
    5. Translated Packet:
      • In Translation, select Source Address Only.
      • In Source Address Translation, select the Dynamic IP and Port translation type.
      • In choice, select Interface Address.
      • In Interface, select eth1(ethernet1/1).
      • In Choice, select an IP address.
    6. Select Save.

Logical Routers

Configure private logical routers.
  1. Navigate to Network → Logical Routers → Router Settings.
  2. Enter a Name indicating a private router (for example, lr-private).
  3. In Interfaces, select eth1(ethernet1/1) for lr-private route.
    Refer to the section on Interfaces to see how to configure the $eth1 interface.
  4. In Advanced Settings, select Edit to configure the IPv4 Static Routes for lr-private.
    1. Select Add Static Route and add the following routes:
    2. Application routing:
      1. Enter a Name (for example, app-vnet).
      2. In Destination, enter the CIDR address of your application.
      3. In the Next Hop:
        • For lr-private, in the IP Address field, enter the gateway IP address of the private interface.
    3. Default routing:
      1. Enter a Name.
      2. In Destination, enter 0.0.0.0/0.
      3. In the Next Hop:
        • For lr-private, in the IP Address field, enter the gateway IP address of the private interface.
      4. Select Add or Update.
  5. In Interface, select eth1(ethernet1/1) for lr-private.
  6. Select Add.
  7. Select Save.

Security Policy

  1. Add a security policy rule with an AI security profile attached to it.
  2. Set the security policy action to Allow.
  3. Select Commit → Commit and Push to push the policy configurations to Prisma AIRS AI Runtime: Network intercept.

Install Kubernetes Cluster and Set Up Panorama

Install Kubernetes cluster and set up Panorama.
  1. Install the Kubernetes Plugin and Set up Panorama.
    Add Kubernetes cluster information to Panorama to ensure that the two can communicate with each other.
    Check the monitoring interval. The default interval at which Panorama polls the Kubernetes API server endpoint is 30 seconds.
    1. Navigate to Panorama → Plugins → Kubernetes → Setup → General.
    2. Select the Enable Monitoring checkbox.
    3. Click the gear icon to edit the Monitoring Interval and set a value between 30 and 300 seconds.
    4. Navigate to Panorama → Plugins → Kubernetes → Setup → Cluster, and select Add Cluster.
      Don’t add the same Kubernetes cluster to more than one Panorama (single instance or HA pair) appliance, because you may see inconsistencies in how the IP-address-to-tag mappings are registered to the device groups.
    5. Enter a Name and the API Server Address.
      This is the endpoint IP address for the cluster, which you can get from your Kubernetes deployment. Enter a name of up to 20 characters to uniquely identify the cluster. You can’t modify this name later because Panorama uses the cluster name when it creates tags for the pods, nodes, and services it discovers within the cluster. The API server address can be a hostname or an IP address with a port number; you don’t need to specify the port if you're using the default port 443.
    6. Select the environment Type on which your cluster is deployed.
      The available options are AKS, EKS, GKE, Native Kubernetes, OpenShift, and Other.
    7. Upload the service account Credential that Panorama requires to communicate with the cluster. As described in the create service accounts for cluster authentication workflow, the filename for this service account is plugin-svc-acct.json.
      If you're uploading the service credentials through the CLI or API, gzip the file and then base64-encode the compressed file before you upload or paste the contents into the Panorama CLI or API. These steps aren't required if you're uploading the service credential file through the GUI.
    8. Click OK.
      You can leave the Label Filter and Label Selector configuration to be filled in later. This optional task enables you to retrieve any custom or user-defined labels for which you want Panorama to create tags.
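      As noted in the credential step above, service credentials uploaded through the CLI or API must be gzipped and then base64-encoded. A minimal sketch of that preparation, assuming the `plugin-svc-acct.json` file from the service-account workflow is in the current directory:

```shell
# Compress the credential file, then base64-encode the result.
# The .b64 file holds the content you paste or upload via the Panorama CLI/API.
gzip -c plugin-svc-acct.json | base64 > plugin-svc-acct.json.gz.b64

# Sanity check: decode and decompress to confirm the round trip
# reproduces the original JSON.
base64 -d plugin-svc-acct.json.gz.b64 | gunzip
```

      (GNU coreutils `base64 -d` shown; on BSD/macOS use `base64 -D`.)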

Apply Helm chart to Deploy Prisma AIRS AI Runtime: Network Intercept

This section covers how to install and configure the Helm chart to secure your Kubernetes applications based on the protection level.
  1. Clone the GitHub repository.
  2. The Helm chart structure looks like:

    |____helm
      |____templates
      |____.helmignore
      |____Chart.yaml
      |____plugin-serviceaccount.yaml
      |____values.yaml
  3. Edit the `values.yaml` file as per your firewall deployment:
    • For a standalone firewall, update the endpoints value with the trust IP address of the standalone firewall.
    • For an active/passive firewall deployment, update the endpoints value with the trust interface IP address of the active-primary server.
      These changes are valid for OpenShift and Rancher.
    Here’s a sample `values.yaml` file:
    # Default values for ai-runtime-security.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    # Firewall trust interface IP Address for on-prem
    endpoints: 10.101.255.253

    # This is the PAN CNI image
    cniimage: gcr.io/pan-cn-series/airs/pan-cni:latest

    # This is the AI firewall trust CIDR and is an optional parameter.
    # Helps in reducing hops in East-West cluster traffic.
    fwtrustcidr: ""

    # Resource namespace name
    namespace: kube-system

    # This is the Kubernetes Cluster ID value ranging between 1 and 2048.
    clusterid: 1
  4. Install the helm chart with the following command:
    helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml
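    Before installing, you can optionally render the chart locally to confirm that your `values.yaml` overrides resolve as expected. These are standard Helm flags, not specific to this chart:

```shell
# Render the manifests locally without contacting the cluster.
helm template ai-runtime-security helm --namespace kube-system --values helm/values.yaml

# Or simulate the install against the cluster without applying anything.
helm install ai-runtime-security helm --namespace kube-system --values helm/values.yaml --dry-run
```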
  5. Verify the Helm installation with the following command:
    # List all Helm releases
    helm list -A
    The output looks similar to:
    # Ensure the output shows your installation with details such as:
    NAME                 NAMESPACE    REVISION  UPDATED               STATUS    CHART                      APP VERSION
    ai-runtime-security  kube-system  1         2024-08-13 07:00 PDT  deployed  ai-runtime-security-0.1.0  11.2.2
  6. Check the pod status with the following command:
    kubectl get pods -A
    Verify that the result of the above command lists the pods with names similar to `pan-cni-*****`.
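    The `pan-cni-*****` pods are deployed per node. Assuming they are managed by a DaemonSet named `pan-cni` (a guess based on the pod-name prefix; verify the actual name in your cluster), you can wait for the rollout to finish before annotating workloads:

```shell
# Confirm the CNI DaemonSet exists (name assumed from the pod prefix).
kubectl get daemonset pan-cni -n kube-system

# Block until a pan-cni pod is Ready on every node.
kubectl rollout status daemonset/pan-cni -n kube-system --timeout=120s
```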
  7. Check the endpoint slices using the following command:
    kubectl get endpointslices -n kube-system | grep pan
    Confirm that the output shows an ILB IP address:
    NAME                    ADDRESSTYPE  PORTS  ENDPOINTS       AGE
    pan-ngfw-svc-endpoints  IPv4         6080   10.101.255.253  12h
    Ensure that the endpoint slice IP address points to the trust interface IP address of the firewall.
  8. Verify the Kubernetes resources were created properly:
    a. Check the service accounts:
       kubectl get serviceaccounts -n kube-system | grep pan
    b. Check the secrets:
       kubectl get secrets -n kube-system | grep pan
    c. Check the services:
       kubectl get svc -n kube-system | grep pan
    You should see resources like pan-cni-sa (service accounts), pan-plugin-user-secret (secrets), and pan-ngfw-svc (service).
  9. Annotate at the pod level in your application YAML so that traffic from the pod is redirected to Prisma AIRS AI Runtime: Network intercept for inspection.
    1. For VPC-level security, annotate using the following command:
      kubectl annotate namespace <namespace-to-be-annotated> paloaltonetworks.com/firewall=pan-fw
    2. In OpenShift, use the following command to annotate the app pod `yaml` file:
      kubectl annotate namespace <namespace-to-be-annotated> k8s.v1.cni.cncf.io/networks=pan-cni
    3. For namespace-level security with traffic steering inspection:
      kubectl annotate pods --all paloaltonetworks.com/subnetfirewall=ns-secure/bypassfirewall
    Annotate each pod so that the pods move to the "protected" state across all cloud environments.
    Restart the existing application pods after applying Helm and annotating the pods for all changes to take effect. This enables the firewall to inspect pod traffic and secure the containers.
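    One common way to restart the existing application pods is a rolling restart of their controller. The Deployment and namespace names below are only examples; substitute your own workloads:

```shell
# Rolling restart of a hypothetical Deployment "my-app" in namespace "my-namespace".
kubectl rollout restart deployment/my-app -n my-namespace

# Wait for the restarted pods to become Ready.
kubectl rollout status deployment/my-app -n my-namespace --timeout=180s
```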
  10. For OpenShift, to make the `multus` plugin work, deploy the `pan-cni` NetworkAttachmentDefinition in every app pod's namespace:
    kubectl apply -f pan-cni-net-attach-def.yaml -n <target-namespace>