Deploy the CN-Series Firewall as a DaemonSet on GKE

Where Can I Use This?
  • CN-Series Firewall deployment

What Do I Need?
  • CN-Series 10.1.x or above Container Images
  • Panorama running PAN-OS 10.1.x or above version
  • Helm 3.6 or above version client (for CN-Series deployment using Helm)
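
Before you begin, you can optionally confirm that the client tools on your workstation meet these version requirements. The checks below use standard Helm and kubectl commands; the exact output format varies by version.

helm version --short
kubectl version --client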
Complete the following procedure to deploy the CN-Series firewall as a DaemonSet on the GKE platform:
  1. Set up your Kubernetes cluster.
    To create a cluster in GKE, do the following:
    1. Click the navigation menu, go to Kubernetes Engine, and then select Clusters.
    2. Click Create.
    3. Select GKE Standard as the cluster mode that you want to use, and then click Configure.
    4. Enter the cluster basic information, including Name, Version, Location, and Node subnet, and then click Create.
    If your cluster is on GKE, make sure to enable the Kubernetes Network Policy API to allow the cluster administrator to specify which pods are allowed to communicate with each other. This API is required for the CN-NGFW and CN-MGMT Pods to communicate.
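    On GKE, you can enable network policy enforcement with the gcloud CLI if it was not enabled at cluster creation. The commands below are a minimal sketch; the cluster name my-cluster and zone us-central1-a are placeholders for your own values, and enabling the feature recreates the node pools.
    gcloud container clusters update my-cluster --zone us-central1-a --update-addons=NetworkPolicy=ENABLED
    gcloud container clusters update my-cluster --zone us-central1-a --enable-network-policy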
    Verify that the cluster has adequate resources and meets the CN-Series System Requirements to support the firewall.
    kubectl get nodes
    kubectl describe node <node-name>
    View the information under the Capacity heading in the command output to see the CPU and memory available on the specified node.
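    For example, the Capacity section of the kubectl describe node output looks similar to the following; the values shown here are purely illustrative and depend on the machine type of your nodes.
    Capacity:
      cpu:                8
      ephemeral-storage:  98868448Ki
      memory:             32879852Ki
      pods:               110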
    The CPU, memory and disk storage allocation will depend on your needs. See CN-Series Performance and Scalability.
    Ensure you have the following information:
  2. (Optional) If you configured a custom certificate in the Kubernetes plugin for Panorama, you must create the certificate secret by running the following command. Do not change the file name from ca.crt. The volume for custom certificates in pan-cn-mgmt.yaml and pan-cn-ngfw.yaml is optional.
    kubectl -n kube-system create secret generic custom-ca --from-file=ca.crt
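    You can optionally confirm that the secret exists before continuing:
    kubectl -n kube-system get secret custom-ca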
  3. Edit the YAML files to provide the details required to deploy the CN-Series firewalls.
    You need to replace the image path in the YAML files with the path to your private Google Container Registry and provide the required parameters. See Editable parameters in CN-Series deployment YAML files for details.
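    For example, the image fields in pan-cn-mgmt.yaml and pan-cn-ngfw.yaml would point at your own registry; the project ID, repository, and tag below are hypothetical placeholders rather than values from this procedure.
    image: gcr.io/<your-project-id>/<cn-series-image>:<tag>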
  4. Deploy the CNI DaemonSet.
    The CNI container is deployed as a DaemonSet (one pod per node), and it creates two interfaces on the CN-NGFW pod for each application deployed on the node. When you use kubectl to apply the pan-cni YAML files, the pan-cni plugin becomes part of the service chain on each node.
    1. The CN-Series firewall requires three service accounts with the minimum permissions that authorize it to communicate with your Kubernetes cluster resources. Create the service accounts as described in Create Service Accounts for Cluster Authentication, and verify that you have created the service account defined in pan-cni-serviceaccount.yaml.
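      As an optional check, you can list the service accounts in the kube-system namespace and confirm that the accounts defined in the YAML files are present (the exact account names depend on the files you applied):
      kubectl get serviceaccounts -n kube-system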
    2. Use Kubectl to run the pan-cni-configmap.yaml.
      kubectl apply -f pan-cni-configmap.yaml
    3. Use Kubectl to run the pan-cni.yaml.
      kubectl apply -f pan-cni.yaml
    4. Verify that you have modified the pan-cni-configmap and pan-cni YAML files.
    5. Run the following command and verify that your output is similar to the following example.
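      Assuming the pan-cni DaemonSet pods carry the label app=pan-cni (an assumption consistent with the pod names shown later in this procedure), you can list them as follows and confirm that one pan-cni pod per node is in the Running state:
      kubectl get pods -n kube-system -l app=pan-cni -o wide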
  5. Deploy the CN-MGMT StatefulSet.
    By default, the management plane is deployed as a StatefulSet that provides fault tolerance. Up to 30 firewall CN-NGFW pods can connect to a CN-MGMT StatefulSet.
    1. Verify that you have modified the pan-cn-mgmt-configmap and pan-cn-mgmt YAML files.
      Sample pan-cn-mgmt-configmap

      metadata:
        name: pan-mgmt-config
        namespace: kube-system
      data:
        PAN_SERVICE_NAME: pan-mgmt-svc
        PAN_MGMT_SECRET: pan-mgmt-secret
        # Panorama settings
        PAN_PANORAMA_IP: "x.y.z.a"
        PAN_DEVICE_GROUP: "dg-1"
        PAN_TEMPLATE_STACK: "temp-stack-1"
        PAN_CGNAME: "CG-GKE"
        # Non-mandatory parameters
        # Recommended to have same name as the cluster name provided in Panorama Kubernetes plugin -
        # helps with easier identification of pods if managing multiple clusters with same Panorama
        #CLUSTER_NAME: "<Cluster name>"
        #PAN_PANORAMA_IP2: ""
        # Comment out to use CERTs otherwise PSK for IPSec between pan-mgmt and pan-ngfw
        #IPSEC_CERT_BYPASS: ""
        # No values needed
        # Override auto-detect of jumbo-frame mode and force enable system-wide
        #PAN_JUMBO_FRAME_ENABLED: "true"
        # Start MGMT pod with GTP enabled. For complete functionality, need GTP enabled at Panorama as well.
        #PAN_GTP_ENABLED: "true"
        # Enable high feature capacities. These need high memory for MGMT pod and
        # higher/matching memory than specified below for NGFW pod.
        # This requires kernel support and NGFW pod running with privileged: true
        #PAN_NGFW_MEMORY: "42Gi"
      Sample pan-cn-mgmt.yaml

      initContainers:
      - name: pan-mgmt-init
        image: <your-private-registry-image-path>
      containers:
      - name: pan-mgmt
        image: <your-private-registry-image-path>
        terminationMessagePolicy: FallbackToLogsOnError
    2. Use Kubectl to run the YAML files.
      kubectl apply -f pan-cn-mgmt-configmap.yaml
      kubectl apply -f pan-cn-mgmt-secret.yaml
      kubectl apply -f pan-cn-mgmt.yaml
      You must run pan-mgmt-serviceaccount.yaml only if you have not previously completed Create Service Accounts for Cluster Authentication.
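      In that case, apply it the same way as the other YAML files:
      kubectl apply -f pan-mgmt-serviceaccount.yaml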
    3. Verify that the CN-MGMT pods are up.
      It takes about 5-6 minutes.
      Use kubectl get pods -l app=pan-mgmt -n kube-system

      NAME             READY   STATUS    RESTARTS   AGE
      pan-mgmt-sts-0   1/1     Running   0          27h
      pan-mgmt-sts-1   1/1     Running   0          27h
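      If the pods remain in a non-Running state longer than expected, you can inspect a CN-MGMT pod with standard kubectl commands; this is an optional troubleshooting sketch using the pod names from the output above.
      kubectl describe pod pan-mgmt-sts-0 -n kube-system
      kubectl logs -f pan-mgmt-sts-0 -n kube-system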
  6. Deploy the CN-NGFW pods.
    By default, the firewall dataplane CN-NGFW pod is deployed as a DaemonSet. An instance of the CN-NGFW pod can secure traffic for up to 30 application pods on a node.
    1. Verify that you have modified the YAML files as detailed in PAN-CN-NGFW-CONFIGMAP and PAN-CN-NGFW.
      containers:
      - name: pan-ngfw-container
        image: <your-private-registry-image-path>
    2. Use Kubectl apply to run the pan-cn-ngfw-configmap.yaml.
      kubectl apply -f pan-cn-ngfw-configmap.yaml
    3. Use Kubectl apply to run the pan-cn-ngfw.yaml.
      kubectl apply -f pan-cn-ngfw.yaml
    4. Verify that all the CN-NGFW pods are running (one per node in your cluster).
      This is a sample output from a 4-node on-premises cluster.
      kubectl get pods -n kube-system -l app=pan-ngfw -o wide
      NAME                READY   STATUS    RESTARTS   AGE   IP               NODE                NOMINATED NODE   READINESS GATES
      pan-ngfw-ds-8g5xb   1/1     Running   0          27h   10.233.71.113    rk-k8-node-1        <none>           <none>
      pan-ngfw-ds-qsrm6   1/1     Running   0          27h   10.233.115.189   rk-k8-vm-worker-1   <none>           <none>
      pan-ngfw-ds-vqk7z   1/1     Running   0          27h   10.233.118.208   rk-k8-vm-worker-3   <none>           <none>
      pan-ngfw-ds-zncqg   1/1     Running   0          27h   10.233.91.210    rk-k8-vm-worker-2   <none>           <none>
  7. Verify that you can see the CN-MGMT, CN-NGFW, and PAN-CNI pods on the Kubernetes cluster.
    kubectl -n kube-system get pods
    NAME                READY   STATUS    RESTARTS   AGE
    pan-cni-5fhbg       1/1     Running   0          27h
    pan-cni-9j4rs       1/1     Running   0          27h
    pan-cni-ddwb4       1/1     Running   0          27h
    pan-cni-fwfrk       1/1     Running   0          27h
    pan-cni-h57lm       1/1     Running   0          27h
    pan-cni-j62rk       1/1     Running   0          27h
    pan-cni-lmxdz       1/1     Running   0          27h
    pan-mgmt-sts-0      1/1     Running   0          27h
    pan-mgmt-sts-1      1/1     Running   0          27h
    pan-ngfw-ds-8g5xb   1/1     Running   0          27h
    pan-ngfw-ds-qsrm6   1/1     Running   0          27h
    pan-ngfw-ds-vqk7z   1/1     Running   0          27h
    pan-ngfw-ds-zncqg   1/1     Running   0          27h
  8. Annotate the application YAML or namespace so that traffic from the new pods is redirected to the firewall.
    You need to add the following annotation to redirect traffic to the CN-NGFW for inspection:
    annotations:
      paloaltonetworks.com/firewall: pan-fw
    For example, for all new pods in the “default” namespace:
    kubectl annotate namespace default paloaltonetworks.com/firewall=pan-fw
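    Alternatively, to annotate a single application instead of an entire namespace, add the annotation to the pod template in the workload manifest. The Deployment below is a hypothetical example; the name, labels, and image are placeholders, and only the annotation under spec.template.metadata is required for redirection.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app                  # placeholder workload name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
          annotations:
            paloaltonetworks.com/firewall: pan-fw   # redirect this pod's traffic to the CN-NGFW
        spec:
          containers:
          - name: web
            image: nginx:1.25        # placeholder application image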
    On some platforms, the application pods can start while the pan-cni plugin is not yet active in the CNI plugin chain. To avoid this scenario, you must specify the volume as shown here in the application pod YAML.
    volumes:
    - name: pan-cni-ready
      hostPath:
        path: /var/log/pan-appinfo/pan-cni-ready
        type: Directory
  9. Deploy your application in the cluster.
