How Does the Panorama Plugin for Azure Secure Kubernetes Services?

Learn how the Azure plugin for Panorama works to secure Azure Kubernetes services.
You can use VM-Series firewalls to secure services with internet access independent of the Kubernetes cluster. VM-Series firewalls can secure inbound traffic for Azure Kubernetes Service (AKS) clusters exposed by a load balancer (such as an Azure Load Balancer). Outbound traffic can only be monitored.
The following topics review different components that enable the Azure plugin for Panorama to connect to and obtain information from an AKS cluster.

AKS Components and Planning Checklist

This solution requires the following components. See the Palo Alto Networks Compatibility Matrix to verify the minimum OS, plugin, and template versions required to configure auto scaling and secure AKS clusters.

A Sample Hub-and-Spoke Topology to Secure AKS Clusters

The following diagram illustrates a sample auto scale deployment that secures inbound traffic for Azure AKS clusters. This deployment demonstrates a hub-and-spoke topology. Let’s review some of the components.
[Figure: plugin-azure-aks-architecture.png, a sample hub-and-spoke auto scaling deployment that secures inbound traffic for AKS clusters]
  • Auto Scaling Infrastructure
    —The Azure Auto Scaling templates create the messaging infrastructure and the basic hub-and-spoke architecture.
  • AKS Clusters
    —The Palo Alto Networks AKS template creates an AKS cluster in a new VNet. Given the name of the spoke resource group, the template tags the VNet and AKS cluster with the spoke resource group name, so the resource group can be discovered by the Azure Auto Scaling plugin for Panorama. The Azure plugin for Panorama queries service IP addresses on the Staging ILB to learn about AKS cluster services.
    Only one spoke firewall scale set can be associated with an AKS cluster; if you expose multiple services in a single AKS cluster, they must be protected by the same spoke.
    For each resource group, create a subnet-based address group. In the above diagram, for example, create an address group for 10.240.0.0/24 (AKS Cluster 1).
  • VNet Peering
    —You must manually configure VNet peering to communicate with other VNets in the same region (see the Azure CLI sketch after this list).
    Cross-region peering is not supported.
    You can use other automation tools to deploy AKS clusters. If you deploy in an existing VNet (the Hub Firewall VNet, for example), you must manually configure VNet peering to the Inbound and Outbound hub and spoke resource groups, and manually tag the VNet and AKS cluster with the resource group name.
  • User-Defined Routes and Rules
    —You must manually configure user-defined routes and rules (see AKS User-Defined Routing and Azure Networking and VM-Series Firewall). In the diagram above, incoming traffic can be redirected, according to UDR rules, to the Firewall ILB for inspection. Azure user-defined routing (UDR) rules redirect outbound traffic exiting an AKS cluster to the Hub Firewall ILB. The solution assumes an allow-all default policy so that Kubernetes orchestration functions as-is; to apply policy, you can whitelist or blacklist outbound traffic.
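As a rough illustration of the manual VNet peering step above, the following Azure CLI sketch peers the AKS cluster VNet with the Hub Firewall VNet in both directions. The resource group, VNet, and peering names, and the subscription ID, are placeholders for your environment.

# Peer the AKS cluster VNet to the Hub Firewall VNet (all names are placeholders).
az network vnet peering create \
  --name aks-to-hub \
  --resource-group aks-spoke-rg \
  --vnet-name aks-cluster-vnet \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/hub-rg/providers/Microsoft.Network/virtualNetworks/hub-fw-vnet \
  --allow-vnet-access

# Create the reverse peering from the Hub Firewall VNet back to the AKS cluster VNet.
az network vnet peering create \
  --name hub-to-aks \
  --resource-group hub-rg \
  --vnet-name hub-fw-vnet \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/aks-spoke-rg/providers/Microsoft.Network/virtualNetworks/aks-cluster-vnet \
  --allow-vnet-access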

AKS User-Defined Routing

You must manually create user-defined routes and routing rules to steer inbound traffic to, and monitor outbound traffic from, an AKS cluster.
Inbound
In the above diagram, inbound traffic from the Application Gateway is driven to the back-end pool, and based on UDR rules, redirected to the Firewall ILB. For example, create a UDR pointing to the VNet subnet so that the traffic for Kubernetes services is directed to the Firewall ILB.
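A minimal Azure CLI sketch of such an inbound UDR follows. It assumes a route table attached to the Application Gateway subnet; the resource group, route table, and subnet names and the Firewall ILB IP address are placeholders, and the 10.240.0.0/24 prefix is the example cluster subnet from the diagram.

# Route traffic destined for the AKS cluster subnet to the Firewall ILB (placeholder values).
az network route-table create --name appgw-udr --resource-group inbound-rg
az network route-table route create \
  --resource-group inbound-rg \
  --route-table-name appgw-udr \
  --name aks-services-to-fw-ilb \
  --address-prefix 10.240.0.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-ilb-ip>

# Associate the route table with the Application Gateway subnet.
az network vnet subnet update \
  --resource-group inbound-rg \
  --vnet-name inbound-fw-vnet \
  --name appgw-subnet \
  --route-table appgw-udr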
Outbound
On the Hub firewall set, for each AKS cluster being protected, you must create static routes for the cluster subnet CIDR, with the next hop being the gateway address of the Hub VNet trust subnet.
All outbound traffic for an AKS cluster is directed to the Hub firewall set with a single UDR rule.
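The single outbound UDR can be expressed with the Azure CLI roughly as follows; it assumes a route table associated with the AKS cluster (node) subnet, and the resource group, route table, and rule names and the Hub Firewall ILB IP are placeholders.

# Send all traffic leaving the AKS cluster subnet to the Hub Firewall ILB (placeholder values).
az network route-table route create \
  --resource-group aks-spoke-rg \
  --route-table-name aks-cluster-udr \
  --name default-to-hub-fw-ilb \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <hub-firewall-ilb-ip>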

AKS Cluster Communication

The Panorama plugin for Azure can only communicate with the AKS master node for a given AKS cluster. For outbound AKS traffic, the next hop is the Hub Firewall ILB. Because outbound traffic is only monitored, you must allow all outbound traffic.
The following topics emphasize common practices that help you establish connectivity. Keep them in mind when you plan your networks and subnets.

Create AKS Cluster Authentication

When you connect the AKS cluster in the Azure plugin for Panorama, you must enter a secret authorization token. Create a .yaml file that defines a ClusterRoleBinding, then save the service account credential to a JSON file.
  1. Create a ClusterRole.
  2. Create a ClusterRoleBinding.
    1. Create a .yaml file for the ClusterRoleBinding. For example, create a text file named crb.yaml.
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: default-view
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: view
      subjects:
      - kind: ServiceAccount
        name: default
        namespace: default
    2. Use Azure Cloud Shell to apply the crb.yaml role binding.
      kubectl apply -f crb.yaml
    3. View the service account referenced in the role binding.
      kubectl get serviceaccounts
  3. Save the service account credential to a .json file.
    1. On your local machine, change to the directory in which you want to save the credential.
    2. Use kubectl commands to capture the name of the service account token secret.
      MY_SA_TOKEN=`kubectl get serviceaccounts default -o jsonpath='{.secrets[0].name}'`
    3. View the token name.
      $ echo $MY_SA_TOKEN
    4. Display the credential.
      kubectl get secret $MY_SA_TOKEN -o json
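The token value in the secret's JSON output is base64-encoded. The following sketch shows one way to extract and decode it before pasting it into the plugin; it assumes a Linux or Azure Cloud Shell environment with base64 available.

# Extract the base64-encoded token from the secret and decode it (assumes Linux/Cloud Shell).
kubectl get secret $MY_SA_TOKEN -o jsonpath='{.data.token}' | base64 --decode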

Create an Address Group to Identify VNet Subnet Traffic

To create some granularity for monitored outbound traffic, create an address group specifically for the AKS cluster VNet subnet (for example, 10.240.0.97/32 in the above diagram). You can then write rules that allow incoming or returning traffic rather than relying on an allow-all rule.
If you create an address group, be careful to maintain communication between the AKS master node and any worker nodes. See Add the Subnet Address Group to the Top-Level Policy.
If communication is interrupted, application traffic can be lost or your application deployment might have problems.
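As a rough sketch, assuming a Panorama device group named AKS-DG (the device group, object names, and address below are placeholders taken from the diagram example), the address object and a static address group might be created from the Panorama CLI in configure mode as follows; you can equally create them in the Panorama web interface.

# Define an address object for the AKS cluster VNet subnet and wrap it in a static address group.
set device-group AKS-DG address aks-cluster1-subnet ip-netmask 10.240.0.97/32
set device-group AKS-DG address-group aks-cluster1-ag static aks-cluster1-subnet
commit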

Add the Subnet Address Group to the Top-Level Policy

To maintain connectivity, the subnet address group must be part of the top-level policy in Panorama. You can configure the cluster address group manually, or bootstrap the cluster to configure it.
Add the address group to the top-level policy before you configure VNet peering or AKS User-Defined Routing.
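As a heavily simplified sketch only (the device group, rule, and address group names are placeholders, and this is not a complete or recommended rulebase), a Panorama pre-rule that references the cluster address group might look roughly like this:

# Permit traffic to and from the AKS cluster address group so master/worker communication is preserved.
set device-group AKS-DG pre-rulebase security rules allow-aks-cluster from any to any source aks-cluster1-ag destination aks-cluster1-ag application any service any action allow
commit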

Create Separate Address Groups for Traffic from Workloads and AKS

If an AKS cluster co-exists with VM workloads that run in separate VNets, and the VNet is peered with both the workload spoke (Inbound) and the Hub (Outbound), you must create address groups to distinguish the workload traffic from the AKS traffic. Add the address groups to your top-level policy as described in Add the Subnet Address Group to the Top-Level Policy. This prevents application disruption when workload and AKS cluster VNets are peered.

View Dynamic Address Groups with Kubernetes Labels

When monitoring an AKS cluster resource, the Azure plugin automatically generates the following IP address tags for AKS services.
aks.<aks cluster name>.<aks service name>
Tags are not generated for nodes, pods, or other resources.
If the AKS service has any labels, the tag is as follows (one per label):
aks.<aks cluster name>.svc.<label>.<value>
For example:
aks.prod-cluster.azure-voteback.svc.tier.stagingapp
If a labelSelector tag is defined for a cluster, the plugin generates the following IP address tag:
aks_<labelSelector>.<aks cluster name>.<aks service name>
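For instance, the label in the example tag above could be applied to the service with kubectl; the service and cluster names are taken from the example, and the label key and value are illustrative.

# Label the AKS service; the plugin then generates a tag of the form
# aks.<cluster>.<service>.svc.<label>.<value>, e.g. aks.prod-cluster.azure-voteback.svc.tier.stagingapp
kubectl label service azure-voteback tier=stagingapp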
