Secure an EKS Cluster with VM-Series Firewall and AWS Plugin on Panorama

Learn how Panorama can secure inbound traffic to an EKS cluster.
To enable Panorama to monitor and secure Amazon Elastic Kubernetes Service (EKS) clusters, you must install the version of the Panorama plugin for AWS recommended in the Compatibility Matrix for public clouds and add your cluster service account credentials. You must also associate your cluster credentials with the Panorama device group and template stack to which the firewall set protecting the cluster belongs.

Set Up Your Panorama Configuration

Configure these Panorama elements before you use the templates to deploy firewalls.
  1. Add a template.
    In Panorama, go to
    Panorama
    Templates
    and
    Add
    a template.
  2. Add a stack.
    Select
    Panorama
    Templates
    and
    Add Stack
    . In the Templates pane,
    Add
    the template you created in Step 1.
  3. Add a device group.
    Select
    Panorama
    Device Groups
    and
    Add
    a device group. You don’t need to enter anything yet.
  4. Configure the DNS server to point to the AWS DNS server.
    1. In the
      Device
      context, from the Template menu, select the template stack you created in Step 2.
    2. Select
      Services
      and click the Edit gear.
    3. Under
      Services
      select
      Servers
      and add the IP address of the AWS DNS server—
      169.254.169.253
    4. Click
      OK
      .
  5. Configure untrust and trust interfaces, virtual routers, and zones to push to your managed firewalls.
    1. Select
      Network
      Interfaces
      , and from the
      Template
      menu, select the template you created in Step 1 (not the template stack).
    2. Select
      Ethernet
      Add Interface
      to configure the untrust interface.
      1. Slot
        —Select Slot 1.
      2. Interface Name
        —Select ethernet1/1.
      3. Interface Type
        —Select Layer3.
      4. To create the virtual router, select
        Config
        and under
        Assign Interface To
        Virtual Router
        choose
        New Virtual Router
        . To name the router, prefix your template stack name with
        VR-
        . For example:
        VR-<my-template-stack-name>
        . The plugin searches for this specific router name.
        Select
        ECMP
        and select
        Enable
        , then click
        OK
        to return to the
        Config
        tab.
      5. Go to
        Assign Interface
        Security Zone
        , choose
        New Zone
        , name the zone untrust, and click
        OK
        .
      6. Select
        IPV4
        DHCP Client
        . Leave
        Enable
        and
        Automatically create default route pointing to default gateway provided by server
        checked. This sets the default route to point to the untrust interface.
      7. Click
        OK
        .
    3. Configure the trust interface.
      1. Select
        Interfaces
        Ethernet
        Add Interface
        .
      2. Slot
        —Select Slot 1.
      3. Interface Name
        —Select ethernet1/2.
      4. Interface Type
        —Select Layer3.
      5. Select
        Config
        and under
        Assign Interface
        Virtual Router
        choose the router you just created (
        VR-<template-stack-name>
        ).
      6. Select
        Security Zone
        New Zone
        , name the zone trust, and click
        OK
        .
      7. Select
        IPV4
        , choose
        DHCP Client
        , and disable (uncheck)
        Automatically create default route pointing to default gateway provided by server
        .
      8. Click
        OK
        .
    4. (
      Optional
      ) To configure outbound monitoring, create a default allow-all-outbound policy from the Trust zone to the Untrust zone (a CLI sketch of this rule follows this procedure).
      Without the default allow-all policy, the firewall blocks Kubernetes orchestration traffic leaving the worker nodes.
      1. Select
        Policies
        and from the
        Device Group
        menu, select the Device Group you made in step 3.
      2. Select
        Security
        Pre Rules
        and
        Add
        a security policy rule with the following values.
      • General
        —Name the policy
        allow-all-outbound
        .
      • Source
        —Select
        Trust
        .
      • Destination
        —Select
        Untrust
        .
      • Service/URL Category
        —Select
        Any
        .
      • Click
        OK
        .
  6. Commit
    your changes.
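If you prefer the CLI, the following is a minimal sketch of the equivalent pre-rule in Panorama configure mode. It assumes the zone names trust and untrust configured above; replace <device-group-name> with the device group you created in Step 3.
  set device-group <device-group-name> pre-rulebase security rules allow-all-outbound from trust to untrust source any destination any application any service any action allow
  commit
Committing saves the change on Panorama; push it to the managed firewalls with Commit and Push once the firewalls are connected.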

Set Up Your AWS Bootstrap Bucket

  1. Create an Amazon S3 bucket and Bootstrap Package as described in Bootstrap the VM-Series Firewall on AWS.
  2. Download
    eks.zip
    from https://github.com/PaloAltoNetworks/aws-eks. In a local directory, extract the contents:
    \cfg
        init-cfg.txt
    \templates
        panw-aws.zip
  3. Upload
    panw-aws.zip
    to your S3 bucket.
    This file contains the AWS Lambda code for the templates.
  4. Edit the init-cfg.txt file to supply the values for vm-auth-key, panorama-server, panorama-server-2, tplname, and dgname. (A sample init-cfg.txt follows this procedure.)
    • vm-auth-key
      • If you have an auth-key, log on to your Panorama CLI and type:
        request bootstrap vm-auth-key show
      • If you don’t have an auth-key, to generate one from the CLI, type:
        request bootstrap vm-auth-key generate lifetime <1-8768>
    • panorama-server
      —The IP address of your Panorama server.
    • panorama-server-2—
      The IP address of the other server in your HA pair. If you have only one server you can leave this value undefined.
    • tplname
      —The name of the template stack you created.
    • dgname
      —The name of the device group you created.
    Save the file.
  5. In your Amazon S3 bucket, add files to your bootstrap package as follows—
    1. Upload the edited init-cfg.txt file to
      \config
      .
    2. Upload
      authcodes
      to
      \license
      .
      authcodes
      (no extension) is a text file you create that contains the VM auth code you received when you purchased your license. The authcodes file ensures that bootstrapped firewalls are licensed.
      You can leave the
      \content
      and
      \software
      directories empty.
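For reference, a minimal init-cfg.txt might look like the following. The placeholder values are assumptions that you must replace with your own; the sample assumes DHCP addressing on the management interface and uses the AWS DNS server you configured in the template stack.
  type=dhcp-client
  vm-auth-key=<vm-auth-key>
  panorama-server=<panorama-ip>
  panorama-server-2=<panorama-peer-ip>
  tplname=<template-stack-name>
  dgname=<device-group-name>
  dns-primary=169.254.169.253
  dhcp-send-hostname=yes
  dhcp-send-client-id=yes
  dhcp-accept-server-hostname=yes
  dhcp-accept-server-domain=yes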

Deploy the Firewall Template on AWS

This task uses the
firewall-new-vpc-v1.0.template
to create an AWS VPC, create networks and subnetworks, and configure a firewall stack (greenfield deployment). See Deploy the Firewall Template in an Existing VPC for a brownfield deployment.
  1. In AWS, ensure that you are working in a region that supports EKS. See the region table.
  2. In AWS go to
    AWS Services
    Management & Governance
    Cloud Formation
    Stacks
    Create stack
    .
    If you completed the steps in Set Up Your AWS Bootstrap Bucket, your template is ready.
  3. Select template.
    Select
    Upload a template file
    and upload
    firewall-new-vpc-v1.0.template
    from your local drive.
    Click
    Next
    .
  4. Specify the
    Stack Name
    .
  5. Configure the VPC.
    • VPCName
      panwVPC
      (the default).
    • Number of AZs
      —The number of availability zones in the region you chose for your S3 bucket (two, three, or four).
    • Select AZs
      —From the list, select the available AZs for your region. Select the same number of AZs that you specified in the previous step.
    • VPCCIDR
      —Supply the CIDR for the VPC.
    • NumberofFWs
      —Enter the number of firewalls (minimum 2, maximum 6).
    • MgmtSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall management subnets separated by commas. The number of CIDRs must match the number of AZs.
    • UntrustSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall untrust subnets separated by commas. The number of CIDRs must match the number of AZs.
    • TrustSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall trust subnets separated by commas. The number of CIDRs must match the number of AZs.
    • NATGWSubnetIpBlocks
      —List the IP CIDRs for the NAT gateway subnets separated by commas. The number of CIDRs must match the number of AZs.
    • Name of External Load Balancer
      —Name the external load balancer.
    • ELBType
      —Choose either application or network. For this sample, choose application.
  6. Configure the VM-Series firewall instance.
    • AMIID of PANFW image
      —Go to the AMI list, copy the AMI corresponding to your PAN-OS version for the BYOL license, and paste it here.
    • Key pair
      —Select an Amazon EC2 key pair.
    • SSH From
      —Enter your public IP address. This address is added to the security group to allow SSH access. To find it, type https://www.whatsmyip.org/ in a browser. If you are specifying a new VPC you must enter a valid CIDR range. For example, x.x.x.x/x.
  7. Provide S3 Bucket details—Supply the name of your bucket from Set Up Your AWS Bootstrap Bucket, which contains both firewall and Lambda code.
    • Bootstrap bucket for VM-Series firewalls
      —Your bucket name.
    • S3 Bucket Name for Lambda Code
      —Your bucket name.
    • Click
      Next
      .
    • Click
      Next
      . Skip configuring stack options.
    • Click
      Next
      .
  8. On the review page, scroll down and check
    I acknowledge that AWS CloudFormation might create IAM resources
    and click
    Create stack
    .
    Creation can take up to ten minutes.
  9. In
    CloudFormation
    Stacks
    confirm that the stack is active and the status is CREATE_COMPLETE. (A CLI sketch for checking stack status follows this procedure.)
  10. In Panorama, confirm the firewalls are up and connected to Panorama. This can take 20-30 minutes.
    1. Select
      Panorama
      Device Groups
      , and choose the device group you created. In the
      Devices/Virtual System
      column, verify that you have two IP addresses.
    2. Select
      Panorama
      Templates
      , select the template stack you created earlier, and verify that you also see the two IP addresses.
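If you have the AWS CLI installed, you can also check the stack status from the command line; a minimal sketch, assuming your stack name and region:
  # block until CloudFormation finishes creating the stack
  aws cloudformation wait stack-create-complete --stack-name <your-stack-name> --region <your-region>
  # print the current status; expect CREATE_COMPLETE
  aws cloudformation describe-stacks --stack-name <your-stack-name> --region <your-region> --query 'Stacks[0].StackStatus' --output text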

Deploy the Firewall Template in an Existing VPC

This task uses the
firewall-existing-vpc-v1.0.template
to deploy VM-Series firewalls in an existing VPC (brownfield deployment).
  1. In AWS, your VPC must be in a region that supports EKS. See the region table.
  2. In AWS go to
    AWS Services
    Management & Governance
    Cloud Formation
    Stacks
    Create stack
    .
    If you completed the steps in Set Up Your AWS Bootstrap Bucket, your template is ready.
  3. Select template.
    Select
    Upload a template file
    . Upload
    firewall-existing-vpc-v1.0.template
    from your local drive.
    Click
    Next
    .
  4. Specify the stack name.
  5. Configure the VPC.
    • VPCID
      —Your VPC ID.
    • VPCCIDR
      —Supply the CIDR block for the VPC.
    • InternetGatewayID
      —Enter the internet gateway ID (igw-*) attached to your VPC. (A CLI sketch for looking up the VPC ID and internet gateway ID follows this procedure.)
    • MgmtSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall management subnets separated by commas. The number of CIDRs must match the number of AZs.
    • UntrustSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall untrust subnets separated by commas. The number of CIDRs must match the number of AZs.
    • TrustSubnetIpBlocks
      —List the IP CIDRs for the VM-Series firewall trust subnets separated by commas. The number of CIDRs must match the number of AZs.
    • NATGWSubnetIpBlocks
      —List the IP CIDRs for the NAT gateway subnets separated by commas. The number of CIDRs must match the number of AZs.
    • Number of AZs
      —The number of availability zones in the region you chose for your S3 bucket (two, three, or four).
    • Select AZs
      —From the list, select the available AZs for your region. Select the same number of AZs that you specified in the previous step.
    • ELBType
      —Choose either application or network. For this sample, choose application.
  6. Configure the VM-Series firewall instance.
    • AMIID of PANFW image
      —Go to the AMI list, copy the AMI corresponding to your PAN-OS version for the BYOL license, and paste it here.
    • Key pair
      —Select an Amazon EC2 key pair.
    • SSH From
      —Enter your public IP address. This address is added to the security group to allow SSH access. To find it, type https://www.whatsmyip.org/ in a browser. If you are specifying a new VPC you must enter a valid CIDR range. For example, x.x.x.x/x.
    • NumberofFWs
      —Enter the number of firewalls (minimum 2, maximum 6).
  7. Provide S3 Bucket details—Supply the name of your bucket from Set Up Your AWS Bootstrap Bucket, which contains both firewall and Lambda code.
    • Bootstrap bucket for VM-Series firewalls
      —Your bucket name.
    • S3 Bucket Name for Lambda Code
      —Your bucket name.
    • Click
      Next
      .
    • Click
      Next
      . Skip configuring stack options.
    • Click
      Next
      .
  8. Configure other parameters.
    • Name of External Load Balancer
      —Name the external load balancer.
  9. On the review page, scroll down and check
    I acknowledge that AWS CloudFormation might create IAM resources
    and click
    Create stack
    .
    Creation can take up to ten minutes.
  10. In
    CloudFormation
    Stacks
    confirm that the stack is active and the status is CREATE_COMPLETE.
  11. In Panorama, confirm the firewalls are up and connected to Panorama. This can take 20-30 minutes.
    1. Select
      Panorama
      Device Groups
      , and choose the device group you created. In the
      Devices/Virtual System
      column, verify that you have two IP addresses.
    2. Select
      Panorama
      Templates
      , select the template stack you created earlier, and verify that you also see the two IP addresses.
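To look up the VPC ID and internet gateway ID that this template asks for, you can use the AWS CLI; a minimal sketch, assuming your region and default credentials:
  # list VPCs with their IDs, CIDR blocks, and Name tags
  aws ec2 describe-vpcs --region <your-region> --query 'Vpcs[].{Id:VpcId,Cidr:CidrBlock,Name:Tags[?Key==`Name`]|[0].Value}' --output table
  # find the internet gateway attached to your VPC
  aws ec2 describe-internet-gateways --region <your-region> --filters Name=attachment.vpc-id,Values=<your-vpc-id> --query 'InternetGateways[0].InternetGatewayId' --output text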

Deploy the Cluster Stack

This task uses
eks-cluster-v1.0.template
to set up the cluster subnets and the control plane.
  1. Deploy the cluster stack.
    Your template is ready.
    1. In AWS go to
      AWS Services
      Management & Governance
      Cloud Formation
      Stacks
      Create stack
      .
    2. In
      Specify a template
      , select
      Upload a template file
      and upload
      eks-cluster-v1.0.template
      from your local drive.
    3. Click
      Next
      .
    4. Name the stack.
  2. Configure the cluster.
    1. Fill out the template as follows:
      • Cluster Name
        —Name your EKS cluster.
      • Kubernetes Version
        —Enter the Kubernetes version for your EKS cluster.
      • VPCID
        —Select the VPC you just deployed with the firewall template.
      • Number of Cluster Subnets
        —Choose at most one subnet per availability zone, based on your choice in the next step.
      • AZs for cluster subnets
        —Two, three, or four, depending on the region.
      • Private Subnet IP Blocks
        —Enter a CIDR for each cluster subnet. For example, 192.168.110.0/24, 192.168.111.0/24.
      • Internet Gateway ID of VPC
        —Enter the internet gateway ID for the VPC you created with the firewall template.
        To find the ID in AWS, go to
        Services
        VPC
        Internet Gateways
        , and copy the ID (igw-*) corresponding to the firewall stack you created when you deployed the firewall templates.
    2. Click
      Next
      , and
      Next
      again.
  3. On the review page, scroll down and check
    I acknowledge that AWS CloudFormation might create IAM resources
    and click
    Create
    .
  4. In
    CloudFormation
    Stacks
    confirm that the stack is active and the status is CREATE_COMPLETE.
  5. In the cluster you just deployed, note the API server endpoint and your subnets; you need them in the procedures that follow. (A CLI sketch for retrieving them follows this procedure.)
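If you have the AWS CLI installed, a minimal sketch for retrieving these values, assuming your cluster name and region:
  # API server endpoint
  aws eks describe-cluster --name <cluster-name> --region <your-region> --query 'cluster.endpoint' --output text
  # certificate authority data (used later in your kubeconfig)
  aws eks describe-cluster --name <cluster-name> --region <your-region> --query 'cluster.certificateAuthority.data' --output text
  # subnets associated with the cluster
  aws eks describe-cluster --name <cluster-name> --region <your-region> --query 'cluster.resourcesVpcConfig.subnetIds' --output text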

Set Up Kubectl and Configure Your Cluster

Set up a Kubectl config file so you can use Kubectl commands locally to configure your cluster (when you do not have the AWS CLI installed).
If you prefer the AWS CLI, follow the instructions in Configuring the AWS CLI.
  1. Set up your Kubectl configuration.
    1. Go to Create a kubeconfig for Amazon EKS and follow the directions in “To create your kubeconfig file manually.”
      • Copy the sample
        .config
        file from “To use the AWS IAM Authenticator for Kubernetes.”
      • On the command line, open a text file.
        vi ~/.kube/config-<YourClusterName>
    2. Paste in the sample configuration.
    3. Edit the sample config file.
      • server
        —In the AWS console, view your EKS cluster, copy the API server endpoint (https://...) and paste it into your config file.
      • certificate-authority-data
        —View your EKS cluster, copy the certificate authority, and paste it into your config file.
      • args
        —Replace the cluster name variable with your cluster name.
      • Save the file.
    4. Set environment variables for AWS authentication.
      export AWS_ACCESS_KEY_ID=<your-access-key>
      export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
    5. Apply the configuration.
      export KUBECONFIG=$KUBECONFIG:~/.kube/config-<clusterName>
    6. Print the current service.
      kubectl get svc
  2. Create credentials and assign permissions.
    1. Create a service account for a specific EKS cluster user.
      kubectl create serviceaccount <service-account-name>
    2. Create a yaml file to define the cluster role.
      In the following sample, the file is
      eks_cluster_role.yaml
      and the role name is eks-cluster-role.
      vi eks_cluster_role.yaml
      kubectl create -f eks_cluster_role.yaml
      Here is a sample
      eks_cluster_role.yaml
      file.
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRole
      metadata:
        name: eks-cluster-role
      rules:
      - apiGroups:
        - ""
        resources:
        - services
        verbs:
        - list
    3. Associate (bind) the service account to the cluster role you just created.
      vi eks_cluster_role_binding.yaml
      kubectl create -f eks_cluster_role_binding.yaml
      Here is a sample
      eks_cluster_role_binding.yaml
      file for the cluster role.
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: eks-cluster-role-binding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: eks-cluster-role
      subjects:
      - kind: ServiceAccount
        name: <service-account-name>
        namespace: default
      In the above sample, the <service-account-name> is the name you created in Step 2.a.
  3. Export the service account credentials for your <service-account-name>.
    1. Get the name of the secret for your service account:
      MY_TOKEN=$(kubectl get serviceaccounts <service-account-name> -o jsonpath='{.secrets[0].name}')
    2. Get your secret token:
      kubectl get secret $MY_TOKEN -o json > <file_name.json>
      In the above,
      <file_name.json>
      is the name of the credential file that you later upload to Panorama. (A sketch for sanity-checking the exported token follows this procedure.)
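To sanity-check the exported credential file before you upload it to Panorama, you can decode the bearer token it contains; a minimal sketch, assuming jq is installed and <file_name.json> is the file you exported above:
  # the service account token is stored base64-encoded in the secret
  jq -r '.data.token' <file_name.json> | base64 --decode    # use base64 -D on macOS
If a long JWT string prints, the credential file is usable.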

Add an EKS Cluster

Add your configuration to the Panorama plugin for AWS. The configuration requires the access information from your account, which is typically governed by an IAM role. For each cluster you can either use an IAM role you created or assume a role.
To perform this task you must know your AWS access key, which consists of the access key ID and the secret access key. If you do not know your access key, you can create a new access key and save the .csv file in a secure place.
  1. Select
    Panorama
    AWS
    Setup
    IAM Role
    .
    Supply values for Name, Access Key ID, Secret Access Key, and Confirm Secret Access Key.
  2. Select
    Panorama
    AWS
    Setup
    EKS Service Account
    and click
    Add
    .
    Enter your service account information.
    • Name
      —Your choice. The plugin does not use the name.
    • Description
      — Your choice.
    • API server address
      —In EKS, this is the API server endpoint for your cluster.
    • EKS Credential
      —Upload the JSON file you exported in step 3 of Set Up Kubectl and Configure Your Cluster.
  3. Select
    Panorama
    EKS Clusters
    and
    add
    a cluster.
    Enter the following values.
    • Cluster Name
      —The exact name of your EKS cluster.
    • (Optional)
      Description
      —Your choice.
    • AWS FW Stack Name
      —Name of CloudFormation stack in which you deployed your firewalls.
    • Region
      —The region for your VPC and S3 bucket.
    • EKS Service Account
      —Select the account you created in the previous step.
    • IAM Role
      —Choose the EKS role or the role you want to assume.
    • Assume Role ARN
      —Leave this field blank if you chose the EKS role. If you choose to assume a role, view the role, copy the Role ARN, and paste it here.
    • Device Group
      —Choose the device group you created earlier.
    • Template Stack
      —Choose the template stack you created earlier.
    • Enable
      —Check this box to enable monitoring for the EKS cluster.
    Commit your changes.
  4. After you add the EKS cluster definition, verify plugin actions.
    When you add a new cluster, the plugin creates a NAT rule for every cluster subnet that you created, and configures a static route for each firewall to tell it how to access each subnet and the cluster.
    In this case there are two outbound NAT rules in the device group.
    Select
    Policies
    Device Group
    <your Device Group>
    NAT
    and view the two outbound NAT rules and the static route created for each firewall.
    It may take up to two minutes for the result to populate.

Configure Inbound Protection and Outbound Monitoring

With the EKS cluster deployed and configured, you can now configure outbound monitoring, deploy a node stack with
eks-node-v1.0.template
, and associate nodes with the cluster you configured.

Configure Outbound Monitoring

To configure outbound monitoring, add a public IP address to eth0 on the outbound firewall, and route the cluster subnets to the trust interface (eth2).
  1. Add a public IP address to eth0 on the outbound firewall.
    1. Go to
      AWS
      EC2
      Instances
      and search for firewalls you deployed with the templates. If you used the template naming conventions, search for your VPC name.
    2. Select one firewall to be the outbound firewall and attach a tag.
      • Select the
        Tags
        group and click
        Add/Edit
        Tags.
      • (Optional)
        Edit the name to append
        -outbound
        . This is a convenience; the plugin does not require it.
    3. Select ENI eth0 and attach a public IP address.
      1. Copy the ENI ID and choose
        Network & Security
        Elastic IPs
        .
      2. Select an available IP address and choose
        Actions
        Associate Address.
        • Select
          Network Interface
          and paste the ENI that you copied.
        • From the drop-down menu, select the public IP address.
        • Click
          Associate
          and choose the network interface.
        • Return to
          Instances
          . The outbound instance has an IPv4 public IP address. View eth0.
  2. Change the default route of cluster subnets to point to the trust interface, in this case eth2.
    1. Copy the ENI from the outbound firewall you tagged in step 1, go to
      Amazon Container Services
      Amazon EKS
      Clusters
      , and choose the cluster the template created.
      Under
      Networking
      , select one of the subnets to open
      Virtual Private Cloud
      Subnets
      . (There are two subnets and they both share the same routing table.)
    2. Click the
      Route Table
      tab, and click the route table link to modify the route table.
    3. Click
      Routes
      to see the default route 0.0.0.0/0 points to the IGW, causing all outbound traffic to go to the internet.
    4. Click Edit routes and change the target from the IGW to the ENI of the trust interface of your outbound firewall (see the previous step), then save the routes. (An AWS CLI sketch of these changes follows this procedure.)
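If you prefer the AWS CLI, a minimal sketch of the same two changes; the allocation ID, ENI IDs, and route table ID are placeholders you must replace with your own values:
  # associate an Elastic IP with eth0 of the outbound firewall
  aws ec2 associate-address --allocation-id <eipalloc-id> --network-interface-id <eth0-eni-id>
  # point the cluster subnets' default route at the ENI of the trust interface (eth2)
  aws ec2 replace-route --route-table-id <cluster-route-table-id> --destination-cidr-block 0.0.0.0/0 --network-interface-id <trust-eni-id>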

Deploy a Node Stack

  1. Go to
    CloudFormation > Stacks
    . Click
    Create Stack
    .
  2. Select
    Choose a template > Upload a template to Amazon S3
    .
    1. Choose
      eks-node-v1.0.template
      and click
      Open
      , then
      Next
      .
    2. Specify the stack details.
      • Stack Name
          —Enter the exact name of the cluster stack you deployed.
      • Enter cluster information—
        • Cluster Name
          —Must match the cluster name exactly or it will not associate correctly.
        • Cluster Stack Name
          —Your choice. 
        • VPC ID
          —Select your VPC.
      • Configure the node.
        • Node Group Name
          —Your choice.
        • SSH Key
          —Select an SSH key (so that you can log into the nodes).
        • Node Image ID
          —Specify the Amazon Machine Image (AMI) that the node uses when it boots up and runs a bootstrap script to associate with the cluster. (A CLI sketch for looking up the AMI ID follows this procedure.)
          Go to https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html, find NodeImageId, and locate the AMI table.
          Choose a Kubernetes version.
          Select
          View AMI ID
          for your region.
          Under
          Value
          , copy the AMI ID, then paste it into the
          Node Image ID
          field.
        • Node Instance Type
          —t2.medium.
        • Max Number of Nodes
          —Enter the maximum number of nodes after scale out events.
        • Min Number of Nodes
          —Enter the minimum number of nodes after scale in events (minimum of one).
        • Node Subnets
          Return to
          CloudFormation
          and select the stack where you deployed your cluster. On the
          Outputs
          tab, choose the IDs for all subnets and copy them, one at a time, into the
          Node Subnets
          field.
        • Click
          Next
          .
        • Click
          Create
          .
          On the Stacks page you see CREATE_IN_PROGRESS in yellow, then CREATE_COMPLETE in green.
      • When your stack has finished creating, select it in the console and choose the
        Outputs
        tab.
        Record the
        NodeInstanceRole
        for the node group that was created. You need this when you configure your Amazon EKS worker nodes.
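Alternatively, you can retrieve the current EKS-optimized AMI ID with the AWS CLI; a minimal sketch, assuming the Kubernetes version and region you are using:
  # returns the AMI ID of the EKS-optimized Amazon Linux 2 image for the given Kubernetes version
  aws ssm get-parameter --region <your-region> --name /aws/service/eks/optimized-ami/<k8s-version>/amazon-linux-2/recommended/image_id --query 'Parameter.Value' --output text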

Associate the Nodes with the Cluster

After the nodes come up, apply a configuration map that tells the cluster the nodes are active and they must be associated with the cluster.
  1. Return to https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html and find “enable worker nodes to join your cluster”.
  2. Get the sample YAML file from AWS.
    curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
    View the file with a text editor:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <ARN of instance role (not instance profile)>
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
    The rolearn value looks like arn:aws:iam::############:role/<nodeName>-NodeInstanceRole-CEMFVNZGL5XL; you paste in your own ARN in the next steps.
  3. Return to
    CloudFormation > Stacks
    and choose the node you deployed. On the
    Outputs
    tab, the ARN Value is in the center column.
    Copy the ARN Value.
  4. Return to
    aws-auth-cm.yaml
    , paste in the ARN, save, and close.
  5. Apply
    aws-auth-cm.yaml
    using Kubectl commands.
    kubectl apply -f aws-auth-cm.yaml
    You see a confirmation that the configmap is created:
    configmap/aws-auth created
  6. Get the nodes, and view the progress as the node comes up.
    kubectl get nodes --watch
    As the node starts to come up the STATUS is NotReady. After it switches to Ready, you can deploy a service to this node.

Use the Guestbook Application to Verify the Deployment

This task is optional.
In this task you adapt and deploy the Kubernetes tutorial Create a Guestbook with Redis and PHP. The tutorial has five objectives, but you only need the first four.
The following workflow highlights exceptions or alternatives for your AWS deployment.
  1. Before you begin.
    Follow the Create a Guestbook with Redis and PHP tutorial to configure your environment and download the configuration files.
  2. Follow the instructions in Set up a Redis master and Set up Redis workers.
  3. Use a text editor to modify
    frontend-service.yaml
    as follows:
    • Add annotations.
      • service.beta.kubernetes.io/aws-load-balancer-type
        must be:
        nlb
        .
        ALB is not supported for the ILB.
      • service.beta.kubernetes.io/aws-load-balancer-internal
        must be:
        0.0.0.0/0
    • The spec type must be:
      LoadBalancer
    • Add the label
      panw-tg-port-<portname>
      and specify a port name and value—for example,
      panw-tg-port-myport1:102
      . When traffic hits port 102, your firewall applies a NAT rule to forward the traffic to this service. (A sketch of the modified manifest follows this procedure.)
  4. Deploy the service.
    kubectl create -f frontend-service.yaml
    You see the following message when the service is created:
    service/frontend created
  5. View the FQDN for all services.
    kubectl get svc
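For reference, here is a hedged sketch of the modified frontend-service.yaml. It assumes the selector and port from the original guestbook tutorial and reuses the example label name and value from the step above.
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
    labels:
      app: guestbook
      tier: frontend
      panw-tg-port-myport1: "102"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
  spec:
    type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: guestbook
      tier: frontend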

View the Cluster in Panorama

  1. Return to Panorama and select
    Panorama
    EKS Clusters
    .
  2. Select the cluster you just deployed and in the
    Action
    column, select
    Show Port Mapping
    .
    For the frontend service, the protected column should show
    True
    .
  3. Under
    Policies
    look at the NAT rule. Choose your device group and select
    NAT
    Pre Rules
    . The rule is
    frontend-82-inbound
    .
    To test that you can reach the service through the firewall, use:
    curl http://<Firewall-untrust-IP-URL>:82
    If the HTML prints, you are successful.
  4. Log in to the firewall CLI and type:
    show session all
    Look for "web-browsing" in the application field.

Configure the ELB

This task demonstrates how to send traffic to your ELB and then forward it to the firewalls and services deployed in the cluster.
When you configured the firewall template in Deploy the Firewall Template on AWS Step 5, you chose
application
or
network
for the type.
There are some small differences in how you configure each load balancer type.
  • ALB
    —An ALB uses the HTTP or HTTPS protocol and determines the backend destination based on the FQDN. An ALB always has the same listener.
  • NLB
    —An NLB uses the TCP protocol (although there are other protocols for AWS NLBs, the plugin only supports TCP). The NLB determines the backend destination based on the port number, so you can change the listener.
  1. Create a target group for every service that you are securing with managed firewalls. Every service for which you create a NAT rule must have its own target group. (An AWS CLI sketch for creating a target group follows this procedure.)
    1. Create a target group.
      Select
      EC2
      Load Balancing
      Target Groups
      Create target group
      .
      Fill out the form as follows:
      • Target group name
        —Enter a name. In this sample, the name is frontend-demo-service.
      • Target type
        —Instance.
      • Protocol
        —Choose the protocol for the ELB type you specified in Deploy the Firewall Template on AWS, step 5:
        • ALB
          —HTTP or HTTPS.
        • NLB
          —TCP.
      • Port
        —Enter the port number on the firewall that will receive traffic when this target group is applied.
      • VPC
        —Select the VPC you created.
    2. Click
      Create
      .
  2. Edit the firewall auto scaling group.
    Select
    EC2
    Auto Scaling Group
    .
    • Select the auto scaling group you deployed previously and select
      Actions > Edit
      .
    • Under
      Target Groups
      , choose the target group you created in the previous step.
    • Click
      Save
      . Wait a minute before continuing.
  3. Verify the targets are registered.
    • Return to
      Load Balancing
      Target Groups
      .
    • Select your service, and on the
      Targets
      tab below, verify the targets are registered.
  4. Verify load balancing.
    • Go to
      EC2
      Load Balancing
      Load Balancers
      .
    • Choose your load balancer (check your Cloud Formation template for the name you supplied), select
      Listeners
      , go to your listener, and in the
      Rules
      column, choose
      View/edit rules
      .
      If there are no rules to match the traffic, traffic is forwarded to the default rule.
    • Create or edit a rule to forward traffic to the target group you created in step 1.a (frontend-demo-service).
      You can edit the default rule, or add your own rule. Choose one of the following:
      • Edit the default rule
        —Click the pencil to edit the default rule.
        Forward to
        the target group you created in 1.a (frontend-demo-service) and click
        Update
        .
        If traffic hitting the ELB on the port you specified in 1.a does not meet any rules, it forwards traffic to frontend-demo-service, which forwards traffic to port 82 on the firewall. From there, it should go to the service.
      • Add a new rule
        —Click
        +
        to add a rule and click
        Insert Rule
        . Add a condition and an action (
        Forward to
        ).
    • View the load balancer description to get the DNS name for the ELB.
      Issue a curl command to ping the DNS name.
      curl http://######-1219937001.us-west-2.elb.amazonaws.com
      You receive a response from the Guestbook demo application, meaning the traffic entered successfully.
  5. Log in to the firewall CLI to confirm traffic is directed to the correct port.
    show session all
    View web-browsing traffic originating from the untrust network and directed to port 80 on the firewall.
    You can also go to
    Panorama
    Monitor
    and switch to the device context to view traffic.
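If you prefer the AWS CLI, a minimal sketch for creating the target group and attaching it to the firewall auto scaling group; the names, port, protocol, VPC ID, and ARN are placeholders you must replace:
  # create a target group for the service (TCP shown for an NLB; use HTTP or HTTPS for an ALB)
  aws elbv2 create-target-group --name frontend-demo-service --protocol TCP --port 82 --vpc-id <your-vpc-id> --target-type instance
  # attach the target group to the firewall auto scaling group
  aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name <your-firewall-asg-name> --target-group-arns <target-group-arn>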

Test the Outbound Workflow

This
optional
task demonstrates how you can test your outbound workflow.
  1. To configure outbound traffic, change the cluster subnet default route to point to the trust interface on one of the firewalls in the firewall set. On that same firewall, add the public IP address to the untrust interface.
  2. Log in to the outbound firewall, and from the CLI,
    show session all
    .
    You should see SSL traffic originating from the cluster subnets.
    View the node IP address, and notice that it sends outbound traffic to communicate with the master node.
  3. Deploy a pod that you can log in to.
    1. Deploy a pod. (A minimal shell-demo.yaml pod spec follows this procedure.)
      kubectl create -f shell-demo.yaml
    2. Log in to the demo.
      kubectl exec -it shell-demo -- /bin/bash
      You are logged in.
  4. Use apt-get to test the session.
    1. From the OS, type:
      apt-get update
    2. In the firewall CLI, type:
      show session all
    In the firewall output you can see that the apt-get update traffic goes through the firewall and the apt-get requests are registered as sessions.
  5. You can also curl something from the internet to demonstrate traffic is going in and out. For example:
    1. From the OS, type:
      apt-get install curl
      curl an FQDN using the proper format for your ELB.
      • ALB
        curl <ELB-dns>
      • NLB
        curl <ELB-dns>:80
    2. From the firewall, type:
      show session all
    You see a request originating from your node IP address.
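The templates do not include shell-demo.yaml; a minimal pod spec that gives you a container to log in to, assuming the public nginx image, looks like this:
  apiVersion: v1
  kind: Pod
  metadata:
    name: shell-demo
  spec:
    containers:
    - name: nginx
      image: nginx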
