
Kubernetes

This procedure is optimized to get Prisma Cloud installed in your Kubernetes cluster quickly. There are many ways to install Prisma Cloud, but we recommend that you start with this procedure first. You can tweak the install procedure after you have validated that this install method works.
Prisma Cloud is installed with a utility called twistcli, which is bundled along with the rest of the Prisma Cloud software. The twistcli utility generates YAML configuration files for Console and Defender. You then create the required objects in your cluster with kubectl create. This two step approach gives you full control over the objects created. You can inspect, customize, and manage the YAML configuration files in source control before deploying Console and Defender.
Prisma Cloud Console is created as a Deployment, which ensures a single copy of Console is always up and available. Prisma Cloud Defenders are deployed as a DaemonSet, which guarantees an instance of Defender runs on each worker node in the cluster.
In order to improve the availability of the Console service, the orchestrator should be free to run Console on any healthy node. If a node were to go down, the orchestrator should be able to simply reschedule Console somewhere else. To enable this capability, Console’s default YAML configuration files:
  • Deploy a persistent volume (PV), where Console can save its state.
    No matter where Console runs, it must have access to its state. In order for PVs to work, every node in the cluster must have access to shared storage. Depending on your cloud provider, and whether Kubernetes is managed or unmanaged, setting up storage can range from easy to difficult. Google Cloud Kubernetes Engine (GKE), for example, offers it as an out-of-the box capability, so it requires zero configuration. If you build your cluster by hand, however, you might need to configure something like NFS.
  • Expose Console to the network using a load balancer.
    Console must always be accessible. It serves a web interface, and it communicates policy with all deployed Defenders. A load balancer ensures that Console is reachable no matter where it runs in the cluster.
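For reference, the Service that twistcli generates for Console is roughly like the following sketch. The service and namespace names and the ports match those used elsewhere in this guide; the selector and port names here are illustrative assumptions, and the actual generated manifest may differ.
apiVersion: v1
kind: Service
metadata:
  name: twistlock-console
  namespace: twistlock
spec:
  type: LoadBalancer
  selector:
    name: twistlock-console    # illustrative; must match the labels on the generated Console Deployment
  ports:
  - name: management-port-https    # Console web interface (HTTPS)
    port: 8083
  - name: communication-port       # Defender-to-Console communication
    port: 8084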

Cluster context

Prisma Cloud can segment your environment by cluster. For example, you might have three clusters: test, staging, and production. The cluster pivot in Prisma Cloud lets you inspect resources and administer security policy on a per-cluster basis.
Defenders in each DaemonSet are responsible for reporting which resources belong to which cluster. When deploying a Defender DaemonSet, Prisma Cloud tries to determine the cluster name through introspection. First, it tries to retrieve the cluster name from the cloud provider. As a fallback, it tries to retrieve the name from the corresponding kubeconfig file saved in the credentials store. Finally, you can override these mechanisms by manually specifying a cluster name when deploying your Defender DaemonSet.
Both the Prisma Cloud UI and twistcli tool accept an option for manually specifying a cluster name. Let Prisma Cloud automatically detect the name for provider-managed clusters. Manually specify names for self-managed clusters, such as those built with kops.
Radar lets you explore your environment cluster-by-cluster. You can also create stored filters (also known as collections) based on cluster names. Finally, you can scope policy by cluster. Vulnerability and compliance rules for container images and hosts can all be scoped by cluster name.
There are some things to consider when manually naming clusters:
  • If you specify the same name for two or more clusters, they’re treated as a single cluster.
  • For GCP, if you have clusters with the same name in different projects, they’re treated as a single cluster. Consider manually specifying a different name for each cluster.
  • Manually specifying names isn’t supported in Manage > Defenders > Manage > DaemonSet. This page lets you deploy and manage DaemonSets directly from the Prisma Cloud UI. For this deployment flow, cluster names are retrieved from the cloud provider or the supplied kubeconfig only.
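For twistcli-based deployments (covered later in this article), you can append the --cluster option to the Defender export command to set the name manually. In this example, my-staging-cluster is a placeholder name:
$ <PLATFORM>/twistcli defender export kubernetes \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address twistlock-console \
    --cluster my-staging-cluster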

Preflight checklist

To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.

General

  • You have a valid Prisma Cloud license key and access token.

Cluster

  • You have provisioned a Kubernetes cluster that meets the minimum system requirements and runs a supported Kubernetes version.
  • You have set up a Linux or macOS system as your cluster controller, and you can access the cluster with kubectl.
  • The nodes in your cluster can reach Prisma Cloud’s cloud registry (registry-auth.twistlock.com).
  • Your cluster can create PersistentVolumes and LoadBalancers from YAML configuration files.
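As a quick sanity check (optional, not part of the official procedure), you can confirm that your nodes are ready and that a StorageClass is available for dynamic PersistentVolume provisioning:
$ kubectl get nodes
$ kubectl get storageclass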

Runtimes

  • Prisma Cloud supports Docker Engine, CRI-O, and cri-containerd. For more information, see the system requirements.

Permissions

  • You can create and delete namespaces in your cluster.
  • You can run kubectl create commands.
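One way to verify these permissions up front (an optional check) is with kubectl auth can-i; the twistlock namespace is the one used by the generated YAML files:
$ kubectl auth can-i create namespaces
$ kubectl auth can-i create deployments -n twistlock
$ kubectl auth can-i create daemonsets -n twistlock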

Firewalls and ports

Validate that the following ports are open.
Prisma Cloud Console:
  • Incoming: 8083, 8084
  • Outgoing: 443, 53
Prisma Cloud Defenders:
  • Incoming: None
  • Outgoing: 8084
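To spot-check reachability (optional), you can use a generic tool such as netcat. Here yourconsole.example.com is the placeholder DNS name used later in this article:
$ nc -zv yourconsole.example.com 8083
$ nc -zv yourconsole.example.com 8084
$ nc -zv registry-auth.twistlock.com 443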

Install Prisma Cloud

Use twistcli to install the Prisma Cloud Console and Defenders. The twistcli utility is included with every release. After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will be running in your Kubernetes cluster.
If you’re installing Prisma Cloud on Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Azure Container Service with Kubernetes, a number of tweaks are required to the installation procedure. For more details, see the relevant sections in this article.

Download the Prisma Cloud software

Download the Prisma Cloud software to any system where you run kubectl to administer your cluster.
  1. Download the current recommended release.
  2. Unpack the release tarball.
    $ mkdir prisma_cloud
    $ tar xvzf prisma_cloud_compute_edition_<VERSION>.tar.gz -C prisma_cloud/

Install Console

Install Console, exposing the service using a load balancer.
If you’re using NFSv4 for persistent storage in your cluster, we recommend that you use the nolock, noatime and bg mount options for your PersistentVolume. After generating the Console YAML file, add the following mount options to your PersistentVolume definition.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: twistlock-console
  labels:
    app-volume: twistlock-console
  annotations:
    volume.beta.kubernetes.io/mount-options: "nolock,noatime,bg"
  1. On your cluster controller, navigate to the directory where you downloaded and extracted the Prisma Cloud release tarball.
  2. Generate a YAML configuration file for Console, where <PLATFORM> can be linux or osx.
    The following command saves twistlock_console.yaml to the current working directory. If needed, you can edit the generated YAML file to modify the default settings.
    $ <PLATFORM>/twistcli console export kubernetes --service-type LoadBalancer
  3. Deploy Console.
    $ kubectl create -f twistlock_console.yaml
  4. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock

Configure Console

Create your first admin user and enter your license key.
  1. Get the public endpoint address for Console.
    $ kubectl get service -o wide -n twistlock
  2. (Optional) Register a DNS entry for Console’s external IP address. The rest of this procedure assumes the DNS name for Console is yourconsole.example.com.
  3. (Optional) Set up a custom cert to secure Console access.
  4. Open a browser window, and navigate to Console. By default, Console is served on HTTPS on port 8083. For example, go to https://yourconsole.example.com:8083.
  5. Create your first admin user.
  6. Enter your Prisma Cloud license key.
  7. Defender communicates with Console using TLS. Update the list of identifiers in Console’s certificate that Defenders use to validate Console’s identity.
    1. Go to Manage > Defenders > Names.
    2. In the Subject Alternative Name table, click Add SAN, then enter Console’s IP address or domain name (e.g. yourconsole.example.com). Any Defenders deployed outside the cluster can use this name to connect to Console.
    3. In the Subject Alternative Name table, click Add SAN again, then enter twistlock-console. Any Defenders deployed in the same cluster as Console can use Console’s service name to connect. Note that the service name, twistlock-console, is not the same as the pod name, which is twistlock-console-XXXX.

Install Defender

Defender is installed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. Use twistcli to generate a YAML configuration file for the Defender DaemonSet, then deploy it using kubectl. You can use the same method to deploy Defender DaemonSets from both macOS and Linux kubectl-enabled cluster controllers.
The benefit of declarative object management, where you work directly with YAML configuration files, is that you get the full "source code" for the objects you create in your cluster. You can use a version control tool to manage and track modifications to config files so that you can delete and reliably recreate DaemonSets in your environment.
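As an illustration only, a minimal way to put the generated files (named as they are later in this procedure) under version control with Git might look like this:
$ git init prisma-cloud-config
$ cp twistlock_console.yaml defender.yaml prisma-cloud-config/
$ cd prisma-cloud-config
$ git add twistlock_console.yaml defender.yaml
$ git commit -m "Prisma Cloud Console and Defender DaemonSet configuration"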
If you don’t have kubectl access to your cluster, you can deploy Defender DaemonSets directly from the Console UI.
The following procedure shows you how to deploy Defender DaemonSets with twistcli using declarative object management. Alternatively, you can generate Defender DaemonSet install commands in the Console UI under Manage > Defenders > Deploy > DaemonSet. Install scripts work on Linux hosts only. For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy them with kubectl, as described in the following procedure.
If you’re using CRI-O or containerd, pass the --cri flag to twistcli (or enable the CRI option in the Console UI) when generating the Defender YAML or Helm chart.
You can run both Prisma Cloud Console and Defenders in the same Kubernetes namespace (e.g. twistlock). Be careful when running kubectl delete commands with the YAML file generated for Defender. This file contains the namespace declaration, so comment out the namespace section if you don’t want the namespace deleted.
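For reference, the namespace declaration near the top of the generated defender.yaml looks roughly like the following sketch (the exact generated file may differ). Comment out these lines before running kubectl delete -f defender.yaml if you want to keep the namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: twistlock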
For provider-managed clusters, Prisma Cloud automatically gets the cluster name from the cloud provider. To override the cloud provider’s cluster name, use the --cluster option. For self-managed clusters, such as those built with kops, you must manually specify a cluster name with the --cluster option.
  1. Determine the Console service’s external IP address.
    $ kubectl get service -o wide -n twistlock
  2. Generate a defender.yaml file, where:
    The following command connects to Console’s API (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli.
    The --cluster-address option specifies the address Defender uses to connect to Console. For Defenders deployed in the cluster where Console runs, specify Prisma Cloud Console’s service name, twistlock-console. For Defenders deployed outside the cluster, specify either Console’s external IP address, exposed by the LoadBalancer, or better, Console’s DNS name, which you must manually set up separately.
    The following command directs Defender to connect to Console using its service name. Use it for deploying a Defender DaemonSet inside a cluster.
    $ <PLATFORM>/twistcli defender export kubernetes \
        --address https://yourconsole.example.com:8083 \
        --user <ADMIN_USER> \
        --cluster-address twistlock-console
    • <PLATFORM> can be linux or osx.
    • <ADMIN_USER> is the name of the initial admin user you just created.
  3. (Optional) Schedule Defenders on your Kubernetes master nodes.
    If you want to also schedule Defenders on your Kubernetes master nodes, change the DaemonSet’s toleration spec. Master nodes are tainted by design. Only pods that specifically match the taint can run there. Tolerations allow pods to be deployed on nodes to which taints have been applied. To schedule Defenders on your master nodes, add the following tolerations to your DaemonSet spec.
    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
  4. Deploy the Defender DaemonSet.
    $ kubectl create -f defender.yaml
  5. Open a browser, navigate to Console, then go to Manage > Defenders > Manage to see a list of deployed Defenders.
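You can also verify the deployment from the command line (an optional check); each worker node in the cluster should be running one Defender pod:
$ kubectl get daemonset -n twistlock
$ kubectl get pods -n twistlock -o wide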

Install Prisma Cloud on a CRI (non-Docker) cluster

Kubernetes lets you set up a cluster with the container runtime of your choice. Prisma Cloud supports Docker Engine, CRI-O, and cri-containerd.

Deploying Console

Irrespective of your cluster’s underlying container runtime, you can install Console using the standard install procedure. Console doesn’t interface with other containers, so it doesn’t need to know which container runtime interface is being used.

Deploying Defender DaemonSets

When generating the YAML file to deploy the Defender DaemonSet, a toggle lets you select your runtime environment. Because Defenders need visibility into other containers, this option guides how they communicate with the runtime. When the toggle is off (the default), Prisma Cloud uses Docker Engine. When the toggle is on, Prisma Cloud generates the proper YAML for a CRI Kubernetes environment.
If you use containerd on GKE and you install Defender without the CRI switch, everything will appear to work properly, but you’ll have no image or container scan reports on the Monitor > Vulnerability and Monitor > Compliance pages, and you’ll have no runtime models in Monitor > Runtime. This happens because the Google Container-Optimized OS (GCOOS) nodes have Docker Engine installed, but Kubernetes doesn’t use it. Defender thinks everything is OK because all of the integrations succeed, but the underlying runtime is actually different.
If you’re deploying Defender DaemonSets with twistcli, use the --cri option to specify the runtime interface. By default (no flag), twistcli generates a configuration that uses Docker Engine. With the --cri flag, twistcli generates a configuration that uses CRI.
$ <PLATFORM>/twistcli defender export kubernetes \
    --cri \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address yourconsole.example.com
When generating YAML from Console or twistcli, the only difference is a single change in the YAML file, as seen below.
In this abbreviated version, DEFENDER_TYPE: daemonset directs Defender to use the Docker interface.
...
spec:
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      serviceAccountName: twistlock-service
      restartPolicy: Always
      containers:
      - name: twistlock-defender-19-03-321
        image: registry-auth.twistlock.com/tw_<token>/twistlock/defender:defender_19_03_321
        volumeMounts:
        - name: host-root
          mountPath: "/host"
        - name: data-folder
          mountPath: "/var/lib/twistlock"
...
        env:
        - name: WS_ADDRESS
          value: wss://yourconsole.example.com:8084
        - name: DEFENDER_TYPE
          value: daemonset
        - name: DEFENDER_LISTENER_TYPE
          value: "none"
...
In this abbreviated version, DEFENDER_TYPE: cri directs Defender to use the CRI.
...
spec:
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      serviceAccountName: twistlock-service
      restartPolicy: Always
      containers:
      - name: twistlock-defender-19-03-321
        image: registry-auth.twistlock.com/tw_<token>/twistlock/defender:defender_19_03_321
        volumeMounts:
        - name: host-root
          mountPath: "/host"
        - name: data-folder
          mountPath: "/var/lib/twistlock"
...
        env:
        - name: WS_ADDRESS
          value: wss://yourconsole.example.com:8084
        - name: DEFENDER_TYPE
          value: cri
        - name: DEFENDER_LISTENER_TYPE
          value: "none"
...

Install Prisma Cloud with Helm charts

You can use twistcli to create Helm charts for Prisma Cloud Console and Defender. Helm is a package manager for Kubernetes, and a chart is the moniker for a Helm package.
Follow the main install flow, except:
  1. Create a Console Helm chart.
    $ <PLATFORM>/twistcli console export kubernetes \
        --service-type LoadBalancer \
        --helm
  2. Install Console.
    $ helm install \
        --namespace twistlock \
        --name twistlock-console \
        ./twistlock-console-helm.tar.gz
  3. Create a Defender DaemonSet Helm chart.
    $ <PLATFORM>/twistcli defender export kubernetes \
        --address https://yourconsole.example.com:8083 \
        --helm \
        --user <ADMIN_USER> \
        --cluster-address twistlock-console
  4. Install Defender.
    $ helm install \
        --namespace twistlock \
        --name twistlock-defender-ds \
        ./twistlock-defender-helm.tar.gz

Alibaba Cloud Container Service for Kubernetes (ACK)

Alibaba Cloud Container Service for Kubernetes (ACK) is a managed Kubernetes service. Use the standard Kubernetes install procedure to deploy Prisma Cloud to Alibaba ACK, but specify an Alibaba Cloud-specific StorageClass when configuring the deployment.
This procedure shows you how to use Helm charts to install Prisma Cloud, but all other install methods are supported.
Prerequisites:
  • You have provisioned an ACK cluster.
  1. Go to Releases, and copy the link to the current recommended release.
  2. Download the release tarball to the system where you administer your cluster (where you run your kubectl commands).
    $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
  3. Unpack the Prisma Cloud release tarball.
    $ mkdir twistlock
    $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
  4. Create a Helm chart for Prisma Cloud Console.
    $ <PLATFORM>/twistcli console export kubernetes \
        --storage-class alicloud-disk-available \
        --service-type LoadBalancer \
        --helm
  5. Install Console.
    $ helm install \
        --namespace twistlock \
        --name twistlock-console \
        ./twistlock-console-helm.tar.gz
  6. Change the PersistentVolume’s reclaimPolicy.
    $ kubectl get pv
    $ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  7. Get the public endpoint address for Console.
    When the service is fully up, the LoadBalancer’s IP address is shown.
    $ kubectl get service -w -n twistlock
  8. Open a browser window, and navigate to Console.
    By default, Console is served on HTTPS on port 8083 of the LoadBalancer:
    https://<LOADBALANCER_IP_ADDR>:8083.
  9. Continue with the rest of the install here.

Amazon Elastic Kubernetes Service (EKS)

Amazon Elastic Kubernetes Service (EKS) lets you deploy Kubernetes clusters on demand. Use our standard Kubernetes install method to deploy Prisma Cloud to EKS. The only differences between the EKS and standard Kubernetes install methods are:
  • EKS with Kubernetes 1.10 — Create a storage class that utilizes Amazon Elastic Block Storage (EBS), and then specify the storageClassName when generating the Prisma Cloud Console deployment file.
  • EKS with Kubernetes 1.11+ — You only need to specify the storageClassName when generating the Prisma Cloud Console deployment file. The gp2 storage class already exists.
For more information about Amazon EKS storage classes, see the Amazon EKS User Guide.
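For EKS with Kubernetes 1.11+, you can optionally confirm that the gp2 storage class already exists before generating the deployment file:
$ kubectl get storageclass gp2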
Prerequisites:
  1. For EKS with Kubernetes 1.10, define a storage class named gp2 that uses the Amazon EBS gp2 volume type. Create a file named gp2-storage-class.yaml, and enter the following YAML.
    For EKS with Kubernetes 1.11+, skip this step.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gp2
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      fsType: ext4
  2. For EKS with Kubernetes 1.10, create the storage class.
    For EKS with Kubernetes 1.11+, skip this step.
    $ kubectl create -f gp2-storage-class.yaml
  3. Generate the Prisma Cloud Console deployment file (for all versions).
    $ twistcli console export kubernetes \
        --service-type LoadBalancer \
        --storage-class gp2
  4. Deploy Console.
    $ kubectl create -f twistlock_console.yaml
  5. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock
  6. Continue with the rest of the install here.

Azure Kubernetes Service (AKS)

Use the following procedure to install Prisma Cloud in an AKS cluster. This setup uses dynamic PersistentVolumeClaim provisioning using Premium Azure Disk. When creating your Kubernetes cluster, be sure to specify a VM size that supports premium storage.
Prisma Cloud doesn’t support Azure Files as a storage class for persistent volumes. Use Azure Disks instead.
Prerequisites:
  1. Use twistcli to generate the Prisma Cloud Console YAML configuration file, where <PLATFORM> can be linux or osx. Set the storage class to Premium Azure Disk.
    $ <PLATFORM>/twistcli console export kubernetes \
        --storage-class managed-premium \
        --service-type LoadBalancer
  2. Deploy the Prisma Cloud Console in the Azure Kubernetes Service cluster.
    $ kubectl create -f ./twistlock_console.yaml
  3. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock
  4. Change the PersistentVolume’s reclaimPolicy.
    $ kubectl get pv
    $ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  5. Continue with the rest of the install here.

Azure Container Service (ACS) with Kubernetes

Use the following procedure to install Prisma Cloud in an ACS Kubernetes cluster.
Microsoft will retire ACS as a standalone service on January 31, 2020.
Prerequisites:
  1. Create a persistent volume for your Kubernetes cluster. ACS uses Azure classic disks for the persistent volume. Within the same Resource Group as the ACS instance, create a classic storage group.
  2. On a Windows-based system, use Disk Manager to create an unformatted, 100GB Virtual Hard Disk (VHD).
  3. Use Azure Storage Explorer to upload the VHD to the classic storage group.
  4. Make sure the disk is 'released' from a 'lease'.
  5. On your Linux host with Azure CLI installed, attach to your ACS Kubernetes Master.
    $ az acs kubernetes get-credentials --resource-group pfoxacs --name pfox-acs
    Merged "pfoxacsmgmt" as current context in /Users/paulfox/.kube/config

    $ kubectl config use-context pfoxacsmgmt
  6. Confirm connectivity to the ACS Kubernetes cluster.
    $ kubectl get nodes
    NAME                    STATUS    ROLES     AGE       VERSION
    k8s-agent-e32fd1a6-0    Ready     agent     4m        v1.7.7
    k8s-agent-e32fd1a6-1    Ready     agent     5m        v1.7.7
    k8s-master-e32fd1a6-0   Ready     master    4m        v1.7.7
  7. Create a file named persistent-volume.yaml, and open it for editing.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: twistlock-console
      labels:
        app: twistlock-console
      annotations:
        volume.beta.kubernetes.io/storage-class: default
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      azureDisk:
        diskName: pfox-classic-tl-console.vhd
        diskURI: https://pfoxacs.blob.core.windows.net/twistlock-console/pfox-classic-tl-console.vhd
        cachingMode: ReadWrite
        fsType: ext4
        readOnly: false
    • diskName: the name of the persistent disk created in the previous steps.
    • labels: the label for the persistent volume.
    • diskURI: the Azure subscription path to the disk created in the previous steps.
  8. Create the persistent volume:
    $ kubectl create -f ./persistent-volume.yaml
  9. Generate the Console YAML configuration file:
    $ linux/twistcli console export kubernetes \
        --persistent-volume-labels app:twistlock-console \
        --storage-class default
  10. Deploy the Prisma Cloud Console in your cluster.
    $ kubectl create -f ./twistlock-console.yaml
  11. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock
  12. Continue with the rest of the install here.

DC/OS Kubernetes

Kubernetes on DC/OS uses nested virtualization, where Kubernetes nodes are actually privileged containers. This abstraction creates a mismatch between the host PID namespace Defender needs to see and the PID namespace it actually sees.
When deploying Prisma Cloud on DC/OS Kubernetes, use the normal install flow for Console.
For installing Defender, pass the --containerized-host flag to twistcli when generating the DaemonSet deployment file. If you’re generating the DaemonSet deployment file from the Console UI, set the Nodes runs inside containerized environment option to On.
$ <PLATFORM>/twistcli defender export kubernetes \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address twistlock-console \
    --containerized-host

Google Kubernetes Engine (GKE)

To install Prisma Cloud on Google Kubernetes Engine (GKE), use the standard Kubernetes install flow. Before getting started, create a ClusterRoleBinding, which grants the permissions required to create the Defender DaemonSet.
The Google Cloud Platform (GCP) service account that you use to create the Prisma Cloud Console resources, including the Deployment controller and PersistentVolumeClaim, must have at least the Kubernetes Engine Developer role to be successful.
The GCP service account that you use to create the Defender resources, including the DaemonSet, must have the Kubernetes cluster-admin role. If you try to create the Defender resources from a service account without this cluster-specific role, it will fail because the GCP Kubernetes Engine Developer role doesn’t grant the developer sufficient permissions to create a ClusterRole (one of the Defender resources). You’ll need to use an account with the GCP Kubernetes Engine Admin role to bind the Kubernetes cluster-admin role to your Kubernetes developer’s service account.
It’s best to create the ClusterRoleBinding before turning the cluster over to any user (typically DevOps) tasked with managing and maintaining Prisma Cloud.
Run the command in the following procedure for ANY service account that attempts to apply the Defender DaemonSet YAML or Helm chart, even if that service account already has elevated permissions with the GCP Kubernetes Engine Admin role. Otherwise, you’ll get an error.
The following procedure uses a service account named your-dev-user@your-org.iam.gserviceaccount.com that has the GCP Kubernetes Engine Developer role. You’ll also need access to a more privileged GCP account that has the Kubernetes Engine Admin role to create the ClusterRoleBinding in your cluster.
Prerequisites:
  • You have deployed a GKE cluster.
  • You have a Google Cloud Platform (GCP) service account with the Kubernetes Engine Developer role.
  • You have access to a GCP account with at least the Kubernetes Engine Admin role.
  1. With the service account that has the GCP Kubernetes Engine Admin role set as the active account, run:
    $ kubectl create clusterrolebinding your-dev-user-cluster-admin-binding \
        --clusterrole=cluster-admin \
        --user=your-dev-user@your-org.iam.gserviceaccount.com
  2. With the Kubernetes Engine Developer service account, continue with the standard Kubernetes install procedure for Prisma Cloud Console and Defenders, starting here.

IBM Kubernetes Service (IKS)

Use the following procedure to install Prisma Cloud in an IKS cluster. IKS uses dynamic PersistentVolumeClaim provisioning (ibmc-file-bronze is the default StorageClass) as well as automatic LoadBalancer configuration for the Prisma Cloud Console. You can optionally specify a StorageClass for premium file or block storage options. Use a retain storage class (not default) to ensure your storage is not destroyed even if you delete the PVC.
When installing Defenders, take note of the IKS Kubernetes version. IKS Kubernetes version 1.10 uses Docker, and 1.11+ uses containerd as the container runtime. If using containerd, pass the --cri flag to twistcli (or enable the CRI option in the Console UI) when generating the Defender YAML or Helm chart.
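For a containerd-based IKS cluster, the Defender export command from the CRI section earlier applies. For example (adjust the address and cluster address for your environment):
$ <PLATFORM>/twistcli defender export kubernetes \
    --cri \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address twistlock-console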
  1. Use twistcli to generate the Prisma Cloud Console YAML configuration file, where <PLATFORM> can be linux or osx. Optionally, set the storage class to a premium storage class. For IKS with Kubernetes 1.10, use our standard Kubernetes instructions. The following example uses a premium StorageClass with the retain option.
    $ <PLATFORM>/twistcli console export kubernetes \
        --storage-class ibmc-file-retain-silver \
        --service-type LoadBalancer
  2. Deploy the Prisma Cloud Console in the IBM Kubernetes Service cluster.
    $ kubectl create -f ./twistlock_console.yaml
  3. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock
  4. Continue with the rest of the install here.

Redeploying Defenders

If Prisma Cloud Console is redeployed, the client and server certificates change. Redeploy your Defenders so that they can connect to the new Console without certificate issues. First, generate a new DaemonSet YAML configuration file with twistcli:
$ <PLATFORM>/twistcli defender export kubernetes \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address twistlock-console
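Then replace the running DaemonSet with the newly generated configuration. A minimal example, assuming the generated file is named defender.yaml (remember the namespace caution noted earlier if Console shares the twistlock namespace):
$ kubectl delete -f defender.yaml
$ kubectl create -f defender.yaml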