End-of-Life (EoL)


This procedure is optimized to get Prisma Cloud installed in your Kubernetes cluster quickly. There are many ways to install Prisma Cloud, but we recommend that you start with this procedure first. You can tweak the install procedure after you have validated that this install method works.
Prisma Cloud is installed with a utility called twistcli, which is bundled along with the rest of the Prisma Cloud software. The twistcli utility generates YAML configuration files for Console and Defender. You then create the required objects in your cluster with kubectl create. This two-step approach gives you full control over the objects created. You can inspect, customize, and manage the YAML configuration files in source control before deploying Console and Defender.
Prisma Cloud Console is created as a Deployment, which ensures a single copy of Console is always up and available. Prisma Cloud Defenders are deployed as a DaemonSet, which guarantees an instance of Defender runs on each worker node in the cluster.
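The two object types can be pictured with abbreviated manifests. The sketch below is illustrative only; the real, complete definitions are generated by twistcli, and the object names shown are assumptions.

```yaml
# Illustrative sketch only; twistcli generates the actual manifests.
apiVersion: apps/v1
kind: Deployment            # one Console replica, rescheduled if its node fails
metadata:
  name: twistlock-console
spec:
  replicas: 1
  ...
---
apiVersion: apps/v1
kind: DaemonSet             # one Defender pod per worker node
metadata:
  name: twistlock-defender-ds
  ...
```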
In order to improve the availability of the Console service, the orchestrator should be free to run Console on any healthy node. If a node were to go down, the orchestrator should be able to simply reschedule Console somewhere else. To enable this capability, Console’s default YAML configuration files:
  • Deploy a persistent volume (PV), where Console can save its state.
    No matter where Console runs, it must have access to its state. In order for PVs to work, every node in the cluster must have access to shared storage. Depending on your cloud provider, and whether Kubernetes is managed or unmanaged, setting up storage can range from easy to difficult. Google Cloud Kubernetes Engine (GKE), for example, offers it as an out-of-the-box capability, so it requires zero configuration. If you build your cluster by hand, however, you might need to configure something like NFS.
  • Expose Console to the network using a load balancer.
    Console must always be accessible. It serves a web interface, and it communicates policy with all deployed Defenders. A load balancer ensures that Console is reachable no matter where it runs in the cluster.
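The two bullets above correspond to two kinds of objects in the generated Console YAML. The following is a hedged sketch; names, ports, and the storage size are illustrative, and the generated file may differ.

```yaml
# Illustrative only; twistcli generates the real definitions.
apiVersion: v1
kind: PersistentVolumeClaim   # Console state survives rescheduling
metadata:
  name: twistlock-console
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: twistlock-console
spec:
  type: LoadBalancer          # reachable no matter where Console runs
  ports:
  - name: management-port-https
    port: 8083                # HTTPS web interface
  - name: communication-port
    port: 8084                # Defender-to-Console channel
```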

Cluster context

Prisma Cloud can segment your environment by cluster. For example, you might have three clusters: test, staging, and production. The cluster pivot in Prisma Cloud lets you inspect resources and administer security policy on a per-cluster basis.
Defenders in each DaemonSet are responsible for reporting which resources belong to which cluster. When deploying a Defender DaemonSet, Prisma Cloud tries to determine the cluster name through introspection. First, it tries to retrieve the cluster name from the cloud provider. As a fallback, it tries to retrieve the name from the corresponding kubeconfig file saved in the credentials store. Finally, you can override these mechanisms by manually specifying a cluster name when deploying your Defender DaemonSet.
Both the Prisma Cloud UI and twistcli tool accept an option for manually specifying a cluster name. Let Prisma Cloud automatically detect the name for provider-managed clusters. Manually specify names for self-managed clusters, such as those built with kops.
Radar lets you explore your environment cluster-by-cluster. You can also create stored filters (also known as collections) based on cluster names. Finally, you can scope policy by cluster. Vulnerability and compliance rules for container images and hosts, runtime rules for container images, and trusted images rules can all be scoped by cluster name.
There are some things to consider when manually naming clusters:
  • If you specify the same name for two or more clusters, they’re treated as a single cluster.
  • For GCP, if you have clusters with the same name in different projects, they’re treated as a single cluster. Consider manually specifying a different name for each cluster.
  • Manually specifying names isn’t supported in Manage > Defenders > Manage > DaemonSet. This page lets you deploy and manage DaemonSets directly from the Prisma Cloud UI. For this deployment flow, cluster names are retrieved from the cloud provider or the supplied kubeconfig only.

Preflight checklist

To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.


  • You have a valid Prisma Cloud license key and access token.
  • You have provisioned a Kubernetes cluster that meets the minimum system requirements and runs a supported Kubernetes version.
  • You have set up a Linux or macOS system as your cluster controller, and you can access the cluster with kubectl.
  • The nodes in your cluster can reach Prisma Cloud’s cloud registry (registry-auth.twistlock.com).
  • Your cluster can create PersistentVolumes and LoadBalancers from YAML configuration files.
  • Prisma Cloud supports Docker Engine, CRI-O, and cri-containerd. For more information, see the system requirements.
  • You can create and delete namespaces in your cluster.
  • You can run kubectl create commands.

Firewalls and ports

Validate that the following ports are open.
Prisma Cloud Console
  • Incoming: 8083, 8084
  • Outgoing: 443, 53
Prisma Cloud Defenders
  • Incoming: None
  • Outgoing: 8084

Install Prisma Cloud

Use twistcli to install the Prisma Cloud Console and Defenders. The twistcli utility is included with every release. After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will be running in your Kubernetes cluster.
If you’re installing Prisma Cloud on Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Azure Container Service with Kubernetes, a number of tweaks are required to the installation procedure. For more details, see the relevant sections in this article.

Download the Prisma Cloud software

Download the Prisma Cloud software to any system where you run kubectl to administer your cluster.
  1. Download the current recommended release.
  2. Unpack the release tarball.
    $ mkdir prisma_cloud
    $ tar xvzf prisma_cloud_compute_edition_<VERSION>.tar.gz -C prisma_cloud/

Install Console

Install Console, exposing the service using a load balancer.
If you’re using NFSv4 for persistent storage in your cluster, we recommend that you use the nolock, noatime and bg mount options for your PersistentVolume. After generating the Console YAML file, add the following mount options to your PersistentVolume definition.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: twistlock-console
  labels:
    app-volume: twistlock-console
  annotations:
    volume.beta.kubernetes.io/mount-options: "nolock,noatime,bg"
  1. On your cluster controller, navigate to the directory where you downloaded and extracted the Prisma Cloud release tarball.
  2. Generate a YAML configuration file for Console, where <PLATFORM> can be linux or osx.
    The following command saves twistlock_console.yaml to the current working directory. If needed, you can edit the generated YAML file to modify the default settings.
    $ <PLATFORM>/twistcli console export kubernetes --service-type LoadBalancer
  3. Deploy Console.
    $ kubectl create -f twistlock_console.yaml
  4. Wait for the service to come up completely.
    $ kubectl get service -w -n twistlock

Configure Console

Create your first admin user and enter your license key.
  1. Get the public endpoint address for Console.
    $ kubectl get service -o wide -n twistlock
  2. (Optional) Register a DNS entry for Console’s external IP address. The rest of this procedure assumes the DNS name for Console is yourconsole.example.com.
  3. (Optional) Set up a custom cert to secure Console access.
  4. Open a browser window, and navigate to Console. By default, Console is served on HTTPS on port 8083. For example, go to https://yourconsole.example.com:8083.
  5. Create your first admin user.
  6. Enter your Prisma Cloud license key.
  7. Defender communicates with Console using TLS. Update the list of identifiers in Console’s certificate that Defenders use to validate Console’s identity.
    1. Go to Manage > Defenders > Names.
    2. In the Subject Alternative Name table, click Add SAN, then enter Console’s IP address or domain name (e.g. yourconsole.example.com). Any Defenders deployed outside the cluster can use this name to connect to Console.
    3. In the Subject Alternative Name table, click Add SAN again, then enter twistlock-console. Any Defenders deployed in the same cluster as Console can use Console’s service name to connect. Note that the service name, twistlock-console, is not the same as the pod name, which is twistlock-console-XXXX.
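The external IP from the service listing in step 1 can also be pulled out programmatically. The sketch below runs awk against canned output, since the addresses shown are made up; in real use, `kubectl get service twistlock-console -n twistlock -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns the value directly.

```shell
# Illustrative output of `kubectl get service -o wide -n twistlock`;
# the IP addresses here are made up.
out='NAME                TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)
twistlock-console   LoadBalancer   10.0.171.10   35.224.10.42   8083:30083/TCP,8084:30084/TCP'

# Extract the EXTERNAL-IP column for the twistlock-console service.
ip=$(printf '%s\n' "$out" | awk '$1 == "twistlock-console" { print $4 }')
echo "$ip"   # 35.224.10.42
```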

Install Defender

Defender is installed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. Use twistcli to generate a YAML configuration file for the Defender DaemonSet, then deploy it using kubectl. You can use the same method to deploy Defender DaemonSets from both macOS and Linux kubectl-enabled cluster controllers.
The benefit of declarative object management, where you work directly with YAML configuration files, is that you get the full "source code" for the objects you create in your cluster. You can use a version control tool to manage and track modifications to config files so that you can delete and reliably recreate DaemonSets in your environment.
If you don’t have kubectl access to your cluster, you can deploy Defender DaemonSets directly from the Console UI.
The following procedure shows you how to deploy Defender DaemonSets with twistcli using declarative object management. Alternatively, you can generate Defender DaemonSet install commands in the Console UI under Manage > Defenders > Deploy > DaemonSet. Install scripts work on Linux hosts only. For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy them with kubectl, as described in the following procedure.
If you’re using CRI-O or containerd, pass the --cri flag to twistcli (or enable the CRI option in the Console UI) when generating the Defender YAML or Helm chart. If you are using an AWS Bottlerocket-based EKS cluster, you should use the --cri flag when creating the YAML.
You can run both Prisma Cloud Console and Defenders in the same Kubernetes namespace (e.g. twistlock). Be careful when running kubectl delete commands with the YAML file generated for Defender. This file contains the namespace declaration, so comment out the namespace section if you don’t want the namespace deleted.
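For reference, the namespace section in the generated file looks roughly like the following sketch (illustrative; the generated defender.yaml may differ). Commenting it out keeps kubectl delete -f defender.yaml from removing the namespace:

```yaml
# Illustrative sketch of the namespace section in the generated defender.yaml.
# Comment these lines out before running `kubectl delete -f defender.yaml`
# if the twistlock namespace should survive:
#
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: twistlock
```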
For provider-managed clusters, Prisma Cloud automatically gets the cluster name from the cloud provider. To override the cloud provider’s cluster name, use the --cluster option. For self-managed clusters, such as those built with kops, you must manually specify a cluster name with the --cluster option.
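A hedged sketch of the --cluster override follows; the cluster name shown is a placeholder, and <PLATFORM> and <ADMIN_USER> are as in the install steps.

```
$ <PLATFORM>/twistcli defender export kubernetes \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address twistlock-console \
    --cluster my-kops-cluster
```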
  1. Determine the Console service’s external IP address.
    $ kubectl get service -o wide -n twistlock
  2. Generate a defender.yaml file, where:
    The following command connects to Console (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli.
    The --cluster-address option specifies the address Defender uses to connect to Console. For Defenders deployed in the cluster where Console runs, specify Prisma Cloud Console’s service name, twistlock-console. For Defenders deployed outside the cluster, specify either Console’s external IP address, exposed by the LoadBalancer, or better, Console’s DNS name, which you must manually set up separately.
    The following command directs Defender to connect to Console using its service name. Use it for deploying a Defender DaemonSet inside a cluster.
    $ <PLATFORM>/twistcli defender export kubernetes \
        --user <ADMIN_USER> \
        --address https://yourconsole.example.com:8083 \
        --cluster-address twistlock-console
    • <PLATFORM> can be linux or osx.
    • <ADMIN_USER> is the name of the initial admin user you just created.
  3. (Optional) Schedule Defenders on your Kubernetes master nodes.
    If you want to also schedule Defenders on your Kubernetes master nodes, change the DaemonSet’s toleration spec. Master nodes are tainted by design. Only pods that specifically match the taint can run there. Tolerations allow pods to be deployed on nodes to which taints have been applied. To schedule Defenders on your master nodes, add the following tolerations to your DaemonSet spec.
    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
  4. Deploy the Defender DaemonSet.
    $ kubectl create -f defender.yaml
  5. Open a browser, navigate to Console, then go to Manage > Defenders > Manage to see a list of deployed Defenders.

Install Prisma Cloud on a CRI (non-Docker) cluster

Kubernetes lets you set up a cluster with the container runtime of your choice. Prisma Cloud supports Docker Engine, CRI-O, and cri-containerd.

Deploying Console

Irrespective of your cluster’s underlying container runtime, you can install Console using the standard install procedure. Console doesn’t interface with other containers, so it doesn’t need to know which container runtime interface is being used.

Deploying Defender DaemonSets

When generating the YAML file to deploy the Defender DaemonSet, a toggle lets you select your runtime environment. Because Defenders need visibility into other containers, this option tells them how to communicate with the container runtime. By default, the toggle is off and Prisma Cloud uses Docker Engine. When the toggle is on, Prisma Cloud generates the proper YAML for a CRI Kubernetes environment.
If you use containerd on GKE, and you install Defender without the CRI switch, everything will appear to work properly, but you’ll have no images or container scan reports in the Monitor > Vulnerability and Monitor > Compliance pages, and you’ll have no runtime models in Monitor > Runtime. This happens because the Google Container-Optimized OS (GCOOS) nodes have Docker Engine installed, but Kubernetes doesn’t use it. Defender thinks everything is OK because all of the integrations succeed, but the underlying runtime is actually different.
If you’re deploying Defender DaemonSets with twistcli, use the --cri option to specify the runtime interface. By default (no flag), twistcli generates a configuration that uses Docker Engine. With the --cri flag, twistcli generates a configuration that uses CRI.
$ <PLATFORM>/twistcli defender export kubernetes \
    --cri \
    --address https://yourconsole.example.com:8083 \
    --user <ADMIN_USER> \
    --cluster-address yourconsole.example.com
When generating YAML from Console or twistcli, the difference amounts to a simple change in the YAML file, as seen below.
In this abbreviated version, DEFENDER_TYPE: daemonset uses the Docker interface.
...
spec:
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      serviceAccountName: twistlock-service
      restartPolicy: Always
      containers:
      - name: twistlock-defender-19-03-321
        image: registry-auth.twistlock.com/tw_<token>/twistlock/defender:defender_19_03_321
        volumeMounts:
        - name: host-root
          mountPath: "/host"
        - name: data-folder
          mountPath: "/var/lib/twistlock"
        ...
        env:
        - name: WS_ADDRESS
          value: wss://yourconsole.example.com:8084
        - name: DEFENDER_TYPE
          value: daemonset
        - name: DEFENDER_LISTENER_TYPE
          value: "none"
...
In this abbreviated version, DEFENDER_TYPE: cri uses the CRI.
...
spec:
  template:
    metadata:
      labels: