Prisma Cloud supports OpenShift v3.9 and later.
Prisma Cloud Defenders are deployed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. You can run Defenders on OpenShift master and infrastructure nodes using node selectors.
The Prisma Cloud Defender container image can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry. Alternatively, you can configure your deployments to pull from Prisma Cloud’s cloud registry.
This guide shows you how to generate a deployment YAML file for Defender, and then deploy it to your OpenShift cluster with the oc command-line tool.

Preflight checklist

To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.

Minimum system requirements

Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in System requirements.


Validate that you have permission to:
  • Push to a private Docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate with your registry with docker login.
  • Pull images from your registry. This might require the creation of a docker-registry secret.
  • Have the correct role bindings to pull and push to the registry. For more information, see Accessing the Registry.
  • Create and delete projects in your cluster. For OpenShift installations, a project is created when you run oc new-project.
  • Run oc create commands to deploy the Defender DaemonSet.

Network connectivity

Validate that outbound connections can be made on port 8084.

Install Prisma Cloud

Use the twistcli utility to install the Prisma Cloud Defenders in your OpenShift cluster. The twistcli utility is included with every release.

Create an OpenShift project for Prisma Cloud

Create a project named twistlock.
  1. Log in to the OpenShift cluster and create the twistlock project.
    $ oc new-project twistlock

(Optional) Push the Prisma Cloud images to a private registry

When Prisma Cloud is deployed to your cluster, the images are retrieved from a registry. You have a number of options for storing the Prisma Cloud Console and Defender images:
  • OpenShift internal registry.
  • Private Docker v2 registry. You must create a docker-secret to authenticate with the registry.
Alternatively, you can pull the images from the Prisma Cloud cloud registry at deployment time. Your cluster nodes must be able to connect to the Prisma Cloud cloud registry with TLS on TCP port 443.
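If you store the images in your own private Docker v2 registry, the docker-registry secret mentioned above might look like the following sketch. The secret name twistlock-pull-secret and all field values here are placeholder assumptions, not names mandated by Prisma Cloud:

```yaml
# Hypothetical pull secret for a private Docker v2 registry.
# The name and the base64 payload are placeholders; substitute your own values.
apiVersion: v1
kind: Secret
metadata:
  name: twistlock-pull-secret
  namespace: twistlock
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <BASE64_ENCODED_DOCKER_CONFIG>
```

You would then reference the secret from the Defender DaemonSet's imagePullSecrets, or link it to the service account used for the deployment.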
This guide shows you how to use both the OpenShift internal registry and the Prisma Cloud cloud registry. If you’re going to use the Prisma Cloud cloud registry, you can skip this section. Otherwise, this procedure shows you how to pull, tag, and upload the Prisma Cloud images to the twistlock project’s imageStream in the OpenShift internal registry.
  1. Determine the endpoint for your OpenShift internal registry. Use either the internal registry’s service name or cluster IP.
    $ oc get svc -n default
    NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    docker-registry   ClusterIP   <CLUSTER-IP>   <none>        5000/TCP   88d
  2. Pull the image from the Prisma Cloud cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with an underscore. For example, 18.11.128 would be 18_11_128.
    $ docker pull <ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>
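The version-to-tag conversion described above can be sketched in shell; the release number here is just the example from the text:

```shell
# Convert a release number like 18.11.128 into the image tag suffix 18_11_128
# by replacing the dots with underscores.
VERSION="18.11.128"
TAG="defender_$(printf '%s' "$VERSION" | tr '.' '_')"
echo "$TAG"   # prints defender_18_11_128
```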
  3. Tag the image for the OpenShift internal registry, where <REGISTRY-ENDPOINT> is the internal registry endpoint determined in the first step.
    $ docker tag <ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
        <REGISTRY-ENDPOINT>/twistlock/defender:defender_<VERSION>
  4. Push the image to the twistlock project’s imageStream.
    $ docker push <REGISTRY-ENDPOINT>/twistlock/defender:defender_<VERSION>

Install Defender

Defender is installed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. Use twistcli to generate a YAML configuration file for the Defender DaemonSet, then deploy it using oc. You can use the same method to deploy Defender DaemonSets from both macOS and Linux kubectl-enabled cluster controllers.
The benefit of declarative object management, where you work directly with YAML configuration files, is that you get the full "source code" for the objects you create in your cluster. You can use a version control tool to manage and track modifications to config files so that you can delete and reliably recreate DaemonSets in your environment.
If you don’t have kubectl access to your cluster (or oc access for OpenShift), you can deploy Defender DaemonSets directly from the Console UI.
The following procedure shows you how to deploy Defender DaemonSets with twistcli using declarative object management. Alternatively, you can generate Defender DaemonSet install commands in the Console UI under
Manage > Defenders > Deploy > DaemonSet
. Install scripts work on Linux hosts only. For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy them with oc, as described in the following procedure.
  1. Retrieve Console’s API address (PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR).
    1. Sign into Prisma Cloud.
    2. Go to
      Compute > Manage > System > Downloads
    3. Copy the URL under
      Path to Console
  2. Retrieve Console’s service address (PRISMA_CLOUD_COMPUTE_SVC_ADDR).
    The service address can be derived from the API address by removing the protocol scheme and path. It is simply the host part of the URL.
    1. Go to
      Compute > Manage > Defenders > Deploy > DaemonSet
    2. Copy the address from
      The name that clients and Defenders use to access this Console
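Deriving the service address from the API address, as described above, can be sketched in shell. The URL here is a hypothetical example, not a real Console address:

```shell
# Hypothetical Console API address; substitute your own value.
API_ADDR="https://console.example.com:8083/api/v1"

SVC_ADDR="${API_ADDR#*://}"   # strip the protocol scheme
SVC_ADDR="${SVC_ADDR%%/*}"    # strip the path
SVC_ADDR="${SVC_ADDR%%:*}"    # strip the port, leaving only the host
echo "$SVC_ADDR"              # prints console.example.com
```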
  3. Generate a defender.yaml file, where:
    The following command connects to Console’s API (specified in --address) as user <ADMIN_USER> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console, or Console’s service address.
    $ <PLATFORM>/twistcli defender export openshift \
        --address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
        --user <ADMIN_USER> \
        --cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR>
    • <PLATFORM> can be linux, osx, or windows.
    • <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
  4. Deploy the Defender DaemonSet.
    $ oc create -f ./defender.yaml
  5. Confirm the Defenders were deployed.
    1. In Prisma Cloud Console, go to
      Compute > Manage > Defenders > Manage
      to see a list of deployed Defenders.
    2. In the OpenShift Web Console, go to the Prisma Cloud project’s monitoring window to see which pods are running.
    3. Use the OpenShift CLI to see the DaemonSet pod count.
      $ oc get ds -n twistlock
      NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
      twistlock-defender-ds   4         3         3       3            3           <none>          29m
      In this example, the desired and current pod counts do not match. This is a job for the nodeSelector.
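A quick way to spot such a mismatch programmatically is to compare the DESIRED and CURRENT columns. This sketch feeds a sample data row through awk; in a live cluster you would pipe the output of oc get ds -n twistlock instead:

```shell
# Sample data row from `oc get ds`; in practice, pipe the real command output.
row='twistlock-defender-ds   4         3         3       3            3           <none>          29m'
printf '%s\n' "$row" | awk '{ if ($2 == $3) print "ok"; else print "mismatch: desired=" $2 " current=" $3 }'
# prints: mismatch: desired=4 current=3
```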

Control Defender deployments with NodeSelector

You can deploy Defenders to all nodes in an OpenShift cluster (master, infra, compute). Depending upon the nodeSelector configuration, Prisma Cloud Defenders may not get deployed to all nodes. Adjust the guidance in the following procedure according to your organization’s deployment strategy.
  1. Review the following OpenShift configuration settings.
    1. OpenShift master - The cluster-wide default nodeSelector is defined in the master configuration file. Look for any nodeSelector and nodeSelectorLabelBlacklist settings.
      defaultNodeSelector: compute=true
    2. Prisma Cloud project - The nodeSelector can be defined at the project level.
      $ oc describe project twistlock
      Name:            twistlock
      Created:         10 days ago
      Labels:          <none>
      Annotations:
      Display Name:    <none>
      Description:     <none>
      Status:          Active
      Node Selector:
      Quota:           <none>
      Resource limits: <none>
      In this example, the Prisma Cloud project’s nodeSelector instructs OpenShift to only deploy Defenders to the nodes that match it.
  2. The following command removes the Node Selector value from the Prisma Cloud project.
    $ oc annotate --overwrite namespace twistlock openshift.io/node-selector=""
  3. Add a Deploy_Prisma_Cloud=true label to all nodes to which Defender should be deployed.
    $ oc label node ip-172-31-0-55.ec2.internal Deploy_Prisma_Cloud=true
    $ oc describe node ip-172-31-0-55.ec2.internal
    Name:               ip-172-31-0-55.ec2.internal
    Roles:              compute
    Labels:             Deploy_Prisma_Cloud=true
                        logging-infra-fluentd=true
                        region=primary
    Annotations:
    CreationTimestamp:  Sun, 05 Aug 2018 05:40:10 +0000
  4. Set the nodeSelector in the Defender DaemonSet deployment YAML.
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: twistlock-defender-ds
      namespace: twistlock
    spec:
      template:
        metadata:
          labels:
            app: twistlock-defender
        spec:
          serviceAccountName: twistlock-service
          nodeSelector:
            Deploy_Prisma_Cloud: "true"
          restartPolicy: Always
          containers:
          - name: twistlock-defender-2-5-127
          ...
  5. Check the desired and current count for the Defender DaemonSet deployment.
    $ oc get ds -n twistlock
    NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR
    twistlock-defender-ds   4         4         4       4            4           Deploy_Prisma_Cloud=true

Install Prisma Cloud with Helm charts

You can use twistcli to create Helm charts for Prisma Cloud Defender. Helm is a package manager for Kubernetes, and chart is the moniker for a Helm package.
Follow the main install flow, except:
  • Pass the --helm option to twistcli to generate a Helm chart. Other options passed to twistcli configure the chart.
  • Deploy Defender with helm install rather than oc create.
To create and install a Defender DaemonSet Helm chart that pulls the Defender image from the OpenShift internal registry:
$ <PLATFORM>/twistcli defender export openshift \
    --address <PRISMA_CLOUD_COMPUTE_CONSOLE_API_ADDR> \
    --cluster-address <PRISMA_CLOUD_COMPUTE_SVC_ADDR> \
    --selinux-enabled \
    --image-name <REGISTRY-ENDPOINT>/twistlock/defender:defender_<VERSION> \
    --helm
$ helm install --namespace=twistlock twistlock-defender-helm.tar.gz


Uninstall

To uninstall Prisma Cloud, delete the twistlock project.
  1. Delete the twistlock project.
    $ oc delete project twistlock
