Prisma Cloud Defenders are deployed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. You can run Defenders on OpenShift master and infrastructure nodes using node selectors.
The Prisma Cloud Defender container image can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry. Alternatively, you can configure your deployments to pull from Prisma Cloud’s cloud registry.
This guide shows you how to generate a deployment YAML file for Defender, and then deploy it to your OpenShift cluster with the oc client.
Prisma Cloud can segment your environment by cluster. For example, you might have three clusters: test, staging, and production. The cluster pivot in Prisma Cloud lets you inspect resources and administer security policy on a per-cluster basis.
Defenders in each DaemonSet are responsible for reporting which resources belong to which cluster. When deploying a Defender DaemonSet, Prisma Cloud tries to determine the cluster name through introspection. First, it tries to retrieve the cluster name from the cloud provider. As a fallback, it tries to retrieve the name from the corresponding kubeconfig file saved in the credentials store. Finally, you can override these mechanisms by manually specifying a cluster name when deploying your Defender DaemonSet.
Both the Prisma Cloud UI and twistcli tool accept an option for manually specifying a cluster name. Let Prisma Cloud automatically detect the name for provider-managed clusters. Manually specify names for self-managed clusters, such as those built with kops.
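As a sketch, the manual override described above is passed as a flag when exporting the DaemonSet configuration with twistcli. The flag name (--cluster) and the example cluster name below are assumptions to verify against your twistcli version; the command is built as a string here rather than executed.

```shell
# Hypothetical example: export a Defender DaemonSet config with an explicit
# cluster name for a self-managed (e.g. kops-built) cluster. The --cluster
# flag and the name my-kops-cluster are assumptions; confirm with
# `twistcli defender export openshift --help` on your installation.
CMD='<PLATFORM>/twistcli defender export openshift
  --user <ADMIN_USER>
  --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL>
  --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
  --cluster my-kops-cluster'
echo "$CMD"
```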
Radar lets you explore your environment cluster-by-cluster. You can also create stored filters (also known as collections) based on cluster names. Finally, you can scope policy by cluster. Vulnerability and compliance rules for container images and hosts, runtime rules for container images, and trusted images rules can all be scoped by cluster name.
There are some things to consider when manually naming clusters:
- If you specify the same name for two or more clusters, they’re treated as a single cluster.
- For GCP, if you have clusters with the same name in different projects, they’re treated as a single cluster. Consider manually specifying a different name for each cluster.
- Manually specifying names isn’t supported in Manage > Defenders > Manage > DaemonSet. This page lets you deploy and manage DaemonSets directly from the Prisma Cloud UI. For this deployment flow, cluster names are retrieved from the cloud provider or the supplied kubeconfig only.
To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.
Minimum system requirements
Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in System requirements.
Validate that you have permission to:
- Push to a private docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate with your registry with docker login.
- Pull images from your registry. This might require the creation of a docker-registry secret.
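Creating the docker-registry secret mentioned above can be sketched as follows. The secret name, registry endpoint, credentials, and namespace are all placeholders, not values from this guide; the command is built as a string so you can review it before running it against your cluster.

```shell
# Hypothetical sketch: build the command that creates a pull secret for a
# private registry in the twistlock namespace. All names are placeholders.
REGISTRY="172.30.163.181:5000"
CMD="oc create secret docker-registry twistlock-pull-secret \
  --docker-server=${REGISTRY} \
  --docker-username=<USER> \
  --docker-password=<PASSWORD> \
  -n twistlock"
echo "$CMD"
```

Remember to link the secret to the service account that pulls the Defender image (for example with oc secrets link) if your cluster requires it.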
Validate that outbound connections to your Console can be made on port 443.
Install Prisma Cloud
(Optional) Push the Prisma Cloud images to a private registry
When Prisma Cloud is deployed to your cluster, the images are retrieved from a registry. You have a number of options for storing the Prisma Cloud Console and Defender images:
- OpenShift internal registry.
- Private Docker v2 registry. You must create a docker-secret to authenticate with the registry.
Alternatively, you can pull the images from the Prisma Cloud cloud registry at deployment time. Your cluster nodes must be able to connect to the Prisma Cloud cloud registry (registry-auth.twistlock.com) with TLS on TCP port 443.
This guide shows you how to use both the OpenShift internal registry and the Prisma Cloud cloud registry. If you’re going to use the Prisma Cloud cloud registry, you can skip this section. Otherwise, this procedure shows you how to pull, tag, and upload the Prisma Cloud images to the OpenShift internal registry’s twistlock imageStream.
- Determine the endpoint for your OpenShift internal registry. Use either the internal registry’s service name or cluster IP.

  $ oc get svc -n default
  NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  docker-registry   ClusterIP   172.30.163.181   <none>        5000/TCP   88d
- Pull the image from the Prisma Cloud cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with an underscore. For example, 18.11.128 would be 18_11_128.

  $ docker pull \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>
- Tag the image for the OpenShift internal registry.

  $ docker tag \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
      172.30.163.181:5000/twistlock/private:defender_<VERSION>
- Push the image to the twistlock project’s imageStream.

  $ docker push 172.30.163.181:5000/twistlock/private:defender_<VERSION>
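The version-string convention noted above (dots replaced with underscores) can be derived mechanically:

```shell
# Convert a Prisma Cloud release number to the image-tag form used above,
# e.g. 18.11.128 becomes defender_18_11_128.
VERSION="18.11.128"
TAG="defender_$(echo "$VERSION" | tr '.' '_')"
echo "$TAG"   # prints defender_18_11_128
```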
Defender is installed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. Use twistcli to generate a YAML configuration file or Helm chart for the Defender DaemonSet, then deploy it using oc. You can use the same method to deploy Defender DaemonSets from both macOS and Linux.
The benefit of declarative object management, where you work directly with YAML configuration files, is that you get the full "source code" for the objects you create in your cluster. You can use a version control tool to manage and track modifications to config files so that you can delete and reliably recreate DaemonSets in your environment.
If you don’t have kubectl access to your cluster (or oc access for OpenShift), you can deploy Defender DaemonSets directly from the Console UI.
The following procedure shows you how to deploy Defender DaemonSets with twistcli using declarative object management. Alternatively, you can generate Defender DaemonSet install commands in the Console UI under Manage > Defenders > Deploy > DaemonSet. Install scripts work on Linux hosts only. For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy them with oc, as described in the following procedure.
Get connection strings
When calling twistcli to generate your YAML files and Helm charts, you’ll need to specify a couple of addresses.
- Retrieve Console’s URL (PRISMA_CLOUD_COMPUTE_CONSOLE_URL).
- Sign into Prisma Cloud.
- Go to Compute > Manage > System > Utilities.
- Copy the URL under Path to Console.
- Retrieve Console’s hostname (PRISMA_CLOUD_COMPUTE_HOSTNAME). The hostname can be derived from the URL by removing the protocol scheme and path; it is simply the host part of the URL. You can also retrieve the hostname directly:
- Go to Compute > Manage > Defenders > Deploy > Defenders > Orchestrator.
- Select OpenShift from Step 2 (Choose the orchestrator type).
- Copy the hostname from Step 3 (The name that Defender will use to connect to this Console).
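Deriving the hostname from the Console URL, as described above, can be sketched with standard shell tools. The example URL below is hypothetical:

```shell
# Strip the scheme, then everything from the first ':' or '/' onward,
# leaving only the host part of the URL.
URL="https://twistlock-console.apps.ose.example.com:8083/some/path"
HOSTNAME_PART=$(echo "$URL" | sed -e 's#^[a-z]*://##' -e 's#[:/].*$##')
echo "$HOSTNAME_PART"   # prints twistlock-console.apps.ose.example.com
```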
Option #1: Deploy with YAML files
Deploy the Defender DaemonSet with YAML files.
The twistcli defender export command can be used to generate native Kubernetes YAML files to deploy the Defender as a DaemonSet.
- Generate a defender.yaml file. The following command connects to Console (specified in --address) as user <ADMIN_USER> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console.

  $ <PLATFORM>/twistcli defender export openshift \
      --user <ADMIN_USER> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --cri
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
- Deploy the Defender DaemonSet.

  $ oc create -f ./defender.yaml
Option #2: Deploy with Helm chart
Deploy the Defender DaemonSet with a Helm chart.
Prisma Cloud Defenders Helm charts fail to install on OpenShift 4 clusters due to a Helm bug. If you generate a Helm chart, and try to install it in an OpenShift 4 cluster, you’ll get the following error:
Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"
To work around the issue, manually modify the generated Helm chart.
- Generate the Defender DaemonSet Helm chart. A number of command variations are provided; use them as a basis for constructing your own working command. Each command connects to Console (specified in --address) as user <ADMIN_USER> (specified in --user), and generates a Defender DaemonSet Helm chart according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console, and the --cri flag sets CRI-O as the default container engine.

  Outside the OpenShift cluster: Use the OpenShift external route for your Prisma Cloud Console, for example --address https://twistlock-console.apps.ose.example.com. To pull the Defender image from the Prisma Cloud cloud registry, omit the --image-name flag; to pull it from the OpenShift internal registry instead, specify the registry endpoint with --image-name.

  $ <PLATFORM>/twistcli defender export openshift \
      --user <ADMIN_USER> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --cri \
      --helm

  $ <PLATFORM>/twistcli defender export openshift \
      --user <ADMIN_USER> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --cri \
      --helm

  Inside the OpenShift cluster: When generating the Defender DaemonSet Helm chart with twistcli from a node inside the cluster, use Console’s service name (twistlock-console) or cluster IP in the --cluster-address flag. This flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number. The same image options apply: omit --image-name to pull from the Prisma Cloud cloud registry, or specify the internal registry endpoint.

  $ <PLATFORM>/twistcli defender export openshift \
      --user <ADMIN_USER> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --cri \
      --helm

  $ <PLATFORM>/twistcli defender export openshift \
      --user <ADMIN_USER> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --cri \
      --helm
- Unpack the chart into a temporary directory.

  $ mkdir helm-defender
  $ tar xvzf twistlock-defender-helm.tar.gz -C helm-defender/
- Repack the Helm chart.

  $ cd helm-defender/
  $ tar cvzf twistlock-defender-helm.tar.gz twistlock-defender/
- Install the new Helm chart with the helm command.

  $ helm install --namespace=twistlock -g twistlock-defender-helm.tar.gz
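The manual modification referenced in the workaround above, applied between unpacking and repacking the chart, is typically a one-line change to the SecurityContextConstraints template. The template path and the target apiVersion (security.openshift.io/v1) below are assumptions to verify against your generated chart; the demo file stands in for the real one.

```shell
# Hedged sketch: fix the SCC apiVersion so OpenShift 4 recognizes the kind.
# The file created here stands in for templates/securitycontextconstraints.yaml
# from your unpacked chart; the target apiVersion is an assumption to verify.
mkdir -p twistlock-defender/templates
printf 'apiVersion: v1\nkind: SecurityContextConstraints\n' \
  > twistlock-defender/templates/securitycontextconstraints.yaml
sed -i 's#^apiVersion: v1$#apiVersion: security.openshift.io/v1#' \
  twistlock-defender/templates/securitycontextconstraints.yaml
grep '^apiVersion:' twistlock-defender/templates/securitycontextconstraints.yaml
```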
Confirm Defenders were deployed
Confirm the installation was successful.
- In Prisma Cloud Console, go to Compute > Manage > Defenders > Manage to see a list of deployed Defenders.
- In the OpenShift Web Console, go to the Prisma Cloud project’s monitoring window to see which pods are running.
Control Defender deployments with NodeSelector
You can deploy Defenders to all nodes in an OpenShift cluster (master, infra, compute). Depending upon the nodeSelector configuration, Prisma Cloud Defenders may not get deployed to all nodes. Adjust the guidance in the following procedure according to your organization’s deployment strategy.
- Review the following OpenShift configuration settings.
- The OpenShift master nodeSelector configuration can be found in /etc/origin/master/master-config.yaml. Look for any nodeSelector and nodeSelectorLabelBlacklist settings.

  defaultNodeSelector: compute=true
- Prisma Cloud project - The nodeSelector can be defined at the project level.

  $ oc describe project twistlock
  Name:            twistlock
  Created:         10 days ago
  Labels:          <none>
  Annotations:     openshift.io/description=
                   openshift.io/display-name=
                   openshift.io/node-selector=node-role.kubernetes.io/compute=true
                   openshift.io/sa.scc.mcs=s0:c17,c9
                   openshift.io/sa.scc.supplemental-groups=1000290000/10000
                   openshift.io/sa.scc.uid-range=1000290000/10000
  Display Name:    <none>
  Description:     <none>
  Status:          Active
  Node Selector:   node-role.kubernetes.io/compute=true
  Quota:           <none>
  Resource limits: <none>

  In this example, the Prisma Cloud project’s default nodeSelector instructs OpenShift to deploy Defenders only to nodes labeled node-role.kubernetes.io/compute=true.
- The following command removes the node selector value from the Prisma Cloud project.

  $ oc annotate namespace twistlock openshift.io/node-selector=""
- Add a Deploy_PrismaCloud=true label to all nodes to which Defender should be deployed.

  $ oc label node ip-172-31-0-55.ec2.internal Deploy_PrismaCloud=true
  $ oc describe node ip-172-31-0-55.ec2.internal
  Name:               ip-172-31-0-55.ec2.internal
  Roles:              compute
  Labels:             Deploy_PrismaCloud=true
                      beta.kubernetes.io/arch=amd64
                      beta.kubernetes.io/os=linux
                      kubernetes.io/hostname=ip-172-31-0-55.ec2.internal
                      logging-infra-fluentd=true
                      node-role.kubernetes.io/compute=true
                      region=primary
  Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
  CreationTimestamp:  Sun, 05 Aug 2018 05:40:10 +0000
- Set the nodeSelector in the Defender DaemonSet deployment YAML.

  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: twistlock-defender-ds
    namespace: twistlock
  spec:
    template:
      metadata:
        labels:
          app: twistlock-defender
      spec:
        serviceAccountName: twistlock-service
        nodeSelector:
          Deploy_PrismaCloud: "true"
        restartPolicy: Always
        containers:
        - name: twistlock-defender-2-5-127
        ...
- Check the desired and current count for the Defender DaemonSet deployment.

  $ oc get ds -n twistlock
  NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR
  twistlock-defender-ds   4         4         4       4            4           Deploy_PrismaCloud=true
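Before applying the edited YAML with oc, a quick offline check can confirm the nodeSelector survived your edits. The heredoc below is a stand-in for the relevant portion of your defender.yaml:

```shell
# Offline sanity check: confirm the nodeSelector label is present in the
# DaemonSet YAML before running `oc create -f`. The heredoc stands in for
# the relevant portion of your defender.yaml.
cat > /tmp/defender-check.yaml <<'EOF'
      nodeSelector:
        Deploy_PrismaCloud: "true"
EOF
grep -q 'Deploy_PrismaCloud: "true"' /tmp/defender-check.yaml \
  && echo "nodeSelector present"
```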