Deploy Defender on OpenShift v4
Prisma Cloud Defenders are deployed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster.
You can run Defenders on OpenShift master and infrastructure nodes either by removing the taint from those nodes or by adding a toleration to the Defender DaemonSet.
The Prisma Cloud Defender container images can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry.
Alternatively, you can configure your deployments to pull images from Prisma Cloud’s cloud registry.
This guide shows you how to generate deployment YAML files for Defenders, and then deploy them to your OpenShift cluster with the oc client.
To better understand clusters, read our cluster context topic.
Preflight checklist
To ensure that your installation on supported versions of OpenShift v4.x goes smoothly, work through the following checklist and validate that all requirements are met.
Minimum system requirements
Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in
System requirements.
Permissions
Validate that you have permission to:
- Push to a private docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate to your registry with docker login.
- Pull images from your registry. This might require the creation of a docker-registry secret; see the example after this list.
- Have the correct role bindings to pull and push to the registry. For more information, see Accessing the Registry.
- Create and delete projects in your cluster. For OpenShift installations, a project is created when you run oc new-project.
- Run oc create commands.
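If your registry requires credentials for pulls, you can create a docker-registry secret and link it for pull access. A minimal sketch with placeholder values, assuming the twistlock project created in the next section:

# Create a pull secret for your private registry (values are placeholders).
$ oc create secret docker-registry twistlock-pull-secret \
    --docker-server=registry.example.com \
    --docker-username=<USER> \
    --docker-password=<PASSWORD> \
    -n twistlock

# Allow the project's default service account to use the secret for pulls.
$ oc secrets link default twistlock-pull-secret --for=pull -n twistlock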
Create an OpenShift project for Prisma Cloud
Create a project named twistlock.
- Log in to the OpenShift cluster and create the twistlock project:

$ oc new-project twistlock
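To confirm the project was created and is now the active project, you can run a quick check (a minimal sketch):

# Verify the project exists, then switch to it.
$ oc get project twistlock
$ oc project twistlock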
(Optional) Push the Prisma Cloud images to a private registry
When Prisma Cloud is deployed to your cluster, the images are retrieved from a registry.
You have a number of options for storing the Prisma Cloud Console and Defender images:
- OpenShift internal registry.
- Private Docker v2 registry. You must create a docker-registry secret to authenticate with the registry.
Alternatively, you can pull the images from the Prisma Cloud cloud registry at deployment time.
Your cluster nodes must be able to connect to the Prisma Cloud cloud registry (registry-auth.twistlock.com) with TLS on TCP port 443.
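Before deploying, you can verify this connectivity from a cluster node. A minimal sketch; the exact HTTP response doesn't matter here, only that a TLS connection on port 443 can be established:

# Confirm the node can reach the Prisma Cloud cloud registry over TLS.
$ curl -sv https://registry-auth.twistlock.com 2>&1 | grep -i -E 'connected|tls|ssl'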
This guide shows you how to use both the OpenShift internal registry and the Prisma Cloud cloud registry.
If you’re going to use the Prisma Cloud cloud registry, you can skip this section.
Otherwise, this procedure shows you how to pull, tag, and upload the Prisma Cloud images to the OpenShift internal registry’s twistlock imageStream.
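Pushing to the internal registry typically requires authenticating first. A minimal sketch, assuming the registry endpoint determined in the first step below; it uses your OpenShift session token as the password:

# Log in to the OpenShift internal registry with your session token.
$ docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.163.181:5000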
- Determine the endpoint for your OpenShift internal registry. Use either the internal registry's service name or cluster IP.

$ oc get svc -n default
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
docker-registry   ClusterIP   172.30.163.181   <none>        5000/TCP   88d

- Pull the image from the Prisma Cloud cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with an underscore. For example, 18.11.128 would be 18_11_128.

$ docker pull \
  registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>

- Tag the image for the OpenShift internal registry.

$ docker tag \
  registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
  172.30.163.181:5000/twistlock/private:defender_<VERSION>

- Push the image to the twistlock project's imageStream.

$ docker push 172.30.163.181:5000/twistlock/private:defender_<VERSION>
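To confirm the push succeeded, you can inspect the imageStream in the twistlock project. A minimal sketch; the imageStream is named private here because the image was pushed to twistlock/private:

# Verify the defender tag now exists in the twistlock project's imageStream.
$ oc get imagestream private -n twistlock
$ oc describe imagestream private -n twistlock | grep defender_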
Install Defender

Prisma Cloud Defenders run as containers on the nodes in your OpenShift cluster and are deployed as a DaemonSet. Use the twistcli tool to generate the DaemonSet deployment YAML or Helm chart. The command has the following basic structure; it creates a YAML file named defender.yaml or a Helm chart named twistlock-defender-helm.tar.gz in the working directory.

Example for export of a YAML file:

$ <PLATFORM>/twistcli defender export openshift \
  --address <ADDRESS> \
  --cluster-address <CLUSTER-ADDRESS> \
  --container-runtime crio

Example for export of a Helm chart:

$ <PLATFORM>/twistcli defender export openshift \
  --address <ADDRESS> \
  --cluster-address <CLUSTER-ADDRESS> \
  --helm \
  --container-runtime crio

The command connects to Console's API, specified in --address, to generate the Defender DaemonSet YAML config file or Helm chart. The location where you run twistcli (inside or outside the cluster) dictates which Console address should be supplied.

The --cluster-address flag specifies the address Defender uses to connect to Console. For Defenders deployed inside the cluster, specify Prisma Cloud Console's service name (twistlock-console or twistlock-console.twistlock.svc) or cluster IP address. For Defenders deployed outside the cluster, specify the external route for Console over port 8084 that you created earlier, for example twistlock-console-8084.apps.ose.example.com. If the external route does not expose port 8084, specify the port in the address within the Defender DaemonSet YAML, for example twistlock-console-8084.apps.ose.example.com:443.

Example: edit the resulting defender.yaml and change:

- name: WS_ADDRESS
  value: wss://twistlock-console-8084.apps.ose.example.com:8084

to:

- name: WS_ADDRESS
  value: wss://twistlock-console-8084.apps.ose.example.com:443

If SELinux is enabled on the OpenShift nodes, pass the --selinux-enabled argument to twistcli.

Option #1: Deploy with YAML files

Deploy the Defender DaemonSet with YAML files.

- Generate the Defender DaemonSet YAML. A number of command variations are provided; use them as a basis for constructing your own working command.

Outside the OpenShift cluster, pulling the Defender image from the Prisma Cloud cloud registry: use the OpenShift external route for your Prisma Cloud Console, --address https://twistlock-console.apps.ose.example.com. Designate Prisma Cloud's cloud registry by omitting the --image-name flag. Set CRI-O as the container runtime with the --container-runtime flag.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --container-runtime crio

Outside the OpenShift cluster, pulling the Defender image from the OpenShift internal registry: use the --image-name flag to designate an image from the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
  --container-runtime crio

Inside the OpenShift cluster, pulling the Defender image from the Prisma Cloud cloud registry: when generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console's service name (twistlock-console) or cluster IP in the --cluster-address flag. The --address flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --container-runtime crio

Inside the OpenShift cluster, pulling the Defender image from the OpenShift internal registry: use the --image-name flag to designate an image in the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
  --container-runtime crio

- Deploy the Defender DaemonSet.

$ oc create -f ./defender.yaml
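After the DaemonSet is created, you can verify that a Defender pod is running on each eligible node. A minimal sketch:

# Check the desired/ready counts for the Defender DaemonSet.
$ oc get ds twistlock-defender-ds -n twistlock

# List the Defender pods and the nodes they were scheduled on.
$ oc get pods -n twistlock -o wide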
Option #2: Deploy with Helm chart

Deploy the Defender DaemonSet with a Helm chart.

Prisma Cloud Defender Helm charts fail to install on OpenShift 4 clusters due to a Helm bug. If you generate a Helm chart and try to install it in an OpenShift 4 cluster, you'll get the following error:

Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"

To work around the issue, manually modify the generated Helm chart as described in the following steps.

- Generate the Defender DaemonSet Helm chart. A number of command variations are provided; use them as a basis for constructing your own working command.

Outside the OpenShift cluster, pulling the Defender image from the Prisma Cloud cloud registry: use the OpenShift external route for your Prisma Cloud Console, --address https://twistlock-console.apps.ose.example.com. Designate Prisma Cloud's cloud registry by omitting the --image-name flag. Set CRI-O as the container runtime with the --container-runtime flag.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --helm \
  --container-runtime crio

Outside the OpenShift cluster, pulling the Defender image from the OpenShift internal registry: use the --image-name flag to designate an image from the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
  --helm \
  --container-runtime crio

Inside the OpenShift cluster, pulling the Defender image from the Prisma Cloud cloud registry: use Console's service name (twistlock-console) or cluster IP in the --cluster-address flag. The --address flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --helm \
  --container-runtime crio

Inside the OpenShift cluster, pulling the Defender image from the OpenShift internal registry: use the --image-name flag to designate an image in the OpenShift internal registry.

$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
  --helm \
  --container-runtime crio

- Unpack the chart into a temporary directory.

$ mkdir helm-defender
$ tar xvzf twistlock-defender-helm.tar.gz -C helm-defender/

- Open the SecurityContextConstraints template in the unpacked chart (for example, helm-defender/twistlock-defender/templates/securitycontextconstraints.yaml) and change the apiVersion from v1 to security.openshift.io/v1, so that the template begins:

{{- if .Values.openshift }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: twistlock-console
...

- Repack the Helm chart.

$ cd helm-defender/
$ tar cvzf twistlock-defender-helm.tar.gz twistlock-defender/

- Install the updated Helm chart.

$ helm install --namespace=twistlock -g twistlock-defender-helm.tar.gz

Confirm the Defenders were deployed:
- In Prisma Cloud Console, go to Manage > Defenders > Manage to see a list of deployed Defenders.
- In the OpenShift Web Console, go to the Prisma Cloud project’s monitoring window to see which pods are running.
Control Defender deployments with taint

You can deploy Defenders to all nodes in an OpenShift cluster (master, infra, compute). OpenShift Container Platform automatically taints infra and master nodes. These taints have the NoSchedule effect, which means no pod can be scheduled on them.

To run Defenders on these nodes, you can either remove the taint or add a toleration to the Defender DaemonSet. Once this is done, the Defender DaemonSet is automatically deployed to these nodes (there is no need to redeploy the DaemonSet). Adjust the guidance in the following procedure according to your organization's deployment strategy.

- Option 1 - remove the taint from all nodes:

$ oc adm taint nodes --all node-role.kubernetes.io/master-

- Option 2 - remove the taint from specific nodes:

$ oc adm taint nodes <node-name> node-role.kubernetes.io/master-

- Option 3 - add a toleration to the twistlock-defender-ds DaemonSet:

$ oc edit ds twistlock-defender-ds -n twistlock

Add the following toleration in the PodSpec (DaemonSet.spec.template.spec):

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
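Before choosing an option, you can check which nodes currently carry the master taint. A minimal sketch:

# List every node along with any taints applied to it.
$ oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints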