OpenShift v4
Prisma Cloud Console is deployed as a Deployment, which ensures it’s always running.
The Prisma Cloud Console and Defender container images can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry.
Alternatively, you can configure your deployments to pull images from Prisma Cloud’s cloud registry.
Preflight checklist
To ensure that your installation on supported versions of OpenShift v4.x goes smoothly, work through the following checklist and validate that all requirements are met.
Minimum system requirements
Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in
System requirements.
For OpenShift installs, we recommend using the overlay or overlay2 storage drivers due to a known issue in RHEL.
For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1518519.
Permissions
Validate that you have permission to:
- Push to a private docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate with your registry with docker login.
- Pull images from your registry. This might require the creation of a docker-registry secret.
- Have the correct role bindings to pull and push to the registry. For more information, see Accessing the Registry.
- Create and delete projects in your cluster. For OpenShift installations, a project is created when you run oc new-project.
- Run oc create commands.
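For the internal-registry case, docker login typically authenticates with your OpenShift session token. A minimal sketch, assuming the example registry endpoint used later in this guide; the login command is printed rather than executed, and the oc values fall back to placeholders if you are not logged in:

```shell
# Sketch: authenticate Docker against the OpenShift internal registry.
# The registry endpoint is an example value; adjust it for your cluster.
REGISTRY="172.30.163.181:5000"

# Use the current OpenShift session's user and token, or placeholders.
OC_USER="$(oc whoami 2>/dev/null || echo '<USER>')"
OC_TOKEN="$(oc whoami -t 2>/dev/null || echo '<TOKEN>')"

# Printed rather than executed; remove the echo to actually log in.
echo docker login -u "$OC_USER" -p "$OC_TOKEN" "$REGISTRY"
```

With an active oc login session, removing the echo performs the registry login needed for the push and pull steps below.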
Internal cluster network communication
TCP: 8083, 8084
External cluster network communication
TCP: 443
The Prisma Cloud Console connects to the Prisma Cloud Intelligence Stream (https://intelligence.twistlock.com) on TCP port 443 for vulnerability updates.
If your Console is unable to contact the Prisma Cloud Intelligence Stream, follow the guidance for offline environments.
Download the Prisma Cloud software
Download the latest Prisma Cloud release to any system where the OpenShift oc client is installed.
- Go to Releases, and copy the link to the current recommended release.
- Download the release tarball to your cluster controller.

  $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>

- Unpack the release tarball.

  $ mkdir twistlock
  $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
- Log in to the OpenShift cluster and create the twistlock project.
- Store the Prisma Cloud Console and Defender images in one of the following registries:

  - OpenShift internal registry.
  - Private Docker v2 registry. You must create a docker-registry secret to authenticate with the registry.

- Determine the endpoint for your OpenShift internal registry. Use either the internal registry’s service name or cluster IP.

  $ oc get svc -n default
  NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  docker-registry   ClusterIP   172.30.163.181   <none>        5000/TCP   88d

- Pull the images from the Prisma Cloud cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with an underscore. For example, 18.11.128 becomes 18_11_128.

  $ docker pull \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>
  $ docker pull \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION>

- Tag the images for the OpenShift internal registry.

  $ docker tag \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
      172.30.163.181:5000/twistlock/private:defender_<VERSION>
  $ docker tag \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION> \
      172.30.163.181:5000/twistlock/private:console_<VERSION>

- Push the images to the twistlock project’s imageStream.

  $ docker push 172.30.163.181:5000/twistlock/private:defender_<VERSION>
  $ docker push 172.30.163.181:5000/twistlock/private:console_<VERSION>

Install Console
You can optionally customize twistlock.cfg to enable additional features. Then run twistcli from the root of the extracted release tarball.
Prisma Cloud Console uses a PersistentVolumeClaim to store data. There are two ways to provision storage for Console:
- Dynamic provisioning: Allocate storage for Console on-demand at deployment time. When generating the Console deployment YAML files or Helm chart with twistcli, specify the name of the storage class with the --storage-class flag. Most customers use dynamic provisioning.
- Manual provisioning: Pre-provision a persistent volume for Console, then specify its label when generating the Console deployment YAML files. OpenShift uses NFS mounts for the backend infrastructure components (e.g. registry, logging, etc.). The NFS server is typically one of the master nodes. Guidance for creating an NFS-backed PersistentVolume can be found here. Also see Appendix: NFS PersistentVolume example.
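For the manual-provisioning path, the claim binds to the pre-provisioned volume by label. The following is an illustrative sketch only — twistcli generates the actual PersistentVolumeClaim — showing how a claim could select the app-volume=twistlock-console label used in the NFS appendix:

```yaml
# Illustrative only: twistcli emits the real claim. This sketch shows a claim
# that binds to the manually provisioned PV via its app-volume label.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: twistlock-console
  namespace: twistlock
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      app-volume: twistlock-console
```

The label selector, together with the PV’s claimRef shown in the appendix, keeps the volume and claim pre-bound to each other.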
Option #1: Deploy with YAML files
Deploy Prisma Cloud Compute Console with YAML files.

- Generate a deployment YAML file for Console. A number of command variations are provided. Use them as a basis for constructing your own working command.

  Prisma Cloud Console + dynamically provisioned PersistentVolume + image pulled from the OpenShift internal registry:

  $ <PLATFORM>/twistcli console export openshift \
      --storage-class "<STORAGE-CLASS-NAME>" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP"

  Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the OpenShift internal registry. Using the NFS-backed PersistentVolume described in Appendix: NFS PersistentVolume example, pass the label to the --persistent-volume-labels flag to specify the PersistentVolume to which the PersistentVolumeClaim will bind:

  $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP"

  Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the Prisma Cloud cloud registry. If you omit the --image-name flag, the Prisma Cloud cloud registry is used by default, and you are prompted for your access token:

  $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --service-type "ClusterIP"

- Deploy Console.

  $ oc create -f ./twistlock_console.yaml

  You can safely ignore the error that says the twistlock project already exists.
- Alternatively, generate a deployment Helm chart for Console (Option #2; see the note on the Helm bug below). A number of command variations are provided. Use them as a basis for constructing your own working command.

  Prisma Cloud Console + dynamically provisioned PersistentVolume + image pulled from the OpenShift internal registry:

  $ <PLATFORM>/twistcli console export openshift \
      --storage-class "<STORAGE-CLASS-NAME>" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP" \
      --helm

  Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the OpenShift internal registry. Using the NFS-backed PersistentVolume described in Appendix: NFS PersistentVolume example, pass the label to the --persistent-volume-labels flag to specify the PersistentVolume to which the PersistentVolumeClaim will bind:

  $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP" \
      --helm

  Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the Prisma Cloud cloud registry. If you omit the --image-name flag, the Prisma Cloud cloud registry is used by default, and you are prompted for your access token:

  $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --service-type "ClusterIP" \
      --helm

- Unpack the chart into a temporary directory.

  $ mkdir helm-console
  $ tar xvzf twistlock-console-helm.tar.gz -C helm-console/

- Open helm-console/twistlock-console/templates/securitycontextconstraints.yaml for editing, and set the apiVersion so that the file begins:

  {{- if .Values.openshift }}
  apiVersion: security.openshift.io/v1
  kind: SecurityContextConstraints
  metadata:
    name: twistlock-console
  ...

- Repack the Helm chart.

  $ cd helm-console/
  $ tar cvzf twistlock-console-helm.tar.gz twistlock-console/

- Install the updated Helm chart.

  $ helm install --namespace=twistlock -g twistlock-console-helm.tar.gz

Create an external route to Console
Create an external route to Console so that you can access the web UI and API.
- From the OpenShift web interface, go to the twistlock project.
- Go to Application > Routes.
- Select Create Route.
- Enter a name for the route, such as twistlock-console.
- Hostname = URL used to access the Console, e.g. twistlock-console.apps.ose.example.com
- Path = /
- Service = twistlock-console
- Target Port = 8083 → 8083
- Select the Security > Secure Route radio button.
- TLS Termination = Passthrough (if using 8083). If you plan to issue a trusted custom certificate for Console TLS communication, select Passthrough TLS for TCP port 8083 so that TLS is established directly with the Prisma Cloud Console.
- Insecure Traffic = Redirect
- Click Create.
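If you prefer the CLI to the web console, the same route can be applied as a manifest. A sketch of the equivalent Route object follows; the hostname is an example value, and spec.path is omitted because passthrough routes do not support path-based routing:

```yaml
# Sketch of the console route described above (hostname is an example).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: twistlock-console
  namespace: twistlock
spec:
  host: twistlock-console.apps.ose.example.com
  to:
    kind: Service
    name: twistlock-console
  port:
    targetPort: 8083
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
```

Apply it with oc create -f <file> in the twistlock project.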
Create an external route to Console for external Defenders
If you plan to deploy Defenders to another cluster and have them report to this Console, create an additional external route so that those Defenders can access Console. Expose the twistlock-console service’s TCP port 8084 as an external OpenShift route. Each route must be a unique, fully qualified domain name.

- From the OpenShift web interface, go to the twistlock project.
- Go to Application > Routes.
- Select Create Route.
- Enter a name for the route, such as twistlock-console-8084.
- Hostname = URL used to access the Console, using a different hostname, e.g. twistlock-console-8084.apps.ose.example.com
- Path = /
- Service = twistlock-console
- Target Port = 8084 → 8084
- Select the Security > Secure Route radio button.
- TLS Termination = Passthrough (if using 8084). Defender-to-Console communication is a mutual TLS secured WebSocket session; it cannot be intercepted.
- Insecure Traffic = Redirect
- Click Create.
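The Defender route can likewise be applied as a manifest. A sketch of the equivalent Route object for port 8084 (hostname is an example; spec.path is omitted because passthrough routes do not support path-based routing):

```yaml
# Sketch of the external-Defender route described above (hostname is an example).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: twistlock-console-8084
  namespace: twistlock
spec:
  host: twistlock-console-8084.apps.ose.example.com
  to:
    kind: Service
    name: twistlock-console
  port:
    targetPort: 8084
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
```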
Configure Console
Create your first admin user, enter your license key, and configure Console’s certificate so that Defenders can establish a secure connection to it.

- In a web browser, navigate to the external route you configured for Console, e.g. https://twistlock-console.apps.ose.example.com.
- Create your first admin account.
- Enter your license key.
- Add a SubjectAlternativeName to Console’s certificate to allow Defenders to establish a secure connection with Console. Use either Console’s service name, twistlock-console or twistlock-console.twistlock.svc, or Console’s cluster IP. Additionally, if a route for external Defenders was created, add it to the SAN list too: twistlock-console-8084.apps.ose.example.com

  $ oc get svc -n twistlock
  NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                 PORT(S)
  twistlock-console   LoadBalancer   172.30.41.62   172.29.61.32,172.29.61.32   8084:3184...

- Go to Manage > Defenders > Names.
- Click Add SAN and enter Console’s service name.
- Click Add SAN and enter Console’s cluster IP.
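To confirm that the SANs were added, you can inspect the certificate Console now serves. A sketch, assuming the in-cluster service name used above; the check is printed rather than executed, since it needs network access to Console (requires OpenSSL 1.1.1+ for the -ext option):

```shell
# Sketch: print a command that lists the SANs on Console's certificate.
# twistlock-console.twistlock.svc is the service name from the steps above.
CONSOLE_HOST="twistlock-console.twistlock.svc"
CHECK_CMD="echo | openssl s_client -connect ${CONSOLE_HOST}:8083 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName"
# Remove the echo to run the check from a pod or node with cluster DNS access.
echo "$CHECK_CMD"
```

Run the printed command from inside the cluster; the SANs you added should appear in the subjectAltName extension.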
Option #2: Deploy with Helm chart
Deploy Prisma Cloud Compute Console with a Helm chart. Prisma Cloud Console Helm charts fail to install on OpenShift 4 clusters due to a Helm bug. If you generate a Helm chart and try to install it in an OpenShift 4 cluster, you’ll get the following error:

  Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"

To work around the issue, manually modify the generated Helm chart as described in the Helm chart steps above.

Create an OpenShift project for Prisma Cloud
Create a project named twistlock.

  $ oc new-project twistlock

(Optional) Push the Prisma Cloud images to a private registry
When Prisma Cloud is deployed to your cluster, the images are retrieved from a registry. You have a number of options for storing the Prisma Cloud Console and Defender images:

- OpenShift internal registry.
- Private Docker v2 registry. You must create a docker-registry secret to authenticate with the registry.

Alternatively, you can pull the images from the Prisma Cloud cloud registry at deployment time. Your cluster nodes must be able to connect to the Prisma Cloud cloud registry (registry-auth.twistlock.com) with TLS on TCP port 443.

This guide shows you how to use both the OpenShift internal registry and the Prisma Cloud cloud registry. If you’re going to use the Prisma Cloud cloud registry, you can skip this section. Otherwise, this procedure shows you how to pull, tag, and upload the Prisma Cloud images to the OpenShift internal registry’s twistlock imageStream.

Appendix: NFS PersistentVolume example
Create an NFS mount for the Prisma Cloud Console’s PV on the host that serves the NFS mounts.

- mkdir /opt/twistlock_console
- Check selinux: sestatus
- chcon -R -t svirt_sandbox_file_t -l s0 /opt/twistlock_console
- sudo chown nfsnobody /opt/twistlock_console
- sudo chgrp nfsnobody /opt/twistlock_console
- Check permissions with: ls -lZ /opt/twistlock_console (drwxr-xr-x. nfsnobody nfsnobody system_u:object_r:svirt_sandbox_file_t:s0)
- Create /etc/exports.d/twistlock.exports
- In /etc/exports.d/twistlock.exports, add the line: /opt/twistlock_console *(rw,root_squash)
- Restart the NFS exports: sudo exportfs -ra
- Confirm with: showmount -e
- Get the IP address of the master node that will be used in the PV (eth0; OpenShift uses 172. addresses for node-to-node communication). Make sure TCP 2049 (NFS) is allowed between nodes.
- Create a PersistentVolume for Prisma Cloud Console. The following example uses a label for the PersistentVolume and the volume and claim pre-binding features. The PersistentVolumeClaim uses the app-volume: twistlock-console label to bind to the PV. The volume and claim pre-binding claimRef ensures that the PersistentVolume is not claimed by another PersistentVolumeClaim before Prisma Cloud Console is deployed.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: twistlock
    labels:
      app-volume: twistlock-console
  spec:
    storageClassName: standard
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteOnce
    nfs:
      path: /opt/twistlock_console
      server: 172.31.4.59
    persistentVolumeReclaimPolicy: Retain
    claimRef:
      name: twistlock-console
      namespace: twistlock

Appendix: Implementing SAML federation with a Prisma Cloud Console inside an OpenShift cluster
When federating a Prisma Cloud Console that is accessed through an OpenShift external route with a SAML v2.0 Identity Provider (IdP), the SAML authentication request’s AssertionConsumerServiceURL value must be modified. Prisma Cloud automatically generates the AssertionConsumerServiceURL value sent in a SAML authentication request based on Console’s configuration. When Console is accessed through an OpenShift external route, the URL for Console’s API endpoint is most likely not the same as the automatically generated AssertionConsumerServiceURL. Therefore, you must configure the AssertionConsumerServiceURL value that Prisma Cloud sends in the SAML authentication request.

- Log in to Prisma Cloud Console.
- Go to Manage > Authentication > SAML.
- In Console URL, define the AssertionConsumerServiceURL. In this example, enter https://twistlock-console.apps.ose.example.com