Self-Hosted 31.xx
Kubernetes
This topic helps you install Prisma Cloud in your Kubernetes cluster quickly.
There are many ways to install Prisma Cloud, but use this workflow to quickly deploy Defenders and verify how information is accessible from the Prisma Cloud Console.
After completing this procedure, you can modify the installation to match your needs.
To better understand clusters, read our cluster context topic.
To deploy Prisma Cloud Defenders, you use the command-line utility called twistcli, which is bundled with the Prisma Cloud software.
The process has the following steps to give you full control over the created objects.
- The twistcli utility generates YAML configuration files or Helm charts for the Defender.
- You create the required objects in your cluster with the kubectl create command.
You can inspect, customize, and manage the YAML configuration files or Helm charts before deploying the Prisma Cloud Console and Defender.
You can place the files or charts under source control to track changes, to integrate them with Continuous Integration and Continuous Delivery (CI/CD) pipelines, and to enable effective collaboration.
Each Prisma Cloud Defender is deployed as a DaemonSet to ensure that a Prisma Cloud Defender instance runs on each worker node of your cluster.
Prerequisites
To deploy your Defenders smoothly, you must meet the following requirements.
- You have a valid Prisma Cloud license key and access token.
- You have a valid access key and secret key created for the admin user inside Prisma Cloud.
- You provisioned a Kubernetes cluster that meets the minimum system requirements and runs a supported Kubernetes version.
- You set up a Linux or macOS system to control your cluster, and you can access the cluster using the kubectl command-line utility.
- The nodes in your cluster can reach Prisma Cloud’s cloud registry at registry-auth.twistlock.com.
- Your cluster can create PersistentVolumes and LoadBalancers from YAML configuration files or Helm charts.
- Your cluster uses any of the following runtimes. For more information about the runtimes that Prisma Cloud supports, see the system requirements.
- Docker Engine
- CRI-O
- CRI-containerd
- Install the Prisma Cloud command-line utility called twistcli, which is bundled with the Prisma Cloud software. You use twistcli to deploy the Defenders.
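Before moving on, a small shell helper like the following can confirm that the required command-line tools are available on your workstation. This is a minimal sketch; the `check_tools` function name and example arguments are illustrative and not part of Prisma Cloud.

```shell
# Minimal pre-flight sketch: report which required CLI tools are missing from PATH.
# The check_tools helper is illustrative, not part of the Prisma Cloud tooling.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

# On a real workstation you would pass the tools the prerequisites call for:
check_tools sh kubectl twistcli
```

Any `missing:` line points at a prerequisite to fix before continuing.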
Required Permissions
- You can create and delete namespaces in your cluster.
- You can run the kubectl create command.
Required Firewall and Port Configuration
Open the following ports in your firewall.
Ports for the Prisma Cloud Console:
- Incoming: 8083, 8084
- Outgoing: 443, 53

Ports for the Prisma Cloud Defenders:
- Incoming: None
- Outgoing: 8084
To use Prisma Cloud as part of your Kubernetes deployment, you need the twistcli command-line utility and the Prisma Cloud Defenders.
Use the twistcli command-line utility to install the Prisma Cloud Console and Defenders.
The twistcli utility is included with every release, or you can download the utility separately.
After completing this procedure, the Prisma Cloud Console and Prisma Cloud Defenders run in your Kubernetes cluster.
When you install Prisma Cloud on Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Alibaba Container Service with Kubernetes, additional configuration steps are required.
Install the Prisma Cloud Defender
This approach is called declarative object management.
It allows you to work directly with the YAML configuration files.
The benefit is that you get the full source code for the custom resources you create in your cluster, and you can use a version control tool to manage and track modifications.
With YAML configuration files under version control, you can delete and reliably recreate DaemonSets in your environment.
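The delete-and-recreate workflow described above can be sketched as a small script: regenerate the Defender YAML, diff it against the tracked copy, and only redeploy when something changed. The file paths and contents below are examples, and the kubectl commands are only echoed rather than executed.

```shell
# Sketch of the declarative loop: compare the tracked Defender YAML against a freshly
# generated copy and decide whether a redeploy is needed. Paths/contents are examples.
tracked=/tmp/defender.yaml.tracked   # the copy under version control
regen=/tmp/defender.yaml.new         # a copy freshly generated by twistcli
printf 'kind: DaemonSet\nmetadata:\n  name: twistlock-defender-ds\n' > "$tracked"
printf 'kind: DaemonSet\nmetadata:\n  name: twistlock-defender-ds\n' > "$regen"

if diff -q "$tracked" "$regen" >/dev/null; then
  echo "no changes: skip redeploy"
else
  echo "changed: run kubectl delete -f defender.yaml && kubectl create -f defender.yaml"
fi
```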
If you don’t have kubectl access to your cluster, you can deploy Defender DaemonSets directly from the Console UI.
This procedure shows you how to deploy Defender DaemonSets using the twistcli command-line utility and declarative object management.
You can also generate the installation commands from the Prisma Cloud Console UI under Manage > Defenders > Deploy > Defenders.
Installation scripts are provided for Linux and macOS workstations.
Use the twistcli command-line utility to generate the Defender DaemonSet YAML configuration files from Windows workstations.
Deploy the custom resources with kubectl using the following procedure.

- Generate the DaemonSet custom resource for the Defender.
- Go to Compute > Manage > Defenders > Deployed Defenders > Manual deploy.
- Select Orchestrator.
- Select Kubernetes from Step 2: Choose the orchestrator type.
- Copy the hostname from Step 3: The name that Defender will use to connect to this Console.
- Generate the defender.yaml file using the following twistcli command with the described parameters. For Defenders deployed in the cluster where Console runs, specify the service name of the Prisma Cloud Console, for example twistlock-console.

  $ <PLATFORM>/twistcli defender export kubernetes \
      --user <ADMIN_USER_ACCESS_KEY> \
      --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
      --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME> \
      --container-runtime containerd
- <ADMIN_USER_ACCESS_KEY> is the access key of the Prisma Cloud user with the System Admin role.
- <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> specifies the address of the Prisma Cloud Compute Console.
- <PRISMA_CLOUD_COMPUTE_HOSTNAME> specifies the address Defender uses to connect to Prisma Cloud Console. You can use the external IP address exposed by your load balancer or the DNS name that you manually set up.
- After you run the command, adjusted for your environment, you are prompted for a password. The password is the secret key of the Prisma Cloud user with the System Admin role that you created as part of the prerequisites.
- For provider managed clusters, Prisma Cloud automatically gets the cluster name from your cloud provider.
- To override the cluster name that your cloud provider supplies, use the --cluster option.
- For self-managed clusters, such as those built with kops, manually specify a cluster name with the --cluster option.
- When using the CRI-O or containerd runtimes, pass the --container-runtime crio or --container-runtime containerd flag to twistcli when you generate the YAML configuration file or the Helm chart.
- When using an AWS Bottlerocket-based EKS cluster, pass the --container-runtime crio flag when creating the YAML file.
- To use Defenders in GKE on ARM, you must prepare your workloads.
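Putting the placeholders together, a wrapper script along these lines can assemble the export command before you run it. All values shown are examples, and the command is only printed here, not executed.

```shell
# Assemble the twistcli export command from the placeholder values described above.
# All values are examples; replace them with the keys and addresses for your setup.
ACCESS_KEY="<ADMIN_USER_ACCESS_KEY>"
CONSOLE_URL="https://yourconsole.example.com:8083"
CLUSTER_ADDRESS="twistlock-console"   # service name when Defender runs beside Console

cmd="./twistcli defender export kubernetes \
 --user $ACCESS_KEY \
 --address $CONSOLE_URL \
 --cluster-address $CLUSTER_ADDRESS \
 --container-runtime containerd"

# Print the command for review; pipe to sh (or run it directly) once verified.
echo "$cmd"
```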
- Deploy the Defender DaemonSet custom resource.

  $ kubectl create -f ./defender.yaml

  You can run both Prisma Cloud Console and Defenders in the same Kubernetes namespace, for example twistlock. However, be careful when running kubectl delete commands with the YAML file generated for Defender. The defender.yaml file contains the namespace declaration, so comment out the namespace section if you don't want the namespace deleted.
- (Optional) Schedule Defenders on your Kubernetes master nodes. Master nodes are tainted by design, and only pods that specifically match the taint can run there. Tolerations allow pods to be deployed on nodes to which taints have been applied. To schedule Defenders on your master nodes, add the following tolerations to your DaemonSet spec.

  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"

- In Prisma Cloud Compute, go to Manage > Defenders > Deployed Defenders > Manual deploy to see a list of deployed Defenders.

Install Prisma Cloud with Helm charts

Follow the main install flow, with the following changes.

- Pass the --helm option to twistcli to generate a Helm chart. Don't change the other options passed to twistcli since they configure the chart.
- Deploy your Defender with the helm install command instead of kubectl create.
The following procedure shows the modified commands.

- Download the current recommended release.
- Create a Console Helm chart.

  $ <PLATFORM>/twistcli console export kubernetes \
      --service-type LoadBalancer \
      --helm

- Install the Console.

  $ helm install twistlock-console \
      --namespace twistlock \
      --create-namespace \
      ./twistlock-console-helm.tar.gz

- Create a Defender DaemonSet Helm chart.

  $ <PLATFORM>/twistcli defender export kubernetes \
      --address https://yourconsole.example.com:8083 \
      --helm \
      --user <ADMIN_USER_ACCESS_KEY> \
      --cluster-address twistlock-console

- Install the Defender.

  $ helm install twistlock-defender-ds \
      --namespace twistlock \
      --create-namespace \
      ./twistlock-defender-helm.tar.gz

Install Prisma Cloud on a CRI (non-Docker) cluster

Kubernetes lets you set up a cluster with the container runtime of your choice. Prisma Cloud supports Docker Engine, CRI-O, and cri-containerd.

When generating the YAML file or Helm chart to deploy the Defender DaemonSet, you can select the Container Runtime type in the Console UI under Manage > Defenders > Deployed Defenders > Manual deploy. Because Defenders need a view of other containers, this option is necessary to configure the communication correctly.

If you use containerd on GKE and you install Defender without selecting the CRI-O Container Runtime type, everything will appear to work properly, but you'll have no images or container scan reports on the Monitor > Vulnerability and Monitor > Compliance pages, and no runtime models under Monitor > Runtime. This happens because Google Container-Optimized OS (GCOOS) nodes have Docker Engine installed, but Kubernetes doesn't use it. Defender thinks everything is OK because all of the integrations succeed, but the underlying runtime is actually different.

If you're deploying Defender DaemonSets with twistcli, pass one of the container runtime types with the --container-runtime flag:

  $ <PLATFORM>/twistcli defender export kubernetes \
      --container-runtime crio \
      --address https://yourconsole.example.com:8083 \
      --user <ADMIN_USER> \
      --cluster-address yourconsole.example.com

When generating YAML from Console or twistcli, the difference is a small change in the YAML file, as shown in the abbreviated versions below. With DEFENDER_TYPE: daemonset, Defender uses the Docker interface.

  ...
  spec:
    template:
      metadata:
        labels:
          app: twistlock-defender
      spec:
        serviceAccountName: twistlock-service
        restartPolicy: Always
        containers:
        - name: twistlock-defender-19-03-321
          image: registry-auth.twistlock.com/tw_<token>/twistlock/defender:defender_19_03_321
          volumeMounts:
          - name: host-root
            mountPath: "/host"
          - name: data-folder
            mountPath: "/var/lib/twistlock"
          ...
          env:
          - name: WS_ADDRESS
            value: wss://yourconsole.example.com:8084
          - name: DEFENDER_TYPE
            value: daemonset
          - name: DEFENDER_LISTENER_TYPE
            value: "none"
  ...

With DEFENDER_TYPE: cri, Defender uses the CRI.

  ...
  spec:
    template:
      metadata:
        labels:
          app: twistlock-defender
      spec:
        serviceAccountName: twistlock-service
        restartPolicy: Always
        containers:
        - name: twistlock-defender-19-03-321
          image: registry-auth.twistlock.com/tw_<token>/twistlock/defender:defender_19_03_321
          volumeMounts:
          - name: host-root
            mountPath: "/host"
          - name: data-folder
            mountPath: "/var/lib/twistlock"
          ...
          env:
          - name: WS_ADDRESS
            value: wss://yourconsole.example.com:8084
          - name: DEFENDER_TYPE
            value: cri
          - name: DEFENDER_LISTENER_TYPE
            value: "none"
  ...

Troubleshooting

Kubernetes CrashLoopBackOff Error

Error

  Back-off restarting failed container

To get the error logs, run: kubectl describe pod <name>.

Reason

This error is caused by a temporary memory resource overload. When running WAAS Out-of-Band (OOB), the Defender automatically increases its cgroup memory limit to 4 GB, because OOB needs more memory. But because the Defender's cgroup in Kubernetes sits hierarchically under the cgroup of the Kubernetes pod, which has a limit of 512 MB, this results in an Out-Of-Memory error.

Increase the Defender Pod Limit

Increase the Pod limit to 4 GB when activating WAAS OOB on a Kubernetes cluster.
- For running Defenders:
  - Run kubectl edit ds twistlock-defender-ds -n twistlock and change the value under resources > limits > memory to 4096Mi in the DaemonSet spec *.yaml file.
  - Save the file to restart the Defenders with the increased memory limit.
- When deploying Defenders:
  - With YAML:
    - Change the value of resources > limits > memory to 4096Mi.
    - Deploy the *.yaml file.
  - With Helm:
    - Change the value of limit_memory to 4096Mi in the values.yaml file.
  - With the install script:
    - Deploy the Defender using the install script.
    - Run kubectl edit ds twistlock-defender-ds -n twistlock and change the value under resources > limits > memory to 4096Mi.
    - Save the file to restart the Defenders with the increased memory limit.
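For reference, the limit these steps change sits in the Defender container's resources stanza of the DaemonSet spec. A minimal fragment, with all surrounding fields omitted, looks like this:

```yaml
# Abbreviated Defender container spec; only the memory limit is shown.
resources:
  limits:
    memory: "4096Mi"
```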
Pod Security Policy

If Pod Security Policy is enabled in your cluster, you might get the following error when trying to create a Defender DaemonSet:

  Error creating: pods "twistlock-defender-ds-" is forbidden: unable to validate against any pod security policy ... Privileged containers are not allowed

Kubernetes has deprecated Pod Security Policy. The following troubleshooting steps apply only to the deprecated PodSecurityPolicy resource.

If you get this error, you must create a PodSecurityPolicy for the Defender and the necessary ClusterRole and ClusterRoleBinding for the twistlock namespace. You can use the following PodSecurityPolicy, ClusterRole, and ClusterRoleBinding:

  apiVersion: extensions/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: prismacloudcompute-service
  spec:
    privileged: false
    seLinux:
      rule: RunAsAny
    allowedCapabilities:
    - AUDIT_CONTROL
    - NET_ADMIN
    - SYS_ADMIN
    - SYS_PTRACE
    - MKNOD
    - SETFCAP
    volumes:
    - "hostPath"
    - "secret"
    allowedHostPaths:
    - pathPrefix: "/etc"
    - pathPrefix: "/var"
    - pathPrefix: "/run"
    - pathPrefix: "/dev/log"
    - pathPrefix: "/"
    hostNetwork: true
    hostPID: true
    supplementalGroups:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: prismacloudcompute-defender-role
  rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
    - prismacloudcompute-service
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: prismacloudcompute-defender-rolebinding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: prismacloudcompute-defender-role
  subjects:
  - kind: ServiceAccount
    name: twistlock-service
    namespace: twistlock