High Availability and Disaster Recovery guidelines
This article describes the key guidelines for keeping your Prisma Cloud Compute deployment highly available and for creating a disaster recovery process.
A Prisma Cloud Compute deployment consists of two components: Console and Defenders.
- Console is the management interface. It lets you define policy and monitor your environment.
- Defenders are spread across your environment and protect its workloads according to the policies set in the Console.
If the Console fails or stops working, your environment still has active runtime protection from the Defenders. Each Defender holds the most recent policies it received and continues to protect your workloads according to them.
This article focuses mainly on Prisma Cloud Compute Edition deployments (self-hosted Console). With Prisma Cloud Enterprise Edition (SaaS Console), high availability for the Console is provided automatically by Palo Alto Networks.
Guidelines
Use the guidelines in this section to create high availability and disaster recovery processes for your deployment.
The following flowchart depicts the guidelines:
[Flowchart: high availability and disaster recovery guidelines by deployment type]
Inside each cluster
Whether your deployment is in the cloud or on-premises, orchestrators such as Kubernetes, OpenShift, and AWS ECS provide high availability for the cluster and the containers running on it.
- Console: Keep the Console's storage external to the Console container and node. If the Console container or node fails, the orchestrator brings the Console back up, and it reconnects to the external storage to recover its latest state.
- Defenders: Defenders are deployed as a DaemonSet. If a node fails, the orchestrator brings up a replacement node and, as part of the DaemonSet definition, automatically deploys a Defender container on it. You can verify both behaviors with the sketch below.
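As a quick way to confirm the orchestrator is doing its part, you can check the readiness of both components. The following is a minimal sketch using the official Kubernetes Python client; it assumes the default object names from a twistcli-generated deployment (namespace twistlock, Deployment twistlock-console, DaemonSet twistlock-defender-ds), so adjust them to match your environment.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when run inside a pod
apps = client.AppsV1Api()

# Console: confirm the orchestrator keeps the desired replica count running.
console = apps.read_namespaced_deployment("twistlock-console", "twistlock")
print(f"Console ready replicas: {console.status.ready_replicas or 0}/{console.spec.replicas}")

# Defenders: confirm the DaemonSet has a ready pod on every targeted node.
ds = apps.read_namespaced_daemon_set("twistlock-defender-ds", "twistlock")
print(f"Defenders ready: {ds.status.number_ready}/{ds.status.desired_number_scheduled}")
```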
Between clusters
While not explicitly tested or supported by Palo Alto Networks, solutions that replicate storage between clusters for disaster recovery generally work transparently with Prisma Cloud Compute Edition.
Note that ingress into the Console (DNS mapping and IP routing) may require additional steps during the activation of the secondary sites to ensure the Console is reachable over the network.
Public cloud
- Inside each region: CSPs provide high availability using availability zones (AZs) inside each region. In case of an AZ failure, most cloud providers bring the cluster back up in another AZ. Use cross-AZ storage solutions, so that when the cluster comes up in another AZ, it connects to the shared storage and keeps functioning as before. For example, in AWS, EFS can be used as shared storage between availability zones (see the sketch after this list).
- Between regions: CSPs provide solutions such as snapshots and backups that can be moved between regions, cross-region shared storage, and more. You can also use Compute's backup and restore capabilities to move the data between regions.
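The cross-AZ pattern works because EFS is a regional service: a single file system can expose mount targets in several availability zones, so a cluster recovered in a different AZ reattaches to the same data. A minimal boto3 sketch, with hypothetical subnet IDs standing in for one subnet per AZ:

```python
import time
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create a regional file system to hold the Console's data.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)
fs_id = fs["FileSystemId"]

# Wait until the file system is ready to accept mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per availability zone; the subnet IDs here are placeholders.
for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(FileSystemId=fs_id, SubnetId=subnet_id)
```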
Private cloud (on-premises)
- Inside each site/data center (between clusters on the same site)
  - Use shared storage between the clusters.
  - Create a disaster recovery process using Compute's backup and restore capabilities:
    - Create a spare cluster (warm or cold) with a Prisma Cloud Compute (PCC) deployment.
    - Back up PCC's data periodically to a location outside of the active cluster.
    - If the active cluster fails, bring the spare cluster up and restore PCC's data to it.
- Between sites/data centers
  - Create a disaster recovery process for cases where one site goes down, using Compute's backup and restore capabilities (see the sketch after this list):
    - Create a spare site (warm or cold) with a PCC deployment.
    - Back up PCC's data periodically to a location outside of the active site.
    - If the entire active site fails, bring the spare site up and restore PCC's data from the external location.
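For either process, the periodic backup step can be a scheduled job that ships the Console's most recent backup off the active cluster or site. The sketch below is illustrative only: it assumes the Console's automatic backups land under /var/lib/twistlock/backups (the default data path) and that an off-site location is mounted at /mnt/dr-share; verify both paths against your deployment.

```python
import pathlib
import shutil

# Assumption: the default Console data path; confirm against your installation.
BACKUP_DIR = pathlib.Path("/var/lib/twistlock/backups")
# Hypothetical off-cluster/off-site mount (NFS share, object-storage gateway, etc.).
EXTERNAL_DIR = pathlib.Path("/mnt/dr-share/pcc-backups")

EXTERNAL_DIR.mkdir(parents=True, exist_ok=True)

# Ship the newest backup file so it survives a full cluster or site failure.
backups = [p for p in BACKUP_DIR.iterdir() if p.is_file()]
latest = max(backups, key=lambda p: p.stat().st_mtime)
shutil.copy2(latest, EXTERNAL_DIR / latest.name)
print(f"Copied {latest.name} to {EXTERNAL_DIR}")
```

Run it from cron or a Kubernetes CronJob, and restore from the external copy as described in Backup and Restore.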
Projects
Projects solve the problem of multi-tenancy. Each project consists of a Console and its Defenders.
Each project is a separate, compartmentalized environment which operates independently with its own rules and configurations.
High availability and disaster recovery processes should be created for each tenant project, similar to the way you would handle a single Console deployment.
If you use Compute's backup and restore capabilities, create and restore backups separately for each project, as in the sketch below.
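A hypothetical sketch of that per-project loop, driven through the Central Console API: the POST /api/v1/backups endpoint and the project query parameter are assumptions based on the Compute API's conventions, so confirm both against the API reference for your version, and treat the address and project names below as placeholders.

```python
import requests

CONSOLE = "https://central-console.example.com:8083"    # hypothetical Console address
AUTH = ("backup-svc", "********")                       # use a least-privileged account
PROJECTS = ["Central Console", "tenant-a", "tenant-b"]  # hypothetical project names

for project in PROJECTS:
    # Assumed endpoint and parameter; each project is backed up independently.
    resp = requests.post(
        f"{CONSOLE}/api/v1/backups",
        params={"project": project},
        auth=AUTH,
        timeout=60,
        verify="/etc/ssl/certs/console-ca.pem",  # hypothetical CA bundle path
    )
    resp.raise_for_status()
    print(f"Backup triggered for project: {project}")
```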