Defender architecture
Customers often ask how Prisma Cloud Defender really works under the covers.
Prisma Cloud leverages Docker’s ability to grant advanced kernel capabilities to enable Defender to protect your whole stack, while being completely containerized and utilizing a least privilege security design.
Defender design
Because we’ve built Prisma Cloud expressly for cloud native stacks, the architecture of our agent (what we call Defender) is quite different.
Rather than having to install a kernel module, or modify the host OS at all, Defender instead runs as a Docker container and takes only those specific system privileges required for it to perform its job.
It does not run with the --privileged flag. Instead, it takes only the specific system capabilities it needs (net_admin, sys_admin, sys_ptrace, mknod, and setfcap) to run in the host namespace and interact with both the host and the other containers running on the system.
You can see this clearly by inspecting the Defender container:
# docker inspect twistlock_defender_<VERSION> | grep -e CapAdd -A 7 -e Priv
"CapAdd": [
    "NET_ADMIN",
    "SYS_ADMIN",
    "SYS_PTRACE",
    "MKNOD",
    "SETFCAP"
],
--
"Privileged": false,
This architecture allows Defender to have a near real time view of the activity occurring at the kernel level.
Because we also have detailed knowledge of the operations of each container, we can correlate the kernel data with the container data to get a comprehensive view of process, file system, network, and system call activity from the kernel and all the containers running on it.
This access also allows us to take preventative actions like stopping compromised containers and blocking anomalous processes and file system writes.
Critically, though, Defender runs as a user mode process.
If Defender were to fail (and if it did, it would be restarted immediately), there would be no impact on the containers on the host, nor on the host kernel itself.
Additionally, we can and do apply cgroups to set hard limits on CPU and memory consumption, guaranteeing it will be a ‘good neighbor’ on the host and not interfere with host performance or stability.
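For example, on a Docker host you can view the limits applied to the Defender container with standard Docker commands; the exact values depend on your deployment:

# Show the hard memory (bytes) and CPU (NanoCPUs) limits set on the Defender
# container, then compare them against its live resource consumption.
docker inspect --format 'Memory: {{.HostConfig.Memory}}  NanoCpus: {{.HostConfig.NanoCpus}}' twistlock_defender_<VERSION>
docker stats --no-stream twistlock_defender_<VERSION>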
In the event of a communications failure with Console, Defender continues running and enforcing the active policy that was last pushed by the management point.
Events that would be pushed back to Console are cached locally until it is once again reachable.
Why not a kernel module?
Given the broad range of security protection Prisma Cloud provides, not just for containers, but also for the hosts they run on, you might assume that we use a kernel module, with all the baggage that goes along with that.
However, that’s not actually how Prisma Cloud works.
Kernel modules are compiled software components that can be inserted into the kernel at runtime and typically provide enhanced capabilities for low level functionality like process scheduling or file monitoring.
Because they run as part of the kernel, these components are very powerful and privileged.
This allows them to perform a wide range of functions but also greatly increases the operational and security risks on a given system.
The kernel itself is extensively tested across broad use cases, while these modules are often created by individual companies with far fewer resources and far narrower test coverage.
Because kernel modules have unrestricted system access, a security flaw in them is a system wide exposure.
A single unchecked buffer or other error in such a low level component can lead to the complete compromise of an otherwise well designed and hardened system.
Further, kernel modules can introduce significant stability risks to a system.
Again, because of their wide access, a poorly performing kernel module that’s frequently called can drag down performance of the entire host, consume excessive resources, and lead to kernel panics.
For these reasons, many modern operating systems designed for cloud native apps, like Google Container-Optimized OS, explicitly prevent the usage of kernel modules.
Defender-Console communication
By default, Defender connects to Console with a websocket on TCP port 8084.
This port number can be customized to meet the needs of your environment.
All traffic between Defender and Console is TLS encrypted.
Defender has no privileged access to Console or the underlying host where Console is installed.
By design, Console and Defender don’t trust each other, and mutual certificate-based authentication is required for Defender to connect. For more information about the certificates used for Console-Defender communication, see the certificates article.
Pre-auth, connections are blocked.
Post-auth, Defender’s capabilities are limited to getting policies from Console and sending event data to Console.
If Defender were to be compromised, the risk would be limited to the system where it is deployed, the privileges Defender holds on that system, and the possibility of it sending garbage data to Console.
Console communication channels are separated, with no ability to jump channels.
Defender has no ability to interact with Console beyond the websocket.
Console’s API and web interface, both served on port 8083 (HTTPS), require authentication over a different channel with different credentials (such as a username and password or an access key), none of which Defender holds.
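If you need to verify connectivity from a host to Console, one simple check (assuming the default port and a placeholder Console address) is to confirm that the websocket port answers with a TLS handshake:

# Confirm that Console's websocket port is reachable and presents a TLS
# certificate; replace <console-address> with your Console's address, and
# adjust the port if you have customized it.
openssl s_client -connect <console-address>:8084 </dev/null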
Blocking rules
Defender is responsible for enforcing vulnerability and compliance blocking rules.
When a blocking rule is created, Defender moves the original runC binary to a new path and inserts a Prisma Cloud runC shim binary in its place.
When a command to create a container is issued, it propagates down the layers of the container orchestration stack, eventually terminating at runC.
Regardless of your environment (Docker, Kubernetes, OpenShift, and so on) and the underlying CRI provider, runC does the actual work of instantiating a container.

When starting a container in a Prisma Cloud-protected environment:
- The Prisma Cloud runC shim binary intercepts calls to the runC binary.
- The shim binary calls the Defender container to determine whether the new container should be created based on the installed policy.
- If Defender replies affirmatively, the shim calls the original runC binary to create the container, and then exits.
- If Defender replies negatively, the shim terminates the request.
- If Defender does not reply within 60 seconds, the shim calls the original runC binary to create the container and then exits.
The last step guarantees that Defender always fails open, which is important for the resiliency of your environment.
Even if the Defender process terminates, becomes unresponsive, or cannot be restarted, a failed Defender will not hinder deployments or the normal operation of a node.
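The following is a minimal sketch of that decision flow, written as a shell script for illustration only. It is not the actual Prisma Cloud shim: the relocated runC path and the defender-policy-check helper are assumptions made for the example.

#!/bin/sh
# Illustrative sketch of the shim's fail-open logic; not the real Prisma Cloud shim.
# ORIG_RUNC is an assumed path for the relocated runC binary, and
# defender-policy-check is a hypothetical helper that asks the local Defender
# whether the requested container may be created.
ORIG_RUNC=/usr/bin/runc.original

timeout 60 defender-policy-check "$@"
rc=$?

if [ "$rc" -eq 0 ] || [ "$rc" -eq 124 ]; then
    # Allowed, or no reply within 60 seconds (timeout exits with code 124):
    # fail open and hand off to the original runC binary.
    exec "$ORIG_RUNC" "$@"
fi

# Defender explicitly denied the request: terminate it.
echo "container creation blocked by Prisma Cloud policy" >&2
exit 1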
Firewalls
Defender enforces WAF policies (WAAS) and monitors layer 4 traffic (CNNS).
In both cases, Defender creates iptables rules on the host so it can observe network traffic.
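To see what Defender has installed on a given host, you can list the host's iptables rules; the specific chains and comments vary by deployment and are not documented here:

# Dump all iptables rules currently installed on the host, including any
# added by Defender for WAAS and CNNS traffic inspection.
sudo iptables-save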
For more information, see CNNS and WAAS architecture.