End-of-Life (EoL)
Docker Swarm
This procedure is optimized to get Prisma Cloud installed and set up in your Docker Swarm cluster quickly.
There are many ways to install Prisma Cloud, but we recommend that you start with this procedure first.
You can tweak the install procedure after you have validated that this install method works.
The Prisma Cloud install supports Docker Swarm using Swarm-native constructs.
Deploy Console as a service so you can rely on Swarm to ensure Console is always available.
Deploy Defender as a global service to guarantee that Defender is automatically deployed to every worker node with a simple one-time configuration.
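If Swarm's service modes are new to you, the following throwaway example illustrates what a global service does. The service name and image are placeholders used only for illustration; the actual Console and Defender services are created for you by twistcli and Console later in this procedure.

$ # A global service runs exactly one task on every node, including nodes that join later.
$ docker service create --name demo-global --mode global alpine:3 sleep 3600
$ # One task per node should be listed.
$ docker service ps demo-global
$ docker service rm demo-global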
Install Prisma Cloud
After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will run in your Swarm cluster.
This setup uses a load balancer (HAProxy) and external persistent storage so that Console can fail over and restart on any Swarm worker node.
If you don’t have external persistent storage, you can configure Console to use local storage, but you must pin Console to the node with the local storage.
Console with local storage is not recommended for production-grade setups.
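For reference, pinning a service to the node that holds its local storage is typically done with a Swarm placement constraint on a node label. The label, node name, and service name below are placeholders, and the Console service itself is created by twistcli later in this procedure; treat this only as a sketch of the Swarm mechanism.

$ # Label the node that holds Console's local storage.
$ docker node update --label-add twistlock-console-storage=true <NODE-NAME>
$ # Constrain the (already created) Console service to that node.
$ docker service update --constraint-add 'node.labels.twistlock-console-storage == true' <CONSOLE-SERVICE-NAME>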
In this procedure, Prisma Cloud images are pulled from Prisma Cloud’s cloud registry.
Prisma Cloud doesn’t support deploying Defender as a global service when SELinux is enabled on your underlying hosts.
Defender requires access to the Docker socket to monitor your environment and enforce your policies.
SELinux blocks access to the Docker socket because socket access can be a serious security issue.
Unfortunately, Swarm doesn’t provide a way for legitimate services to run with elevated privileges.
None of the --security-opt, --privileged, or --cap-add flags are supported for Swarm services.
As a workaround, install a single Container Defender on each individual node in your cluster.
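To check whether SELinux is enforcing on a given host before choosing a deployment method, you can run, for example:

$ getenforce    # prints Enforcing, Permissive, or Disabled
$ sestatus      # more detail, if the policycoreutils package is installed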
Set up a load balancer
Swarm uses a routing mesh inside the cluster.
When you deploy Prisma Cloud Console as a replicated service, Swarm’s routing mesh publishes Console’s ports on every node.
A load balancer is required to facilitate Defender-to-Console communication.
Console is deployed on an overlay network, and Defenders are deployed in the host network namespace.
Because Defenders aren’t connected to the overlay network, they cannot connect to the Virtual IP (VIP) address of the Prisma Cloud Console service.
Prepare your load balancer so that traffic is distributed to all available Swarm worker nodes.
The nodes use Swarm’s routing mesh to forward traffic to the worker node that runs Console.
The following diagram shows the setup:

The following example HAProxy configuration has been tested in our labs.
Use it as a starting point for your own configuration.
Whichever load balancer you use, be sure it supports TCP passthrough.
Otherwise, Defenders might not be able to connect to Console.
global
    ...
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ...
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    maxsslconn 256
    tune.ssl.default-dh-param 2048

defaults
    ...

frontend https_front
    stats uri /haproxy?stats
    default_backend https_back
    bind *:8083 ssl crt /etc/ssl/private/haproxy.pem

backend https_back
    balance roundrobin
    server node1 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node2 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node3 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none

frontend defender_front
    stats uri /haproxy?stats
    default_backend defender_back
    option tcplog
    mode tcp
    bind *:8084

backend defender_back
    balance roundrobin
    mode tcp
    option tcp-check
    server node1 IP-OF-YOUR-SWARMWORKER:8084 check
    server node2 IP-OF-YOUR-SWARMWORKER:8084 check
    server node3 IP-OF-YOUR-SWARMWORKER:8084 check
A couple of notes about the config file:
- Traffic is balanced across three Swarm worker nodes. List as many Swarm nodes as you need under the backend https_back and backend defender_back sections.
- The port binding for 8083 uses HTTPS, so you must create a certificate in PEM format before applying the configuration. See bind *:8083 ssl crt /etc/ssl/private/haproxy.pem under frontend https_front; in this configuration, the certificate is stored at /etc/ssl/private/haproxy.pem. We recommend a certificate signed by your trusted CA; a sketch of creating a self-signed certificate for testing follows this list.
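For example, for a lab setup you could generate a self-signed certificate with openssl and concatenate the certificate and key into the single PEM file referenced by the configuration. The file names and subject below are placeholders; for production, use a certificate signed by your trusted CA instead.

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout haproxy.key -out haproxy.crt \
    -subj "/CN=console.example.com"
$ # HAProxy expects the certificate and private key concatenated into one PEM file.
$ sudo bash -c 'cat haproxy.crt haproxy.key > /etc/ssl/private/haproxy.pem'
$ sudo chmod 600 /etc/ssl/private/haproxy.pem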
(Optional) Set up a DNS record
Simplify the configuration of your environment by setting up a DNS A Record that points to your load balancer.
Then use the load balancer’s domain name to:
- Connect to Console’s HTTP or HTTPS web interface,
- Interface with Console’s API,
- Configure how Defender connects to Console.
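For example, a BIND-style A record pointing a hypothetical hostname at the load balancer's IP address might look like this (both values are placeholders):

console.example.com.    300    IN    A    203.0.113.10

Defenders would then connect to Console through console.example.com:8084, and you would browse to the Console UI at https://console.example.com:8083.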
Set up persistent storage
Install a volume driver that can create persistent volumes accessible from any node in the cluster.
Because Console can be scheduled on any node, it must be able to access its data and backup folders from wherever it runs.
You can use any available volume plugin, then specify the plugin driver with the --volume-driver option when installing Prisma Cloud Console with twistcli.
Every node in your cluster must have the proper permissions to create persistent volumes.
This procedure describes how to use the Google Cloud Platform and NFSv4 volume drivers, but you can use any supported volume plugin.
Set up persistent storage on GCP
Set up the gce-docker volume plugin on each cluster node, then create data and backup volumes for Console.
- Verify that Swarm is enabled on all nodes, and that they are connected to a healthy master.
- Install the GCP volume plugin. Run the following command on each node.
  $ docker run -d \
      -v /:/rootfs \
      -v /run/docker/plugins:/run/docker/plugins \
      -v /var/run/docker.sock:/var/run/docker.sock \
      --privileged \
      mcuadros/gce-docker
- Create persistent volumes to hold Console's data and backups.
  $ docker volume create \
      --driver=gce \
      --name twistlock-console \
      -o SizeGb=90
  $ docker volume create \
      --driver=gce \
      --name twistlock-backup \
      -o SizeGb=90
Set up persistent storage on NFSv4
Set up an NFS server, then create data and backup volumes for Console. The NFS server should run on a dedicated host outside of the Swarm cluster.
Prisma Cloud Console uses MongoDB to store data. Some mount options are required when accessing a MongoDB database from an NFSv4 volume.
- Install an NFSv4 server.
  $ sudo apt install nfs-kernel-server
- Configure the server:
- Open /etc/exports for editing.
  $ sudo vim /etc/exports
- Append the following line to the file.
  /srv/home *(rw,sync,no_root_squash)
- Start the server.
  $ sudo systemctl start nfs-kernel-server.service
- Mount the export on all other nodes.
  $ sudo mount -o nolock,bg,noatime <server-ip>:/srv/home /<local>/srv/home
- Ensure the twistlock user (UID 2674) has full permissions on the exported directory.
- Create NFS volumes to hold Console's data and backups.
  $ docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
      --opt device=:/srv/home \
      twistlock-console
  $ docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
      --opt device=:/srv/home \
      twistlock-backup
Install Console
Install Console as a Docker Swarm service.
Prerequisites:
- All the components in your environment (nodes, host operating systems, orchestrator, etc) meet the hardware and version specs in System requirements.
- Your Swarm cluster is up and running.
- Your persistent storage is configured correctly.
- Your load balancer is configured correctly for ports 8083 (HTTPS) and 8084 (TCP).
- You created a DNS record that points to your load balancer.
- Get a link to the current recommended release.
- Connect to your master node.
  $ ssh <SWARM-MASTER>
- Retrieve the release tarball.
  $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
- Unpack the Prisma Cloud release tarball.
  $ mkdir twistlock
  $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
  $ cd twistlock
- Install Console into your Swarm using the twistcli utility. If you are using GCP, pass the same driver name you used when creating the volumes.
  $ ./linux/twistcli console install swarm --volume-driver "gce"
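If you used the NFSv4 setup above instead, the volumes were created with Docker's built-in local driver, so you would presumably pass that driver name; the value below is an assumption and must match the driver used to create the twistlock-console and twistlock-backup volumes. In either case, you can confirm that the service started with standard Swarm commands.

$ ./linux/twistcli console install swarm --volume-driver "local"
$ # Verify the Console service is running; the service name may differ in your install.
$ docker service ls
$ docker service ps twistlock-console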