End-of-Life (EoL)
Docker Swarm
This procedure is optimized to get Prisma Cloud installed and set up in your Docker Swarm cluster quickly.
There are many ways to install Prisma Cloud, but we recommend that you start with this procedure first.
You can tweak the install procedure after you have validated that this install method works.
The Prisma Cloud install supports Docker Swarm using Swarm-native constructs.
Deploy Console as a service so you can rely on Swarm to ensure Console is always available.
Deploy Defender as a global service to guarantee that Defender is automatically deployed to every worker node with a simple one-time configuration.
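For background, a global service is a Swarm-native construct that schedules exactly one task on every node. The following is an illustrative sketch of the mechanism only, using a hypothetical demo service; it is not the actual Defender install, which Console generates for you later in this procedure.

$ docker service create --mode global --name demo-global alpine sleep 1d
$ docker service ps demo-global   # shows one task per node
$ docker service rm demo-global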
Install Prisma Cloud
After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will run in your Swarm cluster.
This setup uses a load balancer (HAProxy) and external persistent storage so that Console can fail over and restart on any Swarm worker node.
If you don’t have external persistent storage, you can configure Console to use local storage, but you must pin Console to the node with the local storage.
Console with local storage is not recommended for production-grade setups.
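If you do choose local storage, one way to pin Console to a specific node is a placement constraint. This is an illustrative sketch only: it assumes the twistlock-console service name shown later in this procedure, and <NODE-WITH-LOCAL-STORAGE> is a hypothetical hostname.

$ docker service update \
    --constraint-add 'node.hostname == <NODE-WITH-LOCAL-STORAGE>' \
    twistlock-console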
In this procedure, Prisma Cloud images are pulled from Prisma Cloud’s cloud registry.
Prisma Cloud doesn’t support deploying Defender as a global service when SELinux is enabled on your underlying hosts.
Defender requires access to the Docker socket to monitor your environment and enforce your policies.
SELinux blocks access to the Docker socket because exposing the socket poses a serious security risk.
Unfortunately, Swarm doesn’t provide a way for legitimate services to run with elevated privileges.
None of the --security-opt, --privileged, or --cap-add flags are supported for Swarm services.
As a workaround, install a single Container Defender on each individual node in your cluster.
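If you script that workaround, a minimal sketch might look like the following. DEFENDER_INSTALL_CMD is a placeholder for the single Container Defender install command you copy from Console (Manage > Defenders > Deploy), and the node names are hypothetical.

$ DEFENDER_INSTALL_CMD='<COMMAND-COPIED-FROM-CONSOLE>'
$ for node in node1 node2 node3; do
    ssh "$node" "$DEFENDER_INSTALL_CMD"
  done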
Set up a load balancer
Swarm uses a routing mesh inside the cluster.
When you deploy Prisma Cloud Console as a replicated service, Swarm’s routing mesh publishes Console’s ports on every node.
A load balancer is required to facilitate Defender-to-Console communication.
Console is deployed on an overlay network, and Defenders are deployed in the host network namespace.
Because Defenders aren’t connected to the overlay network, they cannot connect to the Virtual IP (VIP) address of the Prisma Cloud Console service.
Prepare your load balancer so that traffic is distributed to all available Swarm worker nodes.
The nodes use Swarm’s routing mesh to forward traffic to the worker node that runs Console.
The following diagram shows the setup:

The following example HAProxy configuration has been tested in our labs.
Use it as a starting point for your own configuration.
Whichever load balancer you use, be sure it supports TCP passthrough.
Otherwise, Defenders might not be able to connect to Console.
global
    ...
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ...
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    maxsslconn 256
    tune.ssl.default-dh-param 2048

defaults
    ...

frontend https_front
    stats uri /haproxy?stats
    default_backend https_back
    bind *:8083 ssl crt /etc/ssl/private/haproxy.pem

backend https_back
    balance roundrobin
    server node1 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node2 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node3 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none

frontend defender_front
    stats uri /haproxy?stats
    default_backend defender_back
    option tcplog
    mode tcp
    bind *:8084

backend defender_back
    balance roundrobin
    mode tcp
    option tcp-check
    server node1 IP-OF-YOUR-SWARMWORKER:8084 check
    server node2 IP-OF-YOUR-SWARMWORKER:8084 check
    server node3 IP-OF-YOUR-SWARMWORKER:8084 check
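Before (re)loading HAProxy, you can validate the configuration file. The path below assumes the standard HAProxy configuration location; adjust it for your installation.

$ haproxy -c -f /etc/haproxy/haproxy.cfg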
A couple of notes about the config file:
- Traffic is balanced across three Swarm nodes. Specify as many Swarm nodes as needed under backend https_back and backend defender_back.
- The port binding 8083 uses HTTPS, so you must create a certificate in PEM format before applying the configuration. See bind *:8083 ssl crt /etc/ssl/private/haproxy.pem under frontend https_front. The certificate in this configuration is stored in /etc/ssl/private/haproxy.pem. We recommend creating a certificate that is signed by your trusted CA.
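If you need a certificate for initial testing only, the following sketch generates a self-signed certificate and concatenates the certificate and key into the single PEM file that HAProxy expects. The CN value is a placeholder; for production, use a certificate signed by your trusted CA.

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout haproxy.key -out haproxy.crt \
    -subj "/CN=<LOAD-BALANCER-DNS-NAME>"
$ cat haproxy.crt haproxy.key | sudo tee /etc/ssl/private/haproxy.pem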
(Optional) Set up a DNS record
Simplify the configuration of your environment by setting up a DNS A Record that points to your load balancer.
Then use the load balancer’s domain name to:
- Connect to Console’s HTTP or HTTPS web interface,
- Interface with Console’s API,
- Configure how Defender connects to Console.
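For example, with a hypothetical A record console.example.com pointing at your load balancer, you can verify that the record resolves before proceeding:

$ dig +short A console.example.com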
Set up persistent storage
Install a volume driver that can provision persistent volumes accessible from any node in the cluster.
Because Console can be scheduled on any node, it must be able to access its data and backup folders from wherever it runs.
You can use any available volume plugin; specify its driver with the --volume-driver option when installing Prisma Cloud Console with twistcli.
Every node in your cluster must have the proper permissions to create persistent volumes.
This procedure describes how to use the Google Cloud Platform and NFSv4 volume drivers, but you can use any supported volume plugin.
Set up persistent storage on GCP
Set up the gce-docker volume plugin on each cluster node, then create data and backup volumes for Console.
- Verify that Swarm is enabled on all nodes, and that they are connected to a healthy master.
- Install the GCP volume plugin. Run the following command on each node.

$ docker run -d \
  -v /:/rootfs \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --privileged \
  mcuadros/gce-docker

- Create persistent volumes to hold Console's data and backups.

$ docker volume create \
  --driver=gce \
  --name twistlock-console \
  -o SizeGb=90

$ docker volume create \
  --driver=gce \
  --name twistlock-backup \
  -o SizeGb=90

Set up persistent storage on NFSv4
Set up an NFS server, then create data and backup volumes for Console. The NFS server should run on a dedicated host outside of the Swarm cluster.
Prisma Cloud Console uses MongoDB to store data. There are some mount options required when accessing a MongoDB database from an NFSv4 volume.
- Install an NFSv4 server.

$ sudo apt install nfs-kernel-server

- Configure the server.
- Open /etc/exports for editing.

$ sudo vim /etc/exports

- Append the following line to the file.

/srv/home *(rw,sync,no_root_squash)
- Start the server.

$ sudo systemctl start nfs-kernel-server.service

- Mount the exported directory on all other nodes.

$ sudo mount -o nolock,bg,noatime <server-ip>:/srv/home /<local>/srv/home

- Ensure all permissions are granted to the twistlock user (2674).
- Create NFS volumes to hold Console's data and backups.

$ docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
  --opt device=:/srv/home \
  twistlock-console

$ docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
  --opt device=:/srv/home \
  twistlock-backup

Install Console
Install Console as a Docker Swarm service.
Prerequisites:
- All the components in your environment (nodes, host operating systems, orchestrator, and so on) meet the hardware and version specs in System requirements.
- Your Swarm cluster is up and running.
- Your persistent storage is configured correctly.
- Your load balancer is configured correctly for ports 8083 (HTTPS) and 8084 (TCP).
- You created a DNS record that points to your load balancer.
- Go to Releases, and copy the link to the current recommended release.
- Connect to your master node.

$ ssh <SWARM-MASTER>

- Retrieve the release tarball.

$ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>

- Unpack the Prisma Cloud release tarball.

$ mkdir twistlock
$ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
$ cd twistlock

- Install Console into your Swarm using the twistcli utility.

If you are using GCP (the driver name must match the gce volume driver created earlier):

$ ./linux/twistcli console install swarm --volume-driver "gce"

If you are using NFSv4:

$ ./linux/twistcli console install swarm --volume-driver "local"

If you are using local storage (not recommended for production environments):

$ ./linux/twistcli console install swarm --volume-driver "local"

- At the prompt, enter your Prisma Cloud access token. The access token is required to retrieve the Prisma Cloud container images from the cloud repository.
- Validate that Console is running. It takes a few moments for the replica count to go from 0/1 to 1/1.

$ docker service ls
ID            NAME               MODE        REPLICAS  IMAGE
pctny1pymjg8  twistlock-console  replicated  1/1       registry.twistlock.com/...

- Open Console's dashboard in a web browser. Console's published ports use Swarm's routing mesh (ingress network), so the Console service is accessible at the target port on every node, not just the host it runs on. By default, the web interface is available via HTTPS (port 8083). Go to https://<LOAD-BALANCER>:8083. If you did not configure a load balancer, Console is reachable via HTTPS at https://<ANY-SWARM-NODE-IPADDR>:8083.
- Create your first admin user.
- Enter your license key, and click OK. You are redirected to the Console dashboard.

Install Defender
Defender is installed as a global service, which ensures it runs on every node in the cluster. Console provides a GUI to configure all the options required to deploy Defender into your environment.
- Open Console.
- Go to Manage > Defenders > Names.
- Click Add SAN, and add the DNS name of your load balancer.
- Go to Manage > Defenders > Deploy > Swarm.
- Work through each of the configuration options:
- Choose the DNS name of your load balancer. Defenders use this address to communicate with Console.
- Choose the registry that hosts the Defender image. Select Prisma Cloud's registry.
- Set Deploy Defenders with SELinux Policy to Off.
- Copy the generated curl-bash command.
- Connect to your Swarm master.

$ ssh <SWARM-MASTER>

- Paste the curl-bash command into your shell, then run it. You need sudo privileges to run this command.

$ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...

- Validate that the Defender global service is running. Open Console, then go to Manage > Defenders > Manage. The table lists all Defenders deployed to your environment (one per node).
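You can also confirm from a master node that the Defender service was created in global mode. This is a quick sanity check; the exact service name can vary by version, so the example filters loosely.

$ docker service ls | grep -i defender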
Uninstall
To uninstall Prisma Cloud, reverse the install steps. Delete the Defender global service first, followed by the Console service.
- Delete the Defender global service.
- Open Console, then go to Manage > Defenders > Deploy > Swarm.
- Scroll to the bottom of the page, then copy the last curl-bash command, where it says "The script below uninstalls the Swarm Defenders from the cluster."
- Connect to your Swarm master.

$ ssh <SWARM-MASTER>

- Paste the curl-bash command into your shell, then run it.

$ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...
- Delete the Console service.
- SSH to the node where you downloaded and unpacked the Prisma Cloud release tarball.
- Run twistcli with the uninstall subcommand.

$ ./linux/twistcli console uninstall swarm
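If you also want to remove the persistent volumes created for Console earlier in this procedure (this assumes you no longer need the data or backups they hold), delete them on a node that can see them:

$ docker volume rm twistlock-console twistlock-backup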
Using a private registry
For maximum control over your environment, you might want to store the Prisma Cloud container images in your own private registry, and then install Prisma Cloud from your private registry.
When you deploy Prisma Cloud as a service, Docker Swarm pulls the Console image from the specified registry, and then schedules it to run on a node in the cluster.
Docker Hub and Docker Trusted Registry
Prisma Cloud currently only supports Docker Hub and Docker Trusted Registry for Swarm deployments.
The key steps in the deployment workflow are:
- Log into your registry with docker login.
- Push the Console image to your registry.
- Install Console using twistcli. Set the --registry-address option to your registry and repository. Set the --skip-push option so that twistcli doesn't try to automatically push the Console image to your registry for you. A combined command is sketched after this list.
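The combined command might look like the following. This is a sketch only: the registry address, repository, and volume driver are placeholders you must replace with the values from your own environment.

$ ./linux/twistcli console install swarm \
    --volume-driver "<VOLUME-DRIVER>" \
    --registry-address "<REGISTRY>/<REPOSITORY>" \
    --skip-push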
Unsupported registries
If you are using an unsupported registry, you must manually make the Console image available on each node in your cluster. Unsupported registries include Quay.io, Artifactory, and Amazon EC2 Container Registry.
The method documented here supports any registry. The key steps in this deployment workflow are:
- Manually push the Console image to your registry. The twistcli tool is not capable of doing it for you.
- Manually pull the Console image to each node in your cluster.
- Run twistcli to deploy Console, bypassing any options that interact with the registry. In particular, use the --skip-push option because twistcli does not know how to authenticate and push to unsupported registries.
The commands in this procedure assume you are using Quay.io, but the same method can be applied to any registry. Adjust the commands for your specific registry.
- Download the current Prisma Cloud release from Releases, and copy it to your master node.
- Unpack the Prisma Cloud release tarball.

$ mkdir twistlock
$ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/

- Log in to your registry.

$ docker login quay.io
Username:
Password:
Email:

- Load the Console image shipped in the release tarball.

$ docker load < twistlock_console.tar.gz

- Tag the Console image according to the format required by your registry.

$ docker tag twistlock/private:console_<VERSION> quay.io/<USERNAME>/twistlock:console

- Push the Console image to your registry.

$ docker push quay.io/<USERNAME>/twistlock:console

- Connect to each node in your cluster, and pull the Console image.

$ docker pull quay.io/<USERNAME>/twistlock:console

- On your Swarm master, run twistcli to deploy Console.

$ ./linux/twistcli console install swarm \
    --volume-driver "<VOLUME-DRIVER>" \
    --registry-address "quay.io/<USERNAME>"
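After the install completes, you can confirm that the running service references the image in your private registry. This is a quick check; the format string assumes the standard Docker service spec layout.

$ docker service inspect twistlock-console \
    --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}'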