Docker Swarm

This procedure is optimized to get Prisma Cloud installed and set up in your Docker Swarm cluster quickly. There are many ways to install Prisma Cloud, but we recommend starting with this procedure; you can tweak it after you have validated that this install method works.
The Prisma Cloud install supports Docker Swarm using Swarm-native constructs. Deploy Console as a service so you can rely on Swarm to ensure Console is always available. Deploy Defender as a global service to guarantee that Defender is automatically deployed to every worker node with a simple one-time configuration.

Install Prisma Cloud

After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will run in your Swarm cluster. This setup uses a load balancer (HAProxy) and external persistent storage so that Console can fail over and restart on any Swarm worker node.
If you don’t have external persistent storage, you can configure Console to use local storage, but you must pin Console to the node with the local storage. Console with local storage is not recommended for production-grade setups.
In this procedure, Prisma Cloud images are pulled from Prisma Cloud’s cloud registry.
Prisma Cloud doesn’t support deploying Defender as a global service when SELinux is enabled on your underlying hosts. Defender requires access to the Docker socket to monitor your environment and enforce your policies, but SELinux blocks access to the Docker socket because it can be a serious security issue. Unfortunately, Swarm doesn’t provide a way for legitimate services to run with elevated privileges: none of the --security-opt, --privileged, or --cap-add flags are supported for Swarm services. As a workaround, install a single Container Defender on each individual node in your cluster.

Set up a load balancer

Swarm uses a routing mesh inside the cluster. When you deploy Prisma Cloud Console as a replicated service, Swarm’s routing mesh publishes Console’s ports on every node.
A load balancer is required for Defender-to-Console communication. Console is deployed on an overlay network, while Defenders are deployed in the host network namespace. Because Defenders aren’t connected to the overlay network, they cannot connect to the Virtual IP (VIP) address of the Prisma Cloud Console service. Prepare your load balancer to distribute traffic to all available Swarm worker nodes; the nodes then use Swarm’s routing mesh to forward the traffic to the worker node that runs Console.
The following example HAProxy configuration has been tested in our labs. Use it as a starting point for your own configuration.
Whichever load balancer you use, be sure it supports TCP passthrough. Otherwise, Defenders might not be able to connect to Console.
global
    ...
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ...
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    maxsslconn 256
    tune.ssl.default-dh-param 2048

defaults
    ...

frontend https_front
    stats uri /haproxy?stats
    default_backend https_back
    bind *:8083 ssl crt /etc/ssl/private/haproxy.pem

backend https_back
    balance roundrobin
    server node1 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node2 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node3 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none

frontend defender_front
    stats uri /haproxy?stats
    default_backend defender_back
    option tcplog
    mode tcp
    bind *:8084

backend defender_back
    balance roundrobin
    mode tcp
    option tcp-check
    server node1 IP-OF-YOUR-SWARMWORKER:8084 check
    server node2 IP-OF-YOUR-SWARMWORKER:8084 check
    server node3 IP-OF-YOUR-SWARMWORKER:8084 check
A couple of notes about the config file:
  • The https_front frontend (port 8083) terminates TLS at the load balancer, so it needs a certificate (haproxy.pem in this example).
  • The defender_front frontend (port 8084) runs in TCP mode, so Defender traffic is passed through to Console without TLS termination.

(Optional) Set up a DNS record

Simplify the configuration of your environment by setting up a DNS A record that points to your load balancer. Then use the load balancer’s domain name to:
  • Connect to Console’s HTTP or HTTPS web interface,
  • Interface with Console’s API,
  • Configure how Defender connects to Console.
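For example, assuming a hypothetical record console.example.com that points at your load balancer, you can confirm that the record resolves before proceeding:

$ dig +short console.example.com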

Set up persistent storage

Install a volume driver that can create persistent volumes accessible from any node in the cluster. Because Console can be scheduled on any node, it must be able to access its data and backup folders from wherever it runs.
You can use any available volume plugin, then specify the plugin driver with the --volume-driver option when installing Prisma Cloud Console with twistcli. Every node in your cluster must have the proper permissions to create persistent volumes.
This procedure describes how to use the Google Cloud Platform and NFSv4 volume drivers, but you can use any supported volume plugin.

Set up persistent storage on GCP

Set up the gce-docker volume plugin on each cluster node, then create data and backup volumes for Console.
  1. Verify that Swarm is enabled on all nodes, and that they are connected to a healthy master.
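    One quick way to verify this is from a manager node, where you can check the Swarm state and confirm that every node reports as Ready:
    $ docker info --format '{{.Swarm.LocalNodeState}}'
    $ docker node ls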
  2. Install the GCP volume plugin. Run the following command on each node.
    $ docker run -d \
        -v /:/rootfs \
        -v /run/docker/plugins:/run/docker/plugins \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --privileged \
        mcuadros/gce-docker
  3. Create persistent volumes to hold Console’s data and backups.
    $ docker volume create \
        --driver=gce \
        --name twistlock-console \
        -o SizeGb=90
    $ docker volume create \
        --driver=gce \
        --name twistlock-backup \
        -o SizeGb=90
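    To confirm that both volumes were created, you can list them with a name filter:
    $ docker volume ls --filter name=twistlock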

Set up persistent storage on NFSv4

Set up an NFS server, then create data and backup volumes for Console. The NFS server should run on a dedicated host outside the Swarm cluster.
Prisma Cloud Console uses MongoDB to store data. The following mount options are required when accessing a MongoDB database on an NFSv4 volume:
  • nolock — Disables the NLM sideband protocol to lock files on the server.
  • noatime — Prevents inode access times from being updated on every read.
  • bg — Retries the mount in the background so the mount command doesn’t hang indefinitely if there is a problem connecting to the server.
  1. Install an NFSv4 server (the following command uses the Debian/Ubuntu package):
    $ sudo apt install nfs-kernel-server
  2. Configure the server.
    1. Open /etc/exports for editing.
      $ sudo vim /etc/exports
    2. Append the following line to the file.
      /srv/home *(rw,sync,no_root_squash)
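      After saving the file, you can re-export the share so the new entry takes effect without restarting the server:
      $ sudo exportfs -ra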
  3. Start the server.
    $ sudo systemctl start nfs-kernel-server.service
  4. Mount the NFS share on all other nodes.
    $ sudo mount -o nolock,bg,noatime <server-ip>:/srv/home /<local>/srv/home
  5. Ensure that the twistlock user (UID 2674) has full permissions on the exported directory.
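    For example, one way to grant this is to change ownership of the exported directory to that UID on the NFS server:
    $ sudo chown -R 2674:2674 /srv/home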
  6. Create NFS volumes to hold Console’s data and backups.
    $ docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
        --opt device=:/srv/home \
        twistlock-console
    $ docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
        --opt device=:/srv/home \
        twistlock-backup

Install Console

Install Console as a Docker Swarm service.
Prerequisites:
  • All the components in your environment (nodes, host operating systems, orchestrator, etc.) meet the hardware and version specs in System requirements.
  • Your Swarm cluster is up and running.
  • Your persistent storage is configured correctly.
  • Your load balancer is configured correctly for ports 8083 (HTTPS) and 8084 (TCP).
  • You created a DNS record that points to your load balancer.
  1. Get a link to the current recommended release.
  2. Connect to your master node.
    $ ssh <SWARM-MASTER>
  3. Retrieve the release tarball.
    $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
  4. Unpack the Prisma Cloud release tarball.
    $ mkdir twistlock
    $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
    $ cd twistlock
  5. Install Console into your Swarm using the twistcli utility.
    If you are using GCP:
    $ ./linux/twistcli console install swarm --volume-driver "gce"
    If you are using NFSv4:
    $ ./linux/twistcli console install swarm --volume-driver "local"
    If you are using a local storage (not recommended for production environments):
    $ ./linux/twistcli console install swarm --volume-driver "local"
  6. At the prompt, enter your Prisma Cloud access token. The access token is required to retrieve the Prisma Cloud container images from the cloud repository.
  7. Validate that Console is running. It takes a few moments for the replica count to go from 0/1 to 1/1.
    $ docker service ls
    ID            NAME               MODE        REPLICAS  IMAGE
    pctny1pymjg8  twistlock-console  replicated  1/1       registry.twistlock.com/...
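    If the replica count stays at 0/1, you can see where Swarm tried to schedule Console and any scheduling errors:
    $ docker service ps --no-trunc twistlock-console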
  8. Open Prisma Cloud Console’s web interface in a browser. By default, the web interface is served over HTTPS on port 8083, so go to https://<LOAD-BALANCER>:8083.
    Console’s published ports use Swarm’s routing mesh (ingress network), so the service is accessible on the target port of every node, not just the node where Console runs. If you did not configure a load balancer, Console is reachable at https://<ANY-SWARM-NODE-IPADDR>:8083.
  9. Create your first admin user.
  10. Enter your license key, and click OK.
    You are redirected to the Console dashboard.

Install Defender

Defender is installed as a global service, which ensures it runs on every node in the cluster. Console provides a GUI to configure all the options required to deploy Defender into your environment.
  1. Open Console.
  2. Go to Manage > Defenders > Names.
  3. Click Add SAN, and add the DNS name of your load balancer.
  4. Go to Manage > Defenders > Deploy > Swarm.
  5. Work through each of the configuration options:
    1. Choose the DNS name of your load balancer. Defenders use this address to communicate with Console.
    2. Choose the registry that hosts the Defender image. Select Prisma Cloud’s registry.
    3. Set Deploy Defenders with SELinux Policy to Off.
    4. Copy the generated curl-bash command.
  6. Connect to your Swarm master.
    $ ssh <SWARM-MASTER>
  7. Paste the curl-bash command into your shell, then run it. You need sudo privileges to run this command.
    $ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...
  8. Validate that the Defender global service is running.
    Open Console, then go to Manage > Defenders > Manage. The table lists all Defenders deployed to your environment (one per node).
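    You can also confirm from the Swarm master that the Defender service is running in global mode (the exact service name can vary by release):
    $ docker service ls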

Uninstall

To uninstall Prisma Cloud, reverse the install steps. Delete the Defender global service first, followed by the Console service.
  1. Delete the Defender global service.
    1. Open Console, then go to Manage > Defenders > Deploy > Swarm.
    2. Scroll to the bottom of the page, then copy the last curl-bash command, where it says "The script below uninstalls the Swarm Defenders from the cluster".
    3. Connect to your Swarm master.
      $ ssh <SWARM-MASTER>
    4. Paste the curl-bash command into your shell, then run it.
      $ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...
  2. Delete the Console service.
    1. SSH to the node where you downloaded and unpacked the Prisma Cloud release tarball.
    2. Run twistcli with the uninstall subcommand.
      $ ./linux/twistcli console uninstall swarm
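      If you created the twistlock-console and twistlock-backup volumes earlier and no longer need the data, you can remove them as well. Note that this permanently deletes Console’s data and backups:
      $ docker volume rm twistlock-console twistlock-backup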

Using a private registry

For maximum control over your environment, you might want to store the Prisma Cloud container images in your own private registry, and then install Prisma Cloud from your private registry.
When you deploy Prisma Cloud as a service, Docker Swarm pulls the Console image from the specified registry, and then schedules it to run on a node in the cluster.

Docker Hub and Docker Trusted Registry

Prisma Cloud currently only supports Docker Hub and Docker Trusted Registry for Swarm deployments.
The key steps in the deployment workflow are:
  1. Log into your registry with docker login.
  2. Push the Console image to your registry.
  3. Install Console using twistcli.
    Set the --registry-address option to your registry and repository. Set the --skip-push option so that twistcli doesn’t try to automatically push the Console image to your registry for you.
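As a rough sketch, assuming a hypothetical Docker Hub repository myorg/twistlock-console (substitute your own registry address, repository, and image tag), the workflow looks something like this:

$ docker login
$ docker tag <CONSOLE-IMAGE> myorg/twistlock-console:<VERSION>
$ docker push myorg/twistlock-console:<VERSION>
$ ./linux/twistcli console install swarm \
    --registry-address myorg/twistlock-console \
    --skip-push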

Unsupported registries

If you are using an unsupported registry, you must manually make the Console image available on each node in your cluster. Unsupported registries include Quay.io, Artifactory, and Amazon EC2 Container Registry.
The method documented here supports any registry. The key steps in this deployment workflow are:
  • Manually push the Console image to your registry. The twistcli tool is not capable of doing it for you.
  • Manually pull the Console image to each node in your cluster.
  • Run twistcli to deploy Console, bypassing any options that interact with the registry. In particular, use the --skip-push option because twistcli does not know how to authenticate and push to unsupported registries.
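For example, the manual push and per-node pull might look like this with a hypothetical Quay.io repository (quay.io/myorg/twistlock-console):

$ docker tag <CONSOLE-IMAGE> quay.io/myorg/twistlock-console:<VERSION>
$ docker push quay.io/myorg/twistlock-console:<VERSION>

Then, on every node in the cluster:

$ docker pull quay.io/myorg/twistlock-console:<VERSION>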
The commands in this procedure assume you are using Quay.io, but the same method can be applied to any registry. Adjust the commands for your specific registry.
  1. Download the current recommended release and copy it to your master node.
  2. Unpack the Prisma Cloud release tarball.