End-of-Life (EoL)
Amazon ECS
This quickstart guide shows you how to deploy Prisma Cloud on a simple cluster that has a single infrastructure node and two worker nodes.
Console runs on the infrastructure node, and an instance of Defender runs on each of the worker nodes.
Console is the Prisma Cloud management interface, and it runs as a service.
The parameters of the service are described in a task definition, and the task definition is written in JSON format.
Defender protects your containerized environment according to the policies you set in Console.
To automatically deploy an instance of Defender on each worker node in your cluster, you will use a user data script in the worker node launch configuration.
User data scripts run custom configuration commands when a new instance is started.
You will set up the user data script to call the Prisma Cloud API to download, install, and start Defender.
This guide assumes you know very little about AWS ECS.
As such, it is extremely prescriptive.
If you are already familiar with AWS ECS and do not need assistance navigating the interface, simply read the section synopsis, which summarizes all key configurations.
The installation described in this article is meant to be "highly available" in that data is persisted across restarts of the nodes.
If an infrastructure node were to go down, ECS should be able to reschedule the Console service on any healthy node, and Console should still have access to its state.
To enable this capability, you must attach storage that is accessible from each of your infrastructure nodes, and Amazon Elastic File System (EFS) is an excellent choice.
When you have more than one infrastructure node, ECS can run Console on any one of them.
Defenders need a reliable way to connect to Console, no matter where it runs.
A load balancer automatically directs traffic to the node where Console runs, and offers a stable interface that Defenders can use to connect to Console and that operators can use to access its web interface.
We assume you are deploying Prisma Cloud to the default VPC.
If you are not using the default VPC, adjust your settings accordingly.
Key details
There are a number of AWS resource identifiers and other details that are used throughout the install procedure.
You should create a list of the following details for easy retrieval during the installation process.
Cluster name
: Retain this after creating the ECS cluster. Default value: pc-ecs-cluster.

Security group name
: Retain this after creating the security group. Default value: pc-security-group.

Mount command for Console EFS
: Retain this after creating an EFS file system for Console.

Access Token
: Access token for Prisma Cloud.

License Key
: License key for Prisma Cloud.

Version
: The version of Prisma Cloud you are deploying, for example 20_04_169.

Load Balancer’s public DNS
: Retain this after configuring a load balancer for your infrastructure nodes.

Download the Prisma Cloud software
The Prisma Cloud release tarball contains all the release artifacts.
- Download the latest recommended release.
- Retrieve the release tarball.

  $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>

- Unpack the Prisma Cloud release tarball.

  $ mkdir twistlock
  $ tar xvzf prisma_cloud_compute_edition_<VERSION>.tar.gz -C twistlock/
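The unpacked directory holds the release artifacts, including the twistlock.cfg file that you will copy to the infrastructure node later in this guide. A quick listing confirms the extraction succeeded:

  $ ls twistlock/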
- Log into the AWS Management Console.
- Go to Services > Containers > Elastic Container Service.
- Click Create Cluster.
- Select Networking only, then click Next Step.
- Enter a cluster name, such as pc-ecs-cluster.
- Click Create.
- Go to Services > Compute > EC2.
- In the left menu, click NETWORK & SECURITY > Security Groups.
- Click Create Security Group.
- In Security group name, enter a name, such as pc-security-group.
- In Description, enter Prisma Cloud ports.
- In VPC, select your default VPC.
- Under the Inbound rules section, click Add Rule.
- Under Type, select Custom TCP.
- Under Port Range, enter 8083-8084.
- Under Source, select Anywhere.
- Click Add Rule.
- Under Type, select Custom TCP.
- Under Port Range, enter 2049.
- Under Source, select Anywhere.
- Click Add Rule.
- Under Type, select SSH.
- Under Source, select Anywhere.
- Click Create security group.
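If you prefer the AWS CLI, a minimal sketch that creates the same security group and rules follows. It assumes the AWS CLI is configured and that <DEFAULT_VPC_ID> is replaced with your VPC ID:

  $ SG_ID=$(aws ec2 create-security-group \
      --group-name pc-security-group \
      --description "Prisma Cloud ports" \
      --vpc-id <DEFAULT_VPC_ID> \
      --query GroupId --output text)
  $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 8083-8084 --cidr 0.0.0.0/0
  $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 2049 --cidr 0.0.0.0/0
  $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0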
- Performance mode: General purpose.
- Throughput mode: Provisioned. Provision 0.1 MiB/s per deployed Defender. For example, if you plan to deploy 10 Defenders, provision 1 MiB/s of throughput.
- Log into the AWS Management Console.
- Go to Services > Storage > EFS.
- Click Create File System.
- Select a VPC, select the pc-security-group for each mount target, then click Next Step.
- Enter a value for Name, such as pc-efs-console.
- Set the throughput mode to Provisioned, and set the throughput to 0.1 MiB/s per Defender that will be deployed.
- Click Next Step.
- For Configure client access, keep the default settings and click Next Step.
- Review your settings and select Create File System.
- Click the Amazon EC2 mount instructions (from local VPC) link, copy the mount command (Using the NFS client), and set it aside as the Console mount command. You will use this mount command in the launch configuration for Console.
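The copied mount command typically looks similar to the following; it is shown here only so you can recognize it, and the file system ID and region are placeholders. When you use it in the launch configuration, the mount target must be /twistlock_console:

  $ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
      fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /twistlock_console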
- Uses an instance type of t2.large or higher. For more information about Console’s minimum requirements, see System requirements.
- Runs the Amazon ECS-Optimized Amazon Linux 2 AMI.
- Uses the ecsInstanceRole IAM role.
- Runs a user data script that joins the pc-ecs-cluster and defines a custom attribute named purpose with a value of infra. Console tasks will be placed on this instance.
- Go to Services > Compute > EC2.
- In the left menu, click AUTO SCALING > Launch Configurations.
- Click Create launch configuration.
- Choose an AMI.
- Click AWS Marketplace.
- In the search box, enter Amazon ECS-Optimized Amazon Linux 2 AMI.
- Click Select for Amazon ECS-Optimized Amazon Linux 2 AMI.
- Choose an instance type.
- Select t2.large.
- Click Next: Configure details.
- Configure details.
- In Name, enter a name for your launch configuration, such as pc-infra-node.
- In IAM role, select ecsInstanceRole. If this role doesn’t exist, see Amazon ECS Container Instance IAM Role.
- Select Enable CloudWatch detailed monitoring.
- Expand Advanced Details.
- In User Data, enter the following text to join the cluster, install the NFS utilities, and mount the EFS file system:

  #!/bin/bash
  cat <<'EOF' >> /etc/ecs/ecs.config
  ECS_CLUSTER=pc-ecs-cluster
  ECS_INSTANCE_ATTRIBUTES={"purpose": "infra"}
  EOF
  yum install -y nfs-utils
  mkdir /twistlock_console
  <CONSOLE_MOUNT_COMMAND> /twistlock_console
  mkdir -p /twistlock_console/var/lib/twistlock
  mkdir -p /twistlock_console/var/lib/twistlock-backup
  mkdir -p /twistlock_console/var/lib/twistlock-config

  pc-ecs-cluster must match your cluster name. If you’ve named your cluster something other than pc-ecs-cluster, modify your User Data script accordingly.

  <CONSOLE_MOUNT_COMMAND> is the Console mount command you copied from the AWS Management Console after creating the Console EFS file system. The mount target must be /twistlock_console, not the EFS mount target provided in the sample command.

- (Optional) Under IP Address Type, select Assign a public IP address to every instance. With this option, you can easily SSH to this instance to troubleshoot issues.
- Click Next: Add Storage.
- Add Storage.
- Accept the defaults, and click Next: Configure Security Group.
- Configure security group.
- Under Assign a security group, choose Select an existing security group.
- Select pc-security-group.
- Click Review.
- Review.
- Review the configuration and select Create launch configuration.
- Select an existing key pair, or create a new key pair so that you can access your instance.
- Click Create launch configuration.
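If you would rather script this step, a minimal AWS CLI sketch follows. It assumes the user data above has been saved to a local file named infra-user-data.sh, and that <AMI_ID>, <SECURITY_GROUP_ID>, and <KEY_PAIR_NAME> are replaced with your values:

  $ aws autoscaling create-launch-configuration \
      --launch-configuration-name pc-infra-node \
      --image-id <AMI_ID> \
      --instance-type t2.large \
      --iam-instance-profile ecsInstanceRole \
      --security-groups <SECURITY_GROUP_ID> \
      --key-name <KEY_PAIR_NAME> \
      --user-data file://infra-user-data.sh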
- Go to Services > Compute > EC2.
- In the left menu, click AUTO SCALING > Auto Scaling Groups.
- Click Create Auto Scaling group.
- Select Launch Configuration.
- Select pc-infra-node.
- Click Next Step.
- Configure Auto Scaling group details.
- In Group Name, enter pc-infra-autoscaling.
- Set Group size to the desired value (typically, this is a value greater than 1).
- Under Network, select your default VPC.
- Under Subnet, select a public subnet, such as 172.31.0.0/20.
- Click Next: Configure scaling policies.
- Configure scaling policies.
- Select Keep this group at its initial size.
- Click Next: Configure Notifications.
- Configure Notifications.
- Click Next: Configure Tags.
- Configure Tags.
- Under Key, enter Name.
- Under Value, enter pc-infra-node.
- Click Review.
- Review the configuration and click Create Auto Scaling Group.

After the auto scaling group spins up (it will take some time), validate that your cluster has one container instance, where a container instance is the ECS vernacular for an EC2 instance that has joined the cluster and is ready to accept container workloads:

- Go to Services > Containers > Elastic Container Service. The count for Container instances should be 1.
- Click on the cluster, then click on the ECS Instances tab. In the status table, there should be a single entry. Click on the link under the EC2 Instance column. In the details page for the EC2 instance, record the Public DNS.
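Two optional checks can confirm the user data script did its job, assuming the AWS CLI is configured and you enabled SSH access:

  # From your workstation: confirm the instance registered with the cluster.
  $ aws ecs list-container-instances --cluster pc-ecs-cluster

  # From the infrastructure node, over SSH: confirm the cluster configuration and the EFS mount.
  $ cat /etc/ecs/ecs.config
  $ df -h /twistlock_console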
- Upload twistlock.cfg to the infrastructure node.
- Go to the directory where you unpacked the Prisma Cloud release tarball.
- Copy twistlock.cfg to the infrastructure node.

  $ scp -i <PATH-TO-KEY-FILE> twistlock.cfg ec2-user@<ECS_INFRA_NODE_DNS_NAME>:~

- SSH to the infrastructure node.

  $ ssh -i <PATH-TO-KEY-FILE> ec2-user@<ECS_INFRA_NODE_DNS_NAME>

- Copy the twistlock.cfg file into place.

  $ sudo cp twistlock.cfg /twistlock_console/var/lib/twistlock-config
Create a Prisma Cloud Console task definition
Prisma Cloud provides a task definition template for Console. Download the template, then update the variables specific to your environment. Finally, load the task definition in ECS.
Prerequisites:
- The task definition provisions sufficient resources for Console to operate. Our template specifies reasonable defaults. For more information, see System requirements.
- Download the Prisma Cloud Console task definition, and open it for editing.
- Update the value for image to point to Prisma Cloud’s cloud registry. Replace the following placeholder strings with the appropriate values:
- <ACCESS-TOKEN> — Your Prisma Cloud access token. All characters must be lowercase. To convert your access token to lowercase, run:

  $ echo <ACCESS-TOKEN> | tr '[:upper:]' '[:lower:]'

- <VERSION> — Version of the Console image to use. For example, for version 20.04.177, specify 20_04_177. The image will look similar to console:console_20_04_177.
- Update <CONSOLE-DNS> to the Load Balancer’s DNS name.
- Go to Services > Containers > Elastic Container Service.
- In the left menu, click Task Definitions.
- Click Create new Task Definition.
- In Step 1: Select launch type compatibility, select EC2, then click Next step.
- In Step 2: Configure task and container definitions, scroll to the bottom of the page and click Configure via JSON.
- Delete the contents of the window, and replace it with the Prisma Cloud Console task definition.
- Click Save.
- (Optional) Change the task definition name before creating. The JSON defaults the name to pc-console.
- Click Create.
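If you prefer to edit the template from the command line, a sketch like the following can apply the substitutions described above. It assumes the downloaded template contains the literal placeholder strings <ACCESS-TOKEN>, <VERSION>, and <CONSOLE-DNS>, and that you saved it as pc-console.json; both are assumptions, not part of the official procedure:

  # Paste your access token in place of <ACCESS-TOKEN>; tr forces it to lowercase.
  $ TOKEN=$(echo '<ACCESS-TOKEN>' | tr '[:upper:]' '[:lower:]')
  $ sed -i \
      -e "s|<ACCESS-TOKEN>|$TOKEN|g" \
      -e "s|<VERSION>|20_04_177|g" \
      -e "s|<CONSOLE-DNS>|<LOAD_BALANCER_DNS>|g" \
      pc-console.json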
Launch the Prisma Cloud Console service
Create the Console service using the previously defined task definition. A single instance of Console will run on the infrastructure node.
- Go to Services > Containers > Elastic Container Service.
- In the left menu, click Clusters.
- Click on your cluster.
- In the Services tab, click Create.
- In Step 1: Configure service:
- For Launch type, select EC2.
- For Task Definition, select pc-console.
- In Service Name, enter pc-console.
- In Number of tasks, enter 1.
- Click Next Step.
- In Step 2: Configure network:
- For Load Balancer type, select Classic Load Balancer.
- For Service IAM role, leave the default ecsServiceRole.
- For Load Balancer Name, select the previously created load balancer.
- Unselect Enable Service discovery integration.
- Click Next Step.
- In Step 3: Set Auto Scaling, accept the defaults, and click Next.
- In Step 4: Review, click Create Service.
- Wait for the service launch to complete, then click View Service.
- Wait for the service’s Last status to change to running (this can take a few minutes), then continue to Configure Prisma Cloud Console below.
Configure Prisma Cloud Console
Navigate to Console’s web interface, create your first admin account, then enter your license.
- Start a browser, then navigate to https://<Load Balancer DNS Name>:8083.
- At the login page, create your first admin account. Enter a username and password.
- Enter your license key, then click Register.
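If the login page does not load, an optional reachability check from your workstation can confirm the load balancer is forwarding port 8083 to Console; the -k flag skips verification of Console’s self-signed certificate:

  $ curl -k -s -o /dev/null -w "%{http_code}\n" https://<Load Balancer DNS Name>:8083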
- Runs the Amazon ECS-Optimized Amazon Linux 2 AMI.
- Uses the ecsInstanceRole IAM role.
- Runs a user data script that joins the pc-ecs-cluster and runs the commands required to install Defender.
- Go to Services > Compute > EC2.
- In the left menu, click AUTO SCALING > Launch Configurations.
- Click Create Launch Configuration.
- Choose an AMI:
- Click AWS Marketplace.
- In the search box, enter Amazon ECS-Optimized Amazon Linux 2 AMI.
- Click Select for Amazon ECS-Optimized Amazon Linux 2 AMI.
- Choose an instance type.
- Select t2.medium.
- Click Next: Configure details.
- Configure details.
- In Name, enter a name for your launch configuration, such as pc-worker-node.
- In IAM role, select ecsInstanceRole.
- Select Enable CloudWatch detailed monitoring.
- Expand Advanced Details.
- In User Data, enter the following text:

  #!/bin/bash
  echo ECS_CLUSTER=pc-ecs-cluster >> /etc/ecs/ecs.config

  Where:
- ECS_CLUSTER must match your cluster name. If you’ve named your cluster something other than pc-ecs-cluster, then modify your User Data script accordingly.
- (Optional) Under IP Address Type, select Assign a public IP address to every instance. With this option, you can easily SSH to any worker node instance to troubleshoot issues.
- Click Next: Add Storage.
- Add Storage.
- Accept the defaults, and click Next: Configure Security Group.
- Configure security group.
- Under Assign a security group, choose Select an existing security group.
- Select pc-security-group.
- Click Review.
- Review.
- Review the configuration and select Create launch configuration.
- Select an existing key pair, or create a new key pair so that you can access your instance.
- Go to Services > Compute > EC2.
- In the left menu, click AUTO SCALING > Auto Scaling Groups.
- Click Create Auto Scaling group:
- Select Launch Configuration.
- Select pc-worker-node.
- Click Next Step.
- Configure Auto Scaling group details:
- In Group Name, enter pc-worker-autoscaling.
- Set Group size to 2.
- Under Network, select your default VPC.
- Under Subnet, select a public subnet, such as 172.31.0.0/20.
- Click Next: Configure scaling policies.
- Configure scaling policies.
- Select Keep this group at its initial size.
- Click Next: Configure Notifications.
- Configure Notifications.
- Click Next: Configure Tags.
- Configure Tags.
- Under Key, enter Name.
- Under Value, enter pc-worker-node.
- Click Review.
- Review the configuration and click Create Auto Scaling Group.
- After the auto scaling group spins up (it will take some time), validate that your cluster has three container instances.
- Go to Services > Containers > Elastic Container Service.
- The count for Container instances in your cluster should now be a total of three.
- Retrieve the service parameter from the Prisma Cloud API.

  $ curl -k \
    -u "<username>:<password>" \
    -X GET https://<load_balancer_dns>:8083/api/v1/certs/service-parameter \
    -o service-parameter

- Ensure the jq package is installed.
- Retrieve and retain the installBundle from the Prisma Cloud API:

  $ curl -k -s \
    -u "<username>:<password>" \
    -X GET "https://<load_balancer_dns>:8083/api/v1/defenders/install-bundle?consoleaddr=<load_balancer_dns>&defenderType=appEmbedded" \
    | jq -r '.installBundle' > install-bundle
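Before moving on, a quick sanity check confirms both API calls returned data; each file should be non-empty, since you will paste its contents into the Defender task definition:

  $ wc -c service-parameter install-bundle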
- Download the Prisma Cloud Defender task definition, and open it for editing.
- Apply the following changes to the task definition:
- Modify the WS_ADDRESS parameter to the DNS of the Console.
- <CONSOLE-DNS> — The DNS name for the load balancer you created.
- <PORT> — The port on the load balancer DNS that Defender connects to. The default port is 8084.
- <INSTALL-BUNDLE> — Output from the installBundle endpoint.
- <SERVICE-PARAMETER> — Output from the service-parameter endpoint.
- Update the value for image to point to Prisma Cloud’s public registry by replacing the following placeholder strings with the appropriate values:
- <ACCESS-TOKEN> — Your Prisma Cloud access token. This is located in your Console under Manage > System > Intelligence. All characters must be lowercase. To convert your access token to lowercase, run:

  $ echo <ACCESS-TOKEN> | tr '[:upper:]' '[:lower:]'

- <VERSION> — Version of the Defender image to use. For example, for version 20.04.177, specify 20_04_177. The image will look similar to defender:defender_20_04_177.
- Go to Services > Containers > Elastic Container Service.
- In the left menu, click Task Definitions.
- Click Create new Task Definition.
- In Step 1: Select launch type compatibility, select EC2, then click Next step.
- In Step 2: Configure task and container definitions, scroll to the bottom of the page and click Configure via JSON.
- Delete the contents of the window, and replace it with the Prisma Cloud Defender task definition.
- Click Save.
- (Optional) Change the task definition name before creating. The JSON defaults the name to pc-defender.
- Click Create.
Launch the Prisma Cloud Defender service
Create the Defender service using the previously defined task definition. Using Daemon scheduling, one Defender will run per node in your cluster.
- Go to Services > Containers > Elastic Container Service.
- In the left menu, click Clusters.
- Click on your cluster.
- In the Services tab, click Create.
Create a cluster
Create an empty cluster named pc-ecs-cluster.
Later, you will create launch configurations and auto-scaling groups to start EC2 instances in the cluster.
Create a security group
Create a new security group named pc-security-group that opens ports 8083 and 8084.
In order for Prisma Cloud to operate properly, these ports must be open.
This security group will be associated with the EC2 instances started in your cluster.
Console’s web interface and API are served on port 8083.
Defender and Console communicate over a secure web socket on port 8084.
An inbound connection to port 2049 is required to set up NFS.
Open port 22 so that you can SSH to any machine in the cluster.
Additional hardening can be applied to these rules as desired. For example, limit access to port 22 to only the source IPs from which you will connect to your instances over SSH.
Create an EFS file system for Console
Create the Console EFS file system, then capture the mount command that will be used to mount the file system on every infrastructure node.
Prerequisites:
Prisma Cloud Console depends on an EFS file system with the following performance characteristics:
The EFS file system and ECS cluster must be in the same VPC and security group.
Set up a classic load balancer
Set up an AWS Classic Load Balancer, and capture the Load Balancer DNS name.
You’ll create two load balancer listeners.
One is used for Console’s UI and API, which are served on port 8083.
Another is used for the websocket connection between Defender and Console, which is established on port 8084.
For detailed instructions on how to create a load balancer for Console, see Configure an AWS Load Balancer for ECS.
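If you prefer to create the load balancer from the command line, a minimal sketch follows. The name pc-ecs-lb is an assumption; replace the subnet and security group IDs with your own. Both listeners use TCP so that the websocket traffic on port 8084 passes through unmodified:

  $ aws elb create-load-balancer \
      --load-balancer-name pc-ecs-lb \
      --listeners "Protocol=TCP,LoadBalancerPort=8083,InstanceProtocol=TCP,InstancePort=8083" \
                  "Protocol=TCP,LoadBalancerPort=8084,InstanceProtocol=TCP,InstancePort=8084" \
      --subnets <SUBNET_ID> \
      --security-groups <SECURITY_GROUP_ID>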
Deploy Console
Launch an infrastructure node that runs in the cluster, then start Prisma Cloud Console as a service on that node.
Create a launch configuration for the infrastructure node
Launch configurations are templates that are used by an auto-scaling group to start EC2 instances in your cluster.
Create a launch configuration named pc-infra-node that:
Create an auto scaling group for the infrastructure node
Launch a single instance of the infrastructure node into your cluster.
Copy the Prisma Cloud config file into place
The Prisma Cloud API serves the version of the configuration file used to instantiate Console.
Use scp to copy twistlock.cfg from the Prisma Cloud release tarball to /twistlock_console/var/lib/twistlock-config on the infrastructure node.
Deploy Defender
You are now ready to deploy your worker nodes.
You will create worker nodes that run in the cluster and an ECS task definition for the Prisma Cloud Defender, then create a service of type Daemon to ensure that Defender is deployed across your ECS cluster.
Create a launch configuration for worker nodes
Create a launch configuration named pc-worker-node that:
Create an auto scaling group for the worker nodes
Launch two worker nodes into your cluster.
Generate install bundle for Defender
Generate the install bundle that will be used in Defender’s task definition.
Create a Prisma Cloud Defender task definition
Prisma Cloud provides a task definition template for Defender.
Download the template, then update the variables specific to your environment.
Finally, load the task definition in ECS.
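As an alternative to pasting the JSON into the ECS console, the edited template can be registered directly with the AWS CLI; the file name pc-defender.json is an assumption for where you saved the Defender task definition:

  $ aws ecs register-task-definition --cli-input-json file://pc-defender.json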