Amazon ECS

This quickstart guide shows you how to deploy Prisma Cloud on a simple cluster that has a single infrastructure node and two worker nodes. Console runs on the infrastructure node, and an instance of Defender runs on each of the worker nodes.
Console is the Prisma Cloud management interface, and it runs as a service. The parameters of the service are described in a task definition, and the task definition is written in JSON format.
Defender protects your containerized environment according to the policies you set in Console. To automatically deploy an instance of Defender on each worker node in your cluster, you will create an ECS service that uses the Daemon scheduling strategy. A user data script in the worker node launch configuration runs custom configuration commands when a new instance is started; you will use it to join each worker node to the cluster and mount the certificates Defender needs to securely connect to Console.
This guide assumes you know very little about AWS ECS. As such, it is extremely prescriptive. If you are already familiar with AWS ECS and do not need assistance navigating the interface, simply read the section synopsis, which summarizes all key configurations.
The installation described in this article is meant to be "production grade" in that data is persisted across restarts of the nodes. If an infrastructure node were to go down, ECS should be able to reschedule the Console service on any healthy node, and Console should still have access to its state. To enable this capability, you must attach storage that is accessible from each of your infrastructure nodes, and Amazon Elastic File System (EFS) is an excellent choice.
When you have more than one infrastructure node, ECS can run Console on any one of them. Defenders need a reliable way to connect to Console, no matter where it runs. A load balancer automatically directs traffic to the node where Console runs, and offers a stable interface that Defenders can use to connect to Console and that operators can use to access its web interface.
We assume you are deploying Prisma Cloud to the default VPC. If you are not using the default VPC, adjust your settings accordingly.

Key identifiers

A number of AWS resource identifiers are used throughout the install procedure. The important ones are highlighted here.
Cluster name
:
tw-ecs-cluster
Security group
:
tw-security-group
Infrastructure node’s IP address
:
<ECS_INFRA_NODE>
(Retrieve this value from the AWS Management Console after the infrastructure EC2 instance starts.)
Console task definition
:
tw-console
(This value is specified in the task definition JSON.)

Download the Prisma Cloud software

The Prisma Cloud release tarball contains all the release artifacts.
  1. Go to the Releases page, and copy the link to the current recommended release.
  2. Retrieve the release tarball.
    $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
  3. Unpack the Prisma Cloud release tarball.
    $ mkdir twistlock
    $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/

Create a cluster

Create an empty cluster named
tw-ecs-cluster
. Later, you will create launch configurations and auto-scaling groups to start EC2 instances in the cluster.
  1. Log into the AWS Management Console.
  2. Go to
    Services > Compute > Elastic Container Service
    .
  3. Click
    Create Cluster
    .
  4. Select
    EC2 Linux + Networking
    , then click
    Next Step
    .
  5. Enter a cluster name, such as
    tw-ecs-cluster
    .
  6. Select
    Create an empty cluster
    .
  7. Click
    Create
    .
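If you prefer the AWS CLI, the same empty cluster can be created with a single command (a minimal sketch, assuming the CLI is configured with credentials for the target account and region):
  $ aws ecs create-cluster --cluster-name tw-ecs-cluster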

Create a security group

Create a new security group named
tw-security-group
that opens ports 8083 and 8084. These ports must be open for Prisma Cloud to operate properly. This security group will be associated with the EC2 instances started in your cluster.
Console’s web interface and API are served on port 8083. Defender and Console communicate over a secure web socket on port 8084.
Inbound connections to port 2049 are required to set up the NFS mounts used by EFS.
For debugging purposes, also open port 22 so that you can SSH to any machine in the cluster.
  1. Go to
    Services > Compute > EC2
    .
  2. In the left menu, click
    NETWORK & SECURITY > Security Groups
    .
  3. Click
    Create Security Group
    .
  4. In
    Security group name
    , enter a name, such as
    tw-security-group
    .
  5. In
    Description
    , enter
    Prisma Cloud ports
    .
  6. In
    VPC
    , select your default VPC.
  7. Under the
    Inbound
    tab, click
    Add Rule
    .
    1. Under
      Type
      , select
      Custom TCP
      .
    2. Under
      Port Range
      , enter
      8083-8084
      .
    3. Under
      Source
      , select
      Anywhere
      .
  8. Click
    Add Rule
    .
    1. Under
      Type
      , select
      Custom TCP
      .
    2. Under
      Port Range
      , enter
      2049
      .
    3. Under
      Source
      , select
      Anywhere
      .
  9. Click
    Add Rule
    .
    1. Under
      Type
      , select
      SSH
      .
    2. Under
      Source
      , select
      Anywhere
      .
  10. Click
    Create
    .
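For reference, the equivalent AWS CLI calls look roughly like the following. <VPC-ID> is a placeholder for your default VPC ID, and <SECURITY-GROUP-ID> is the group ID returned by the first command:
  $ aws ec2 create-security-group --group-name tw-security-group --description "Prisma Cloud ports" --vpc-id <VPC-ID>
  $ aws ec2 authorize-security-group-ingress --group-id <SECURITY-GROUP-ID> --protocol tcp --port 8083-8084 --cidr 0.0.0.0/0
  $ aws ec2 authorize-security-group-ingress --group-id <SECURITY-GROUP-ID> --protocol tcp --port 2049 --cidr 0.0.0.0/0
  $ aws ec2 authorize-security-group-ingress --group-id <SECURITY-GROUP-ID> --protocol tcp --port 22 --cidr 0.0.0.0/0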

Create an EFS file system for Console

Create the Console EFS file system, set up the directory hierarchy expected by Console, then capture the mount command that will be used to mount the file system on every infrastructure node.
Prerequisites:
Prisma Cloud Console depends on an EFS file system with the following performance characteristics:
  • Performance mode:
    General purpose.
  • Throughput mode:
    Provisioned. Provision 0.1 MiB/s per deployed Defender. For example, if you plan to deploy 10 Defenders, provision 1 MiB/s of throughput.
The EFS file system and ECS cluster must be in the same VPC and security group.
  1. Log into the AWS Management Console.
  2. Go to
    Services > Storage > EFS
    .
  3. Click
    Create File System
    .
  4. Select a VPC, select the
    tw-security-group
    for each mount target, then click
    Next
    .
  5. Enter a value for Name, such as
    tw-nlb-console
    , then click
    Next
    .
  6. Review your settings and create the file system.
  7. Click on the
    Amazon EC2 mount instructions (from local VPC)
    link, copy the mount command (Using the NFS client), and set it aside as the Console mount command.
    You will use this mount command to configure your launch configuration for the Console.
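The copied command typically has the following shape. <FILE-SYSTEM-ID> and <REGION> are placeholders, and the mount target has already been changed from the sample efs directory to /twistlock_console, as required by the launch configuration later in this guide:
  $ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <FILE-SYSTEM-ID>.efs.<REGION>.amazonaws.com:/ /twistlock_console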

Create EFS file system for Defender

Create the Defender EFS file system, copy the
service-parameter
and
certificates
to the file system, then capture the mount command that will be used to mount the file system on every worker node.
The EFS file system and ECS cluster must be in the same VPC and security group.
  1. Log into the AWS Management Console.
  2. Go to
    Services > Storage > EFS
    .
  3. Click
    Create File System
    .
  4. Select a VPC, select the
    tw-security-group
    for each mount target, then click
    Next
    .
  5. Enter a value for Name, such as
    tw-nlb-defender
    , then click
    Next
    .
  6. Review your settings and create the file system.
  7. Click on the
    Amazon EC2 mount instructions (from local VPC)
    link, copy the mount command (Using the NFS client), and set it aside as the Defender mount command.
    You will use this mount command to configure your launch configuration for the Defenders.

Deploy Console

Launch an infrastructure node that runs in the cluster, then start Prisma Cloud Console as a service on that node.

Create a launch configuration for the infrastructure node

Launch configurations are templates that are used by an auto-scaling group to start EC2 instances in your cluster.
Create a launch configuration named
tw-infra-node
that:
  • Uses an instance type of t2.large or larger. For more information about Console’s minimum requirements, see System requirements.
  • Runs Amazon ECS-Optimized Amazon Linux AMI.
  • Uses the ecsInstanceRole IAM role.
  • Runs a user data script that joins the
    tw-ecs-cluster
    and defines a custom attribute named
    purpose
    with a value of
    infra
    . Console will be pinned to this instance.
  1. Go to
    Services > Compute > EC2
    .
  2. In the left menu, click
    AUTO SCALING > Launch Configurations
    .
  3. Click
    Create launch configuration
    .
  4. Choose an AMI.
    1. Click
      AWS Marketplace
      .
    2. In the search box, enter
      ecs
      .
    3. Click
      Select
      for
      Amazon ECS-Optimized Amazon Linux AMI
      .
  5. Choose an instance type.
    1. Select
      t2.large
      .
    2. Click
      Next: Configure details
      .
  6. Configure details.
    1. In
      Name
      , enter a name for your launch configuration, such as
      tw-infra-node
      .
    2. In
      IAM
      role, select
      ecsInstanceRole
      .
      If this role doesn’t exist, see Amazon ECS Container Instance IAM Role.
    3. Select
      Monitoring
      .
    4. Expand
      Advanced Details
      .
    5. In
      User Data
      , enter the following text in order to install the NFS utilities and mount the EFS file system:
      #!/bin/bash
      cat <<'EOF' >> /etc/ecs/ecs.config
      ECS_CLUSTER=tw-ecs-cluster
      ECS_INSTANCE_ATTRIBUTES={"purpose": "infra"}
      EOF
      yum install -y nfs-utils
      mkdir /twistlock_console
      <CONSOLE_MOUNT_COMMAND> /twistlock_console
      ECS_CLUSTER
      must match your cluster name. If you’ve named your cluster something other than
      tw-ecs-cluster
      , then modify your User Data script accordingly.
      <CONSOLE_MOUNT_COMMAND>
      is the Console mount command you copied from the AWS Management Console after creating your EFS file system. The mount target must be
      /twistlock_console
      , not the
      efs
      mount target provided in the sample command.
    6. (Optional) Under
      IP Address Type
      , select
      Assign a public IP address to every instance
      .
      With this option, you can easily SSH to this instance to troubleshoot issues.
    7. Click
      Next: Add Storage
      .
  7. Add Storage.
    1. Accept the defaults, and click
      Next: Configure Security Group
      .
  8. Configure security group.
    1. Under
      Assign a security group
      , choose
      Select an existing security group
      .
    2. Select
      tw-security-group
      .
    3. Click
      Review
      .
  9. Review.
    1. Click
      Create launch configuration
      .
  10. Select an existing key pair, or create a new key pair so that you can access your instance.
  11. Click
    Create launch configuration
    .

Create an auto scaling group for the infrastructure node

Launch a single instance of the infrastructure node into your cluster.
  1. Go to
    Services > Compute > EC2
    .
  2. In the left menu, click
    AUTO SCALING > Auto Scaling Groups
    .
  3. Click
    Create Auto Scaling group
    .
  4. Select
    tw-infra-node
    .
  5. Click
    Next Step
    .
  6. Configure Auto Scaling group details.
    1. In
      Group Name
      , enter tw-infra-autoscaling.
    2. Set
      Group size
      to
      1
      (this guide runs a single infrastructure node).
    3. Under
      Network
      , select your default VPC.
    4. Under
      Subnet
      , select a public subnet, such as 172.31.0.0/20.
    5. Click
      Next: Configure scaling policies
      .
  7. Configure scaling policies.
    1. Select
      Keep this group at its initial size
      .
    2. Click
      Next: Configure Notifications
      .
  8. Configure Notifications.
    1. Click
      Next: Configure Tags
      .
  9. Configure Tags.
    1. Under
      Key
      , enter
      Name
      .
    2. Under
      Value
      , enter
      tw-infra-node
      .
    3. Click
      Review
      .
  10. Click
    Create Auto Scaling Group
    .
    After the auto scaling group spins up (it will take some time), validate that your cluster has one container instance, where a container instance is the ECS vernacular for an EC2 instance that has joined the cluster and is ready to accept container workloads.
    Go to
    Services > Compute > Elastic Container Service
    . The count for
    Container instances
    should be 1.
    Click on the cluster, then click on the
    ECS Instances
    tab. In the status table, there should be a single entry. Click on the link under the
    EC2 Instance
    column. In the details page for the EC2 instance, copy the
    IPv4 Public IP
    , and set it aside. You will use it to create a launch configuration for your worker nodes.
    • To initialize the file structure in the EFS mount:
      SSH to the infrastructure node:
      $ ssh -i <PATH-TO-KEY-FILE> ec2-user@<ECS_INFRA_NODE>
      Set up the following directory structure:
      $ sudo mkdir -p /twistlock_console/var/lib/twistlock
      $ sudo mkdir -p /twistlock_console/var/lib/twistlock-backup
      $ sudo mkdir -p /twistlock_console/var/lib/twistlock-config
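      Before moving on, you can optionally confirm that the EFS file system is mounted and the directories exist. df should list the file system mounted at /twistlock_console, and ls should show the three directories you just created:
      $ df -h -t nfs4
      $ ls /twistlock_console/var/lib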

Copy the Prisma Cloud config file into place

The Prisma Cloud API serves the version of the configuration file used to instantiate Console. Use scp to copy
twistlock.cfg
from the Prisma Cloud release tarball to
/twistlock_console/var/lib/twistlock-config
on the infrastructure node.
  1. Upload
    twistlock.cfg
    to the infrastructure node.
    1. Go to the directory where you unpacked the Prisma Cloud release tarball.
    2. Copy
      twistlock.cfg
      to the infrastructure node.
      $ scp -i <PATH-TO-KEY-FILE> twistlock.cfg ec2-user@<ECS_INFRA_NODE>:~
  2. SSH to the infrastructure node.
    $ ssh -i <PATH-TO-KEY-FILE> ec2-user@<ECS_INFRA_NODE>
  3. Copy the
    twistlock.cfg
    file into place.
    $ sudo cp twistlock.cfg /twistlock_console/var/lib/twistlock-config

Create a Prisma Cloud Console task definition

Prisma Cloud provides a task definition template for Console. Download the template, then update the variables specific to your environment. Finally, load the task definition in ECS.
Prerequisites:
  • The task definition provisions sufficient resources for Console to operate. Our template specifies reasonable defaults. For more information, see System requirements.
  1. Download the Prisma Cloud Console task definition, and open it for editing.
  2. Update the value for
    image
    to point to Prisma Cloud’s cloud registry.
    Replace the following placeholder strings with the appropriate values:
    • <ACCESS-TOKEN>
       — Your Prisma Cloud access token. All characters must be lowercase. To convert your access token to lowercase, run:
      $ echo <ACCESS-TOKEN> | tr '[:upper:]' '[:lower:]'
    • <VERSION>
       — Version of the Console image to retrieve and install. For example,
      18_11_128
      .
  3. Update the value for
    CONSOLE_SAN
    to the DNS name and/or IP address for your infra node.
  4. Go to
    Services > Compute > Elastic Container Service
    .
  5. In the left menu, click
    Task Definitions
    .
  6. Click
    Create new Task Definition
    .
  7. In
    Step 1: Select launch type compatibility
    , select
    EC2
    , then click
    Next step
    .
  8. In
    Step 2: Configure task and container definitions
    , scroll to the bottom of the page and click
    Configure via JSON
    .
  9. Delete the contents of the window, and replace it with the Prisma Cloud Console task definition.
  10. Click
    Save
    .
    1. (Optional) Change the task definition name before creating. The JSON will default the name to
      tw-console
      .
  11. Click
    Create
    .
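Alternatively, if you saved the edited template to a local file (for example, a hypothetical tw-console.json), you can register it from the AWS CLI instead of pasting JSON into the console:
  $ aws ecs register-task-definition --cli-input-json file://tw-console.json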

Launch the Prisma Cloud Console service

Create the Console service using the previously defined task definition. A single instance of Console will run on the infrastructure node.
  1. Go to
    Services > Compute > Elastic Container Service
    .
  2. In the left menu, click
    Clusters
    .
  3. Click on your cluster.
  4. In the
    Services
    tab, click
    Create
    .
  5. In
    Step 1: Configure service
    :
    1. For
      Launch type
      , select
      EC2
      .
    2. For
      Task Definition
      , select
      tw-console
      .
    3. In
      Service Name
      , enter
      tw-console
      .
    4. In
      Number of tasks
      , enter
      1
      .
    5. Click
      Next Step
      .
  6. In
    Step 2: Configure network
    , accept the defaults, and click
    Next
    .
  7. In
    Step 3: Set Auto Scaling
    , accept the defaults, and click
    Next
    .
  8. In
    Step 4: Review
    , click
    Create Service
    .
  9. Click
    View Service
    .
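The service can also be created from the AWS CLI. A rough equivalent of the steps above:
  $ aws ecs create-service \
      --cluster tw-ecs-cluster \
      --service-name tw-console \
      --task-definition tw-console \
      --desired-count 1 \
      --launch-type EC2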

Configure Prisma Cloud Console

Navigate to Console’s web interface, create your first admin account, then enter your license.
  1. Start a browser, then navigate to https://<ECS_INFRA_NODE>:8083
  2. At the login page, create your first admin account. Enter a username and password.
  3. Enter your license key, then click
    Register
    .
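If the login page does not load, a quick check from your workstation tells you whether port 8083 is reachable. Any HTTP status code printed means Console is answering; a timeout usually points to a security group or networking issue:
  $ curl -sk -o /dev/null -w '%{http_code}\n' https://<ECS_INFRA_NODE>:8083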

Set up a load balancer and Target Group

After you deploy Console, set up AWS load balancing. You should use a
network
load balancer (NLB).
You’ll create two load balancer listeners. One is used for Console’s UI and API, which are served on port 8083. Another is used for the websocket connection between Defender and Console, which is established on port 8084.
For detailed instructions on how to create an NLB load balancer for Console, please refer to our Configure an AWS Network Load Balancer article.
Prerequisites:
  • Console is fully operational. You have created your first admin user, entered your license, and you can access Console’s web interface.
  1. Copy the DNS name for your load balancer and set it aside. You will need it to configure the launch configuration for your worker nodes.
  2. Add the DNS names for your load balancers to the Subject Alternative Name field in Console’s certificate.
    Log into Prisma Cloud Console, go to
    Manage > Defenders > Names
    , and add the DNS names for your load balancers to the
    Subject Alternative Name
    table.
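The referenced article has the full procedure. As a rough sketch, the CLI steps look like the following, with placeholder values and hypothetical resource names throughout; create a second target group and listener for port 8084 the same way, and register your infrastructure node as a target in both target groups:
  $ aws elbv2 create-load-balancer --name tw-console-nlb --type network --subnets <SUBNET-ID>
  $ aws elbv2 create-target-group --name tw-console-8083 --protocol TCP --port 8083 --vpc-id <VPC-ID> --target-type instance
  $ aws elbv2 register-targets --target-group-arn <TARGET-GROUP-ARN> --targets Id=<INSTANCE-ID>
  $ aws elbv2 create-listener --load-balancer-arn <NLB-ARN> --protocol TCP --port 8083 --default-actions Type=forward,TargetGroupArn=<TARGET-GROUP-ARN>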

Deploy Defender

After deploying Console, you are ready to deploy your worker nodes. You will create an ECS task definition for the Prisma Cloud Defender, then create a service of type Daemon so that a Defender runs on every node in your ECS cluster. Because Defenders connect to Console as soon as they start, Console must be fully operational before worker nodes are instantiated in the cluster.

Copy Defender’s certificates into place

Get the certificates Defender requires to securely connect to Console, and then copy them to the EFS partition that worker nodes will mount.
  1. Retrieve the service parameter from the Prisma Cloud API.
    $ curl -k -s \
      -u <USER> \
      -H 'Content-Type: application/json' \
      -X GET \
      https://<CONSOLE>:8083/api/v1/certs/service-parameter \
      -o service-parameter
    <CONSOLE> is the address the curl command uses to access Console.
  2. Retrieve the certificate bundle from the Prisma Cloud API, and save it to a file. It’s returned as a base64 string.
    Depending on the Console version, the API call differs.
    For Console version 19.11.480:
    $ curl -k -s \
      -u <USER> \
      -H 'Content-Type: application/json' \
      -X GET \
      'https://<CONSOLE>:8083/api/v1/defenders/install-bundle?consoleaddr=<CONSOLE_CONN>&defenderType=rasp' \
      | jq -r '.installBundle' > INSTALL_BUNDLE
    For Console versions greater than or equal to 19.11.506:
    $ curl -k -s \
      -u <USER> \
      -H 'Content-Type: application/json' \
      -X GET \
      'https://<CONSOLE>:8083/api/v1/defenders/install-bundle?consoleaddr=<CONSOLE_CONN>&defenderType=appEmbedded' \
      | jq -r '.installBundle' > INSTALL_BUNDLE
    <CONSOLE_CONN> is the address the Defenders use to connect to the Console. Use the address of your load balancer.
  3. Using the output from the previous command, decode the base64 string into three separate files:
    ca.pem
    ,
    client-cert.pem
    , and
    client-key.pem
    . The following command also replaces the
    \n
    escape sequences in the output with UNIX-style line endings.
    $ for file in "ca.pem" "client-cert.pem" "client-key.pem"; \
      do cat INSTALL_BUNDLE | base64 --decode \
      | jq --arg i "$file" -r '.secrets[$i]' \
      | awk '{gsub(/\\n/,"\n")}1' > $file; \
      done
  4. Copy the certs into place.
    1. Mount the Defender EFS file system temporarily on a system of your choosing. Use the mount command you saved when you created your EFS file system.
    2. Copy the following files to the EFS file system:
      • service-parameter
      • ca.pem
      • client-cert.pem
      • client-key.pem
    3. Set the ownership and permissions for each file.
      $ chown root:root ca.pem client-cert.pem client-key.pem service-parameter
      $ chmod 600 ca.pem client-cert.pem client-key.pem service-parameter
    4. Unmount the EFS file system.
      $ umount <filesystem>
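Putting step 4 together, a minimal sketch of the copy, assuming you temporarily mount the Defender file system at a hypothetical path /mnt/defender_efs and that <DEFENDER_MOUNT_COMMAND> is the mount command you saved earlier (with its mount target replaced, as in the launch configuration later in this guide):
  $ sudo mkdir -p /mnt/defender_efs
  $ <DEFENDER_MOUNT_COMMAND> /mnt/defender_efs
  $ sudo cp service-parameter ca.pem client-cert.pem client-key.pem /mnt/defender_efs/
  $ cd /mnt/defender_efs
  $ sudo chown root:root ca.pem client-cert.pem client-key.pem service-parameter
  $ sudo chmod 600 ca.pem client-cert.pem client-key.pem service-parameter
  $ cd / && sudo umount /mnt/defender_efs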

Create a launch configuration for worker nodes

Create a launch configuration named
tw-worker-node
that:
  • Runs the Amazon ECS-Optimized Amazon Linux AMI.
  • Uses the ecsInstanceRole IAM role.
  • Runs a user data script that joins the tw-ecs-cluster and runs the commands required to install Defender.
  1. Go to
    Services > Compute > EC2
    .
  2. In the left menu, click
    AUTO SCALING > Launch Configurations
    .
  3. Click
    Create launch configuration
    .
  4. Choose an AMI.
    1. Click
      AWS Marketplace
      .
    2. In the search box, enter
      ecs
      .
    3. Click
      Select
      for
      Amazon ECS-Optimized Amazon Linux AMI
      .
  5. Choose an instance type.
    1. Select
      t2.medium
      .
    2. Click
      Next: Configure details
      .
  6. Configure details.
    1. In
      Name
      , enter a name for your launch configuration, such as
      tw-worker-node
      .
    2. In
      IAM
      role, select
      ecsInstanceRole
      .
    3. Select
      Monitoring
      .
    4. Expand
      Advanced Details
      .
    5. In
      User Data
      , enter the following text:
      #!/bin/bash
      echo ECS_CLUSTER=tw-ecs-cluster >> /etc/ecs/ecs.config
      yum install -y nfs-utils
      mkdir /twistlock_certificates
      chown root:root /twistlock_certificates
      chmod 700 /twistlock_certificates
      <DEFENDER_MOUNT_COMMAND> /twistlock_certificates
      Where:
      • ECS_CLUSTER
        must match your cluster name. If you’ve named your cluster something other than
        tw-ecs-cluster
        , then modify your User Data script accordingly.
      • <DEFENDER_MOUNT_COMMAND>
        is the mount command you copied from the AWS Management Console after creating your Defender EFS file system. The mount target must be
        /twistlock_certificates
        , replacing the
        efs
        mount target provided in the sample mount command.
    6. (Optional) Under
      IP Address Type
      , select
      Assign a public IP address to every instance
      .
      With this option, you can easily SSH to any worker node instance and troubleshoot issues.
    7. Click
      Next: Add Storage
      .
  7. Add Storage.
    1. Accept the defaults, and click
      Next: Configure Security Group
      .
  8. Configure security group.
    1. Under
      Assign a security group
      , choose
      Select an existing security group
      .
    2. Select
      tw-security-group
      .
    3. Click
      Review
      .
  9. Review.
    1. Click
      Create launch configuration
      .

Create an auto scaling group for the worker nodes

Launch two worker nodes into your cluster.
  1. Go to
    Services > Compute > EC2
    .
  2. In the left menu, click
    AUTO SCALING > Auto Scaling Groups
    .
  3. Click
    Create Auto Scaling group
    .
  4. Select
    Create an Auto Scaling group from an existing launch configuration
    .
  5. Select
    tw-worker-node
    .
  6. Click
    Next Step
    .
  7. Configure Auto Scaling group details.
    1. In
      Group Name
      , enter tw-worker-autoscaling.
    2. Leave
      Group size
      set to
      2
      .
    3. Under
      Network
      , select your default VPC.
    4. Under
      Subnet
      , select a public subnet, such as 172.31.0.0/20.
    5. Click
      Next: Configure scaling policies
      .
  8. Configure scaling policies.
    1. Select
      Keep this group at its initial size
      .
    2. Click
      Next: Configure Notifications
      .
  9. Configure Notifications.
    1. Click
      Next: Configure Tags
      .
  10. Configure Tags.
    1. Under
      Key
      , enter
      Name
      .
    2. Under
      Value
      , enter
      tw-worker-node
      .
    3. Click
      Review
      .
  11. Click
    Create Auto Scaling Group
    .
    After the auto scaling group spins up (it will take some time), validate that your cluster has two more container instances. Go to
    Services > Compute > Elastic Container Service
    . The count for
    Container instances
    in your cluster should now be a total of three.
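    You can also check from the CLI; the returned list should now contain three container instance ARNs:
    $ aws ecs list-container-instances --cluster tw-ecs-cluster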

Create a Prisma Cloud Defender task definition

Prisma Cloud provides a task definition template for Defender. Download the template, then update the variables specific to your environment. Finally, load the task definition in ECS.
  1. Download the Prisma Cloud Defender task definition, and open it for editing.
  2. Update the value for
    image
    to point to Prisma Cloud’s cloud registry.
    Replace the following placeholder strings with the appropriate values:
    • <ACCESS-TOKEN>
       — Your Prisma Cloud access token. All characters must be lowercase. To convert your access token to lowercase, run:
      $ echo <ACCESS-TOKEN> | tr '[:upper:]' '[:lower:]'
    • <VERSION>
       — Version of the Defender image to retrieve and install. For example,
      19_03_321
      .
    • <NLB-8084>
       — The DNS name for the load balancer you created for port 8084.
  3. Go to
    Services > Compute > Elastic Container Service
    .
  4. In the left menu, click
    Task Definitions
    .
  5. Click
    Create new Task Definition
    .
  6. In
    Step 1: Select launch type compatibility
    , select
    EC2
    , then click
    Next step
    .
  7. In
    Step 2: Configure task and container definitions
    , scroll to the bottom of the page and click
    Configure via JSON
    .
  8. Delete the contents of the window, and replace it with the Prisma Cloud Defender task definition.
  9. Click
    Save
    .
  10. Click
    Create
    .

Launch the Prisma Cloud Defender service

Create the Defender service using the previously defined task definition. Using Daemon scheduling, one Defender will run per node in your cluster.
  1. Go to
    Services > Compute > Elastic Container Service
    .
  2. In the left menu, click
    Clusters
    .
  3. Click on your cluster.
  4. In the
    Services
    tab, click
    Create
    .
  5. In
    Step 1: Configure service
    :
    1. For
      Launch type
      , select
      EC2
      .
    2. For
      Task Definition
      , select
      twistlock_defender
      .
    3. In
      Service Name
      , enter
      twistlock_defender
      .
    4. In
      Service Type
      , select
      Daemon
      .
    5. Click
      Next Step
      .
  6. In
    Step 2: Configure network
    , accept the defaults, and click
    Next
    .
  7. In
    Step 3: Set Auto Scaling
    , accept the defaults, and click
    Next
    .
  8. In
    Step 4: Review
    , click
    Create Service
    .
  9. Click
    View Service
    .
  10. Verify that you have Defenders running on each node in your ECS cluster.
    Go to your Prisma Cloud Console and view the list of Defenders in
    Manage > Defenders > Manage
    .
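As with Console, the Defender service can also be created from the AWS CLI; note the Daemon scheduling strategy, which removes the need for a task count:
  $ aws ecs create-service \
      --cluster tw-ecs-cluster \
      --service-name twistlock_defender \
      --task-definition twistlock_defender \
      --scheduling-strategy DAEMON \
      --launch-type EC2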

Using a private registry

For maximum control over your environment, you might want to store the Console container image in your own private registry, and then install Prisma Cloud from your private registry. When the Console service is started, ECS retrieves the image from your registry. This procedure shows you how to push the Console container image to Amazon’s Elastic Container Registry (ECR).
Prerequisites:
  • AWS CLI is installed on your machine. It is required to push the Console image to your registry.
  1. Go to the directory where you unpacked the Prisma Cloud release tarball.
    $ cd twistlock/
  2. Load the Console image.
    $ docker load < ./twistlock_console.tar.gz
  3. Go to
    Services > Compute > Elastic Container Service
    .
  4. In the left menu, click
    Repositories
    .
  5. Click
    Create repository
    .
  6. Follow the AWS instructions for logging in to the registry, tagging the Console image, and pushing it to your repo.
    Be sure to update your Console task definition so that the value for
    image
    points to your private registry.
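Step 6 typically boils down to the commands below (a sketch with placeholders for <ACCOUNT-ID>, <REGION>, and <REPOSITORY>; confirm the name of the loaded Console image with docker images before tagging, and note that aws ecr get-login-password requires a recent AWS CLI):
  $ aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
  $ docker images
  $ docker tag <LOADED-CONSOLE-IMAGE> <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com/<REPOSITORY>:console_<VERSION>
  $ docker push <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com/<REPOSITORY>:console_<VERSION>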
