This section guides you through deploying a Terraform template to add Prisma AIRS AI Runtime Firewall protection for GCP cloud
resources.
On this page, you will configure Prisma AIRS AI Runtime
Firewall in Strata Cloud Manager, download the corresponding Terraform template,
and deploy it in your cloud environment. This setup integrates the firewall into
your cloud network architecture, enabling comprehensive monitoring and protection of
your assets.
After you onboard the cloud account, the Strata Cloud Manager command
center dashboard shows the discovered assets with no firewall protection deployed.
Unprotected traffic paths to and from applications, AI models, and the internet are
marked in red until you add firewall protection. For more details, see Discover Your Cloud Resources.
Select Google Cloud as the Cloud Service Provider and select Next.
In Firewall Placement, select one or more traffic flows to
inspect.
The following table shows the network traffic types that the Prisma AIRS AI Runtime Firewall or the VM-Series firewall can support:
Traffic Type | AI Runtime Security Firewall | VM-Series
AI Traffic (traffic between your applications and AI Models) | ✅ | Not supported
Non-AI Traffic and namespaces (for example, kube-system) | ✅ | Not supported
Cluster Traffic | ✅ | Not supported
Non-AI and non-cluster traffic | ✅ | ✅
When you select any namespace, the VM-Series firewall option becomes unavailable because only
Prisma AIRS AI Runtime Firewall can secure these
namespaces.
Select Next.
In Region & Applications:
Select your cloud account to secure from the onboarded cloud
accounts list.
Select a region in which you want to protect the
applications.
In Selected applications:
Select the applications to secure from the available list. This list
includes application workloads such as namespaces or VPCs.
The available applications are
determined by the application definition criteria you configured
during cloud account
onboarding in the “Application Definition”
step.
Set the Public IP address on the External Load Balancer (ELB)
for each application by selecting:
Auto generate: Automatically assigns an ephemeral
(temporary) IP address to your application.
Input manually: Create and assign a static IP address to
your application (a reservation example follows this step).
Each application is mapped to one public ELB IP address.
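If you choose Input manually, you need a static IP address to assign. As a minimal sketch, you could reserve one with the gcloud CLI; the address name and region below are placeholders, not values from this workflow.
# Reserve a regional static external IP address (example name and region)
gcloud compute addresses create my-app-elb-ip --region=us-central1
# Print the reserved address so you can enter it in the workflow
gcloud compute addresses describe my-app-elb-ip --region=us-central1 --format="get(address)"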
Configure Traffic Inspection to protect your clusters at
the namespace level only:
Traffic steering inspection is available only when you select
namespaces from the applications list. Select the namespace and
configure how to handle traffic from specific network segments
(a maximum of 10 CIDRs per cluster can be inspected or bypassed at
any time):
Inspect certain CIDRs: Only inspect traffic from
specified subnet ranges.
Bypass certain CIDRs: Exclude traffic from specified
subnet ranges from inspection.
For container applications, all traffic to
and from the applications is protected by default. Use
traffic inspection options only when you need granular
control over which network segments are inspected or
bypassed.
When protecting traffic from namespaces
using traffic inspection, select only the namespace
and not its parent VPC to avoid deployment failures.
The same GWLB endpoint can't be used for both VPC
and namespace-level protection in the same
zone.
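If you want to confirm which namespaces exist in a target cluster before selecting them, you can list them with kubectl; this assumes your kubeconfig already points at that cluster.
# List the namespaces in the currently configured cluster
kubectl get namespaces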
Select the Undiscovered VPC(s) tab to discover or add a new
VPC.
Select Add VPC.
Configure the following:
VPC Name to secure.
VPC CIDRs IP range values.
(Optional) K8s pod CIDRs IP range values.
(Optional) K8s service CIDRs IP range values.
Cluster Id (a gcloud lookup example for these Kubernetes values follows this procedure).
CIDR ranges to be inspected in the Inspect certain
CIDRs field.
CIDR ranges to be bypassed in the Bypass certain
CIDRs field.
Select Submit.
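If you are unsure of a GKE cluster's pod CIDR, service CIDR, or name, you can look them up with the gcloud CLI before filling in the fields above; the cluster name and zone below are placeholders.
# List clusters and their locations in the project
gcloud container clusters list
# Show the pod and service CIDR ranges for a specific cluster (example name and zone)
gcloud container clusters describe my-cluster --zone=us-central1-a --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"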
Select Next.
In Protection Settings:
In the Deployment parameters, select AI Runtime Security
or VM-Series firewall type based on the type of traffic you
decided to protect in the Firewall Placement step.
Enter the Service account attached to the security VM.
Enable Auto-Deploy Security; by default, this option is
disabled.
Enter Number of firewalls to deploy.
Select zones to deploy firewalls from the available zones.
In Firewall Scaling, select Static or Dynamic to
configure autoscaling.
With autoscaling, you can choose
between static or dynamic scaling models during
deployment. Dynamic scaling allows you to select from several
metrics to base your autoscaling decisions on, giving you
fine-grained control over how your security infrastructure adapts to
changing conditions. This approach ensures that your security
posture remains robust during traffic surges while optimizing
license consumption during periods of lower demand. After traffic
decreases and firewalls are deactivated, the system automatically
removes the firewalls from inventory and returns licenses to your
pool for future scaling events.
Selecting Dynamic firewall scaling allows you to configure
additional metrics:
Specify the Number of Firewalls to Deploy by entering a
range (minimum to maximum).
Enter the Cloudwatch Namespace.
Set the Update Interval (1-60 minutes).
Use the drop-down menu to select Auto-Scaling Metrics and
specify the input percentage.
Click Apply.
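The generated Terraform template provisions the scaling configuration for you. Purely as an illustration of what dynamic scaling corresponds to on GCP, a managed instance group autoscaler scales between a minimum and maximum replica count based on a metric target; the instance group name, zone, and CPU-based target below are placeholders rather than the metrics offered in this workflow.
# Illustrative only: autoscale a managed instance group between 2 and 4 instances
# based on a CPU utilization target
gcloud compute instance-groups managed set-autoscaling my-firewall-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=4 \
    --target-cpu-utilization=0.6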
After completing the Firewall Information section, you can
configure IP Addressing, Licensing, and Management
Parameters.
Configure the following:
IP addressing scheme
Licensing
Management parameters
CIDR value for untrust VPC.
CIDR value for trust VPC.
CIDR value for management VPC.
Enter the following values:
PAN-OS version for your image from the
available list.
Flex authentication code (copy the AUTH CODE
for the deployment
profile you created for Prisma AIRS AI Runtime Firewall
in the Customer Support Portal).
Enter a unique Terraform template name. Use only lowercase
letters, numbers, and hyphens; don't use a hyphen at the beginning or
end, and limit the name to under 19 characters.
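If you want to verify a candidate name against these constraints before entering it, a quick shell check of the rule (lowercase letters, numbers, and hyphens; no leading or trailing hyphen; fewer than 19 characters) might look like this; the example name is hypothetical.
# Prints "valid" if the name satisfies the template-name constraints described above
name="airs-gcp-fw-01"
if [[ "$name" =~ ^[a-z0-9]([a-z0-9-]{0,16}[a-z0-9])?$ ]]; then echo valid; else echo invalid; fi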
Create terraform template.
Save and Download Terraform Template.
Close the deployment workflow to exit.
Unzip the downloaded file and navigate to <unzipped-folder>,
which contains two directories: `architecture` and `modules`. Deploy the Terraform templates
in your cloud environment by following the `README.md` file in the `architecture`
folder.
Initialize and apply the Terraform for the security_project.
The `security_project` contains the Terraform plan to deploy a Prisma AIRS AI Runtime Firewall in your architecture.
The Terraform plan creates the resources required to deploy the Prisma AIRS AI Runtime Firewall in inline
prevention mode, including the managed instance groups, load balancers, and
health checks.
# Change directory to architecture/security_project
cd architecture/security_project
terraform init
terraform plan
terraform apply
The security project Terraform run generates the following output. Be sure to record the IP
addresses in the `lbs_external_ips` and `lbs_internal_ips`
outputs.
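You can also read these values back from the Terraform state at any time by running the standard output command from the `security_project` directory:
terraform output lbs_external_ips
terraform output lbs_internal_ips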
Configure Strata Cloud Manager or Panorama to secure VM workloads and
Kubernetes clusters and deploy pods. Configure interfaces, zones, NAT policy,
routers, and security policy rules.
Navigate to Workflows → NGFW Setup → Device Management. The Prisma AIRS AI Runtime Firewall appears under Cloud
Managed Devices.
Switch to the Cloud Managed Devices tab to view and
manage the connected state, the configuration sync state, and the deployed Prisma AIRS licenses.
It takes a while before the Device Status shows as
connected.