Flow Log Compression on GCP
Prisma Cloud enables you to automate the compression of flow logs using the Google Cloud Dataflow service. This additional automation on Prisma Cloud addresses the lack of native compression support in the flow log sink setup on GCP, and helps reduce the egress costs associated with transferring large volumes of logs to the Prisma Cloud infrastructure. Therefore, Prisma Cloud recommends that you enable flow log compression.
When you enable Dataflow compression on Prisma Cloud, the Dataflow pipeline resources are created in the GCP project associated with the Google Cloud Storage bucket to which your VPC flow logs are sent, and the compressed logs are also saved to that Cloud Storage bucket. Therefore, whether you are onboarding a GCP Organization and enabling Dataflow compression for it, or enabling Dataflow compression for an existing GCP Organization that has already been added to Prisma Cloud, make sure that the Dataflow-enabled project is the same project that contains the Google Cloud Storage bucket to which you send VPC flow logs.
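If you want to confirm that mapping before you enable compression, the sketch below compares the project number that owns the flow log bucket with the project you plan to enable Dataflow in. It is only an illustration: the bucket name and project ID are hypothetical, and the use of the google-cloud-storage and google-api-python-client libraries with Application Default Credentials is an assumption, not part of the Prisma Cloud workflow.

```python
# Sketch: confirm the flow log bucket belongs to the Dataflow-enabled project.
from google.cloud import storage
from googleapiclient import discovery

BUCKET_NAME = "my-vpc-flow-logs-bucket"      # hypothetical bucket name
DATAFLOW_PROJECT_ID = "my-dataflow-project"  # hypothetical project ID

# Project number that owns the Cloud Storage bucket.
bucket = storage.Client().get_bucket(BUCKET_NAME)
bucket_project_number = int(bucket.project_number)

# Project number of the project in which you intend to enable Dataflow compression.
crm = discovery.build("cloudresourcemanager", "v1")
project = crm.projects().get(projectId=DATAFLOW_PROJECT_ID).execute()
dataflow_project_number = int(project["projectNumber"])

if bucket_project_number == dataflow_project_number:
    print("OK: the flow log bucket is in the Dataflow-enabled project.")
else:
    print("Mismatch: the bucket belongs to project number", bucket_project_number)
```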
To launch the Dataflow job and to create and stage the compressed files, the following APIs and permissions are required:
- Enable the Dataflow API. The API is dataflow.googleapis.com.
- Grant the service account permissions to:
  - Run and examine jobs — Dataflow Admin role
  - Attach service accounts to Dataflow resources — iam.serviceAccounts.actAs permission
  - Create a network, subnetwork, and firewall rules within your VPC — compute.networks.create, compute.subnetworks.create, compute.firewalls.create, compute.networks.updatePolicy

To enable connectivity between the Dataflow pipeline resources and the compute instances that perform log compression within your VPC, Prisma Cloud creates a network, subnetwork, and firewall rules in your VPC. You can view the compute instances that are spun up with the RQL config where api.name='gcloud-compute-instances-list' AND json.rule = name starts with "prisma-compress".
For details on enabling the APIs, see Permissions and APIs Required for GCP Account on Prisma Cloud.
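If you prefer to check the API prerequisite yourself, the following sketch enables and then verifies dataflow.googleapis.com through the Service Usage API using google-api-python-client; the project ID is a placeholder, and granting the Dataflow Admin role and the iam.serviceAccounts.actAs permission still happens separately in IAM.

```python
# Sketch: enable dataflow.googleapis.com and confirm its state with the
# Service Usage API. Assumes google-api-python-client and Application
# Default Credentials with permission to enable services.
from googleapiclient import discovery

PROJECT_ID = "my-dataflow-project"  # hypothetical project ID
SERVICE = f"projects/{PROJECT_ID}/services/dataflow.googleapis.com"

serviceusage = discovery.build("serviceusage", "v1")

# Enable the Dataflow API; this returns a long-running operation and is a
# no-op if the service is already enabled.
serviceusage.services().enable(name=SERVICE, body={}).execute()

# Verify that the service is reported as ENABLED.
state = serviceusage.services().get(name=SERVICE).execute().get("state")
print(f"dataflow.googleapis.com state: {state}")  # expect "ENABLED"
```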
GCP flow log compression test jobs are submitted so that Prisma Cloud can verify flow log compression for cloud accounts and confirm that the Dataflow API is enabled. Prisma Cloud launches test jobs at regular intervals before submitting compression jobs. The following validations are performed:
- Verify that compression is enabled for the GCP project.
- Verify that at least one Dataflow-enabled child project exists in an organization-type account.
- Verify that the credentials are not empty.
- Verify that the project ID is not empty.
- Verify the region from the storage location.
- Verify the network, subnet, and firewall.
After these validations pass, the test job is submitted. If the submission is successful, the API returns a 200 status code, indicating that everything needed for submission is available at the GCP project level. If a test job fails, it can be ignored. If there is an error during submission, the appropriate error messages are logged; these entries indicate the cause of failure.
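If you want to inspect the submitted jobs yourself, a sketch along the following lines lists Dataflow jobs in one region with the Dataflow v1b3 API and prints their states. The project ID and region are placeholders, and because the exact job names Prisma Cloud uses are not documented here, no name filter is applied.

```python
# Sketch: list Dataflow jobs in one region and print their current states,
# for example to spot failed test or compression jobs. Assumes
# google-api-python-client and Application Default Credentials.
from googleapiclient import discovery

PROJECT_ID = "my-dataflow-project"  # hypothetical project ID
REGION = "us-central1"              # hypothetical region (see the table below)

dataflow = discovery.build("dataflow", "v1b3")
response = dataflow.projects().locations().jobs().list(
    projectId=PROJECT_ID, location=REGION).execute()

for job in response.get("jobs", []):
    # currentState is a value such as JOB_STATE_RUNNING, JOB_STATE_DONE,
    # or JOB_STATE_FAILED.
    print(job["name"], job.get("currentState"))
```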
In addition, the Cloud Dataflow service spins up short-lived compute instances to handle the compression jobs, and you may incur costs associated with the service. Palo Alto Networks recommends keeping your Cloud Storage bucket in the same project in which you have enabled the Dataflow service. Based on the location of your Cloud Storage bucket, Prisma Cloud launches the Cloud Dataflow jobs in the following regions:
Storage Bucket Region | Region Where the Dataflow Job Is Launched |
---|---|
us-central1, us-east1, us-west1, europe-west1, europe-west4, asia-east1, asia-northeast1 | us-central1, us-east1, us-west1, europe-west1, europe-west4, asia-east1, asia-northeast1 |
eur4, eu | europe-west4 |
asia | asia-east1 |
us | us-central1, us-east1 |
Any other region | us-central1 |
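If you need the same mapping programmatically, for example to pick the region to pass when listing jobs, the following sketch mirrors the table above. It assumes that the first row means the job launches in the same region as the bucket, and that a us multi-region bucket can resolve to either of the two listed regions; the function name is an invention for illustration.

```python
# Sketch: map a Cloud Storage bucket location to the region(s) where the
# compression Dataflow job is expected to launch, mirroring the table above.
SAME_REGION = {
    "us-central1", "us-east1", "us-west1",
    "europe-west1", "europe-west4",
    "asia-east1", "asia-northeast1",
}
MULTI_REGION = {
    "eur4": ["europe-west4"],
    "eu": ["europe-west4"],
    "asia": ["asia-east1"],
    "us": ["us-central1", "us-east1"],
}

def dataflow_regions(bucket_location: str) -> list[str]:
    """Return the candidate Dataflow launch regions for a bucket location."""
    location = bucket_location.lower()
    if location in SAME_REGION:
        return [location]                                # same region as the bucket
    return MULTI_REGION.get(location, ["us-central1"])   # any other region

print(dataflow_regions("EU"))           # ['europe-west4']
print(dataflow_regions("asia-south1"))  # ['us-central1']
```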