Ingest Logs from Google Kubernetes Engine

Forward your Google Kubernetes Engine (GKE) logs directly to Cortex XDR using Elasticsearch Filebeat.
Ingesting logs and data requires a Cortex XDR Pro per TB license.
Instead of forwarding Google Kubernetes Engine (GKE) logs directly to Google Stackdriver, you can have Cortex XDR ingest container logs from GKE using Elasticsearch* Filebeat. To receive logs, you must install Filebeat on your containers and enable the SaaS Log Collection settings for Filebeat.
After Cortex XDR begins receiving logs, the app automatically creates an XQL dataset using the vendor and product names that you specify during Filebeat setup. It is recommended that you specify descriptive names. For example, if you specify google as the vendor and kubernetes as the product, the dataset name will be google_kubernetes_raw. If you leave the vendor and product blank, Cortex XDR assigns the dataset the name container_container_raw.
After Cortex XDR creates the dataset, you can search your GKE logs using XQL Search.
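For example, a minimal XQL query you could run in XQL Search, assuming you used google and kubernetes as the vendor and product during setup:
  dataset = google_kubernetes_raw
  | limit 100
This returns up to 100 raw GKE log records from the new dataset.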
  1. Install Filebeat on your containers.
  2. Record your token key and API URL for the Filebeat Collector instance as you will need these later in this workflow.
  3. Deploy Filebeat as a DaemonSet on Kubernetes.
    This ensures there is a running instance of Filebeat on each node of the cluster.
    1. Download the manifest file to a location where you can edit it.
      curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
    2. Open the YAML file in your preferred text editor.
    3. Remove the cloud.id and cloud.auth lines.
    4. In the output.elasticsearch configuration, replace the hosts, username, and password settings with environment variable references for hosts and api_key, and add fields and values for compression_level and bulk_max_size, as shown in the sketch below.
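      A sketch of the resulting output.elasticsearch section, assuming the environment variable names defined in the next step (the compression_level and bulk_max_size values here are illustrative, not prescribed):
      output.elasticsearch:
        # Both values are injected from the DaemonSet env section (next step)
        hosts: ['${ELASTICSEARCH_ENDPOINT}']
        api_key: '${ELASTICSEARCH_API_KEY}'
        # Illustrative tuning values; adjust for your log throughput
        compression_level: 5
        bulk_max_size: 1000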
    5. In the DaemonSet configuration, locate the env configuration and replace ELASTIC_CLOUD_AUTH, ELASTIC_CLOUD_ID, ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, ELASTICSEARCH_HOST, and ELASTICSEARCH_PORT, along with their respective values, with the following:
      • ELASTICSEARCH_ENDPOINT—Enter the API URL for your Cortex XDR tenant. You can copy the URL from the Filebeat Collector instance you set up for GKE in the Cortex XDR management console (Settings → SaaS Integrations → Copy API URL). The URL will include your tenant name (https://api-<tenant external URL>:443/logs/v1/filebeat).
      • ELASTICSEARCH_API_KEY—Enter the token key you recorded earlier during the configuration of your Filebeat Collector instance.
      After you configure these settings, the env section of your DaemonSet should look similar to the following sketch.
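      A sketch with placeholder values; replace both with the API URL and token key from your Filebeat Collector instance:
      env:
      # API URL copied from your Filebeat Collector instance
      - name: ELASTICSEARCH_ENDPOINT
        value: "https://api-<tenant external URL>:443/logs/v1/filebeat"
      # Token key recorded when you configured the collector
      - name: ELASTICSEARCH_API_KEY
        value: "<your token key>"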
    6. Save your changes.
  4. If you use Red Hat OpenShift, you must also specify additional settings.
  5. Deploy Filebeat on your Kubernetes cluster.
    kubectl create -f filebeat-kubernetes.yaml
    This deploys Filebeat in the kube-system namespace. If you want to deploy Filebeat in a different namespace, change the namespace values throughout the YAML file (in every manifest it contains) and add -n <your_namespace> to the kubectl command, as in the example below.
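    For example, assuming a hypothetical namespace named logging that you have already set throughout the manifest:
    kubectl create -f filebeat-kubernetes.yaml -n logging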
    After you deploy your configuration, the Filebeat DaemonSet runs an instance on each node of the cluster to forward logs to Cortex XDR. You can review the configuration from the Kubernetes Engine console: Workloads → Filebeat → YAML.
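    You can also verify the DaemonSet from the command line; a minimal check, assuming the default kube-system namespace:
    kubectl get daemonset filebeat -n kube-system
    The DESIRED and READY counts should match the number of nodes in your cluster.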
Elasticsearch is a trademark of Elasticsearch B.V., registered in the U.S. and in other countries.
