View WildFire Cluster Status Using the CLI

To confirm that your WildFire cluster is running within normal operating parameters, run the following show commands:
  • show cluster controller
    —Displays the status of active/passive WildFire cluster nodes.
  • show cluster all-peers
    —Displays information about all of the members in a given WildFire cluster.
  • show cluster membership
    —Displays WildFire appliance information for cluster and standalone nodes.
  1. On a WildFire appliance controller node, run:
    admin@WF-500(active-controller)> show cluster controller
    A healthy WildFire cluster displays the following details:
    • The name of the cluster the appliance has been enrolled in and its configured role.
    • The K/V API online status indicates True when the internal cluster interface is functioning properly. A status of False can indicate an improperly configured node or a network issue.
    • Task processing indicates True on active-controllers (primary) and False on passive-controllers (backup).
    • The IP addresses for all WildFire nodes in the cluster are listed under App Service Avail.
    • Up to three Good Core Servers. The number of Good Core Servers depends on the number of nodes running in the cluster. If you have a third node operating within a cluster, it automatically gets configured as a server node to maximize cluster integrity.
    • No Suspended Nodes.
    • The Current Task field provides background information about cluster-level operations, such as reboot, decommission, and suspend tasks.
    The following example shows the output from an active controller configured in a 2-node WildFire cluster operating in a healthy state:
    Cluster name:         WildFire_Cluster
    K/V API online:       True
    Task processing:      on
    Active Controller:    True
    DNS Advertisement:
    App Service DNS Name:
    App Service Avail:    2.2.2.14, 2.2.2.15
    Core Servers:         009701000026: 2.2.2.15
                          009701000043: 2.2.2.14
    Good Core Servers:    2
    Suspended Nodes:
    Current Task:
    * Showing latest completed task
      Request:  startup from qa14 (009701000043/80025) at 2017-09-18 21:43:34 UTC null
      Response: permit by qa15 at 2017-09-18 21:45:15 UTC 1/2 core servers available.
      Finished: success at 2017-09-18 21:43:47 UTC
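As a sketch, the key health fields from this output can also be checked programmatically. The following Python example is hypothetical (not part of the WildFire CLI or a supported API): it assumes you have captured the `show cluster controller` output as text, for example from an SSH session, and checks the fields described above.

```python
# Hypothetical health check for captured "show cluster controller" output.
# The field names match the sample output above; the parsing logic itself
# is an assumption, not a supported WildFire interface.
def controller_healthy(output: str) -> bool:
    fields = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (
        fields.get("K/V API online") == "True"              # internal cluster interface is up
        and fields.get("Suspended Nodes", "") == ""         # no suspended nodes
        and int(fields.get("Good Core Servers", "0")) >= 2  # enough good core servers
    )

sample = """\
Cluster name:      WildFire_Cluster
K/V API online:    True
Good Core Servers: 2
Suspended Nodes:
"""
print(controller_healthy(sample))  # prints True for this healthy 2-node cluster
```

A degraded cluster (for example, K/V API online: False) would make the same check return False, which could feed an external monitoring alert.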
  2. On a WildFire appliance controller node, run:
    admin@WF-500> show cluster all-peers
    A healthy WildFire cluster displays the following details:
    • The general information about the WildFire nodes in the cluster is listed under Address, Mode, Server, Node, and Name.
    • All WildFire cluster nodes are running the wfpc service, an internal file sample analysis service.
    • Nodes operating as an active controller, passive controller, or server display Server role applied next to Status. If the node has been configured to be a server but isn’t operating as one, the status displays Server role assigned.
      In a 3-node deployment, the third server node is categorized as a worker.
    • Recently removed nodes might still be present but display as Disconnected. It can take several days for a disconnected node to be removed from the cluster node list.
    • The active controller node displays siggen-db: ReadyMaster.
    • The passive controller node displays siggen-db: ReadySlave.
      For more information about general WildFire application and service status details, refer to WildFire Application States and WildFire Service States.
    • The Diag report displays cluster system events and error messages:
      Error Message                     Description
      Unreachable                       The node was never reachable from the cluster controller.
      Unexpected member                 The node is not part of the cluster configuration. The node might have been recently deleted from the cluster configuration, or its presence might be the result of a misconfiguration.
      Left cluster                      The node is no longer reachable from the cluster controller.
      Incorrect cluster name            The node has an incorrectly configured cluster name.
      Connectivity unstable             The node’s connection to the cluster controller is unstable.
      Connectivity lost                 The node’s connectivity to the cluster controller has been lost.
      Unexpected server serial number   The unexpected presence of a server node has been detected.
    The following example shows a 3-node WildFire cluster operating in a healthy state:
    Address    Mode        Server  Node   Name
    -------    ----        ------  ----   ---------
    2.2.2.15   controller  Self    True   qa15
        Service: infra signature wfcore wfpc
        Status:  Connected, Server role applied
        Changed: Mon, 18 Sep 2017 15:37:40 -0700
        WF App:  global-db-service:           JoinedCluster
                 wildfire-apps-service:       Stopped
                 global-queue-service:        JoinedCluster
                 wildfire-management-service: Done
                 siggen-db:                   ReadySlave
    2.2.2.14   controller  Peer    True   qa14
        Service: infra signature wfcore wfpc
        Status:  Connected, Server role applied
        Changed: Mon, 18 Sep 2017 15:37:40 -0700
        WF App:  global-db-service:           commit-lock
                 wildfire-apps-service:       Stopped
                 global-queue-service:        ReadyStandalone
                 wildfire-management-service: Done
                 siggen-db:                   ReadyMaster
    2.2.2.16   worker              True   wf6240
        Service: infra wfcore wfpc
        Status:  Connected, Server role applied
        Changed: Wed, 22 Feb 2017 11:11:15 -0800
        WF App:  wildfire-apps-service:       Ready
                 global-db-service:           JoinedCluster
                 global-queue-service:        JoinedCluster
                 local-db-service:            DataMigrationFailed
    Diag report:
        2.2.2.14: reported leader '2.2.2.15', age 0.
        2.2.2.15: local node passed sanity check.
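To illustrate, captured `show cluster all-peers` output can be scanned for the problem indicators described above. This Python sketch is an illustration only, not a supported tool: the error strings come from the Diag report table in this section, and the scanning approach is an assumption.

```python
# Hypothetical scan of captured "show cluster all-peers" output.
# The error strings are taken from the Diag report table in this section;
# the parsing approach itself is an assumption.
ERRORS = (
    "Unreachable", "Unexpected member", "Left cluster",
    "Incorrect cluster name", "Connectivity unstable",
    "Connectivity lost", "Unexpected server serial number",
    "Disconnected",
)

def find_problems(output: str) -> list[str]:
    """Return every line that mentions a known error condition."""
    return [
        line.strip() for line in output.splitlines()
        if any(err in line for err in ERRORS)
    ]

healthy = "Status: Connected, Server role applied"
degraded = "Status: Disconnected\nDiag report:\n  2.2.2.16: Left cluster"
print(find_problems(healthy))   # prints []
print(find_problems(degraded))  # prints the two lines that match error strings
```

An empty result for every peer block is consistent with a healthy cluster; any match identifies the node and condition to investigate.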
  3. On a WildFire appliance controller node, run:
    admin@WF-500> show cluster membership
    A healthy WildFire cluster displays the following details:
    • The general WildFire appliance configuration details, such as the cluster name, the IP address of the appliance, and the serial number.
    • Server role indicates whether or not the WildFire appliance is operating as a cluster server. Cluster servers operate additional infrastructure applications and services. You can have a maximum of three servers per cluster.
    • Node mode describes the role of a WildFire appliance. WildFire appliances enrolled in a cluster can be either a controller or worker node depending on your configuration and the number of nodes in your deployment. Appliances that are not part of a cluster display stand_alone.
    • The node operates the following Services based on its cluster node role:
      Node Type                             Services Operating on the Node
      Controller Node (Active or Passive)   infra, wfpc, signature, wfcore
      Server Node                           infra, wfpc, wfcore
      Worker Node                           infra, wfpc
    • HA priority displays primary or secondary depending on the appliance’s configured role; however, this setting is independent of the current HA state of the appliance.
    • Work queue status shows the sample analysis backlog as well as samples that are currently being analyzed. This also indicates how much load a particular WildFire appliance receives.
    For more information about WildFire application and service status details, refer to WildFire Application States and WildFire Service States.
    The following example shows a WildFire controller operating in a healthy state:
    Service Summary:  wfpc signature
    Cluster name:     qa-auto-0ut1
    Address:          2.2.2.15
    Host name:        qa15
    Node name:        wfpc-009701000026-internal
    Serial number:    009701000026
    Node mode:        controller
    Server role:      True
    HA priority:      secondary
    Last changed:     Fri, 22 Sep 2017 11:30:47 -0700
    Services:         wfcore signature wfpc infra
    Monitor status:   Serf Health Status: passing
                          Agent alive and reachable
                      Service 'infra' check: passing
    Application status:
        global-db-service:           ReadyLeader
        wildfire-apps-service:       Ready
        global-queue-service:        ReadyLeader
        wildfire-management-service: Done
        siggen-db:                   Ready
    Work queue status:
        sample analysis queued:  0
        sample analysis running: 0
        sample copy queued:      0
        sample copy running:     0
    Diag report:
        2.2.2.14: reported leader '2.2.2.15', age 0.
        2.2.2.15: local node passed sanity check.
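As with the other commands, the Work queue status counters can be read from captured output to estimate per-appliance load. The sketch below is illustrative only; the counter names are taken from the sample output above, and the regular-expression approach is an assumption rather than a supported interface.

```python
import re

# Hypothetical load summary for captured "show cluster membership" output.
def queue_load(output: str) -> int:
    """Sum the work-queue counters (queued + running samples)."""
    counters = re.findall(
        r"sample (?:analysis|copy) (?:queued|running):\s*(\d+)", output
    )
    return sum(int(n) for n in counters)

sample = """\
Work queue status:
    sample analysis queued:  3
    sample analysis running: 1
    sample copy queued:      0
    sample copy running:     0
"""
print(queue_load(sample))  # prints 4; an idle appliance reports 0
```

Comparing this total across the nodes in a cluster is one simple way to see whether sample analysis load is distributed evenly.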

Related Documentation