View WildFire Cluster Status Using the CLI
To confirm that your WildFire cluster is running within normal operating parameters, use the following show commands:
- show cluster controller—Displays the status of active/passive WildFire cluster nodes.
- show cluster all-peers—Displays information about all of the members in a given WildFire cluster.
- show cluster membership—Displays WildFire appliance information for cluster and standalone nodes.
- On a WildFire appliance controller node, run:
admin@WF-500(active-controller)> show cluster controller
A healthy WildFire cluster displays the following details:
The following example shows the output from an active controller configured in a 2-node WildFire cluster operating in a healthy state:
- The name of the cluster the appliance has been enrolled in and its configured role.
- The K/V API online status indicates True when the internal cluster interface is functioning properly. A status of False can indicate an improperly configured node or a network issue.
- Task processing indicates True on active-controllers (primary) and False on passive-controllers (backup).
- The IP addresses for all WildFire nodes in the cluster are listed under App Service Avail.
- Up to three Good Core Servers. The number of Good Core Servers depends on the number of nodes running in the cluster. If you have a third node operating within a cluster, it automatically gets configured as a server node to maximize cluster integrity.
- No Suspended Nodes.
- The Current Task provides background information about cluster-level operations, such as reboot, decommission, and suspend tasks.
Cluster name:           WildFire_Cluster
K/V API online:         True
Task processing:        on
Active Controller:      True
DNS Advertisement:
App Service DNS Name:
App Service Avail:      188.8.131.52, 184.108.40.206
Core Servers:           009701000026: 220.127.116.11
                        009701000043: 18.104.22.168
Good Core Servers:      2
Suspended Nodes:
Current Task:           * Showing latest completed task
Request: startup from qa14 (009701000043/80025) at 2017-09-18 21:43:34 UTC null
Response: permit by qa15 at 2017-09-18 21:45:15 UTC 1/2 core servers available.
Finished: success at 2017-09-18 21:43:47 UTC
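The health indicators described above can also be checked with a short script. The following sketch is illustrative only (not part of the WildFire CLI): it assumes you have saved the output of show cluster controller to a local file (simulated here with a heredoc) and checks the K/V API status, the Good Core Servers count, and the Suspended Nodes field.

```shell
#!/bin/sh
# Illustrative health check on saved "show cluster controller" output.
# The output is simulated with a heredoc; in practice you might capture it
# from an SSH session to the appliance and save it as controller.txt.
cat > controller.txt <<'EOF'
Cluster name: WildFire_Cluster
K/V API online: True
Task processing: on
Active Controller: True
Good Core Servers: 2
Suspended Nodes:
EOF

# K/V API online must be True on a healthy node.
grep -q '^K/V API online: True' controller.txt && echo "kv_api=ok" || echo "kv_api=FAIL"

# Up to three Good Core Servers, depending on cluster size.
good=$(sed -n 's/^Good Core Servers: //p' controller.txt)
echo "good_core_servers=$good"

# A healthy cluster lists nothing after "Suspended Nodes:".
suspended=$(sed -n 's/^Suspended Nodes: *//p' controller.txt)
[ -z "$suspended" ] && echo "suspended=none" || echo "suspended=$suspended"
```

For the healthy two-node example above, this prints kv_api=ok, good_core_servers=2, and suspended=none.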
- On a WildFire appliance controller node, run:
admin@WF-500> show cluster all-peers
A healthy WildFire cluster displays the following details:
The following example shows a 3-node WildFire cluster operating in a healthy state:
- The general information about the WildFire nodes in the cluster is listed under Address, Mode, Server, Node, and Name.
- All WildFire cluster nodes are running the wfpc service, an internal file sample analysis service.
- Nodes operating as an active, passive, or server display Server role applied next to Status. If the node has been configured to be a server, but isn’t operating as a server, the status displays Server role assigned. In a 3-node deployment, the third server node is categorized as a worker.
- Recently removed nodes might still be present but display as Disconnected. It can take several days for a disconnected node to be removed from the cluster node list.
- The active controller node displays siggen-db: ReadyMaster.
- The passive controller node displays siggen-db: ReadySlave.
- The Diag report displays cluster system events and error messages:
- Unreachable—The node was never reachable from the cluster controller.
- Unexpected member—The node is not part of the cluster configuration. The node might have been recently deleted from the cluster configuration, or its presence might be the result of a misconfiguration.
- Left cluster—The node is no longer reachable from the cluster controller.
- Incorrect cluster name—The node has an incorrectly configured cluster name.
- Connectivity unstable—The node’s connection to the cluster controller is unstable.
- Connectivity lost—The node’s connectivity to the cluster controller has been lost.
- Unexpected server serial number—The unexpected presence of a server node has been detected.
Address         Mode        Server  Node  Name
-------         ----        ------  ----  ----
22.214.171.124  controller  Self    True  qa15
    Service:  infra signature wfcore wfpc
    Status:   Connected, Server role applied
    Changed:  Mon, 18 Sep 2017 15:37:40 -0700
    WF App:   global-db-service: JoinedCluster
              wildfire-apps-service: Stopped
              global-queue-service: JoinedCluster
              wildfire-management-service: Done
              siggen-db: ReadySlave
126.96.36.199   controller  Peer    True  qa14
    Service:  infra signature wfcore wfpc
    Status:   Connected, Server role applied
    Changed:  Mon, 18 Sep 2017 15:37:40 -0700
    WF App:   global-db-service: commit-lock
              wildfire-apps-service: Stopped
              global-queue-service: ReadyStandalone
              wildfire-management-service: Done
              siggen-db: ReadyMaster
188.8.131.52    worker              True  wf6240
    Service:  infra wfcore wfpc
    Status:   Connected, Server role applied
    Changed:  Wed, 22 Feb 2017 11:11:15 -0800
    WF App:   wildfire-apps-service: Ready
              global-db-service: JoinedCluster
              global-queue-service: JoinedCluster
              local-db-service: DataMigrationFailed
Diag report:
    184.108.40.206: reported leader '220.127.116.11', age 0.
    18.104.22.168: local node passed sanity check.
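You can scan saved all-peers output for the trouble signs described above. The sketch below is illustrative (not part of the WildFire CLI): it assumes the command output has been captured to a local file (simulated here with a heredoc) and counts Disconnected nodes, confirms one siggen-db ReadyMaster and one ReadySlave, and flags any service reporting a Failed state.

```shell
#!/bin/sh
# Illustrative scan of saved "show cluster all-peers" output.
# Simulated with a heredoc; in practice capture the command output to all-peers.txt.
cat > all-peers.txt <<'EOF'
22.214.171.124  controller  Self  True  qa15
    Status: Connected, Server role applied
    siggen-db: ReadySlave
126.96.36.199  controller  Peer  True  qa14
    Status: Connected, Server role applied
    siggen-db: ReadyMaster
188.8.131.52  worker  True  wf6240
    Status: Connected, Server role applied
    local-db-service: DataMigrationFailed
EOF

# Recently removed nodes can linger in the list with a Disconnected status.
disconnected=$(grep -c 'Disconnected' all-peers.txt)
echo "disconnected_nodes=$disconnected"

# A healthy cluster has exactly one ReadyMaster (active controller)
# and one ReadySlave (passive controller).
masters=$(grep -c 'siggen-db: ReadyMaster' all-peers.txt)
slaves=$(grep -c 'siggen-db: ReadySlave' all-peers.txt)
echo "siggen_master=$masters siggen_slave=$slaves"

# Print any service reporting a Failed state (e.g. DataMigrationFailed).
grep 'Failed' all-peers.txt
```

For the example cluster this reports zero disconnected nodes, one master, one slave, and surfaces the worker's local-db-service: DataMigrationFailed line for follow-up.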
- On a WildFire appliance controller node, run:
admin@WF-500> show cluster membership
A healthy WildFire cluster displays the following details:
The following example shows a WildFire controller operating in a healthy state:
- General WildFire appliance configuration details, such as the cluster name, the IP address of the appliance, and the serial number.
- Server role indicates whether or not the WildFire appliance is operating as a cluster server. Cluster servers operate additional infrastructure applications and services. You can have a maximum of three servers per cluster.
- Node mode describes the role of a WildFire appliance. WildFire appliances enrolled in a cluster can be either a controller or worker node depending on your configuration and the number of nodes in your deployment. Appliances that are not part of a cluster display stand_alone.
- The node operates the following services based on its cluster role (as reflected in the Services field of the output):
  - Controller Node (Active or Passive)—infra, signature, wfcore, wfpc
  - Worker Node—infra, wfcore, wfpc
- HA priority displays primary or secondary depending on the appliance’s configured role; however, this setting is independent of the current HA state of the appliance.
- Work queue status shows the sample analysis backlog as well as samples that are currently being analyzed. This also indicates how much load a particular WildFire appliance receives.
Service Summary:  wfpc signature
Cluster name:     qa-auto-0ut1
Address:          22.214.171.124
Host name:        qa15
Node name:        wfpc-009701000026-internal
Serial number:    009701000026
Node mode:        controller
Server role:      True
HA priority:      secondary
Last changed:     Fri, 22 Sep 2017 11:30:47 -0700
Services:         wfcore signature wfpc infra
Monitor status:
    Serf Health Status: passing
        Agent alive and reachable
    Service 'infra' check: passing
Application status:
    global-db-service: ReadyLeader
    wildfire-apps-service: Ready
    global-queue-service: ReadyLeader
    wildfire-management-service: Done
    siggen-db: Ready
Work queue status:
    sample anaysis queued: 0
    sample anaysis running: 0
    sample copy queued: 0
    sample copy running: 0
Diag report:
    126.96.36.199: reported leader '188.8.131.52', age 0.
    184.108.40.206: local node passed sanity check.
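The work queue counters are the quickest way to gauge load on a node. The sketch below is illustrative (not part of the WildFire CLI): it assumes the membership output has been saved to a local file (simulated here with a heredoc, reproducing the counter labels exactly as the appliance emits them) and totals the queued samples.

```shell
#!/bin/sh
# Illustrative backlog check on saved "show cluster membership" output.
# Simulated with a heredoc; in practice capture the command output to membership.txt.
cat > membership.txt <<'EOF'
Node mode: controller
Server role: True
HA priority: secondary
Work queue status:
    sample anaysis queued: 0
    sample anaysis running: 0
    sample copy queued: 0
    sample copy running: 0
EOF

# Sum the "queued" counters; a persistently large total suggests the node
# is receiving more samples than it can process.
backlog=$(awk -F': ' '/queued/ {total += $2} END {print total+0}' membership.txt)
echo "queued_total=$backlog"

# Report the node's cluster role alongside the backlog figure.
mode=$(sed -n 's/^Node mode: //p' membership.txt)
echo "node_mode=$mode"
```

For the healthy example above, the queued total is 0 on a controller node; comparing this figure across nodes shows how evenly sample load is distributed.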