View WildFire Cluster Status Using the CLI
To confirm that your WildFire cluster is running within normal operating parameters, run the following show commands:
- show cluster controller—Displays the status of active/passive WildFire cluster nodes.
- show cluster all-peers—Displays information about all of the members in a given WildFire cluster.
- show cluster membership—Displays WildFire appliance information for cluster and standalone nodes.
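If you prefer to collect these outputs from a management host rather than an interactive session, the following sketch shows one possible approach over SSH. It is not part of the WildFire CLI or this procedure: it assumes the netmiko library and its paloalto_panos driver work against your appliance, and the hostname and credentials shown are placeholders.

```python
# Sketch: collect WildFire cluster status output over SSH.
# Assumes netmiko is installed and its paloalto_panos driver can drive the
# appliance CLI; the hostname and credentials below are placeholders.
from netmiko import ConnectHandler

COMMANDS = [
    "show cluster controller",
    "show cluster all-peers",
    "show cluster membership",
]

def collect_cluster_status(host, username, password):
    """Return a dict mapping each show command to its raw output."""
    conn = ConnectHandler(
        device_type="paloalto_panos",
        host=host,
        username=username,
        password=password,
    )
    try:
        return {cmd: conn.send_command(cmd) for cmd in COMMANDS}
    finally:
        conn.disconnect()

if __name__ == "__main__":
    outputs = collect_cluster_status("wf-500.example.com", "admin", "password")
    for cmd, text in outputs.items():
        print(f"=== {cmd} ===\n{text}\n")
```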
- On a WildFire appliance controller node, run:

  ```
  admin@WF-500(active-controller)> show cluster controller
  ```

  A healthy WildFire cluster displays the following details:
- The name of the cluster the appliance has been enrolled in and its configured role.
- The K/V API online status indicates True when the internal cluster interface is functioning properly. A status of False can indicate an improperly configured node or a network issue.
- Task processing indicates True on active-controllers (primary) and False on passive-controllers (backup).
- The IP addresses for all WildFire nodes in the cluster are listed under App Service Avail.
- Up to three Good Core Servers. The number of Good Core Servers depends on the number of nodes running in the cluster. If you have a third node operating within a cluster, it automatically gets configured as a server node to maximize cluster integrity.
- No Suspended Nodes.
- The Current Task provides background information about cluster-level operations, such as reboot, decommission, and suspend tasks.
The following example shows the output from an active controller configured in a 2-node WildFire cluster operating in a healthy state:

```
Cluster name: WildFire_Cluster
K/V API online: True
Task processing: on
Active Controller: True
DNS Advertisement:
App Service DNS Name:
App Service Avail: 2.2.2.14, 2.2.2.15
Core Servers:
    009701000026: 2.2.2.15
    009701000043: 2.2.2.14
Good Core Servers: 2
Suspended Nodes:
Current Task: * Showing latest completed task
    Request: startup from qa14 (009701000043/80025) at 2017-09-18 21:43:34 UTC null
    Response: permit by qa15 at 2017-09-18 21:45:15 UTC 1/2 core servers available.
    Finished: success at 2017-09-18 21:43:47 UTC
```
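If you want to script the checks described above, a minimal sketch such as the following can parse captured show cluster controller output and flag the common problem signs. The field names match the sample output in this section; the expected core server count is an assumption you should set for your deployment.

```python
# Sketch: sanity-check captured "show cluster controller" output.
# The expected core server count is an assumption for a 2-node cluster;
# adjust it to match your deployment.
def check_controller_output(text, expected_good_core_servers=2):
    """Return a list of warnings found in the command output."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()

    warnings = []
    if fields.get("K/V API online") != "True":
        warnings.append("K/V API is not online; check the internal cluster interface.")
    good_servers = int(fields.get("Good Core Servers") or 0)
    if good_servers < expected_good_core_servers:
        warnings.append(
            f"Only {good_servers} good core servers (expected {expected_good_core_servers})."
        )
    if fields.get("Suspended Nodes"):
        warnings.append(f"Suspended nodes present: {fields['Suspended Nodes']}")
    return warnings
```

Running it against the sample output above returns no warnings; a False K/V API status, a low Good Core Servers count, or any suspended node produces one.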
- On a WildFire appliance controller node, run:

  ```
  admin@WF-500> show cluster all-peers
  ```

  A healthy WildFire cluster displays the following details:
- The general information about the WildFire nodes in the cluster is listed under Address, Mode, Server, Node, and Name.
- All WildFire cluster nodes are running the wfpc service, an internal file sample analysis service.
- Nodes operating as an active controller, passive controller, or server display Server role applied next to Status. If a node has been configured to be a server but isn’t operating as a server, the status displays Server role assigned. In a 3-node deployment, the third server node is categorized as a worker.
- Recently removed nodes might still be listed but display as Disconnected. It can take several days for a disconnected node to be removed from the cluster node list.
- The active controller node displays siggen-db: ReadyMaster.
- The passive controller node displays siggen-db: ReadySlave. For more information about general WildFire application and service status details, refer to WildFire Application States and WildFire Service States.
- The Diag report displays cluster system events and error messages:

  | Error Message | Description |
  |---|---|
  | Unreachable | The node was never reachable from the cluster controller. |
  | Unexpected member | The node is not part of the cluster configuration. The node might have been recently deleted from the cluster configuration, or this might be the result of a misconfiguration. |
  | Left cluster | The node is no longer reachable from the cluster controller. |
  | Incorrect cluster name | The node has an incorrectly configured cluster name. |
  | Connectivity unstable | The node’s connection to the cluster controller is unstable. |
  | Connectivity lost | The node’s connectivity to the cluster controller has been lost. |
  | Unexpected server serial number | The unexpected presence of a server node has been detected. |
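Because Diag report lines also appear in show cluster all-peers and show cluster membership output, a small lookup such as the following sketch can annotate captured diag lines with the descriptions from this table. The matching logic is a simple substring check and is only an illustration.

```python
# Sketch: annotate Diag report lines using the error messages documented above.
# The message strings come from the table in this section; matching is a
# simple substring check and may need tuning for other releases.
DIAG_ERRORS = {
    "Unreachable": "The node was never reachable from the cluster controller.",
    "Unexpected member": "The node is not part of the cluster configuration.",
    "Left cluster": "The node is no longer reachable from the cluster controller.",
    "Incorrect cluster name": "The node has an incorrectly configured cluster name.",
    "Connectivity unstable": "The node's connection to the cluster controller is unstable.",
    "Connectivity lost": "The node's connectivity to the cluster controller has been lost.",
    "Unexpected server serial number": "The unexpected presence of a server node has been detected.",
}

def annotate_diag_report(text):
    """Yield (diag_line, explanation) pairs for lines in the Diag report block."""
    in_diag = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Diag report"):
            in_diag = True
            continue
        if in_diag and stripped:
            explanation = next(
                (desc for msg, desc in DIAG_ERRORS.items() if msg.lower() in stripped.lower()),
                None,
            )
            yield stripped, explanation
```

For the healthy sample output in this section, no diag line matches a documented error message, so each line is paired with None.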
The following example shows a 3-node WildFire cluster operating in a healthy state:

```
Address    Mode        Server  Node  Name
-------    ----        ------  ----  ---------
2.2.2.15   controller  Self    True  qa15
    Service: infra signature wfcore wfpc
    Status: Connected, Server role applied
    Changed: Mon, 18 Sep 2017 15:37:40 -0700
    WF App:
        global-db-service: JoinedCluster
        wildfire-apps-service: Stopped
        global-queue-service: JoinedCluster
        wildfire-management-service: Done
        siggen-db: ReadySlave
2.2.2.14   controller  Peer    True  qa14
    Service: infra signature wfcore wfpc
    Status: Connected, Server role applied
    Changed: Mon, 18 Sep 2017 15:37:40 -0700
    WF App:
        global-db-service: commit-lock
        wildfire-apps-service: Stopped
        global-queue-service: ReadyStandalone
        wildfire-management-service: Done
        siggen-db: ReadyMaster
2.2.2.16   worker              True  wf6240
    Service: infra wfcore wfpc
    Status: Connected, Server role applied
    Changed: Wed, 22 Feb 2017 11:11:15 -0800
    WF App:
        wildfire-apps-service: Ready
        global-db-service: JoinedCluster
        global-queue-service: JoinedCluster
        local-db-service: DataMigrationFailed
Diag report:
    2.2.2.14: reported leader '2.2.2.15', age 0.
    2.2.2.15: local node passed sanity check.
```
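The following sketch applies the peer checks described above to captured show cluster all-peers output, flagging nodes that show Disconnected or that have a server role assigned but not applied. It assumes the layout shown in the sample output, where each node row begins with an IPv4 address.

```python
# Sketch: flag problem signs in captured "show cluster all-peers" output.
# Parsing assumes the layout of the sample output in this section, where
# each node row begins with an IPv4 address.
def check_all_peers_output(text):
    """Return a list of warnings about cluster peers."""
    warnings = []
    current_node = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and stripped.split()[0].count(".") == 3:
            # A row whose first token looks like an IPv4 address introduces a node.
            current_node = stripped.split()[0]
        elif stripped.startswith("Status:") and current_node:
            status = stripped.partition(":")[2].strip()
            if "Disconnected" in status:
                warnings.append(f"{current_node} is disconnected from the cluster.")
            elif "Server role assigned" in status:
                warnings.append(f"{current_node} has the server role assigned but not yet applied.")
    return warnings
```

Running it against the healthy sample output returns an empty list.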
- On a WildFire appliance controller node, run:

  ```
  admin@WF-500> show cluster membership
  ```

  A healthy WildFire cluster displays the following details:
- The general WildFire appliance configuration details, such as the cluster name, the IP address of the appliance, and the serial number.
- Server role indicates whether or not the WildFire appliance is operating as a cluster server. Cluster servers operate additional infrastructure applications and services. You can have a maximum of three servers per cluster.
- Node mode describes the role of a WildFire appliance. WildFire appliances enrolled in a cluster can be either a controller or a worker node, depending on your configuration and the number of nodes in your deployment. Appliances that are not part of a cluster display stand_alone.
- Operates the following Services based on the cluster node role:

  | Node Type | Services Operating on the Node |
  |---|---|
  | Controller Node (Active or Passive) | infra, wfpc, signature, wfcore |
  | Server Node | infra, wfpc, wfcore |
  | Worker Node | infra, wfpc |
- HA priority displays primary or secondary depending on the appliance’s configured role; however, this setting is independent of the current HA state of the appliance.
- Work queue status shows the sample analysis backlog as well as samples that are currently being analyzed. This also indicates how much load a particular WildFire appliance receives.
For more information about WildFire application and service status details, refer to WildFire Application States and WildFire Service States.

The following example shows a WildFire controller operating in a healthy state:

```
Service Summary: wfpc signature
Cluster name: qa-auto-0ut1
Address: 2.2.2.15
Host name: qa15
Node name: wfpc-009701000026-internal
Serial number: 009701000026
Node mode: controller
Server role: True
HA priority: secondary
Last changed: Fri, 22 Sep 2017 11:30:47 -0700
Services: wfcore signature wfpc infra
Monitor status:
    Serf Health Status: passing
        Agent alive and reachable
    Service 'infra' check: passing
Application status:
    global-db-service: ReadyLeader
    wildfire-apps-service: Ready
    global-queue-service: ReadyLeader
    wildfire-management-service: Done
    siggen-db: Ready
Work queue status:
    sample anaysis queued: 0
    sample anaysis running: 0
    sample copy queued: 0
    sample copy running: 0
Diag report:
    2.2.2.14: reported leader '2.2.2.15', age 0.
    2.2.2.15: local node passed sanity check.
```
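To track appliance load over time, a sketch such as the following can pull the Work queue status counters out of captured show cluster membership output. The queue field names (including the anaysis spelling) match the sample above; the backlog threshold is an arbitrary example, not a documented limit.

```python
# Sketch: summarize the work queue from "show cluster membership" output.
# The queue field names (including the "anaysis" spelling) match the sample
# output above; the backlog threshold is an arbitrary example value.
import re

def summarize_work_queue(text, backlog_threshold=100):
    """Return (counters, warnings) parsed from the Work queue status block."""
    counters = {}
    in_queue_block = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Work queue status"):
            in_queue_block = True
            continue
        if in_queue_block:
            match = re.match(r"(.+?):\s*(\d+)$", stripped)
            if match:
                counters[match.group(1)] = int(match.group(2))
            else:
                break  # end of the work queue block (for example, "Diag report:")
    warnings = []
    queued = sum(v for k, v in counters.items() if "queued" in k)
    if queued > backlog_threshold:
        warnings.append(f"Sample backlog is {queued}; the appliance may be overloaded.")
    return counters, warnings
```

Against the sample output above, this returns all-zero counters and no warnings.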