Remove a Node from a Cluster Locally
You can remove nodes from a cluster using the
local CLI. The procedure to remove a node is different in a two-node
cluster than in a cluster with three or more nodes.
- Remove a worker node from a cluster
with three or more nodes.
- Decommission the worker node from the worker
node’s CLI:
admin@WF-500> request cluster decommission start
The decommission command only works with clusters that have three or more nodes. Do not use decommission to remove a node in a two-node cluster.
- Confirm that decommissioning the node was successful:
admin@WF-500> show cluster membership
This command reports decommission: success after the worker node is removed from the cluster. If the command does not display a successful decommission, wait a few minutes to allow the decommission to finish and then run the command again (a scripted version of this check is sketched after this procedure).
- Delete the cluster configuration from the worker node’s CLI:
admin@WF-500> configure
admin@WF-500# delete deviceconfig cluster
- Commit the configuration:
admin@WF-500# commit
- Check that all processes are running:
admin@WF-500> show system software status
- Remove the worker node from the controller node’s
worker list:
admin@WF-500(active-controller)# delete deviceconfig cluster mode controller worker-list <worker-node-ip>
- Commit the configuration:
admin@WF-500(active-controller)# commit
- On the controller node, check to ensure that the worker
node was removed:
admin@WF-500(active-controller)> show cluster all-peers
The worker node you removed does not appear in the list of cluster nodes.
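If you script this procedure, a check like the following can stand in for the manual polling in the confirmation step above. This is a minimal sketch, not part of the product documentation: it assumes SSH access to the worker node, the Netmiko library with its paloalto_panos driver, and placeholder addresses and credentials, and it only looks for the decommission: success string described in the procedure.

```python
# Hypothetical helper: polls "show cluster membership" on the worker node
# until the output contains "decommission: success" (the string the
# procedure above tells you to look for). Connection details are
# placeholders; paloalto_panos is Netmiko's PAN-OS CLI driver.
import time

from netmiko import ConnectHandler

WORKER_NODE = {
    "device_type": "paloalto_panos",
    "host": "203.0.113.10",      # placeholder: worker-node management IP
    "username": "admin",
    "password": "changeme",      # placeholder credential
}


def wait_for_decommission(timeout_s: int = 900, poll_s: int = 60) -> bool:
    """Return True once the worker node reports decommission: success."""
    conn = ConnectHandler(**WORKER_NODE)
    try:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            output = conn.send_command("show cluster membership")
            if "decommission: success" in output:
                return True
            time.sleep(poll_s)   # decommissioning can take a few minutes
        return False
    finally:
        conn.disconnect()


if __name__ == "__main__":
    ok = wait_for_decommission()
    print("decommission: success" if ok else "decommission still pending")
```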
- Remove a controller node from a two-node cluster. Each cluster must have two controller nodes in a high availability configuration under normal conditions. However, maintenance or swapping out controller nodes may require removing a controller node from a cluster using the CLI:
- Suspend the controller node you want to
remove:
admin@WF-500(passive-controller)> debug cluster suspend on
- On the controller node you want to remove, delete
the high-availability configuration. This example shows removing
the controller backup node:
admin@WF-500(passive-controller)> configure
admin@WF-500(passive-controller)# delete deviceconfig high-availability
- Delete the cluster configuration:
admin@WF-500(passive-controller)# delete deviceconfig cluster
- Commit the configuration:
admin@WF-500(passive-controller)# commit
- Wait for services to come back up. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up. The Node mode should be stand_alone.
- On the remaining cluster node, check to ensure that
the node was removed:
admin@WF-500(active-controller)> show cluster all-peers
The controller node you removed does not appear in the list of cluster nodes.
- If you have another WildFire appliance ready, add it to the cluster as soon as possible to restore high availability (Configure a Cluster and Add Nodes Locally). If you do not have another WildFire appliance ready to replace the removed cluster node, remove the high availability and cluster configurations from the remaining cluster node, because one-node clusters are not recommended and do not provide high availability. It is better to manage a single WildFire appliance as a standalone appliance, not as a one-node cluster. To remove the high availability and cluster configurations from the remaining node (in this example, the primary controller node):
admin@WF-500(active-controller)> configure
admin@WF-500(active-controller)# delete deviceconfig high-availability
admin@WF-500(active-controller)# delete deviceconfig cluster
admin@WF-500(active-controller)# commit
Wait for services to come back up. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up. The Node mode should be stand_alone.
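Similarly, the final verification on the remaining appliance can be scripted. The sketch below is a minimal example under the same assumptions (Netmiko's paloalto_panos driver, placeholder connection details); it only checks for the stand_alone node mode and the Ready service state that the procedure above tells you to look for, and exact output formatting varies by release.

```python
# Hypothetical helper: confirms that the remaining appliance has returned to
# standalone operation after the HA and cluster configuration were deleted.
# It runs "show cluster membership" and looks for the two indicators named
# in the procedure above: a Node mode of stand_alone and services reported
# as Ready in the Application status. Connection details are placeholders.
from netmiko import ConnectHandler

REMAINING_NODE = {
    "device_type": "paloalto_panos",
    "host": "203.0.113.20",      # placeholder: remaining appliance management IP
    "username": "admin",
    "password": "changeme",      # placeholder credential
}


def is_standalone_and_ready() -> bool:
    """Return True when the node reports stand_alone mode and Ready services."""
    conn = ConnectHandler(**REMAINING_NODE)
    try:
        output = conn.send_command("show cluster membership")
    finally:
        conn.disconnect()
    # Coarse string checks only; review the full command output if this
    # returns False, since service names and layout differ by release.
    return "stand_alone" in output and "Ready" in output


if __name__ == "__main__":
    print("standalone and ready" if is_standalone_and_ready() else "not ready yet")
```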