You can remove nodes from a cluster using the local CLI. The procedure for removing a node differs between a two-node cluster and a cluster with three or more nodes.
Remove a worker node from a cluster with three or more nodes.
Decommission the worker node from the worker node’s CLI:
admin@WF-500> request cluster decommission start
The decommission command only works in clusters that have three or more nodes. Do not use decommission to remove a node from a two-node cluster.
Confirm that decommissioning the node was successful:
admin@WF-500> show cluster membership
This command reports decommission: success after the worker node is removed from the cluster. If the command does not report a successful decommission, wait a few minutes to allow the decommission to finish, and then run the command again.
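If you script this wait-and-retry check (for example, over an SSH session to the appliance), it can be sketched as follows. `run_cli` is a hypothetical caller-supplied helper that executes a CLI command and returns its output as text; the only output detail taken from this procedure is the decommission: success marker.

```python
import time


def decommission_succeeded(membership_output: str) -> bool:
    """Return True when 'show cluster membership' output reports a
    successful decommission."""
    return "decommission: success" in membership_output


def wait_for_decommission(run_cli, attempts: int = 10, delay: int = 60) -> bool:
    """Poll 'show cluster membership' until the decommission completes
    or the attempts are exhausted.

    run_cli is a caller-supplied function (e.g. an SSH wrapper) that runs
    a WildFire CLI command and returns its output as a string.
    """
    for _ in range(attempts):
        if decommission_succeeded(run_cli("show cluster membership")):
            return True
        time.sleep(delay)  # give the decommission a few minutes to finish
    return False
```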
Delete the cluster configuration from the worker node’s CLI:
admin@WF-500> configure
admin@WF-500# delete deviceconfig cluster
Commit the configuration:
admin@WF-500# commit
Check that all processes are running:
admin@WF-500> show system software status
Remove the worker node from the controller node’s worker list:
On the controller node, check that the worker node was removed:
admin@WF-500(active-controller)> show cluster all-peers
The worker node you removed does not appear in the list of cluster nodes.
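If you automate this verification, the check against captured show cluster all-peers output can be sketched as below; this is a minimal sketch that assumes you have the command output as text, and the node names in the test data are hypothetical.

```python
def node_removed(all_peers_output: str, node_id: str) -> bool:
    """Return True if node_id (a hostname or IP address) no longer
    appears anywhere in 'show cluster all-peers' output."""
    return all(node_id not in line for line in all_peers_output.splitlines())
```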
Remove a controller node from a two-node cluster.
Under normal conditions, each cluster must have two controller nodes in a high availability configuration. However, maintenance or swapping out controller nodes may require you to remove a controller node from a cluster using the CLI:
Suspend the controller node you want to remove:
admin@WF-500(passive-controller)> debug cluster suspend on
On the controller node you want to remove, delete the high availability configuration. This example shows removing the backup controller node:
admin@WF-500(passive-controller)> configure
admin@WF-500(passive-controller)# delete deviceconfig high-availability
Delete the cluster configuration:
admin@WF-500(passive-controller)# delete deviceconfig cluster
Commit the configuration:
admin@WF-500(passive-controller)# commit
Wait for services to come back up. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up. The Node mode should be stand_alone.
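If you script this readiness check, a heuristic scan of the show cluster membership output might look like the following. The exact field layout of the output is an assumption, so adjust the matching to what your appliance actually prints; this sketch spot-checks the siggen-db state and the node mode named above.

```python
def node_is_standalone_and_ready(membership_output: str) -> bool:
    """Heuristic check on 'show cluster membership' output: the node mode
    is stand_alone and the siggen-db application reports Ready.
    (The line layout is an assumption; adapt to the real output.)"""
    mode_ok = False
    siggen_ready = False
    for line in membership_output.splitlines():
        if "stand_alone" in line:
            mode_ok = True
        if "siggen-db" in line and "Ready" in line:
            siggen_ready = True
    return mode_ok and siggen_ready
```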
On the remaining cluster node, check that the node was removed:
admin@WF-500(active-controller)> show cluster all-peers
The controller node you removed does not appear in the list of cluster nodes.
If you do not have another WildFire appliance ready to replace the removed cluster node, remove the high availability and cluster configurations from the remaining cluster node, because one-node clusters are not recommended and do not provide high availability. It is better to manage a single WildFire appliance as a standalone appliance, not as a one-node cluster.
To remove the high availability and cluster configurations from the remaining node (in this example, the primary controller node):