Remove a Node from a Cluster Locally

Where Can I Use This?
  • WildFire Appliance
What Do I Need?
  • WildFire License
You can remove nodes from a cluster using the local CLI. The procedure to remove a node is different in a two-node cluster than in a cluster with three or more nodes.
  • Remove a worker node from a cluster with three or more nodes.
    1. Decommission the worker node from the worker node’s CLI:
      admin@WF-500> request cluster decommission start
      The decommission command only works with clusters that have three or more nodes. Do not use decommission to remove a node in a two-node cluster.
    2. Confirm that decommissioning the node was successful:
      admin@WF-500> show cluster membership
      This command reports decommission: success after the worker node is removed from the cluster. If the command does not report a successful decommission, wait a few minutes to allow the decommission to finish and then run the command again.
    3. Delete the cluster configuration from the worker node’s CLI:
      admin@WF-500# delete deviceconfig cluster
    4. Commit the configuration:
      admin@WF-500# commit
    5. Check that all processes are running:
      admin@WF-500> show system software status
    6. Remove the worker node from the controller node’s worker list:
      admin@WF-500(active-controller)# delete deviceconfig cluster mode controller worker-list <worker-node-ip>
    7. Commit the configuration:
      admin@WF-500(active-controller)# commit
    8. On the controller node, verify that the worker node was removed:
      admin@WF-500(active-controller)> show cluster all-peers
      The worker node you removed does not appear in the list of cluster nodes. (A consolidated command sketch follows this procedure.)
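    For quick reference, the following sketch strings the worker-removal steps together as a single CLI session. It is illustrative only: the <worker-node-ip> value is a placeholder, the configure and exit commands are shown simply to indicate moving into and out of configuration mode, and command output is omitted. Run each step and verify the results as described above.
      On the worker node:
      admin@WF-500> request cluster decommission start
      admin@WF-500> show cluster membership
      admin@WF-500> configure
      admin@WF-500# delete deviceconfig cluster
      admin@WF-500# commit
      admin@WF-500# exit
      admin@WF-500> show system software status
      On the active controller node:
      admin@WF-500(active-controller)> configure
      admin@WF-500(active-controller)# delete deviceconfig cluster mode controller worker-list <worker-node-ip>
      admin@WF-500(active-controller)# commit
      admin@WF-500(active-controller)# exit
      admin@WF-500(active-controller)> show cluster all-peers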
  • Remove a controller node from a two-node cluster.
    Each cluster must have two controller nodes in a high availability configuration under normal conditions. However, maintenance or swapping out controller nodes may require removing a controller node from a cluster using the CLI:
    1. Suspend the controller node you want to remove:
      admin@WF-500(passive-controller)> debug cluster suspend on
    2. On the controller node you want to remove, delete the high-availability configuration. This example shows removing the controller backup node:
      admin@WF-500(passive-controller)> configure
      admin@WF-500(passive-controller)# delete deviceconfig high-availability
    3. Delete the cluster configuration:
      admin@WF-500(passive-controller)# delete deviceconfig cluster
    4. Commit the configuration:
      admin@WF-500(passive-controller)# commit
    5. Wait for services to come back up. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up. The Node mode should be stand_alone.
    6. On the remaining cluster node, verify that the node was removed:
      admin@WF-500(active-controller)> show cluster all-peers
      The controller node you removed does not appear in the list of cluster nodes.
    7. If you have another WildFire appliance ready, add it to the cluster as soon as possible to restore high availability (Configure a Cluster and Add Nodes Locally).
      If you do not have another WildFire appliance ready to replace the removed cluster node, you should remove the high availability and cluster configurations from the remaining cluster node because one-node clusters are not recommended and do not provide high availability. It is better to manage a single WildFire appliance as a standalone appliance, not as a one-node cluster.
      To remove the high availability and cluster configurations from the remaining node (in this example, the primary controller node):
      admin@WF-500(active-controller)> configure
      admin@WF-500(active-controller)# delete deviceconfig high-availability
      admin@WF-500(active-controller)# delete deviceconfig cluster
      admin@WF-500(active-controller)# commit
      Wait for services to come back up. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up. The Node mode should be stand_alone, as sketched in the example output below.
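      The following excerpt is a rough sketch of what to look for in the show cluster membership output at this point. It shows only the fields this procedure references (Node mode, Application status, and siggen-db); the surrounding fields, exact spacing, service names, and prompt vary by appliance and software release, so treat the layout as illustrative rather than exact.
      admin@WF-500> show cluster membership
      ...
      Node mode:             stand_alone
      ...
      Application status:
                             siggen-db: Ready
      ...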
