Configure a Cluster and Add Nodes Locally
Where Can I Use This?
  • WildFire Appliance
What Do I Need?
  • WildFire License
When you add nodes to a cluster, the cluster automatically sets up communication between nodes based on the interfaces you configure for the controller node.
  1. Ensure that each WildFire appliance that you want to add to the cluster is running PAN-OS 8.0.1 or later.
    On each WildFire appliance, run:
    admin@WF-500>
    show system info | match version
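    The output includes the software version. The exact fields vary by release, but the relevant line looks similar to the following (field name and version shown for illustration only):
      sw-version: 8.0.1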
  2. Verify that the WildFire appliances are not analyzing samples and are in standalone state (not members of another cluster).
    1. On each appliance, display whether the appliance is analyzing samples:
      admin@WF-500>
      show wildfire global sample-analysis
      No sample should show as pending. All samples should be in a finished state. If samples are pending, wait for them to finish analysis. Pending samples display separately from malicious and non-malicious samples. Finish Date displays the date and time the analysis finished.
    2. On each appliance, verify that all processes are running:
      admin@WF-500>
      show system software status
    3. On each appliance, check to ensure the appliance is in a standalone state and does not already belong to a cluster:
      admin@WF-500>
      show cluster membership
      Service Summary:    wfpc signature
      Cluster name:
      Address:            10.10.10.100
      Host name:          WF-500
      Node name:          wfpc-000000000000-internal
      Serial number:      000000000000
      Node mode:          stand_alone
      Server role:        True
      HA priority:
      Last changed:       Mon, 06 Mar 2017 16:34:25 -0800
      Services:           wfcore signature wfpc infra
      Monitor status:
                          Serf Health Status: passing
                            Agent alive and reachable
      Application status:
                          global-db-service: ReadyStandalone
                          wildfire-apps-service: Ready
                          global-queue-service: ReadyStandalone
                          wildfire-management-service: Done
                          siggen-db: ReadyMaster
      Diag report:
                          10.10.10.100: reported leader '10.10.10.100', age 0.
                          10.10.10.100: local node passed sanity check.
      The Node mode and Application status lines show that the node is in standalone mode and is ready to be converted from a standalone appliance to a cluster node.
      The 12-digit serial number in these examples (000000000000) is a generic example, not a real serial number. WildFire appliances in your network have unique, real serial numbers.
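    Before continuing, you can repeat the three checks on every appliance in one pass; these are the same commands shown in the sub-steps above:
      admin@WF-500> show wildfire global sample-analysis
      admin@WF-500> show system software status
      admin@WF-500> show cluster membership
    Proceed only when no samples are pending, all processes are running, and the Node mode field shows stand_alone.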
  3. Configure the primary controller node.
    This includes configuring the node as the primary controller of the HA pair, enabling HA, and defining the interfaces the appliance uses for the HA control link and for cluster communication and management. A consolidated example follows the sub-steps.
    1. Enable high availability and configure the control link interface connection to the controller backup node, for example, on interface eth3:
      admin@WF-500#
      set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <secondary-node-eth3-ip-address>
    2. Configure the appliance as the primary controller node:
      admin@WF-500#
      set deviceconfig high-availability election-option priority primary
    3. (Optional) Configure the backup high-availability interface between the controller node and the controller backup node, for example, on the management interface:
      admin@WF-500#
      set deviceconfig high-availability interface ha1-backup port management peer-ip-address <secondary-node-management-ip-address>
    4. Configure the dedicated interface for communication and management within the cluster, including specifying the cluster name and setting the node role to controller node:
      admin@WF-500#
      set deviceconfig cluster cluster-name <name> interface eth2 mode controller
      This example uses eth2 as the dedicated cluster communication port.
      The cluster name must be a valid sub-domain name with a maximum length of 63 characters. Only lowercase letters, numbers, hyphens, and periods are allowed, and the name cannot begin or end with a hyphen or period.
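    Taken together, the primary controller configuration from the preceding sub-steps looks similar to the following sketch. The interface names (eth3 for the HA control link, eth2 for cluster communication), the cluster name mycluster, and the placeholder peer addresses are examples; substitute the values for your deployment:
      admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <secondary-node-eth3-ip-address>
      admin@WF-500# set deviceconfig high-availability election-option priority primary
      admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address <secondary-node-management-ip-address>
      admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode controller
    The cluster name mycluster matches the name shown in the verification output later in this procedure; any name that meets the naming rules above works.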
  4. Configure the controller backup node.
    This includes configuring the node as the backup controller of the HA pair, enabling HA, and defining the interfaces the appliance uses for the HA control link and for cluster communication and management. A consolidated example follows the sub-steps.
    1. Enable high availability and configure the control link interface connection to the primary controller node on the same interface used on the primary controller node (eth3 in this example):
      admin@WF-500#
      set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <primary-node-eth3-ip-address>
    2. Configure the appliance as the controller backup node:
      admin@WF-500#
      set deviceconfig high-availability election-option priority secondary
    3. (Recommended) Configure the backup high-availability interface between the controller backup node and the controller node, for example, on the management interface:
      admin@WF-500#
      set deviceconfig high-availability interface ha1-backup port management peer-ip-address <primary-node-management-ip-address>
    4. Configure the dedicated interface for communication and management within the cluster, including specifying the cluster name and setting the node role to controller node:
      admin@WF-500#
      set deviceconfig cluster cluster-name <name> interface eth2 mode controller
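    The backup controller configuration mirrors the primary controller; only the election priority and the peer addresses differ. A consolidated sketch, using the same example interfaces and cluster name:
      admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <primary-node-eth3-ip-address>
      admin@WF-500# set deviceconfig high-availability election-option priority secondary
      admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address <primary-node-management-ip-address>
      admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode controller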
  5. Commit the configurations on both controller nodes.
    On each controller node:
    admin@WF-500#
    commit
    Committing the configuration on both controller nodes forms a two-node cluster.
  6. Verify the configuration on the primary controller node.
    On the primary controller node:
    admin@WF-500(active-controller)>
    show cluster membership
    Service Summary:    wfpc signature
    Cluster name:       mycluster
    Address:            10.10.10.100
    Host name:          WF-500
    Node name:          wfpc-000000000000-internal
    Serial number:      000000000000
    Node mode:          controller
    Server role:        True
    HA priority:        primary
    Last changed:       Sat, 04 Mar 2017 12:52:38 -0800
    Services:           wfcore signature wfpc infra
    Monitor status:
                        Serf Health Status: passing
                          Agent alive and reachable
    Application status:
                        global-db-service: JoinedCluster
                        wildfire-apps-service: Ready
                        global-queue-service: JoinedCluster
                        wildfire-management-service: Done
                        siggen-db: ReadyMaster
    Diag report:
                        10.10.10.110: reported leader '10.10.10.100', age 0.
                        10.10.10.100: local node passed sanity check.
    The prompt (active-controller) and the Application status lines show that the node is in controller mode, is ready, and is the primary controller node.
  7. Verify the configuration on the secondary controller node.
    On the secondary controller node:
    admin@WF-500(passive-controller)>
    show cluster membership
    Service Summary:    wfpc signature
    Cluster name:       mycluster
    Address:            10.10.10.110
    Host name:          WF-500
    Node name:          wfpc-000000000000-internal
    Serial number:      000000000000
    Node mode:          controller
    Server role:        True
    HA priority:        secondary
    Last changed:       Fri, 02 Dec 2016 16:25:57 -0800
    Services:           wfcore signature wfpc infra
    Monitor status:
                        Serf Health Status: passing
                          Agent alive and reachable
    Application status:
                        global-db-service: JoinedCluster
                        wildfire-apps-service: Ready
                        global-queue-service: JoinedCluster
                        wildfire-management-service: Done
                        siggen-db: ReadySlave
    Diag report:
                        10.10.10.110: reported leader '10.10.10.100', age 0.
                        10.10.10.110: local node passed sanity check.
    The prompt (passive-controller) and the Application status lines show that the node is in controller mode, is ready, and is the backup controller node.
  8. Test the node configuration.
    Verify that the controller node API keys are viewable globally:
    admin@WF-500(passive-controller)> show wildfire global api-keys all
    Service Summary:    wfpc signature
    Cluster name:       mycluster
    The API keys for both appliances should be viewable.
  9. Manually synchronize the high availability configurations on the controller nodes.
    Synchronizing the controller nodes ensures that the configurations match. You should only need to do this once; after the initial synchronization, the controller nodes keep their configurations synchronized and you do not need to synchronize them again.
    1. On the primary controller node, synchronize the high availability configuration to the remote peer controller node:
      admin@WF-500(active-controller)>
      request high-availability sync-to-remote running-config
      If there is a mismatch between the primary controller node’s configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
    2. Commit the configuration:
      admin@WF-500#
      commit
  10. Verify that the cluster is functioning properly.
    To verify firewall-related information, you must first connect at least one firewall to a cluster node by selecting Device > Setup > WildFire and editing the General Settings to point to the node.
    1. Display the cluster peers to ensure that both controllers are cluster members:
      admin@WF-500(active-controller)>
      show cluster all-peers
    2. From either controller node, display the API keys from both nodes (if you created API keys):
      admin@WF-500(active-controller)>
      show wildfire global api-keys all
    3. Access any sample from either controller node:
      admin@WF-500(active-controller)>
      show wildfire global sample-status sha256 equal <value>
    4. Verify that firewalls can register and upload files to both nodes, and confirm that the firewall is successfully forwarding samples.
    5. Verify that both nodes can download and analyze files.
    6. Verify that all files analyzed after the cluster was created show two storage locations, one on each node.
  11. (Optional) Configure a worker node and add it to the cluster.
    Worker nodes use the controller node’s settings so that the cluster has a consistent configuration. You can add up to 18 worker nodes to a cluster for a total of 20 nodes in a cluster.
    1. On the primary controller node, add the worker to the controller node’s worker list:
      admin@WF-500(active-controller)>
      configure
      admin@WF-500(active-controller)#
      set deviceconfig cluster mode controller worker-list <ip>
      The <ip> is the cluster management interface IP address of the worker node you want to add to the cluster. Use separate commands to add each worker node to the cluster; the consolidated sketch at the end of this step shows the pattern.
    2. Commit the configuration on the controller node:
      admin@WF-500(active-controller)#
      commit
    3. On the WildFire appliance you want to convert to a cluster worker node, configure the cluster to join, set the cluster communications interface, and place the appliance in worker mode:
      admin@WF-500>
      configure
      admin@WF-500#
      set deviceconfig cluster cluster-name <name> interface eth2 mode worker
      The cluster communications interface must be the same interface specified for intracluster communications on the controller nodes. In this example, eth2 is the interface configured on the controller nodes for cluster communication.
    4. Commit the configuration on the worker node:
      admin@WF-500#
      commit
    5. Wait for all services to come up on the worker node. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up.
    6. On either cluster controller node, check to ensure that the worker node was added:
      admin@WF-500>
      show cluster all-peers
      The worker node you added appears in the list of cluster nodes. If you accidentally added the wrong WildFire appliance to a cluster, you can Remove a Node from a Cluster Locally.
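    End to end, adding a single worker node combines the commands from the preceding sub-steps. In this sketch, the worker's cluster management IP address (10.10.10.120) is an example value; the cluster name and the eth2 interface must match the controller configuration:
      On the primary controller node:
      admin@WF-500(active-controller)> configure
      admin@WF-500(active-controller)# set deviceconfig cluster mode controller worker-list 10.10.10.120
      admin@WF-500(active-controller)# commit
      On the appliance that becomes the worker node:
      admin@WF-500> configure
      admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode worker
      admin@WF-500# commit
    Repeat the worker-list command with a different IP address for each additional worker node (up to 18 workers).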
  12. Verify the configuration on the worker node.
    1. On the worker node, check to ensure that the Node mode field shows that the node is in worker mode:
      admin@WF-500>
      show cluster membership
    2. Verify that firewalls can register on the worker node and that the worker node can download and analyze files.
