Configure a Cluster and Add Nodes Locally
When you add nodes to a cluster, the cluster
automatically sets up communication between nodes based on the interfaces
you configure for the controller node.
- Ensure that each WildFire appliance that you want to add to the cluster is running PAN-OS 8.0.1 or later. On each WildFire appliance, run:
  admin@WF-500> show system info | match version
- Verify that the WildFire appliances are not analyzing samples and are in a standalone state (not members of another cluster).
- On each appliance, display whether the appliance is analyzing samples:
  admin@WF-500> show wildfire global sample-analysis
  No sample should show as pending; all samples should be in a finished state. If samples are pending, wait for them to finish analysis. Pending samples display separately from malicious and non-malicious samples, and Finish Date displays the date and time the analysis finished.
- On each appliance, verify that all processes are running:
  admin@WF-500> show system software status
- On each appliance, check to ensure that the appliance is in a standalone state and does not already belong to a cluster:
  admin@WF-500> show cluster membership
  Service Summary: wfpc signature
  Cluster name:
  Address: 10.10.10.100
  Host name: WF-500
  Node name: wfpc-000000000000-internal
  Serial number: 000000000000
  Node mode: stand_alone
  Server role: True
  HA priority:
  Last changed: Mon, 06 Mar 2017 16:34:25 -0800
  Services: wfcore signature wfpc infra
  Monitor status:
    Serf Health Status: passing
      Agent alive and reachable
  Application status:
    global-db-service: ReadyStandalone
    wildfire-apps-service: Ready
    global-queue-service: ReadyStandalone
    wildfire-management-service: Done
    siggen-db: ReadyMaster
  Diag report:
    10.10.10.100: reported leader '10.10.10.100', age 0.
    10.10.10.100: local node passed sanity check.
  The Node mode and Application status fields show that the node is in standalone mode and is ready to be converted from a standalone appliance to a cluster node.
  The 12-digit serial number in these examples (000000000000) is a generic placeholder, not a real serial number. WildFire appliances in your network have unique, real serial numbers.
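In summary, the readiness checks above amount to running these four commands on each candidate appliance and confirming the PAN-OS version, the absence of pending samples, that all processes are running, and stand_alone mode:
  admin@WF-500> show system info | match version
  admin@WF-500> show wildfire global sample-analysis
  admin@WF-500> show system software status
  admin@WF-500> show cluster membership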
- Configure the primary controller node. This includes configuring the node as the primary controller of the HA pair, enabling HA, and defining the interfaces the appliance uses for the HA control link and for cluster communication and management. A consolidated example follows these steps.
- Enable high availability and configure the control link interface connection to the controller backup node, for example, on interface eth3:
  admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <secondary-node-eth3-ip-address>
- Configure the appliance as the primary controller node:
  admin@WF-500# set deviceconfig high-availability election-option priority primary
- (Optional) Configure the backup high-availability interface between the controller node and the controller backup node, for example, on the management interface:
  admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address <secondary-node-management-ip-address>
- Configure the dedicated interface for communication and management within the cluster, including specifying the cluster name and setting the node role to controller node:
  admin@WF-500# set deviceconfig cluster cluster-name <name> interface eth2 mode controller
  This example uses eth2 as the dedicated cluster communication port. The cluster name must be a valid sub-domain name with a maximum length of 63 characters. Only lowercase letters, numbers, hyphens, and periods are allowed, and the name cannot begin or end with a hyphen or period.
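Putting these steps together, here is a minimal sketch of the primary controller configuration. The values are hypothetical examples: the cluster name is mycluster, the HA control link uses eth3, cluster communication uses eth2, and 10.10.11.110 and 10.10.10.110 stand in for the controller backup node's eth3 and management addresses:
  admin@WF-500> configure
  admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address 10.10.11.110
  admin@WF-500# set deviceconfig high-availability election-option priority primary
  admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address 10.10.10.110
  admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode controller
Note that mycluster satisfies the naming rules: all lowercase, well under 63 characters, with no leading or trailing hyphen or period.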
- Configure the controller backup node. This includes configuring the node as the backup controller of the HA pair, enabling HA, and defining the interfaces the appliance uses for the HA control link and for cluster communication and management. A consolidated example follows these steps.
- Enable high availability and configure the control link interface connection to the primary controller node on the same interface used on the primary controller node (eth3 in this example):
  admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address <primary-node-eth3-ip-address>
- Configure the appliance as the controller backup node:
  admin@WF-500# set deviceconfig high-availability election-option priority secondary
- (Recommended) Configure the backup high-availability interface between the controller backup node and the controller node, for example, on the management interface:
  admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address <primary-node-management-ip-address>
- Configure the dedicated interface for communication and management within the cluster, including specifying the cluster name and setting the node role to controller node:
  admin@WF-500# set deviceconfig cluster cluster-name <name> interface eth2 mode controller
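The corresponding sketch on the controller backup node mirrors the primary, again with hypothetical addresses (10.10.11.100 for the primary controller's eth3 interface and 10.10.10.100 for its management interface):
  admin@WF-500> configure
  admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth3 peer-ip-address 10.10.11.100
  admin@WF-500# set deviceconfig high-availability election-option priority secondary
  admin@WF-500# set deviceconfig high-availability interface ha1-backup port management peer-ip-address 10.10.10.100
  admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode controller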
- Commit the configurations on both controller nodes. On each controller node:
  admin@WF-500# commit
  Committing the configuration on both controller nodes forms a two-node cluster.
- Verify the configuration on the primary controller node. On the primary controller node:
  admin@WF-500(active-controller)> show cluster membership
  Service Summary: wfpc signature
  Cluster name: mycluster
  Address: 10.10.10.100
  Host name: WF-500
  Node name: wfpc-000000000000-internal
  Serial number: 000000000000
  Node mode: controller
  Server role: True
  HA priority: primary
  Last changed: Sat, 04 Mar 2017 12:52:38 -0800
  Services: wfcore signature wfpc infra
  Monitor status:
    Serf Health Status: passing
      Agent alive and reachable
  Application status:
    global-db-service: JoinedCluster
    wildfire-apps-service: Ready
    global-queue-service: JoinedCluster
    wildfire-management-service: Done
    siggen-db: ReadyMaster
  Diag report:
    10.10.10.110: reported leader '10.10.10.100', age 0.
    10.10.10.100: local node passed sanity check.
  The (active-controller) prompt, the Node mode field, and the Application status lines show that the node is in controller mode, is ready, and is the primary controller node.
- Verify the configuration on the secondary controller node. On the secondary controller node:
  admin@WF-500(passive-controller)> show cluster membership
  Service Summary: wfpc signature
  Cluster name: mycluster
  Address: 10.10.10.110
  Host name: WF-500
  Node name: wfpc-000000000000-internal
  Serial number: 000000000000
  Node mode: controller
  Server role: True
  HA priority: secondary
  Last changed: Fri, 02 Dec 2016 16:25:57 -0800
  Services: wfcore signature wfpc infra
  Monitor status:
    Serf Health Status: passing
      Agent alive and reachable
  Application status:
    global-db-service: JoinedCluster
    wildfire-apps-service: Ready
    global-queue-service: JoinedCluster
    wildfire-management-service: Done
    siggen-db: ReadySlave
  Diag report:
    10.10.10.110: reported leader '10.10.10.100', age 0.
    10.10.10.110: local node passed sanity check.
  The (passive-controller) prompt, the Node mode field, and the Application status lines show that the node is in controller mode, is ready, and is the backup controller node.
- Test the node configuration. Verify that the controller node API keys are viewable globally:
  admin@WF-500(passive-controller)> show wildfire global api-keys all
  Service Summary: wfpc signature
  Cluster name: mycluster
  The API keys for both appliances should be viewable.
- Manually synchronize the high-availability configurations on the controller nodes. Synchronizing the controller nodes ensures that the configurations match; you should only need to do this once. After the high-availability configurations are synchronized, the controller nodes keep the configurations synchronized and you do not need to synchronize them again.
- On the primary controller node, synchronize the high-availability configuration to the remote peer controller node:
  admin@WF-500(active-controller)> request high-availability sync-to-remote running-config
  If there is a mismatch between the primary controller node's configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
- Commit the configuration:
  admin@WF-500# commit
- Verify that the cluster is functioning properly. To verify firewall-related information, you must first connect at least one firewall to a cluster node by selecting Device > Setup > WildFire on the firewall and editing the General Settings to point to the node. A consolidated example follows this list.
- Display the cluster peers to ensure that both controllers are cluster members:
  admin@WF-500(active-controller)> show cluster all-peers
- Display API keys from both nodes (if you created API keys), from either controller node:
  admin@WF-500(active-controller)> show wildfire global api-keys all
- Access any sample from either controller node:
  admin@WF-500(active-controller)> show wildfire global sample-status sha256 equal <value>
- Confirm that firewalls can register with and upload files to both nodes, and that each firewall successfully forwards samples.
- Confirm that both nodes can download and analyze files.
- Confirm that all files analyzed after the cluster was created show two storage locations, one on each node.
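For example, a quick verification pass from the active controller using the commands above (the sha256 value is a placeholder for the hash of a sample a firewall has forwarded):
  admin@WF-500(active-controller)> show cluster all-peers
  admin@WF-500(active-controller)> show wildfire global api-keys all
  admin@WF-500(active-controller)> show wildfire global sample-status sha256 equal <sha256-hash>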
- (Optional) Configure a worker node and add it to the cluster. Worker nodes use the controller node's settings so that the cluster has a consistent configuration. You can add up to 18 worker nodes to a cluster for a total of 20 nodes. A consolidated example follows these steps.
- On the primary controller node, add the worker to the controller node's worker list:
  admin@WF-500(active-controller)> configure
  admin@WF-500(active-controller)# set deviceconfig cluster mode controller worker-list <ip>
  The <ip> is the cluster management interface IP address of the worker node you want to add to the cluster. Use a separate command to add each worker node to the cluster.
- Commit the configuration on the controller node:
  admin@WF-500(active-controller)# commit
- On the WildFire appliance that you want to convert to a cluster worker node, configure the cluster to join, set the cluster communications interface, and place the appliance in worker mode:
  admin@WF-500> configure
  admin@WF-500# set deviceconfig cluster cluster-name <name> interface eth2 mode worker
  The cluster communications interface must be the same interface specified for intracluster communication on the controller nodes. In this example, eth2 is the interface configured on the controller nodes for cluster communication.
- Commit the configuration on the worker node:
  admin@WF-500# commit
- Wait for all services to come up on the worker node. Run show cluster membership and check the Application status, which shows all services and the siggen-db in a Ready state when all services are up.
- On either cluster controller node, check to ensure that the worker node was added:
  admin@WF-500> show cluster all-peers
  The worker node you added appears in the list of cluster nodes. If you accidentally added the wrong WildFire appliance to a cluster, you can Remove a Node from a Cluster Locally.
- Verify the configuration on the worker node.
- On the worker node, check to ensure that the Node mode field shows that the node is in worker mode:
  admin@WF-500> show cluster membership
- Verify that firewalls can register on the worker node and that the worker node can download and analyze files.
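End to end, adding a worker might look like the following sketch, assuming a hypothetical worker whose cluster management interface address is 10.10.10.120 and the example cluster name mycluster. On the primary controller node:
  admin@WF-500(active-controller)> configure
  admin@WF-500(active-controller)# set deviceconfig cluster mode controller worker-list 10.10.10.120
  admin@WF-500(active-controller)# commit
Then on the worker appliance:
  admin@WF-500> configure
  admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth2 mode worker
  admin@WF-500# commit
  admin@WF-500# exit
  admin@WF-500> show cluster membership
When the worker has joined, show cluster all-peers on either controller lists the new node.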