Configure a Cluster and Add Nodes on Panorama
Before configuring a WildFire appliance cluster from Panorama, you must upgrade Panorama to 8.0.1 or later and upgrade all WildFire appliances you plan to add to the cluster to 8.0.1 or later. All WildFire appliances must run the same version of PAN-OS.
You can manage up to 200 WildFire appliances with a Panorama M-Series or virtual appliance. The 200 WildFire appliance limit is the combined total of standalone appliances and WildFire appliance cluster nodes (if you also Add Standalone WildFire Appliances to Manage with Panorama). Except where noted, configuration takes place on Panorama.
All WildFire appliance cluster nodes must have static IP addresses in the same subnet and must have low-latency connections to one another.
- Using the local CLI, configure the IP address of the Panorama server that will manage the WildFire appliance cluster. Before you register cluster or standalone WildFire appliances to a Panorama appliance, you must first configure the Panorama IP address or FQDN on each WildFire appliance using the local WildFire CLI. This is how each WildFire appliance knows which Panorama appliance manages it.
- On each WildFire appliance, configure the IP address or FQDN of the primary Panorama appliance’s management interface:
admin@WF-500# set deviceconfig system panorama-server <ip-address | FQDN>
- On each WildFire appliance, if you use a backup Panorama appliance for high availability (recommended), configure the IP address or FQDN of the backup Panorama appliance’s management interface:
admin@WF-500# set deviceconfig system panorama-server-2 <ip-address | FQDN>
- Commit the configuration on each WildFire appliance:
admin@WF-500# commit
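Taken together, the steps above form a short sequence in each appliance’s CLI. The following sketch uses placeholder addresses (10.1.1.10 and 10.1.1.11); substitute the management addresses or FQDNs of your own primary and backup Panorama appliances:

```
admin@WF-500> configure
admin@WF-500# set deviceconfig system panorama-server 10.1.1.10
admin@WF-500# set deviceconfig system panorama-server-2 10.1.1.11
admin@WF-500# commit
```

Repeat this sequence on every WildFire appliance you plan to register, whether it will become a cluster node or remain a standalone appliance.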
- On the primary Panorama appliance, Register the WildFire appliances. The newly registered appliances are in standalone mode unless they already belong to a cluster due to local cluster configuration.
- Select Panorama > Managed WildFire Appliances and Add Appliance.
- Enter the serial number of each WildFire appliance on a separate line. If you do not have a list of WildFire appliance serial numbers, using the local CLI, run show system info on each WildFire appliance to obtain the serial number.
- Click OK. If it is available, information about configuration that is already committed on the WildFire appliances displays, such as the IP address and software version. WildFire appliances that already belong to a cluster (for example, because of local cluster configuration) display their cluster information and connection status.
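If you need to gather the serial numbers beforehand, each appliance reports its own in the output of show system info, as noted above. The abbreviated output below is illustrative; the serial and version values are placeholders:

```
admin@WF-500> show system info
hostname: WF-500
serial: 009701000000
sw-version: 8.0.1
...
```

Copy the serial value from each appliance into the Add Appliance dialog, one serial number per line.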
- (Optional) Import WildFire appliance configurations into the Panorama appliance. Importing configurations saves time because you can reuse or edit the configurations on Panorama and then push them to one or more WildFire appliance clusters or standalone WildFire appliances. If there are no configurations you want to import, skip this step. When you push a configuration from Panorama, the pushed configuration overwrites the local configuration.
- Select Panorama > Managed WildFire Appliances, and select the appliances that have configurations you want to import from the list of managed WildFire appliances.
- Import Config.
- Select Yes. Importing configurations updates the displayed information and makes the imported configurations part of the Panorama appliance candidate configuration.
- Commit to Panorama to make the imported WildFire appliance configurations part of the Panorama running configuration.
- Create a new WildFire appliance cluster.
- Select Managed WildFire Clusters. The Appliance list’s No Cluster Assigned entry displays standalone WildFire appliances (nodes) and indicates how many available nodes are not assigned to a cluster.
- Create Cluster.
- Enter an alphanumeric cluster Name of up to 63 characters. The Name can contain lowercase letters and numbers, plus hyphens and periods if they are not the first or last character. No spaces or other characters are allowed.
- Click OK. The new cluster name displays but has no assigned WildFire nodes.
- Select Managed WildFire Clusters.
- Add WildFire appliances to the new cluster. The first WildFire appliance added to the cluster automatically becomes the controller node, and the second WildFire appliance added to the cluster automatically becomes the controller backup node. All subsequent WildFire appliances added to the cluster become worker nodes. Worker nodes use the controller node settings so that the cluster has a consistent configuration.
- Select the new cluster.
- Select Clustering.
- Browse the list of WildFire appliances that do not belong to clusters.
- Add each WildFire appliance you want to include in the cluster. You can add up to twenty nodes to a cluster. Each WildFire appliance that you add to the cluster displays along with its automatically assigned role.
- Click OK.
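After you later commit and push the cluster configuration, you can also confirm the automatically assigned roles from a node’s local CLI with show cluster membership. The sketch below is abbreviated and illustrative; the cluster name is a placeholder and the exact field labels may vary by release:

```
admin@WF-500(active-controller)> show cluster membership
Cluster name:   mycluster
Node mode:      controller
...
```

A worker node reports a worker mode instead, and the controller backup node reports its backup role.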
- Configure the Management, Analysis Environment Network, HA, and cluster management interfaces. Configure the Management, Analysis Environment Network, and cluster management interfaces on each cluster member (controller and worker nodes) if they are not already configured. The cluster management interface is a dedicated interface for management and communication within the cluster and is not the same as the Management interface. Configure the HA interfaces individually on both the controller node and the controller backup node. The HA interfaces connect the primary and backup controller nodes and enable them to remain in sync and ready to respond to a failover. Cluster nodes need IP addresses for each of the four WildFire appliance interfaces. You cannot configure HA services on worker nodes.
- Select the new cluster.
- Select Clustering.
- If the management interface is not configured on a cluster node, select Interface Name > Management and enter the IP address, netmask, services, and other information for the interface.
- If the interface for the Analysis Environment Network is not configured on a cluster node, select Interface Name > Analysis Environment Network and enter the IP address, netmask, services, and other information for the interface.
- On both the controller node and controller backup node, select the interface to use for the HA control link. You must configure the same interface on both controller nodes for the HA service. For example, on the controller node and then on the controller backup node, select Ethernet3.
- For each controller node, select Clustering Services > HA. (The HA option is not available for worker nodes.) If you also want the ability to ping the interface, select Management Services > Ping.
- Click OK.
- (Recommended) Select the interface to use as the backup HA control link between the controller node and the controller backup node. You must use the same interface on both nodes for the HA backup service. For example, on both nodes, select Management. Select Clustering Services > HA Backup for both nodes. You can also select Ping, SSH, and SNMP if you want those Management Services on the interface. The Analysis Environment Network interface cannot be an HA or HA Backup interface or a cluster management interface.
- Select the dedicated interface to use for management and communication within the cluster. You must use the same interface on both nodes, for example, Ethernet2.
- Select Clustering Services > Cluster Management for both nodes. If you also want the ability to ping the interface, select Management Services > Ping. Worker nodes in the cluster automatically inherit the controller node’s settings for the dedicated management and communication interface.
- Commit the configuration on the Panorama appliance and push it to the cluster.
- Commit and Push.
- If there are configurations on the Panorama appliance that you do not want to push, Edit Selections to choose the appliances to which you push configurations. The pushed configuration overwrites the running configuration on the cluster nodes so that all cluster nodes run the same configuration.
- Verify the configuration.
- Select Panorama > Managed WildFire Clusters.
- Check the following fields:
- Appliance—Instead of displaying as standalone appliances, the WildFire nodes added to the cluster display under the cluster name.
- Cluster Name—The cluster name displays for each node.
- Role—The appropriate role (Controller, Controller Backup, or Worker) displays for each node.
- Config Status—Status is In Sync.
- Last Commit State—Commit succeeded.
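As a local cross-check of the Panorama fields above, the WildFire appliance CLI command show cluster all-peers lists every node the cluster knows about. The output sketch below is abbreviated; the node names, addresses, and column layout are illustrative placeholders:

```
admin@WF-500(active-controller)> show cluster all-peers
node1    10.1.2.21    controller
node2    10.1.2.22    controller-backup
node3    10.1.2.23    worker
```

Every node that you added on Panorama should appear, with a mode that matches the role shown in the Panorama web interface.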
- Using the local CLI on the primary controller node (not the Panorama web interface), check to ensure that the configurations are synchronized. If they are not synchronized, manually synchronize the high availability configurations on the controller nodes and commit the configuration. Even though you can perform most other configuration on Panorama, synchronizing the controller node high availability configurations must be done on the primary controller node’s CLI.
- On the primary controller node, check to ensure that the configurations are synchronized:
admin@WF-500(active-controller)> show high-availability all
At the end of the output, look for the Configuration Synchronization section:
Configuration Synchronization:
    Enabled: yes
    Running Configuration: synchronized
If the running configuration is synchronized, you do not need to manually synchronize the configuration. However, if the configuration is not synchronized, you need to synchronize the configuration manually.
- If the configuration is not synchronized, on the primary controller node, synchronize the high availability configuration to the remote peer controller node:
admin@WF-500(active-controller)> request high-availability sync-to-remote running-config
If there is a mismatch between the primary controller node’s configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
- Commit the configuration:
admin@WF-500(active-controller)# commit