Configure a Cluster and Add Nodes on Panorama
You can manage up to 200 WildFire appliances with a Panorama M-Series or virtual appliance. The 200 WildFire appliance limit is the combined total of standalone appliances and WildFire appliance cluster nodes (if you also Add Standalone WildFire Appliances to Manage with Panorama). Except where noted, configuration takes place on Panorama.
Each WildFire appliance cluster node must have a static IP address in the same subnet as the other cluster nodes, and the nodes must have low-latency connections to each other.
- Using the local CLI, configure the IP address of the Panorama server that will manage the WildFire appliance cluster. Before you register cluster or standalone WildFire appliances to a Panorama appliance, you must first configure the Panorama IP address or FQDN on each WildFire appliance using the local WildFire CLI. This is how each WildFire appliance knows which Panorama appliance manages it.
- On each WildFire appliance, configure the IP address or FQDN of the primary Panorama appliance’s management interface:
  admin@WF-500# set deviceconfig system panorama-server <ip-address | FQDN>
- On each WildFire appliance, if you use a backup Panorama appliance for high availability (recommended), configure the IP address or FQDN of the backup Panorama appliance’s management interface:
  admin@WF-500# set deviceconfig system panorama-server-2 <ip-address | FQDN>
- Commit the configuration on each WildFire appliance:
  admin@WF-500# commit
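As a consolidated example, the full preparation sequence on one appliance looks like the following (10.1.1.10 and 10.1.1.11 are placeholder Panorama addresses, not values from this procedure):

```
admin@WF-500# set deviceconfig system panorama-server 10.1.1.10
admin@WF-500# set deviceconfig system panorama-server-2 10.1.1.11
admin@WF-500# commit
```

Repeat the same sequence on every cluster node and standalone appliance that this Panorama pair will manage.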
- On the primary Panorama appliance, Register the WildFire appliances. The newly registered appliances are in standalone mode unless they already belong to a cluster due to local cluster configuration.
- Select Panorama > Managed WildFire Appliances and Add Appliance.
- Enter the serial number of each WildFire appliance on a separate line. If you do not have a list of WildFire appliance serial numbers, use the local CLI to run show system info on each WildFire appliance to obtain the serial number.
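To extract just the serial number, you can filter the output with the CLI match pipe (standard PAN-OS CLI output filtering, assumed to behave the same on the WildFire appliance):

```
admin@WF-500> show system info | match serial
```

The serial line in the filtered output is the value to enter on Panorama.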
- Click OK. If it is available, information about configuration that is already committed on the WildFire appliances, such as the IP address and software version, is displayed. WildFire appliances that already belong to a cluster (for example, because of local cluster configuration) display their cluster information and connection status.
- (Optional) Import WildFire appliance configurations into the Panorama appliance.Importing configurations saves time because you can reuse or edit the configurations on Panorama and then push them to one or more WildFire appliance clusters or standalone WildFire appliances. If there are no configurations you want to import, skip this step. When you push a configuration from Panorama, the pushed configuration overwrites the local configuration.
- Select Panorama > Managed WildFire Appliances, and select the appliances that have configurations you want to import from the list of managed WildFire appliances.
- Import Config.
- Select Yes. Importing configurations updates the displayed information and makes the imported configurations part of the Panorama appliance candidate configuration.
- Commit to Panorama to make the imported WildFire appliance configurations part of the Panorama running configuration.
- Create a new WildFire appliance cluster.
- Select Panorama > Managed WildFire Clusters. The Appliance column displays standalone WildFire appliances (nodes), and No Cluster Assigned indicates how many available nodes are not assigned to a cluster.
- Create Cluster.
- Enter an alphanumeric cluster Name of up to 63 characters. The Name can contain lower-case letters and numbers, as well as hyphens and periods if they are not the first or last character. No spaces or other characters are allowed.
- Click OK. The new cluster name is displayed but has no assigned WildFire nodes.
- Add WildFire appliances to the new cluster. The first WildFire appliance added to the cluster automatically becomes the controller node, and the second WildFire appliance added to the cluster automatically becomes the controller backup node. All subsequent WildFire appliances added to the cluster become worker nodes. Worker nodes use the controller node settings so that the cluster has a consistent configuration.
- Select the new cluster.
- Browse the list of WildFire appliances that do not belong to clusters.
- Add each WildFire appliance you want to include in the cluster. You can add up to twenty nodes to a cluster. Each WildFire appliance that you add to the cluster is displayed along with its automatically assigned role.
- Configure the Management, Analysis Environment Network, HA, and cluster management interfaces. Configure the Management, Analysis Environment Network, and cluster management interfaces on each cluster member (controller and worker nodes) if they are not already configured. The cluster management interface is a dedicated interface for management and communication within the cluster and is not the same as the Management interface. Configure the HA interfaces individually on both the controller node and the controller backup node. The HA interfaces connect the primary and backup controller nodes and enable them to remain in sync and ready to respond to a failover. Cluster nodes need IP addresses for each of the four WildFire appliance interfaces. You cannot configure HA services on worker nodes.
- Select the new cluster.
- If the Management interface is not configured on a cluster node, select it under Interface Name and enter the IP address, netmask, services, and other information for the interface.
- If the Analysis Environment Network interface is not configured on a cluster node, select it under Interface Name and enter the IP address, netmask, services, and other information for the interface.
- On both the controller node and controller backup node, select the interface to use for the HA control link. You must configure the same interface on both controller nodes for the HA service. For example, on the controller node and then on the controller backup node, select Ethernet3.
- For each controller node, select HA under Clustering Services. (The HA option is not available for worker nodes.) If you also want the ability to ping the interface, select Ping under Management Services.
- (Recommended) Select the interface to use as the backup HA control link between the controller node and the controller backup node. You must use the same interface on both nodes for the HA backup service. For example, on both nodes, select Management. Select HA Backup under Clustering Services for both nodes. You can also select Ping, SSH, and SNMP if you want those Management Services on the interface. The Analysis Environment Network interface cannot be an HA or HA Backup interface or a cluster management interface.
- Select the dedicated interface to use for management and communication within the cluster. You must use the same interface on both nodes, for example, Ethernet2.
- Select Cluster Management under Clustering Services for both nodes. If you also want the ability to ping on the interface, select Ping under Management Services. Worker nodes in the cluster automatically inherit the controller node’s settings for the dedicated management and communication interface.
- Commit the configuration on the Panorama appliance and push it to the cluster.
- Commit and Push.
- If there are configurations on the Panorama appliance that you do not want to push, Edit Selections to choose the appliances to which you push configurations. The pushed configuration overwrites the running configuration on the cluster nodes so that all cluster nodes run the same configuration.
- Verify the configuration.
- Select Panorama > Managed WildFire Clusters.
- Check the following fields:
- Appliance—Instead of displaying as standalone appliances, the WildFire nodes added to the cluster display under the cluster name.
- Cluster Name—The cluster name displays for each node.
- Role—The appropriate role (Controller, Controller Backup, or Worker) displays for each node.
- Config Status—Status is In Sync.
- Last Commit State—Commit succeeded.
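As a cross-check from the appliance side, you can also confirm membership and role from the local CLI of a cluster node; show cluster membership is part of the WildFire appliance cluster command set, though the exact output varies by software version:

```
admin@WF-500(active-controller)> show cluster membership
```

The output should report the same cluster name and node role (controller, controller backup, or worker) that Panorama displays for the node.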
- Using the local CLI on the primary controller node (not the Panorama web interface), check that the configurations are synchronized. If they are not synchronized, manually synchronize the high availability configurations on the controller nodes and commit the configuration. Even though you can perform most other configuration on Panorama, you must synchronize the controller node high availability configurations from the primary controller node’s CLI.
- On the primary controller node, check that the configurations are synchronized:
  admin@WF-500(active-controller)> show high-availability all
  At the end of the output, look for the Configuration Synchronization section:
  Configuration Synchronization:
      Enabled: yes
      Running Configuration: synchronized
  If the running configuration is synchronized, you do not need to manually synchronize the configuration. However, if the configuration is not synchronized, you need to synchronize the configuration manually.
- If the configuration is not synchronized, on the primary controller node, synchronize the high availability configuration to the remote peer controller node:
  admin@WF-500(active-controller)> request high-availability sync-to-remote running-config
  If there is a mismatch between the primary controller node’s configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
- Commit the configuration:
  admin@WF-500# commit
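Taken together, the check-and-repair sequence from the steps above, run on the primary controller node, is:

```
admin@WF-500(active-controller)> show high-availability all
admin@WF-500(active-controller)> request high-availability sync-to-remote running-config
admin@WF-500(active-controller)> configure
admin@WF-500(active-controller)# commit
```

Run the sync-to-remote command only if show high-availability all reports that the running configuration is not synchronized; configure enters configuration mode so that you can commit.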