Configure a Cluster and Add Nodes on Panorama

Before configuring a WildFire appliance cluster from Panorama, you must upgrade Panorama and all WildFire appliances that you plan to add to the cluster to 8.0.1 or later. All WildFire appliances in a cluster must run the same version of PAN-OS.
You can manage up to 200 WildFire appliances with a Panorama M-Series or virtual appliance. The 200 WildFire appliance limit is the combined total of standalone appliances and WildFire appliance cluster nodes (if you also Add Standalone WildFire Appliances to Manage with Panorama). Except where noted, configuration takes place on Panorama.
Each WildFire appliance cluster node must have a static IP address in the same subnet, and the nodes must have low-latency connections to each other.
  1. Using the local CLI, configure the IP address of the Panorama server that will manage the WildFire appliance cluster.
    Before you register cluster or standalone WildFire appliances to a Panorama appliance, you must first configure the Panorama IP address or FQDN on each WildFire appliance using the local WildFire CLI. This is how each WildFire appliance knows which Panorama appliance manages it.
    1. On each WildFire appliance, configure the IP address or FQDN of the primary Panorama appliance’s management interface:
      admin@WF-500#
      set deviceconfig system panorama-server <ip-address | FQDN>
    2. On each WildFire appliance, if you use a backup Panorama appliance for high availability (recommended), configure the IP address or FQDN of the backup Panorama appliance’s management interface:
      admin@WF-500#
      set deviceconfig system panorama-server-2 <ip-address | FQDN>
    3. Commit the configuration on each WildFire appliance:
      admin@WF-500#
      commit
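    For example, assuming a primary Panorama appliance at 10.1.1.10 and a backup Panorama appliance at 10.1.1.11 (placeholder addresses for illustration only), the full sequence on one WildFire appliance would look similar to the following:
      admin@WF-500# set deviceconfig system panorama-server 10.1.1.10
      admin@WF-500# set deviceconfig system panorama-server-2 10.1.1.11
      admin@WF-500# commit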
  2. On the primary Panorama appliance, Register the WildFire appliances.
    The newly registered appliances are in standalone mode unless they already belong to a cluster due to local cluster configuration.
    1. Select Panorama > Managed WildFire Appliances and Add Appliance.
    2. Enter the serial number of each WildFire appliance on a separate line. If you do not have a list of serial numbers, run show system info on the local CLI of each WildFire appliance to obtain its serial number (see the example after this step).
    3. Click OK.
      If available, information about configuration that is already committed on the WildFire appliances, such as the IP address and software version, is displayed. WildFire appliances that already belong to a cluster (for example, because of local cluster configuration) display their cluster information and connection status.
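    For example, to display only the serial number line of show system info, you can filter the output on the WildFire appliance CLI. The serial value shown below is a placeholder, and the exact output format can vary by release:
      admin@WF-500> show system info | match serial
      serial: 009707000001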
  3. (Optional) Import WildFire appliance configurations into the Panorama appliance.
    Importing configurations saves time because you can reuse or edit the configurations on Panorama and then push them to one or more WildFire appliance clusters or standalone WildFire appliances. If there are no configurations you want to import, skip this step. When you push a configuration from Panorama, the pushed configuration overwrites the local configuration.
    1. Select Panorama > Managed WildFire Appliances, and select the appliances that have configurations you want to import from the list of managed WildFire appliances.
    2. Import Config.
    3. Select Yes.
      Importing configurations updates the displayed information and makes the imported configurations part of the Panorama appliance candidate configuration.
    4. Commit to Panorama to make the imported WildFire appliance configurations part of the Panorama running configuration.
  4. Create a new WildFire appliance cluster.
    1. Select Managed WildFire Clusters.
      Appliance > No Cluster Assigned displays standalone WildFire appliances (nodes) and indicates how many available nodes are not assigned to a cluster.
    2. Create Cluster.
    3. Enter an alphanumeric cluster Name of up to 63 characters. The Name can contain lowercase letters, numbers, hyphens, and periods; hyphens and periods cannot be the first or last character. No spaces or other characters are allowed. For example, wf-cluster.dc1 is a valid name, but -wfcluster and wf cluster 1 are not.
    4. Click OK.
      The new cluster name displays but has no assigned WildFire nodes.
  5. Add WildFire appliances to the new cluster.
    The first WildFire appliance added to the cluster automatically becomes the controller node, and the second WildFire appliance added to the cluster automatically becomes the controller backup node. All subsequent WildFire appliances added to the cluster become worker nodes. Worker nodes use the controller node settings so that the cluster has a consistent configuration.
    1. Select the new cluster.
    2. Select Clustering.
    3. Browse the list of WildFire appliances that do not belong to clusters.
    4. Add each WildFire appliance you want to include in the cluster. You can add up to twenty nodes to a cluster. Each WildFire appliance that you add to the cluster is displayed along with its automatically assigned role.
    5. Click OK.
  6. Configure the Management, Analysis Environment Network, HA, and cluster management interfaces.
    Configure the Management, Analysis Environment Network, and cluster management interfaces on each cluster member (controller and worker nodes) if they are not already configured. The cluster management interface is a dedicated interface for management and communication within the cluster and is not the same as the Management interface.
    Configure the HA interfaces individually on both the controller node and the controller backup node. The HA interfaces connect the primary and backup controller nodes and enable them to remain in sync and ready to respond to a failover.
    Cluster nodes need IP addresses for each of the four WildFire appliance interfaces. You cannot configure HA services on worker nodes.
    1. Select the new cluster.
    2. Select Clustering.
    3. If the management interface is not configured on a cluster node, select Management under Interface Name and enter the IP address, netmask, services, and other information for the interface.
    4. If the interface for the Analysis Environment Network is not configured on a cluster node, select Analysis Environment Network under Interface Name and enter the IP address, netmask, services, and other information for the interface.
    5. On both the controller node and controller backup node, select the interface to use for the HA control link. You must configure the same interface on both controller nodes for the HA service. For example, on the controller node and then on the controller backup node, select Ethernet3.
    6. For each controller node, select HA under Clustering Services. (The HA option is not available for worker nodes.) If you also want the ability to ping the interface, select Ping under Management Services.
    7. Click OK.
    8. (Recommended) Select the interface to use as the backup HA control link between the controller node and the controller backup node. You must use the same interface on both nodes for the HA backup service. For example, on both nodes, select Management.
      Select HA Backup under Clustering Services for both nodes. You can also select Ping, SSH, and SNMP if you want those Management Services on the interface.
      The Analysis Environment Network interface cannot be an HA or HA Backup interface or a cluster management interface.
    9. Select the dedicated interface to use for management and communication within the cluster. You must use the same interface on both nodes, for example, Ethernet2.
    10. Select Cluster Management under Clustering Services for both nodes. If you also want the ability to ping on the interface, select Ping under Management Services.
      Worker nodes in the cluster automatically inherit the controller node’s settings for the dedicated management and communication interface.
  7. Commit the configuration on the Panorama appliance and push it to the cluster.
    1. Commit and Push.
    2. If there are configurations on the Panorama appliance that you do not want to push, Edit Selections to choose the appliances to which you push configurations. The pushed configuration overwrites the running configuration on the cluster nodes so that all cluster nodes run the same configuration.
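    If you want to confirm from the CLI that the push completed, you can optionally check recent job status on the Panorama appliance (the prompt below is illustrative); look for the commit and push jobs and verify that they finished successfully:
      admin@Panorama> show jobs all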
  8. Verify the configuration.
    1. Select Panorama > Managed WildFire Clusters.
    2. Check the following fields:
      • Appliance—Instead of displaying as standalone appliances, the WildFire nodes added to the cluster display under the cluster name.
      • Cluster Name—The cluster name displays for each node.
      • Role—The appropriate role (Controller, Controller Backup, or Worker) displays for each node.
      • Config Status—Status is In Sync.
      • Last Commit State—Commit succeeded.
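    If your WildFire software release includes the cluster CLI commands, you can also cross-check membership from the controller node’s CLI. The following operational commands (shown with an illustrative prompt) list the cluster nodes and report each node’s role and cluster services; confirm that every node you added appears with the expected role:
      admin@WF-500(active-controller)> show cluster all-peers
      admin@WF-500(active-controller)> show cluster membership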
  9. Using the local CLI on the primary controller node (not the Panorama web interface), verify that the configurations are synchronized.
    If they are not synchronized, manually synchronize the high availability configurations on the controller nodes and commit the configuration.
    Even though you can perform most other configuration on Panorama, synchronizing the controller node high availability configurations must be done on the primary controller node’s CLI.
    1. On the primary controller node, verify that the configurations are synchronized:
      admin@WF-500(active-controller)>
      show high-availability all
      At the end of the output, look for the Configuration Synchronization section:
      Configuration Synchronization:
          Enabled: yes
          Running Configuration: synchronized
      If the running configuration is synchronized, you do not need to manually synchronize the configuration. If it is not synchronized, continue to the next step to synchronize it manually.
    2. If the configuration is not synchronized, on the primary controller node, synchronize the high availability configuration to the remote peer controller node:
      admin@WF-500(active-controller)>
      request high-availability sync-to-remote running-config
      If there is a mismatch between the primary controller node’s configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
    3. Commit the configuration:
      admin@WF-500#
      commit
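      The commit runs from configuration mode (the # prompt). If you are still in operational mode (the > prompt) after running the synchronization command, enter configuration mode first; for example:
      admin@WF-500(active-controller)> configure
      admin@WF-500(active-controller)# commit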
