Configure a Cluster and Add Nodes on Panorama
Before configuring a WildFire appliance cluster from Panorama, you must upgrade Panorama to 8.0.1 or later and upgrade all WildFire appliances you plan to add to the cluster to 8.0.1 or later. All WildFire appliances must run the same version of PAN-OS.
You can manage up to 200 WildFire appliances with a Panorama M-Series or virtual appliance. The 200 WildFire appliance limit is the combined total of standalone appliances and WildFire appliance cluster nodes (if you also Add Standalone WildFire Appliances to Manage with Panorama). Except where noted, configuration takes place on Panorama.
Each WildFire appliance cluster node must have a static IP address in the same subnet, and the nodes must have low-latency connections to each other.
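As a quick pre-check, you can confirm the software version on each WildFire appliance from its local CLI. A minimal sketch (the exact output field names can vary by release):
admin@WF-500> show system info | match sw-version
Every appliance you plan to add should report 8.0.1 or later.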
- Using the local CLI, configure the IP address of the Panorama server that will manage the WildFire appliance cluster.
Before you register cluster or standalone WildFire appliances to a Panorama appliance, you must first configure the Panorama IP address or FQDN on each WildFire appliance using the local WildFire CLI. This is how each WildFire appliance knows which Panorama appliance manages it.
- On each WildFire appliance, configure the IP address or FQDN of the primary Panorama appliance’s management interface:
admin@WF-500# set deviceconfig system panorama-server <ip-address | FQDN>
- On each WildFire appliance, if you use a backup Panorama appliance for high availability (recommended), configure the IP address or FQDN of the backup Panorama appliance’s management interface:
admin@WF-500# set deviceconfig system panorama-server-2 <ip-address | FQDN>
- Commit the configuration on each WildFire appliance:
admin@WF-500# commit
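For example, with a primary Panorama at 10.1.1.10 and a backup at 10.1.1.11 (hypothetical addresses), you would run the following on each node:
admin@WF-500# set deviceconfig system panorama-server 10.1.1.10
admin@WF-500# set deviceconfig system panorama-server-2 10.1.1.11
admin@WF-500# commit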
- On the primary Panorama appliance, register the WildFire appliances.
The newly registered appliances are in standalone mode unless they already belong to a cluster due to local cluster configuration.
- Select Panorama > Managed WildFire Appliances and Add Appliance.
- Enter the serial number of each WildFire appliance on a separate line. If you do not have a list of WildFire appliance serial numbers, using the local CLI, run show system info on each WildFire appliance to obtain the serial number.
- Click OK.
If it is available, information about configuration that is already committed on the WildFire appliances displays, such as the IP address and software version. WildFire appliances that already belong to a cluster (for example, because of local cluster configuration) display their cluster information and connection status.
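If you need to gather the serial numbers first, you can filter for them on each appliance’s local CLI, for example:
admin@WF-500> show system info | match serial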
- (Optional) Import WildFire appliance configurations into the Panorama appliance.
Importing configurations saves time because you can reuse or edit the configurations on Panorama and then push them to one or more WildFire appliance clusters or standalone WildFire appliances. If there are no configurations you want to import, skip this step. When you push a configuration from Panorama, the pushed configuration overwrites the local configuration.
- Select Panorama > Managed WildFire Appliances, and select the appliances that have configurations you want to import from the list of managed WildFire appliances.
- Import Config.
- Select Yes.
Importing configurations updates the displayed information and makes the imported configurations part of the Panorama appliance candidate configuration.
- Commit to Panorama to make the imported WildFire appliance configurations part of the Panorama running configuration.
- Create a new WildFire appliance cluster.
- Select Panorama > Managed WildFire Clusters.
In the Appliance column, No Cluster Assigned displays standalone WildFire appliances (nodes) and indicates how many available nodes are not assigned to a cluster.
- Create Cluster.
- Enter an alphanumeric cluster Name of up to 63 characters. The Name can contain lowercase letters, numbers, and hyphens and periods (as long as a hyphen or period is not the first or last character). Spaces and other characters are not allowed. For example, wf-cluster.1 is valid, but -wfcluster and wf cluster are not.
- Click OK.
The new cluster name displays but has no assigned WildFire nodes.
- Add WildFire appliances to the new cluster.
The first WildFire appliance added to the cluster automatically becomes the controller node, and the second WildFire appliance added to the cluster automatically becomes the controller backup node. All subsequent WildFire appliances added to the cluster become worker nodes. Worker nodes use the controller node settings so that the cluster has a consistent configuration.
- Select Panorama > Managed WildFire Clusters.
- Select the new cluster.
- Select Clustering.
- Browse the list of WildFire appliances that do not belong to clusters.
- Add the WildFire appliances you want in the cluster.
- Click OK.
- Configure the Management, Analysis Environment Network, HA, and cluster management interfaces.
Configure the Management, Analysis Environment Network, and cluster management interfaces on each cluster member (controller and worker nodes) if they are not already configured. The cluster management interface is a dedicated interface for management and communication within the cluster and is not the same as the Management interface.
Configure the HA interfaces individually on both the controller node and the controller backup node. The HA interfaces connect the primary and backup controller nodes and enable them to remain in sync and ready to respond to a failover.
Cluster nodes need IP addresses for each of the four WildFire appliance interfaces. You cannot configure HA services on worker nodes.
- Select the new cluster.
- Select Clustering.
- If the management interface is not configured on a cluster node, select Management under Interface Name and enter the IP address, netmask, services, and other information for the interface.
- If the interface for the Analysis Environment Network is not configured on a cluster node, select Analysis Environment Network under Interface Name and enter the IP address, netmask, services, and other information for the interface.
- On both the controller node and controller backup node, select the interface to use for the HA control link. You must configure the same interface on both controller nodes for the HA service. For example, on the controller node and then on the controller backup node, select Ethernet3.
- For each controller node, select Clustering Services > HA. (The HA option is not available for worker nodes.) If you also want the ability to ping the interface, select Management Services > Ping.
- Click OK.
- (Recommended) Select the interface to use as the backup HA control link between the controller node and the controller backup node. You must use the same interface on both nodes for the HA backup service. For example, on both nodes, select Management.
Select Clustering Services > HA Backup for both nodes. You can also select Ping, SSH, and SNMP if you want those Management Services on the interface.
The Analysis Environment Network interface cannot be an HA or HA Backup interface or a cluster management interface.
- Select the dedicated interface to use for management and communication within the cluster. You must use the same interface on both nodes, for example, Ethernet2.
- Select Clustering Services > Cluster Management for both nodes. If you also want the ability to ping the interface, select Management Services > Ping.
Worker nodes in the cluster automatically inherit the controller node’s settings for the dedicated management and communication interface.
- Commit the configuration on the Panorama appliance and push it to the cluster.
- Commit and Push.
- If there are configurations on the Panorama appliance that you do not want to push, Edit Selections to choose the appliances to which you push configurations. The pushed configuration overwrites the running configuration on the cluster nodes so that all cluster nodes run the same configuration.
- Verify the configuration.
- Select Panorama > Managed WildFire Clusters.
- Check the following fields:
- Appliance—Instead of displaying as standalone appliances, the WildFire nodes added to the cluster display under the cluster name.
- Cluster Name—The cluster name displays for each node.
- Role—The appropriate role (Controller, Controller Backup, or Worker) displays for each node.
- Config Status—Status is In Sync.
- Last Commit State—Commit succeeded.
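You can also confirm a node’s role from its local CLI. A hedged example, assuming your WildFire release includes the show cluster membership command (verify availability on your release):
admin@WF-500> show cluster membership
The output should include the node’s cluster name and its role in the cluster.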
- Using the local CLI on the primary controller node (not the Panorama web interface), check to ensure that the configurations are synchronized.
If they are not synchronized, manually synchronize the high availability configurations on the controller nodes and commit the configuration. Even though you can perform most other configuration on Panorama, synchronizing the controller node high availability configurations must be done from the primary controller node’s CLI.
- On the primary controller node, check to ensure that the configurations are synchronized:
admin@WF-500(active-controller)> show high-availability all
At the end of the output, look for the Configuration Synchronization section:
Configuration Synchronization:
    Enabled: yes
    Running Configuration: synchronized
If the running configuration is synchronized, no further action is needed.
- If the configuration is not synchronized, on the primary controller node, synchronize the high availability configuration to the remote peer controller node:
admin@WF-500(active-controller)> request high-availability sync-to-remote running-config
If there is a mismatch between the primary controller node’s configuration and the configuration on the controller backup node, the configuration on the primary controller node overrides the configuration on the controller backup node.
- Commit the configuration:
admin@WF-500# commit