Cluster operation and configuration

Configure all cluster nodes identically to ensure consistency in analysis and appliance-to-appliance communication:

All cluster nodes must run the same version of PAN-OS (PAN-OS
8.0.1 or later). Panorama must run the same software version as
the cluster nodes or a newer version. Firewalls can run any software
version that supports submitting samples to a WildFire appliance;
no particular software version is required to submit samples to a
WildFire appliance cluster.

Cluster nodes inherit their configuration from the controller
node, with the exception of interface configuration. Cluster members
monitor the controller node configuration and update their own configurations
when the controller node commits an updated configuration. Worker
nodes inherit settings such as content update server settings, WildFire
cloud server settings, the sample analysis image, sample data retention
time frames, analysis environment settings, signature generation
settings, log settings, authentication settings, and Panorama server,
DNS server, and NTP server settings.

When you manage a cluster with Panorama, the Panorama appliance
pushes a consistent configuration to all cluster nodes. Although
you can change the configuration locally on a WildFire appliance
node, Palo Alto Networks does not recommend that you do this, because
the next time the Panorama appliance pushes a configuration, it
replaces the running configuration on the node. Local changes to cluster
nodes that Panorama manages often cause Out of Sync errors.

If the cluster node membership list differs on the two controller
nodes, the cluster generates an Out of Sync warning. To avoid a
condition where both controller nodes continually update the out-of-sync
membership list for the other node, cluster membership enforcement
stops. When this happens, you can synchronize the cluster membership
lists from the local CLI on the controller and controller backup
nodes by running the operational command request high-availability sync-to-remote running-configuration.
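The recovery sequence described above can be sketched as an operational CLI session, assuming console access to both controller nodes (the prompts and hostnames shown are illustrative):

```
# On the controller node: inspect the membership list, then push the
# running configuration (including the membership list) to the peer.
admin@wf-controller> show cluster all-peers
admin@wf-controller> request high-availability sync-to-remote running-configuration

# On the controller backup node: verify its membership list now matches.
admin@wf-controller-backup> show cluster all-peers
```

Running show cluster all-peers on each controller lets you compare the two membership lists and confirm they match after the sync.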
If there is a mismatch between the primary controller node’s configuration
and the configuration on the controller backup node, the configuration
on the primary controller node overrides the configuration on the
controller backup node. On each controller node, run show cluster all-peers and
compare and correct the membership lists.

A cluster can have only two controller nodes (primary and backup);
attempts to locally add a third controller node to a cluster fail. (The
Panorama web interface automatically prevents you from adding a
third controller node.) The third and all subsequent nodes added
to a cluster must be worker nodes.

A characteristic of HA configurations is that the cluster distributes
and retains multiple copies of the database, queuing services, and
sample submissions to provide redundancy in case of a cluster node
failure. Running the additional services required to provide redundancy
for HA has a minimal impact on throughput.

The cluster automatically checks for duplicate IP addresses used
for the analysis environment network.

If a node belongs to a cluster and you want to move it to a
different cluster, you must first remove the node from its current cluster.

Do not change the IP address of WildFire appliances that are
currently operating in a cluster. Doing so causes the associated firewalls
to deregister from the node.