Cluster Node Connectivity

Where Can I Use This?
  • Prisma AIRS
What Do I Need?
  • Software NGFW Credits
  • HSF subscription license
Each node within a cluster is equipped with a dedicated management interface. The cluster nodes are interconnected via internal cluster control and cluster data links. External connections are established only with the P-Nodes.

Cluster Traffic Interface

The node<n>:ethernet1/<number> interface facilitates traffic flow between P-Nodes and S-Nodes and synchronizes state across Data Planes (DPs). Observe the following requirements for the Traffic Interface (TI):
  • The TI requires a dedicated virtual switch with MAC address changes permitted in the vSwitch port group; promiscuous mode must remain disabled.
  • Any available data port can be designated as the TI port; the first cluster ethernet interface on each cluster node is the default.
  • Assign a static IP address to the TI port in the Panorama template. All TI interfaces in the cluster must be in the same IP subnet and configured within a separate Logical Router (LR).
  • The TI keepalive timer uses a 1-second interval: a keepalive message is sent to the other cluster nodes every second.
  • Configure only one TI interface per cluster node in the Panorama template.
  • Use an SR-IOV interface as the TI in high-throughput scenarios.
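As an illustration of the same-subnet requirement, the following Python sketch checks that each node's static TI address falls within a single subnet. The node names and addresses are hypothetical, not taken from any real deployment.

```python
import ipaddress

# Hypothetical static TI addresses assigned in the Panorama template,
# one per cluster node (illustrative values only).
ti_addresses = {
    "node1": "192.168.10.11/24",
    "node2": "192.168.10.12/24",
    "node3": "192.168.10.13/24",
}

# All TI interfaces in the cluster must share one IP subnet.
subnets = {ipaddress.ip_interface(addr).network for addr in ti_addresses.values()}
if len(subnets) != 1:
    raise ValueError(f"TI interfaces span multiple subnets: {subnets}")
print(f"All TI interfaces are in {subnets.pop()}")
```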

Memory and Core Sizes of Nodes

The P-Nodes must have 4 cores and a minimum of 16 GB of memory. A P-Node must have twice the memory of an S-Node. For example, if an S-Node has 10 GB of memory, then the P-Node must have 20 GB. The P-Node configuration capacity is determined by 50% of the total configured memory, and P-Nodes require extra memory for the cluster-wide sharded global flow table. All P-Nodes within a cluster must maintain a consistent footprint.
The S-Nodes support the same number of cores and flex memory tiers as standalone Prisma AIRS instances, and all S-Nodes must maintain a consistent footprint. Because the P-Node has twice the memory of the S-Node (the DP node) and its configuration capacity/tier is based on 50% of that memory, the P-Node's configuration tier matches the S-Node's tier.
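The sizing rules above reduce to simple arithmetic. The following Python sketch only illustrates the 2x and 50% relationships; it is not an official sizing tool.

```python
def p_node_sizing(s_node_memory_gb: float) -> dict:
    """Illustrative helper for the sizing rules above (not an official tool)."""
    # A P-Node needs twice the memory of an S-Node.
    p_node_memory_gb = 2 * s_node_memory_gb
    # The configuration capacity/tier is based on 50% of the P-Node memory,
    # which works out to the S-Node memory, so the tiers match.
    config_capacity_gb = 0.5 * p_node_memory_gb
    return {
        "s_node_memory_gb": s_node_memory_gb,
        "p_node_memory_gb": p_node_memory_gb,
        "config_capacity_gb": config_capacity_gb,
    }

# Example from the text: a 10 GB S-Node pairs with a 20 GB P-Node,
# whose configuration capacity is based on 10 GB (50% of 20 GB).
print(p_node_sizing(10))
```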

Autoscaling of HSF Nodes

The S-Nodes offer auto-scaling based on session utilization. In contrast, the P-Nodes do not auto-scale and must be added to or removed from the cluster manually.
Utilization data is transmitted from the device to Panorama at regular five-minute intervals (statistics viewable in the Firewall Clusters plugin). The Panorama orchestration plugin leverages this utilization data to dynamically adjust the number of firewall nodes within the cluster.
Both P-Nodes and S-Nodes are capable of vertical scaling through manual updates to the cluster within the Panorama orchestration plugin.
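To make the behavior concrete, here is a minimal Python sketch of a utilization-driven scale-out/scale-in decision. The thresholds and the decision logic are hypothetical; the actual logic lives in the Panorama orchestration plugin.

```python
def scale_decision(session_utilization_pct: float,
                   scale_out_threshold: float = 80.0,
                   scale_in_threshold: float = 30.0) -> str:
    """Illustrative sketch only; thresholds are hypothetical."""
    if session_utilization_pct >= scale_out_threshold:
        return "add an S-Node"
    if session_utilization_pct <= scale_in_threshold:
        return "remove an S-Node"
    return "no change"

# Utilization statistics arrive from the nodes every five minutes.
for sample_pct in (25.0, 55.0, 90.0):
    print(f"{sample_pct:.0f}% -> {scale_decision(sample_pct)}")
```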

Node Placement

For optimal redundancy, each P-Node should reside on a dedicated server. Co-locating two P-Nodes on the same ESXi server compromises availability and results in substantial traffic disruption if that server fails. S-Nodes may be grouped together on separate servers or co-located with P-Nodes.

Node Configuration and Content

Configuration on the cluster nodes is governed by Panorama and cannot be modified locally, with the exception of the management interface configuration.
All cluster nodes are required to maintain identical configurations, encompassing template, device group, and cluster settings.
Configuration version information is exchanged between nodes. Cluster nodes possessing an older configuration version will transition to a failed state.
To view the configuration version, execute the command show cluster local config-version.
Panorama and the cluster nodes must have matching app/threat and antivirus (AV) content. A content version mismatch between Panorama and the cluster nodes leads to commit failures. If configuration version discrepancies are detected, the nodes with the lower configuration version enter a failed state. To restore these nodes, initiate a commit and push from Panorama to all nodes, including the template, device group, and cluster configurations.
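The failed-state rule can be pictured with a short sketch that compares per-node configuration versions, such as those reported by show cluster local config-version. The node names and version numbers below are hypothetical.

```python
# Hypothetical per-node configuration versions (made-up names and values).
config_versions = {
    "p-node-1": 42,
    "p-node-2": 42,
    "s-node-1": 42,
    "s-node-2": 41,  # older version: this node enters the failed state
}

latest = max(config_versions.values())
stale_nodes = [node for node, ver in config_versions.items() if ver < latest]
if stale_nodes:
    print("Commit and push from Panorama to recover:", ", ".join(stale_nodes))
```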

MTU Configuration

Maximum Transmission Unit (MTU) can be configured solely on external interfaces. MTU configuration is not supported on the TI interface.
In jumbo mode, external interfaces support a maximum MTU of 8650 bytes because of the C3 header overhead on the TI interface. Modifying the MTU on the TI interface via the Panorama configuration leads to a commit failure.
The MTU on the ESXi vSwitch utilized for the TI interfaces should be set to 9000, irrespective of whether jumbo mode is configured in Pan-OS.
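A minimal sketch of the MTU arithmetic follows: it only checks that an external-interface MTU stays within the jumbo-mode limit and shows the headroom the 9000-byte vSwitch MTU leaves for the TI overhead.

```python
VSWITCH_MTU = 9000         # required on the ESXi vSwitch carrying the TI interfaces
MAX_EXTERNAL_MTU = 8650    # maximum external-interface MTU in jumbo mode

def validate_external_mtu(mtu: int) -> None:
    """Illustrative check only: reject external MTUs above the jumbo-mode limit."""
    if mtu > MAX_EXTERNAL_MTU:
        raise ValueError(f"MTU {mtu} exceeds the {MAX_EXTERNAL_MTU}-byte jumbo-mode limit")

# The difference is the headroom the vSwitch leaves for the C3 header overhead on TI.
print(f"Headroom for TI overhead: {VSWITCH_MTU - MAX_EXTERNAL_MTU} bytes")
validate_external_mtu(8650)
```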

Server Clock Synchronization

All ESXi server clocks, including those hosting the cluster nodes and the server hosting the Panorama VM, must be synchronized using NTP. If the cluster node clocks are not synchronized, significant state synchronization discrepancies can occur between the nodes.
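As a rough way to spot clock drift, the Python sketch below queries each server's NTP service and prints its offset. It assumes each ESXi host (and the Panorama host) exposes a reachable NTP service on UDP 123, relies on the third-party ntplib package, and uses hypothetical hostnames.

```python
import ntplib  # third-party package: pip install ntplib

# Hypothetical hostnames: the ESXi servers hosting the cluster nodes and
# the ESXi server hosting the Panorama VM.
hosts = ["esxi-01.example.com", "esxi-02.example.com", "panorama-esxi.example.com"]

client = ntplib.NTPClient()
for host in hosts:
    response = client.request(host, version=3)
    # response.offset is the clock offset in seconds relative to this machine.
    print(f"{host}: offset {response.offset:+.3f} s")
```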