Cluster Node Connectivity
Where Can I Use This? | What Do I Need?
----------------------|----------------------------
                      | - Software NGFW Credits
                      | - HSF subscription license
Each node within a cluster is equipped with a dedicated management interface. These
cluster nodes are interconnected via internal cluster control and cluster data links.
External connections are exclusively established with the AI-Gateway nodes.
Cluster Traffic Interface
The node<n>:ethernet1/<number> Traffic Interface (TI) carries traffic between
Gateway and firewall nodes and synchronizes state across Data Planes (DPs). It
requires a dedicated virtual switch with MAC address changes permitted in the
vSwitch port group, while promiscuous mode must remain disabled. Any available
data port can be designated as the TI port; the first cluster ethernet
interface on each cluster node serves as the default. A static IP address must
be assigned to the TI port in the Panorama template, all TI interfaces in the
cluster must be in the same IP subnet, and they must be configured in a
separate Logical Router (LR). The TI keepalive timer is fixed at a 1-second
interval, so each node sends a keepalive message to the other cluster nodes
every second. Configure only one TI interface per cluster node in the Panorama
template, and use an SR-IOV interface as the TI for high-throughput scenarios.
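A minimal sketch of a consistency check over the TI rules above (one TI
interface per node, static IPs in one shared subnet, a single separate Logical
Router). The input shape, function name, and example addresses are
hypothetical, not part of any product API.

    import ipaddress

    def validate_ti(nodes):
        """nodes maps node name -> {'ti_ips': [...], 'lr': ...} (hypothetical shape)."""
        errors, subnets, lrs = [], set(), set()
        for name, cfg in nodes.items():
            if len(cfg["ti_ips"]) != 1:
                errors.append(f"{name}: exactly one TI interface must be configured")
                continue
            subnets.add(ipaddress.ip_interface(cfg["ti_ips"][0]).network)
            lrs.add(cfg["lr"])
        if len(subnets) > 1:
            errors.append("all TI interfaces must be in the same IP subnet")
        if len(lrs) > 1:
            errors.append("all TI interfaces must be in the same separate Logical Router")
        return errors

    print(validate_ti({
        "node1": {"ti_ips": ["192.0.2.1/24"], "lr": "cluster-lr"},
        "node2": {"ti_ips": ["192.0.2.2/24"], "lr": "cluster-lr"},
    }))  # [] -> consistent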
Memory and Core Sizes of Nodes
The AI-Gateway nodes must have 4 cores and a minimum of 16 GB of memory. An
AI-Gateway node must have twice the memory of an AI-DP node. For example, if an
AI-DP node has 10 GB of memory, then the AI-Gateway node must have 20 GB. The
AI-Gateway configuration capacity is determined by 50% of the total configured
memory, and AI-Gateway nodes require extra memory for the cluster-wide sharded
global flow table. All AI-Gateway nodes within a cluster must have the same
footprint (cores and memory).
The AI-DP nodes support the same number of cores and flex memory tiers as
standalone Prisma AIRS instances, and all AI-DP nodes must likewise have the
same footprint. For interoperability, AI-Gateway and AI-DP nodes must be in
the same memory tier and have the same configuration capacity. Because the
AI-Gateway node has twice the memory of the AI-DP node and its configuration
capacity/tier is based on 50% of that memory, its tier matches that of the
AI-DP node.
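The sizing rules above reduce to simple arithmetic. The sketch below encodes
them; the 2x and 50% factors come from the text, while the function names and
the example value are illustrative.

    def gateway_memory_gb(dp_memory_gb):
        """An AI-Gateway node needs twice the memory of an AI-DP node."""
        return 2 * dp_memory_gb

    def config_capacity_gb(gateway_mem_gb):
        """Configuration capacity is 50% of the gateway's configured memory."""
        return 0.5 * gateway_mem_gb

    dp_mem = 10                         # GB, the example from the text
    gw_mem = gateway_memory_gb(dp_mem)  # 20 GB
    assert gw_mem >= 16, "AI-Gateway nodes need at least 16 GB"
    print(config_capacity_gb(gw_mem))   # 10.0 GB -> same tier as the AI-DP node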
Autoscaling of HSF Nodes
The AI-DP nodes scale automatically based on session utilization. In contrast,
the AI-Gateway nodes do not auto-scale and must be added to or removed from
the cluster manually.
Utilization data is sent from the device to Panorama every five minutes (the
statistics are viewable in the Firewall Clusters plugin). The Panorama
orchestration plugin uses this utilization data to dynamically adjust the
number of firewall nodes in the cluster.
Both AI-Gateway and AI-DP nodes can be scaled vertically by manually updating
the cluster in the Panorama orchestration plugin.
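The plugin's actual scaling policy is not spelled out here; the sketch below
only illustrates the shape of such a decision, driven by one five-minute
session utilization sample. The watermarks and node limits are assumed values,
not product defaults.

    def desired_dp_count(current, session_util,
                         scale_out_at=0.8, scale_in_at=0.3,
                         min_nodes=1, max_nodes=16):
        """Return a new AI-DP node count for one sample (all thresholds hypothetical)."""
        if session_util >= scale_out_at:
            return min(current + 1, max_nodes)
        if session_util <= scale_in_at:
            return max(current - 1, min_nodes)
        return current

    print(desired_dp_count(4, 0.85))  # 5 -> scale out
    print(desired_dp_count(4, 0.10))  # 3 -> scale in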
Node Placement
For optimal redundancy, each AI-Gateway node should reside on a dedicated
server. Co-locating two AI-Gateway nodes on the same ESXi server compromises
availability and causes substantial traffic disruption if that server fails.
AI-DP nodes may share servers with each other or be co-located with AI-Gateway
nodes.
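A minimal sketch of a placement audit for the rule above, assuming a
hypothetical inventory that maps node names to ESXi hosts; only the "one
AI-Gateway node per server" rule comes from the text.

    from collections import Counter

    def co_located_gateways(placement, gateway_nodes):
        """Return ESXi hosts carrying more than one AI-Gateway node."""
        hosts = Counter(placement[n] for n in gateway_nodes)
        return [h for h, count in hosts.items() if count > 1]

    placement = {"gw-1": "esxi-a", "gw-2": "esxi-a", "dp-1": "esxi-b"}
    print(co_located_gateways(placement, {"gw-1", "gw-2"}))  # ['esxi-a'] -> redundancy at risk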
Node Configuration and Content
Configuration on the cluster nodes is governed by Panorama and cannot be
modified locally, with the exception of the management interface
configuration. All cluster nodes must maintain identical configurations,
including template, device group, and cluster settings.
Configuration version information is exchanged between nodes, and cluster
nodes running an older configuration version transition to a failed state. To
view the configuration version, run the command show cluster local
config-version.
Panorama and the cluster nodes must have matching app/threat content and AV
content; a mismatch in content versions between Panorama and the cluster nodes
causes commit failures. To restore nodes that have entered a failed state
because of a lower configuration version, initiate a commit and push operation
from Panorama to all nodes, including the template, device group, and cluster
configurations.
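The failure rule above can be mirrored in a small illustrative check: any node
whose configuration version trails the newest version seen in the cluster is
flagged. The data structure is hypothetical; on a real node the version comes
from show cluster local config-version.

    def stale_nodes(versions):
        """versions maps node name -> config version; return nodes behind the newest."""
        newest = max(versions.values())
        return sorted(n for n, v in versions.items() if v < newest)

    print(stale_nodes({"node1": 42, "node2": 42, "node3": 41}))
    # ['node3'] -> restore with a commit and push of all configurations from Panorama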
MTU Configuration
Maximum Transmission Unit (MTU) can be configured only on external interfaces;
MTU configuration is not supported on the TI interface.
In jumbo mode, external interfaces support a maximum MTU of 8650 bytes because
of the C3 header overhead on the TI interface. Attempting to modify the MTU on
the TI interface through the Panorama configuration leads to a commit failure.
The MTU on the ESXi vSwitch used for the TI interfaces should be set to 9000,
regardless of whether jumbo mode is configured in PAN-OS.
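A minimal sketch encoding the MTU rules above. The two constants come from the
text; the function and interface names are illustrative only.

    MAX_EXTERNAL_JUMBO_MTU = 8650  # leaves room for the C3 header on the TI link
    VSWITCH_TI_MTU = 9000          # required on the ESXi vSwitch carrying TI traffic

    def check_mtu(name, mtu, is_ti):
        """Return rule violations for one interface; mtu is None when unset."""
        errors = []
        if is_ti and mtu is not None:
            errors.append(f"{name}: MTU is not configurable on the TI interface (commit would fail)")
        if not is_ti and mtu is not None and mtu > MAX_EXTERNAL_JUMBO_MTU:
            errors.append(f"{name}: MTU {mtu} exceeds the jumbo-mode maximum of {MAX_EXTERNAL_JUMBO_MTU}")
        return errors

    print(check_mtu("ethernet1/1", 9000, is_ti=False))  # flags the oversized external MTU
    print(check_mtu("ethernet1/2", 8650, is_ti=True))   # flags any MTU set on the TI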
Server Clock Synchronization
All ESXi server clocks, including those hosting the cluster nodes and the
server hosting the Panorama VM, must be synchronized via NTP. Unsynchronized
cluster node clocks can cause significant state synchronization discrepancies
between nodes.
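A minimal skew check, assuming timestamps have already been collected from
each ESXi server by some out-of-band means; the one-second threshold is an
illustrative assumption, not a documented limit.

    def clock_skew_ok(timestamps, max_skew_s=1.0):
        """timestamps maps server name -> Unix time; True when the spread is within bounds."""
        spread = max(timestamps.values()) - min(timestamps.values())
        return spread <= max_skew_s

    print(clock_skew_ok({"esxi-a": 1700000000.0, "esxi-b": 1700000000.4}))  # True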