What's New With HSF

Acquaint yourself with the changes that HSF introduces: the Prisma AIRS platform, the management interface, cluster interconnect interfaces, cluster Ethernet interfaces, and so on.
Where Can I Use This?
  • Prisma AIRS
What Do I Need?
  • Software NGFW Credits
  • HSF subscription license
HSF introduces the following changes:

Prisma AIRS Platform

You can leverage HSF using an enhanced PAN-OS image package that allows you to deploy instances as either P-Nodes or S-Nodes. This flexibility enables you to maintain consistent resource allocation across your cluster (the same resource footprint, including number of cores, memory, and storage), ensuring optimal performance and efficiency. By using the same PAN-OS configuration capacity for both P-Nodes and S-Nodes, you can streamline your management processes and reduce complexity.
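To make the consistent-footprint requirement concrete, the following Python sketch checks that every node in an inventory reports the same core, memory, and storage allocation. The NodeFootprint record and the sample values are hypothetical illustrations, not part of PAN-OS or Panorama.

# Hypothetical inventory check: verifies that all cluster nodes share one
# resource footprint (cores, memory, storage), as HSF expects.
from dataclasses import dataclass

@dataclass
class NodeFootprint:
    name: str
    cores: int
    memory_gb: int
    storage_gb: int

def footprints_match(nodes: list[NodeFootprint]) -> bool:
    """Return True when every node has identical cores, memory, and storage."""
    if not nodes:
        return True
    first = (nodes[0].cores, nodes[0].memory_gb, nodes[0].storage_gb)
    return all((n.cores, n.memory_gb, n.storage_gb) == first for n in nodes)

cluster = [
    NodeFootprint("node1 (P-Node)", cores=16, memory_gb=64, storage_gb=256),
    NodeFootprint("node5 (S-Node)", cores=16, memory_gb=64, storage_gb=256),
]
print(footprints_match(cluster))  # True: P-Nodes and S-Nodes share one footprint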

Management Interface

Every node in the cluster has a separate management interface. This interface is used to connect to Panorama as well as to export security logs. The management interface is also used to check the reachability of other cluster nodes under some fault conditions to avoid conflicts. Ensure that management connectivity between the nodes is always maintained.
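As a rough illustration of such a reachability check over the management network, the sketch below pings peer management addresses and reports which peers cannot be reached. The peer IP addresses are placeholders and Linux-style ping options are assumed; the actual check is performed internally by the cluster nodes, not by a script like this.

# Illustrative only: ping each peer's management IP once and collect the
# peers that do not respond. IPs and timeout are placeholder values.
import subprocess

def is_reachable(mgmt_ip: str, timeout_s: int = 2) -> bool:
    """Send a single ICMP echo request to a peer's management address."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), mgmt_ip],
        capture_output=True,
    )
    return result.returncode == 0

peer_mgmt_ips = ["198.51.100.11", "198.51.100.12"]  # hypothetical peers
unreachable = [ip for ip in peer_mgmt_ips if not is_reachable(ip)]
print("unreachable peers:", unreachable)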

Cluster Interconnect Interfaces

In your cluster configuration, you'll find that each node is equipped with two essential interfaces that operate on separate L3 subnets:
  • Cluster Control Interface - This interface is dedicated to handling control traffic between instances, allowing PAN-OS control processes to synchronize the control plane state. It uses statically allocated IP addresses based on node identifiers (see the sketch below).
  • Cluster Traffic Interface (node<n>:ethernet1/<number>) - This interface carries user traffic between instances. Its IP configuration is managed through standard PAN-OS settings.
You'll need to ensure these interfaces are connected to distinct layer 2 networks. For the Cluster Traffic Interface, you must configure a Logical Router (LR) on each node. In the initial phase, inter-node connectivity isn't encrypted (IPsec and MACsec are not supported). You can implement NIC bonding in the host vSwitch for link redundancy or increased throughput on the Cluster Control Interface. Using separate interfaces for control and data traffic provides a level of isolation, though strict QoS guarantees aren't feasible in virtualized environments because of the complexity of end-to-end configuration across hypervisors, host systems, and external networks.
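As a minimal sketch of how a static control-plane address could be derived from a node identifier, the example below adds the node ID to a base subnet. The 192.0.2.0/24 subnet and the base-plus-node-ID scheme are assumptions made for illustration; the actual allocation is handled by PAN-OS and Panorama.

# Hypothetical addressing scheme: cluster control IP = subnet base + node ID.
import ipaddress

def control_ip_for_node(node_id: int, subnet: str = "192.0.2.0/24") -> str:
    """Derive a deterministic cluster-control address from the node ID."""
    net = ipaddress.ip_network(subnet)
    if not 1 <= node_id <= 10:
        raise ValueError("node IDs range from 1 (P-Node) to 10 (S-Node)")
    return str(net.network_address + node_id)

for node_id in (1, 2, 5):
    print(node_id, control_ip_for_node(node_id))  # 192.0.2.1, 192.0.2.2, 192.0.2.5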
The following limitations apply to the configuration of the cluster traffic interface until the general availability of HSF.
Once the cluster is up and the node is part of the cluster, it is not recommended to:
  • Change the traffic interface (TI) selection often (such as checking or unchecking the TI interface check box).
  • Change the IP address often (such as switching to DHCP or manually changing the static IP).
  • Make port changes on the node (such as deleting the existing port and configuring a new port with the same IP and LR, marking it with the TI flag, and pushing a commit-all to all devices once the cluster is up).
IPv6 addresses are currently not supported on TI interfaces.

Cluster Ethernet Interfaces

The Cluster Ethernet interface name uniquely identifies the Prisma AIRS firewall interfaces, which are exposed as a single slot such as eth1/1, eth1/2, eth1/3, and so on. In an HSF cluster, the Cluster Ethernet interface name is extended to include a node identifier, which uniquely identifies a given interface across the cluster.
Ethernet Interface naming format for clustering:
  • Node<ID>:ethernet1/<Port>.<SubInterface>
    • node2:ethernet1/3
    • node2:ethernet1/4.1
  • The slot number will always be 1
A maximum of 10 cluster Ethernet interfaces is supported (an ESXi limitation). The first interface is used for management, the second for cluster control, and one interface is used for cluster traffic. The remaining seven interfaces are available for external traffic. The firewall nodes use only three interfaces (no external interfaces).
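The sketch below parses interface names in the node<ID>:ethernet1/<Port>.<SubInterface> format described above into node, slot, port, and subinterface components, and enforces the rule that the slot number is always 1. The parser itself is illustrative and is not a PAN-OS API.

# Parse cluster ethernet interface names such as node2:ethernet1/4.1.
import re
from typing import NamedTuple, Optional

class ClusterInterface(NamedTuple):
    node_id: int
    slot: int
    port: int
    subinterface: Optional[int]

_PATTERN = re.compile(r"^node(\d+):ethernet(\d+)/(\d+)(?:\.(\d+))?$")

def parse_cluster_interface(name: str) -> ClusterInterface:
    match = _PATTERN.match(name)
    if not match:
        raise ValueError(f"not a cluster ethernet interface: {name!r}")
    node_id, slot, port, sub = match.groups()
    if int(slot) != 1:
        raise ValueError("the slot number is always 1")
    return ClusterInterface(int(node_id), 1, int(port),
                            int(sub) if sub is not None else None)

print(parse_cluster_interface("node2:ethernet1/3"))    # node_id=2, slot=1, port=3, subinterface=None
print(parse_cluster_interface("node2:ethernet1/4.1"))  # node_id=2, slot=1, port=4, subinterface=1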

Node Identifier

Each node within the cluster is identified by a unique identifier known as the Node ID. Node IDs are allocated in Panorama by the cluster orchestration plugin and subsequently transmitted to the firewall via bootstrap. Node numbers 1-4 are reserved for P-Nodes, while node numbers 5-10 are used by S-Nodes. Node IDs are displayed on the cluster monitoring page in Panorama and are not modifiable through the PAN-OS configuration.
To view the Node ID of the current firewall node, execute the following command:
show cluster local node-id
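A minimal sketch of the reserved Node ID ranges, mapping an ID to its role (1-4 for P-Nodes, 5-10 for S-Nodes):

# Map a Node ID to its reserved role range.
def node_role(node_id: int) -> str:
    if 1 <= node_id <= 4:
        return "P-Node"
    if 5 <= node_id <= 10:
        return "S-Node"
    raise ValueError(f"node ID {node_id} is outside the supported range 1-10")

for node_id in (1, 4, 5, 10):
    print(node_id, node_role(node_id))  # 1 P-Node, 4 P-Node, 5 S-Node, 10 S-Node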

Node State

A cluster node can be in one of the following states:
  • Unknown
  • Init
  • Online
  • Degraded
  • Suspended
  • Failed
Initially, a node resides in the Unknown state. Upon receiving configuration from Panorama, installing a license, and configuring the cluster control link, the node transitions to the Init state, during which state synchronization and node initialization occur.
Following successful node initialization, the node moves to the Online state, signifying its readiness to process traffic.
When faults are encountered, the node transitions to the Failed state.
When a node is removed as part of an autoscale-in operation, it first enters the Degraded state and subsequently moves to the Suspended state.
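The sketch below models these transitions as a small lookup table. The transition set is limited to what this section describes and is not an exhaustive PAN-OS state machine.

# Node states and the transitions described above.
from enum import Enum

class NodeState(Enum):
    UNKNOWN = "unknown"
    INIT = "init"
    ONLINE = "online"
    DEGRADED = "degraded"
    SUSPENDED = "suspended"
    FAILED = "failed"

TRANSITIONS = {
    NodeState.UNKNOWN: {NodeState.INIT},        # config, license, control link done
    NodeState.INIT: {NodeState.ONLINE},         # sync and initialization complete
    NodeState.ONLINE: {NodeState.FAILED,        # fault encountered
                       NodeState.DEGRADED},     # autoscale-in removal begins
    NodeState.DEGRADED: {NodeState.SUSPENDED},  # removal completes
}

def transition(current: NodeState, target: NodeState) -> NodeState:
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid transition: {current.value} -> {target.value}")
    return target

state = NodeState.UNKNOWN
for nxt in (NodeState.INIT, NodeState.ONLINE, NodeState.DEGRADED, NodeState.SUSPENDED):
    state = transition(state, nxt)
    print(state.value)  # init, online, degraded, suspended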
You can view the current node state on the Panorama visibility page or by executing the following command:
show cluster local state

Leader Node

One of your P-Node instances will run the control-plane routing protocol, with a leader elected among the instances based on the node ID. The leader node distributes and synchronizes control plane state to the other cluster nodes and handles dynamic ID management and some L7 feature state storage. If the leader instance fails, any node connected to it will move to the Failed state until another P-Node takes over as the leader.
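As a minimal sketch of node-ID-based election, the example below picks the lowest online P-Node ID as the leader. Choosing the lowest ID is an assumption made for illustration; the section states only that the leader is elected among the P-Nodes based on node ID.

# Hypothetical election rule: lowest online P-Node ID (1-4) becomes leader.
def elect_leader(online_p_node_ids: list[int]) -> int:
    candidates = [n for n in online_p_node_ids if 1 <= n <= 4]
    if not candidates:
        raise RuntimeError("no P-Node available to take over as leader")
    return min(candidates)

print(elect_leader([1, 2, 3]))  # 1
print(elect_leader([2, 3]))     # 2: another P-Node takes over after node 1 fails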
Execute the following command to view the current leader node ID:
show cluster leader