WildFire Appliance Cluster Management
To manage a WildFire appliance cluster, you need to know the capabilities of clusters and management recommendations.
Cluster operation and configuration
Configure all cluster nodes identically to ensure consistency in analysis and appliance-to-appliance communication.
Cluster data retention policies
Data retention policies determine how long the WildFire appliance cluster stores different types of samples.
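As a sketch of how a retention policy might be evaluated per sample, consider the following. The verdict names and retention periods here are illustrative assumptions for the sketch, not WildFire's documented defaults:

```python
from datetime import datetime, timedelta

# Illustrative retention periods per verdict; these values are
# assumptions for this sketch, not WildFire's actual defaults.
RETENTION = {
    "benign": timedelta(days=14),
    "grayware": timedelta(days=14),
    "malware": None,  # None means retain indefinitely
}

def should_purge(verdict: str, stored_at: datetime, now: datetime) -> bool:
    """Return True if a stored sample has exceeded its retention period."""
    period = RETENTION.get(verdict)
    if period is None:
        return False  # retained indefinitely
    return now - stored_at > period
```

The key design point is that retention is keyed on the verdict, so different sample types age out on different schedules.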
No communication between WildFire appliance clusters is allowed. Nodes communicate with each other within a given cluster, but do not communicate with nodes in other clusters.
All cluster members must:
Dedicated cluster management interface
The dedicated cluster management interface enables the controller nodes to manage the cluster and is separate from the standard management interface (Ethernet0). Panorama requires you to configure a dedicated cluster management interface.
If the cluster management link goes down between two controller nodes in a two-node configuration, the controller backup node's services and sample analysis continue to run even though there is no management communication with the primary controller node. When the cluster management link goes down, the controller backup node cannot determine whether the primary controller node is still functional, resulting in a split-brain condition. The controller backup node must therefore continue to provide cluster services in case the primary controller node is not functional. When the cluster management link is restored, the data from each controller node is merged.
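One way to picture the merge after the link is restored is a union of each controller's per-sample records, keeping the newer entry when both nodes recorded the same sample. This is a simplified illustrative model, not WildFire's actual merge algorithm:

```python
def merge_records(primary: dict, backup: dict) -> dict:
    """Merge per-sample records from two controllers after a split-brain.

    Each dict maps a sample hash to a (timestamp, data) tuple; on a
    conflict, the record with the newer timestamp wins. Illustrative
    model only; not WildFire's documented merge logic.
    """
    merged = dict(primary)
    for sample_hash, record in backup.items():
        if sample_hash not in merged or record[0] > merged[sample_hash][0]:
            merged[sample_hash] = record
    return merged
```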
You can use the controller node in a WildFire appliance cluster as the authoritative DNS server for the cluster. (An authoritative DNS server serves the actual IP addresses of the cluster members, as opposed to a recursive DNS server, which queries the authoritative DNS server and passes the requested information to the host that made the initial request.)
Firewalls that submit samples to the WildFire appliance cluster should send DNS queries to their regular DNS server, for example, an internal corporate DNS server. The internal DNS server forwards the DNS query to the WildFire appliance cluster controller, based on the query's domain. Using the cluster controller as the DNS server provides several advantages.
The DNS record should not be cached: if a DNS lookup succeeds, the returned TTL is 0, and if a lookup returns NXDOMAIN, both the TTL and the minimum TTL are 0. These values are useful to verify when troubleshooting.
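The effect of a zero TTL can be modeled on the client side: a caching resolver simply never stores such an answer, so every lookup returns to the authoritative server (the cluster controller) and always receives current node addresses. A minimal sketch, not any particular resolver implementation:

```python
import time

class ResolverCache:
    """Toy DNS answer cache that honors TTL; TTL-0 answers are never stored."""

    def __init__(self):
        self._cache = {}  # name -> (expiry time, address)

    def store(self, name: str, address: str, ttl: int) -> None:
        if ttl <= 0:
            return  # TTL 0: never cached, as with the cluster's records
        self._cache[name] = (time.monotonic() + ttl, address)

    def lookup(self, name: str):
        entry = self._cache.get(name)
        if entry is None or time.monotonic() >= entry[0]:
            self._cache.pop(name, None)
            return None  # cache miss: re-query the authoritative server
        return entry[1]
```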
You can administer WildFire clusters using the local WildFire CLI or Panorama. There are two administrative roles available locally on WildFire cluster nodes.
WildFire appliance clusters push a registration list that contains all of the nodes in a cluster to every firewall connected to a cluster node. When you register a firewall with an appliance in a cluster, the firewall receives the registration list. When you add a standalone WildFire appliance that already has connected firewalls to a cluster so that it becomes a cluster node, those firewalls receive the registration list.
If a node fails, the connected firewalls use the registration list to register with the next node on the list.
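The failover behavior above can be sketched as walking the registration list in order until a reachable node answers. The node names and the reachability check below are hypothetical; a real firewall uses its WildFire registration channel:

```python
def register_with_cluster(registration_list, is_reachable):
    """Return the first node in the registration list that is reachable.

    `registration_list` is the ordered node list pushed by the cluster;
    `is_reachable` is a caller-supplied connectivity check (hypothetical
    stand-in for the firewall's actual registration attempt).
    """
    for node in registration_list:
        if is_reachable(node):
            return node
    raise ConnectionError("no WildFire cluster node reachable")
```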
To provide data redundancy, WildFire appliance nodes in a cluster share database, queuing service, and sample submission content; however, the precise location of this data depends on the cluster topology. As a result, WildFire appliances in a cluster undergo data migration or data rearrangement whenever the topology changes. Topology changes include adding and removing nodes, as well as changing the role of an existing node. Data migration can also occur when databases are converted to a newer version, as with the upgrade from WildFire 7.1 to 8.0.
You can view data migration status by issuing status commands from the WildFire CLI. Depending on the quantity of data on the WildFire appliances, migration can take several hours.