Known Issues Specific to the WF-500 Appliance
The following list includes known issues specific to WildFire® 8.0 releases running on the WF-500 appliance. See also the specific and general Known Issues Related to PAN-OS 8.0 Releases.
(RADIUS server profile configurations only) You cannot send a commit from a Panorama appliance running a PAN-OS 8.1 release to a WF-500 appliance running a PAN-OS 8.0 release because the RADIUS authentication protocol is incorrectly changed to CHAP authentication.
In rare cases when you upgrade a WF-500 appliance from a PAN-OS 7.1 release to a PAN-OS 8.0 release, the disk partition becomes full due to the amount of data on the drive. When you try to delete the backup database to free up space, the debug wildfire reset backup-database-for-old-samples CLI command fails and displays the following error:
Server error : Client wf_devsrvr not ready.
(PAN-OS 8.0 releases only) In rare cases where the disk partition becomes full, the WF-500 appliance does not come up after you upgrade from a PAN-OS 7.1 release to a PAN-OS 8.0 release because data migration stops progressing, several processes don’t start, and automatic commits don’t occur.
(PAN-OS 8.0 releases only) A WF-500 appliance with millions of reports does not come up after you upgrade from a PAN-OS 7.1 release to a PAN-OS 8.0 release because the data migration runs slowly, several processes don't start, and automatic commits don't occur.
As part of and after upgrading a WildFire appliance to a PAN-OS® 8.0 release, rebooting a cluster node (request cluster reboot-local-node) sometimes results in the node going offline or failing to reboot.
Workaround: Use the debug cluster agent restart-agent CLI command to bring the node back online and to restart the cluster agent as needed.
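A minimal recovery sketch, assuming an illustrative appliance prompt; the follow-up check with show cluster membership is an assumption based on the verification command used elsewhere in these notes:
  admin@WF-500> debug cluster agent restart-agent
  admin@WF-500> show cluster membership
The first command restarts the cluster agent on the affected node; the second confirms whether the node is back online in the cluster.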
The Create Date shown when using the show wildfire global sample-status sha256 equal <hash> and show wildfire global sample-analysis CLI commands is two hours behind the actual time for WF-500 appliance samples.
In a three-node WildFire appliance cluster, if you decommission the backup controller node or the worker node (request cluster decommission start) and then delete the cluster-related configuration (high-availability and cluster membership) from the decommissioned node, in some cases the cluster stops functioning, as the output of the show cluster membership command on the primary controller node indicates.
In this state, the cluster does not function and does not accept new samples for processing.
Workaround: Reboot the primary controller (run the request cluster reboot-local-node CLI command on the primary controller). After the primary controller reboots, the cluster functions again and accepts new samples for processing.
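A minimal sketch of this workaround, run on the primary controller (the prompt is illustrative and the follow-up check is an assumption):
  admin@WF-500> request cluster reboot-local-node
  admin@WF-500> show cluster membership
After the reboot, the membership output should show a functioning cluster that accepts new samples again.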
After you remove a node from a cluster, if the cluster was storing sample information on that node, the serial number of that node may appear in the list of storage nodes when you show the sample status (show wildfire global sample-status sha256 equal <hash>) even though the node no longer belongs to the cluster.
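For reference, a sample-status query takes this form; the hash value is a placeholder:
  admin@WF-500> show wildfire global sample-status sha256 equal <sample-sha256>
The list of storage nodes in the output may still include the serial number of the removed node.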
Integrated reports are not available for firewalls connected to a WF-500 appliance running in FIPS mode.
In a WildFire appliance cluster with three or more nodes and two controller nodes, when you try to configure a worker node as a controller node, the change should fail because a cluster can have only two controller nodes (primary and backup). However, the commit operation on the worker node succeeds, and the cluster then sees the worker node as a third controller node, which is not allowed. This prevents the converted worker node from connecting to the cluster manager, and the node is removed from the cluster. The show cluster task local CLI command displays an error.
Workaround: Perform the following tasks to work around this issue:
If you remove a node from a two-node WildFire appliance cluster by deleting the high availability (HA) configuration (delete deviceconfig high-availability) and the cluster configuration (delete deviceconfig cluster), the single remaining cluster node cannot process samples.
Workaround: Use either of the following workarounds to enable the remaining cluster node to process samples:
This issue is now resolved. See PAN-OS 8.0.1 Addressed Issues.
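For reference, a sketch of the removal sequence that triggers this condition, entered in configuration mode on the node being removed (the prompt and the explicit commit step are illustrative):
  admin@WF-500> configure
  admin@WF-500# delete deviceconfig high-availability
  admin@WF-500# delete deviceconfig cluster
  admin@WF-500# commit
Both delete commands are the ones named above; after the commit, the remaining node exhibits the issue unless one of the workarounds is applied.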
In a three-node WildFire appliance cluster, decommissioning the active (primary) controller node fails. Attempting to decommission the active controller node by running the request cluster decommission start command results in a suspension of services on the node. Use the show cluster membership command to verify that the node services (wildfire-apps-service) are suspended.
Workaround: Instead of using the request cluster decommission start command to decommission the active controller, fail over the active controller so that it becomes the passive (backup) controller first, and then decommission the passive controller.
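A sketch of the decommission portion of this workaround, run on the controller after it has become the passive (backup) controller; the prompt is illustrative and the verification step is an assumption:
  admin@WF-500> request cluster decommission start
  admin@WF-500> show cluster membership
The failover itself is performed first, as described above, so that the decommission runs against the passive controller rather than the active one.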
The Panorama management server doesn’t support removing a node from a cluster.
Workaround: Remove a node from a cluster locally.
On the Panorama management server, you can configure an authentication profile and Add groups or administrators to the Allow List in the profile. However, WildFire appliances and appliance clusters support only the all value for the groups in the allow list of an authentication profile. The analogous WildFire appliance CLI command is set shared authentication-profile, with all as the only allowed allow-list value.
When you try to push a configuration that specifies a group or name other than all in the authentication profile from Panorama to a WildFire appliance or appliance cluster, the push operation fails. However, Managed WildFire Appliances and Managed WildFire Clusters indicate that the Last Commit State is commit succeeded despite the failure. The Config Status indicates that the cluster nodes are Out of Sync, and when you click commit succeeded, the Last Push State Details displays an error message.
For example, if you Add a group named abcd to an authentication profile named auth5 in Panorama and then attempt to push the configuration to a WildFire appliance cluster, Panorama returns the following error:
authentication-profile auth5 allow-list ‘abcd’ is not an allowed keyword.
This is because WildFire appliances and appliance clusters see the allow list argument as a keyword, not as a variable, and the only keyword allowed is all.
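A sketch of the only supported form on the WildFire appliance CLI, reusing the auth5 profile name from the example above (the prompt is illustrative and exact option placement may vary by release):
  admin@WF-500> configure
  admin@WF-500# set shared authentication-profile auth5 allow-list all
  admin@WF-500# commit
Any value other than all in the allow list is rejected as an unrecognized keyword.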
The request cluster join ip CLI command is not functional; don’t use it.
WildFire appliances build and release all untested signatures to the connected firewalls every five minutes, which is the maximum time that a signature remains untested (not released to firewalls). When a WildFire appliance joins a cluster, if any untested (unreleased) signatures are on the appliance, they may be lost instead of migrating to the cluster, depending on when the last build of untested signatures occurred.
The request cluster reboot-all-nodes CLI command is not functional; don’t use it.
Workaround: To reboot all nodes in a cluster, reboot each node individually using the request cluster reboot-local-node command from the node’s local CLI.
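A sketch of the per-node workaround, repeated on each cluster node in turn (the prompt is illustrative; waiting for each node to rejoin before rebooting the next is an assumption, not a documented requirement):
  admin@WF-500> request cluster reboot-local-node
  admin@WF-500> show cluster membership
Run the membership check after each reboot to confirm the node has come back before moving on to the next node.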
When using a web browser to view a WildFire Analysis Report from a firewall that uses a WF-500 appliance for file sample analysis, the report does not display until the browser downloads the WF-500 certificate. This issue occurs after upgrading a firewall and the WF-500 appliance to a PAN-OS 6.1 or later release.
Workaround: Browse to the IP address or hostname of the WF-500 appliance, which will temporarily download the certificate into the browser. For example, if the IP address of the WF-500 is 10.3.4.99, open a browser and enter https://10.3.4.99. You can then access the report from the firewall by selecting the log details and then clicking WildFire Analysis Report.
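If you want to confirm that the appliance is serving its certificate before retrying the browser, an openssl check such as the following can help; this is an added suggestion using the example address above, not part of the documented workaround:
  openssl s_client -connect 10.3.4.99:443 -showcerts
The command prints the certificate chain the WF-500 presents, which is the same certificate the browser must download before the report displays.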