Known Issues Specific to the WF-500 Appliance

The following list describes known issues specific to WildFire® 8.0 releases running on the WF-500 appliance. For issues that apply to PAN-OS more generally, see also Known Issues Related to PAN-OS 8.0 Releases.
WF500-4893
This issue is now resolved. See PAN-OS 8.0.15 Addressed Issues.
(RADIUS server profile configurations only) You cannot send a commit from a Panorama appliance running a PAN-OS 8.1 release to a WF-500 appliance running a PAN-OS 8.0 release because the RADIUS authentication protocol is incorrectly changed to CHAP authentication.
WF500-4636
This issue is now resolved. See PAN-OS 8.0.15 Addressed Issues.
In rare cases, when you upgrade a WF-500 appliance from a PAN-OS 7.1 release to a PAN-OS 8.0 release, the disk partition becomes full due to the amount of data on the drive. When you try to delete the backup database to free up space, the debug wildfire reset backup-database-for-old-samples CLI command fails and displays the following error: Server error: Client wf_devsrvr not ready.
WF500-4635
(PAN-OS 8.0 releases only) In rare cases where the disk partition becomes full, the WF-500 appliance does not come up after you upgrade from a PAN-OS 7.1 release to a PAN-OS 8.0 release because data migration stops progressing, several processes don’t start, and automatic commits don’t occur.
WF500-4632
(PAN-OS 8.0 releases only) A WF-500 appliance with millions of reports does not come up after you upgrade from a PAN-OS 7.1 release to a PAN-OS 8.0 release because the data migration runs slowly, several processes don't start, and automatic commits don't occur.
WF500-4218
This issue is now resolved. See PAN-OS 8.0.2 Addressed Issues.
During or after an upgrade of a WildFire appliance to a PAN-OS® 8.0 release, rebooting a cluster node (request cluster reboot-local-node) sometimes results in the node going offline or failing to reboot.
Workaround: Use the debug cluster agent restart-agent CLI command to restart the cluster agent and bring the node back online as needed.
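A minimal recovery sketch, assuming you are logged in to the affected node's local CLI (the verification step with show cluster membership is added here for illustration and is not part of the documented workaround):
admin@WF-500> debug cluster agent restart-agent
admin@WF-500> show cluster membership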
WF500-4200
The Create Date shown when using the show wildfire global sample-status sha256 equal <hash> and show wildfire global sample-analysis CLI commands is two hours behind the actual time for WF-500 appliance samples.
WF500-4186
This issue is now resolved. See PAN-OS 8.0.2 Addressed Issues.
In a three-node WildFire appliance cluster, if you decommission the backup controller node or the worker node (request cluster decommission start) and then delete the cluster-related configuration (high-availability and cluster membership) from the decommissioned node, in some cases, the cluster stops functioning. Running the show cluster membership command on the primary controller node shows:
Service Summary: Cluster:offline, HA:peer-offline
In this state, the cluster does not function and does not accept new samples for processing.
Workaround: Reboot the primary controller (run the request cluster reboot-local-node CLI command on the primary controller). After the primary controller reboots, the cluster functions again and accepts new samples for processing.
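For example, on the primary controller (the follow-up check with show cluster membership is illustrative and simply confirms that the Service Summary no longer reports the cluster as offline):
admin@WF-500> request cluster reboot-local-node
admin@WF-500> show cluster membership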
WF500-4176
This issue is now resolved. See PAN-OS 8.0.2 Addressed Issues.
After you remove a node from a cluster, if the cluster was storing sample information on that node, the serial number of that node may appear in the list of storage nodes when you show the sample status (show wildfire global sample-status sha256 equal <value>) even though the node no longer belongs to the cluster.
WF500-4173
This issue is now resolved. See PAN-OS 8.0.2 Addressed Issues.
Integrated reports are not available for firewalls connected to a WF-500 appliance running in FIPS mode.
WF500-4166
In a WildFire appliance cluster with three or more nodes and two controller nodes, an attempt to configure a worker node as a controller node should fail because a cluster can have only two controller nodes (the primary and backup controllers). However, the commit operation on the worker node succeeds, which causes the cluster to see the worker node as a third controller node that is not allowed in the cluster. As a result, the converted worker node cannot connect to the cluster manager and is removed from the cluster. The show cluster task local CLI command displays the following error:
Server error: Cannot connect to 'cluster-mgr' daemon, please check it is running.
Status Report: <node-ip-address>: reported leader <ip-address>, age 0. 
<node-ip-address>: quit cluster due to too many controllers.
Workaround: Perform the following tasks to work around this issue (a consolidated CLI sketch follows the steps):
  1. Reconfigure the node to run in worker mode using the set deviceconfig cluster mode worker command.
  2. Run the commit force command. (A standard commit operation fails and returns a message that the cluster manager is non-responsive.)
  3. After the commit force operation succeeds, reboot the node using the request cluster reboot-local-node command. Until you reboot the node, the node’s application services do not respond.
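A consolidated sketch of these steps on the converted node (configuration-mode and operational-mode prompts shown; exit leaves configuration mode before the reboot):
admin@WF-500# set deviceconfig cluster mode worker
admin@WF-500# commit force
admin@WF-500# exit
admin@WF-500> request cluster reboot-local-node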
WF500-4132
If you remove a node from a two-node WildFire appliance cluster by deleting the high availability (HA) configuration (delete deviceconfig high-availability) and the cluster configuration (delete deviceconfig cluster), the single remaining cluster node cannot process samples.
Workaround: Use either of the following workarounds to enable the remaining cluster node to process samples:
  • Make the cluster node a standalone WildFire appliance—Delete the HA and cluster configurations on the remaining cluster node and reboot the node. The node comes back up as a standalone WildFire appliance.
  • Recreate the cluster—Reconfigure the node you removed as a cluster node by adding the cluster and HA configurations using the following commands so that both nodes come back up as cluster nodes and can process samples (a hypothetical worked example with placeholder values follows the command syntax):
admin@WF-500# set deviceconfig cluster cluster-name <name> interface <cluster-communication-interface> node controller
admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port <port> peer-ip-address <node-port-ip-address>
admin@WF-500# set deviceconfig high-availability election-option priority {primary | secondary}
admin@WF-500# set deviceconfig high-availability interface ha1-backup peer-ip-address <node-backup-ha-interface-ip-address>
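For example, assuming a hypothetical cluster named mycluster, eth3 as the cluster communication interface, eth2 as the HA1 port, and illustrative peer addresses 10.10.10.2 and 10.10.20.2 (all of these values are placeholders, not recommendations), the commands on the removed node might look like this, followed by a commit:
admin@WF-500# set deviceconfig cluster cluster-name mycluster interface eth3 node controller
admin@WF-500# set deviceconfig high-availability enabled yes interface ha1 port eth2 peer-ip-address 10.10.10.2
admin@WF-500# set deviceconfig high-availability election-option priority secondary
admin@WF-500# set deviceconfig high-availability interface ha1-backup peer-ip-address 10.10.20.2
admin@WF-500# commit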
WF500-4098
This issue is now resolved. See PAN-OS 8.0.1 Addressed Issues.
In a three-node WildFire appliance cluster, decommissioning the active (primary) controller node fails. Attempting to decommission the active controller node by running the request cluster decommission start command results in a suspension of services on the node. Use the show cluster membership command to verify that the node services (Service Summary and wildfire-apps-service) are suspended.
Workaround: Instead of using the request cluster decommission start command to decommission the active controller, first fail over the active controller so that it becomes the passive (backup) controller and then decommission it (a consolidated CLI sketch follows these steps):
  1. Ensure that preemption is not enabled (Preemptive: no) by running the show high-availability state command. (With preemption enabled, the active controller resumes its active role when it comes back up after a failover instead of remaining the passive backup controller.)
    If preemption is enabled, disable it on the active controller by running the set deviceconfig high-availability election-option preemptive no command and then commit the configuration.
  2. Fail over the active controller so that it becomes the passive (backup) controller by running the request cluster reboot-local-node operational command on the active controller.
  3. Wait for the former active controller to come up completely. Its new cluster role is the passive controller (as shown in the prompt).
  4. When the node is in the passive controller state, remove the HA configuration (delete deviceconfig high-availability) and the cluster configuration (delete deviceconfig cluster) and then commit the configuration.
  5. Decommission the node by running the request cluster decommission start command.
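A consolidated sketch of these steps on the active controller (enter and exit configuration mode with configure and exit as needed; run each command only after the previous step completes and, after the reboot, only once the node comes up as the passive controller):
admin@WF-500> show high-availability state
admin@WF-500# set deviceconfig high-availability election-option preemptive no
admin@WF-500# commit
admin@WF-500> request cluster reboot-local-node
admin@WF-500# delete deviceconfig high-availability
admin@WF-500# delete deviceconfig cluster
admin@WF-500# commit
admin@WF-500> request cluster decommission start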
WF500-4044
The Panorama management server doesn’t support removing a node from a cluster.
WF500-4001
On the Panorama management server, you can configure an authentication profile and Add groups or administrators to the Allow List in the profile (Panorama > Authentication Profile > <auth-profile> > Advanced). However, WildFire appliances and appliance clusters support only the all value for the allow list in an authentication profile. The analogous WildFire appliance CLI command is set shared authentication-profile <name> allow-list [all], with all as the only allowed parameter.
When you try to push a configuration that specifies a group or name other than all in the authentication profile from Panorama to a WildFire appliance or appliance cluster, the push operation fails. However, Panorama > Managed WildFire Appliances or Panorama > Managed WildFire Clusters indicates that the Last Commit State is commit succeeded despite the failure. The Config Status indicates that the cluster nodes are Out of Sync, and when you click commit succeeded, the Last Push State Details displays an error message.
For example, if you Add a group named abcd to an authentication profile named auth5 in Panorama and then attempt to push the configuration to a WildFire appliance cluster, Panorama returns the following error: authentication-profile auth5 allow-list 'abcd' is not an allowed keyword. This is because WildFire appliances and appliance clusters treat the allow list argument as a keyword, not as a variable, and the only keyword allowed is all.
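For reference, the only form of the allow-list setting that the WildFire appliance CLI accepts uses the all keyword (the profile name auth5 is taken from the example above):
admin@WF-500# set shared authentication-profile auth5 allow-list all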
WF500-3966
The request cluster join ip <ip-address> CLI command is not functional; don’t use it.
WF500-3935
WildFire appliances build and release all untested signatures to the connected firewalls every five minutes, which is the maximum time that a signature remains untested (not released to firewalls). When a WildFire appliance joins a cluster, if any untested (unreleased) signatures are on the appliance, they may be lost instead of migrating to the cluster, depending on when the last build of untested signatures occurred.
WF500-3892
The request cluster reboot-all-nodes CLI command is not functional; don’t use it.
Workaround: To reboot all nodes in a cluster, reboot each node individually using the request cluster reboot-local-node command from the node’s local CLI.
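For example, log in to each node in turn (primary controller, backup controller, and each worker node) and run the reboot command from that node's local CLI:
admin@WF-500> request cluster reboot-local-node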
WF500-1584
When using a web browser to view a WildFire Analysis Report from a firewall that uses a WF-500 appliance for file sample analysis, the report does not display until the browser downloads the WF-500 certificate. This issue occurs after upgrading a firewall and the WF-500 appliance to a PAN-OS 6.1 or later release.
Workaround: Browse to the IP address or hostname of the WF-500 appliance, which temporarily downloads the certificate into the browser. For example, if the IP address of the WF-500 is 10.3.4.99, open a browser and enter https://10.3.4.99. You can then access the report from the firewall by selecting Monitor > Logs > WildFire Submissions, clicking the log details, and then clicking WildFire Analysis Report.
PAN-95313
On the WF-500 appliance, when you select to reformat the database and log partitions as a corrective action while in maintenance mode, the operation fails. Subsequently, automatic commits don't happen and several processes don't start even if you then reboot the appliance.
