Known Issues in Kubernetes Plugin 2.0.0

The following list describes known issues in the Panorama plugin for Kubernetes version 2.0.0.


When configuring a monitoring definition, the monitoring watcher might become stuck in the initialization stage and stop receiving updates.
Workaround: Use the following command to restart the plugin:
request plugins reset-plugin plugin-name kubernetes
This issue is fixed in Panorama Plugin for Kubernetes 2.0.1.


On occasion, CN-MGMT pods fail to connect to Panorama.
Workaround: Commit the Panorama configuration after the CN-MGMT pod successfully registers with Panorama.
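The workaround above can be automated by checking device registration before committing. The following sketch builds the Panorama XML API requests involved; the hostname and API key are placeholders, and the exact response handling is omitted:

```python
# Sketch: gate the Panorama commit on CN-MGMT registration.
# PANORAMA and API_KEY are hypothetical values, not from this document.
from urllib.parse import urlencode

PANORAMA = "panorama.example.com"   # assumed Panorama hostname
API_KEY = "REDACTED"                # assumed API key

def op_url(cmd_xml: str) -> str:
    """Build a Panorama XML API operational-command URL."""
    query = urlencode({"type": "op", "cmd": cmd_xml, "key": API_KEY})
    return f"https://{PANORAMA}/api/?{query}"

def commit_url() -> str:
    """Build the commit URL; call only after CN-MGMT shows as connected."""
    query = urlencode({"type": "commit", "cmd": "<commit></commit>",
                       "key": API_KEY})
    return f"https://{PANORAMA}/api/?{query}"

# First verify the CN-MGMT pod appears in the connected-devices list,
# then issue the commit:
check = op_url("<show><devices><connected/></devices></show>")
```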


If you did not allocate adequate tokens and some CN-NGFW pods are in the 4-hour license grace period, then on a Panorama HA failover the CN-NGFW pods that connect first are allocated licenses. As a result, a CN-NGFW pod that was licensed can become unlicensed based on the order in which it connects to Panorama. To ensure continuous coverage, verify that you have allocated the required number of tokens for securing your deployment.
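The first-come-first-served behavior described above can be illustrated with a short sketch; the pod names and token counts are made up for illustration:

```python
# Sketch: first-come-first-served license allocation after an HA failover.
# Pod names and token counts are illustrative, not from the release notes.
def allocate(tokens_available: int, pods_in_connect_order: list[str]) -> dict[str, bool]:
    """Grant one token per CN-NGFW pod until the pool is exhausted."""
    licensed = {}
    for pod in pods_in_connect_order:
        if tokens_available > 0:
            licensed[pod] = True
            tokens_available -= 1
        else:
            licensed[pod] = False  # falls into the 4-hour grace period
    return licensed

# Before the failover pod-a connected first; afterwards pod-c does.
# With only two tokens for three pods, pod-a loses its license:
before = allocate(2, ["pod-a", "pod-b", "pod-c"])
after = allocate(2, ["pod-c", "pod-b", "pod-a"])
```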


When you uninstall the Kubernetes plugin on Panorama, you must also delete the default template that the plugin automatically creates, either before or after you downgrade to Panorama 9.1 or an earlier version. If you do not delete this template manually, commits on Panorama fail because the template has 60 interfaces, and Panorama 9.1 and earlier versions support only up to 30 interfaces per template.
Workaround: Delete the default template before or after downgrading. The commit on Panorama succeeds after you remove this template.


If you update the number of tokens allocated for licensing the CN-Series firewalls, the license Issued Date is updated to match the current date on Panorama when the plugin next communicates with the licensing server.


When the Kubernetes cluster becomes unreachable from the Panorama plugin, all nodes that were licensed until that point remain licensed. Tokens can be returned to the licensing server only while there is cluster connectivity, so you must reestablish cluster connectivity in the Panorama plugin to reclaim tokens.
Workaround: Delete the cluster configuration on Panorama.
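The token lifecycle described above can be sketched as a small ledger; the one-token-per-node cost and node names are assumptions for illustration:

```python
# Sketch: tokens can be returned only while the cluster is reachable.
# Node names and the one-token-per-node cost are illustrative assumptions.
class TokenLedger:
    def __init__(self, pool: int):
        self.pool = pool
        self.licensed_nodes: set[str] = set()

    def license_node(self, node: str) -> bool:
        """Consume one token for a node, if any tokens remain."""
        if node in self.licensed_nodes:
            return True
        if self.pool > 0:
            self.pool -= 1
            self.licensed_nodes.add(node)
            return True
        return False

    def reclaim(self, node: str, cluster_reachable: bool) -> bool:
        """Return a node's token; fails while the cluster is unreachable."""
        if not cluster_reachable or node not in self.licensed_nodes:
            return False
        self.licensed_nodes.remove(node)
        self.pool += 1
        return True

ledger = TokenLedger(pool=2)
ledger.license_node("node-1")
unreachable = ledger.reclaim("node-1", cluster_reachable=False)  # stays licensed
reachable = ledger.reclaim("node-1", cluster_reachable=True)     # token returned
```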


When you deploy Panorama in an HA setup, each active Panorama peer consumes a license token for each CN-NGFW pod that it manages.
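For capacity planning, this means the token budget scales with the number of active peers; a minimal sketch, with illustrative counts:

```python
# Sketch: token budget for an HA deployment (counts are illustrative).
def tokens_required(active_peers: int, cn_ngfw_pods: int) -> int:
    """Each active Panorama peer consumes one token per CN-NGFW pod."""
    return active_peers * cn_ngfw_pods

# Two active peers each managing five CN-NGFW pods:
needed = tokens_required(active_peers=2, cn_ngfw_pods=5)
```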


API calls from the Kubernetes plugin to the Kubernetes API server sometimes become unresponsive due to network connectivity issues. When this issue occurs, a system log is generated with the following message:
Subprocess hanging for <mon-def-name>. Plugin watcher may be in bad state. To resolve please run following command:
request plugins reset-plugin plugin-name kubernetes
Workaround: When this issue occurs, restart the plugin from the CLI with the following command:
request plugins reset-plugin plugin-name kubernetes


When you add a large number of Kubernetes clusters to Panorama, depending on the number of services running on each cluster, it might take up to 10 minutes to create the service objects and register the IP addresses on Panorama.


If you reference the same device group in more than one Monitoring Definition on Panorama, the tags associated with one cluster and monitoring definition may be shared with the clusters associated with the other monitoring definitions.
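The tag sharing happens because tags register against the device group rather than the monitoring definition. A minimal sketch of that behavior, with made-up cluster and tag names:

```python
# Sketch: tags land in the device group, not the monitoring definition,
# so two definitions sharing one device group see each other's tags.
# Cluster, device-group, and tag names are illustrative.
from collections import defaultdict

device_group_tags: dict[str, set[str]] = defaultdict(set)

def register_tags(device_group: str, cluster: str, tags: set[str]) -> None:
    """Register a cluster's IP-tags into its target device group."""
    device_group_tags[device_group].update(f"{cluster}:{t}" for t in tags)

# Two monitoring definitions pointing at the same device group:
register_tags("dg-shared", "cluster-a", {"ns=web"})
register_tags("dg-shared", "cluster-b", {"ns=db"})
merged = device_group_tags["dg-shared"]  # both clusters' tags intermix
```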
