Intelligent Traffic Offload

With Intelligent Traffic Offload on VM-Series firewalls and NVIDIA BlueField-3 DPUs you can unlock 5X performance gains. Intelligent Traffic Offload expands the throughput handling capability of the VM-Series firewall, which is especially valuable for customers whose traffic type and pattern include offloadable flows.
The Intelligent Traffic Offload (ITO) service routes the first few packets of a flow to the firewall for inspection to determine whether the rest of the packets in the flow should be inspected or offloaded. This decision is based on policy or whether the flow can be inspected (for example, encrypted traffic can’t be inspected). By only inspecting flows that can benefit from security inspection, the overall load on the firewall is greatly reduced and VM-Series firewall performance increases without sacrificing security.

Intelligent Traffic Offload: Support for Layer 3 (Static and Dynamic Routing)

Intelligent Traffic Offload (ITO) is a VM-Series firewall security subscription. When ITO is configured on a VM-Series firewall whose underlying compute includes the supported NVIDIA BlueField DPU infrastructure, it increases the throughput handling capacity of the firewall.
In previous releases, ITO required that you deploy your VM-Series firewall in virtual wire mode. This limitation prevented deployments in Layer 3 mode supporting static or dynamic routing. This release removes that limitation by allowing you to deploy your VM-Series firewall with Intelligent Traffic Offload for L3 traffic supporting static and dynamic routing.
With dynamic routing, you attain stable, high-performing, and highly available L3 routing through profile-based filtering lists and conditional route maps which can be used across logical routers. These profiles provide finer granularity to filter routes for each dynamic routing protocol and improve redistribution across multiple protocols. When combined with NAT for IPv4, you can extend security policy to protect end user devices from being exposed to outside threats.
Additionally, you can use Intelligent Traffic Offload for NAT (IPv4). The same previous limitation, which required you to deploy the VM-Series firewall in virtual wire mode, is now removed in this release, allowing you to deploy the firewall with an ITO subscription using NAT for perimeter security. Now you can deploy your ITO subscription in Layer 3 mode that supports NAT for IPv4, which provides robust security features that prevent end-user devices from being exposed to outside threats.
NAT support extends to NAT44 and DIPP for both Intelligent Traffic Offload (DPU-based) deployments and software cut-through traffic inspection.
Layer 3 mode significantly expands your ability to apply security within your data centers, relying on firewalls to both switch and route traffic between network domains.
The figure below shows how the packet flow works in layer 3 mode. It illustrates the host using NVIDIA BlueField DPU and leaf/router.
In the above figure, the packet flow has the following characteristics:
  • Interfaces e1/1 and e1/2 are configured in Layer 3 mode (a configuration sketch follows the packet flow steps below).
  • VR1 is configured with static or dynamic routes to the 5G layer 3 router or a UPF and to the Internet peer router.
  • Tagged and untagged traffic are supported. The router and DPU or NIC can be in access or trunk mode.
This is the packet flow process for Layer 3 mode:
  1. The packet is sent from the 5G UPF (Layer 3 router) to the Layer 3 Leaf/Router.
  2. The packet arrives at router port PA1, which is connected to the DPU and SmartNIC PF0 and is programmed to add a vlanX tag to the packets.
  3. The packet arrives at DPU port pf0vf0 and is delivered to the VM-Series with or without removing the VLAN.
  4. The firewall is running in L3 mode. It finds the packet’s next hop and its MAC address.
  5. The firewall updates the DPU and SmartNIC through gRPC with the new destination MAC and vlanY, if required.
  6. The tagged packet with vlanY arrives at router port PA2 from the DPU/SmartNIC port PF1.
  7. If the packet was untagged, the router port PA2 may add tag vlanY.
  8. The packet is sent to the next-hop address and delivered to the Internet peer. In case of dynamic routing, with any route update, the VM-Series updates the DPU with the new next-hop MAC address.
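As a reference for the setup described above, the following is a minimal PAN-OS CLI sketch that places two interfaces in Layer 3 mode, assigns them to zones and to virtual router VR1, and adds a static default route. The zone names and IP addresses are illustrative assumptions; replace them with values from your topology, and configure a dynamic routing protocol on VR1 instead of (or in addition to) the static route if needed.
admin@PA-VM> configure
admin@PA-VM# set network interface ethernet ethernet1/1 layer3 ip 10.1.1.1/24
admin@PA-VM# set network interface ethernet ethernet1/2 layer3 ip 203.0.113.1/24
admin@PA-VM# set zone trust network layer3 ethernet1/1
admin@PA-VM# set zone untrust network layer3 ethernet1/2
admin@PA-VM# set network virtual-router VR1 interface [ ethernet1/1 ethernet1/2 ]
admin@PA-VM# set network virtual-router VR1 routing-table ip static-route default-route destination 0.0.0.0/0 nexthop ip-address 203.0.113.254
admin@PA-VM# commit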

Intelligent Traffic Offload: NAT Support for Virtual Wire and L3

Typically, in 5G deployments and certain hyperscale enterprise environments, the VM-Series virtual NGFW secures the Internet perimeter or north-south traffic. In such deployments, you can take advantage of NAT mode to ensure that end-user devices are not exposed to the Internet.
Palo Alto Networks provides NAT support with ITO capabilities in both virtual wire (vWire) and layer 3 mode of deployment. You can now configure multiple modes of NAT with IPv4, such as source NAT with dynamic IP and Port translation, destination NAT port translation, and forwarding.
The image below illustrates how this works from a packet flow point of view, showing the source and destination ports for gaming devices connecting through to gaming portals.
Here’s how the NAT policy is configured (a CLI sketch follows the packet flow steps below):
  • VM-Series has the NAT policy configured to perform source NAT for IP and port to dynamic IP and port mapping.
  • The NAT policy defined can also perform destination IP and port translation and forwarding.
This is the packet flow process when a NAT policy is configured:
  1. The packet is sent from the 5G device through the 5G UPF or Layer 3 router with a source IP address and port 172.10.20.30:320.
  2. The packet arrives at the VM-Series firewall, where the NAT policy is defined to do source NAT to dynamic IP address and port translation. The source IP: Port 172.10.20.30:320 is translated to 192.168.100.15:545.
  3. The VM-Series firewall does a layer 2 and layer 3 re-write on the packet based on the defined NAT policy, which can be source NAT with dynamic IP address and port translation or destination NAT with port translation and forwarding.
  4. The VM-Series firewall updates the DPU and SmartNIC through gRPC with the NAT translation.
  5. The SNAT DIPP maintains persistence by retaining the binding of a private source IP address and port pair to a specific public (translated) source IP address and port pair for subsequent sessions with the same original source IP address and port pair. In this case, 172.10.20.30:320 and its translated 192.168.100.15:545 address are persistent for multiple destination IP:Ports.
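The following is a minimal, illustrative PAN-OS CLI sketch of a source NAT rule with dynamic IP and port (DIPP) translation similar to the flow above. The rule name, zone names, and translated address are assumptions (the translated address mirrors the 192.168.100.15 example); adapt them to your deployment.
admin@PA-VM# set rulebase nat rules SNAT-DIPP from trust to untrust source any destination any service any
admin@PA-VM# set rulebase nat rules SNAT-DIPP source-translation dynamic-ip-and-port translated-address 192.168.100.15
admin@PA-VM# commit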

Intelligent Traffic Offload: Requirements

VM-Series with ITO requires a dedicated BlueField DPU on the x86 physical host. Active/Passive high availability for VM-Series firewalls is supported.
Starting with PAN-OS 11.2.0, ITO is supported on NVIDIA BlueField-3 DPUs, in addition to the existing NVIDIA BlueField-2 DPU support.
You can deploy only one VM-Series firewall and one BlueField-2 DPU or BlueField-3 DPU per host.
  • Network switch with two available 100Gbps ports (four for HA).
    If you want to use VLANs, make sure your switch supports VLAN tagging.
  • X86 physical host hardware requirements.
  • X86 host software requirements:
    • Ubuntu 22.04, with kernel version 5.15-12.2.
    • BlueField binary bootstream (BFB) version: DOCA 2.6.0 with BlueField OS 4.6.0 for Ubuntu 22.04.
      Accept the End User License Agreement to start the download.
  • Virtual machine for the VM-Series firewall.
    • PAN-OS 11.2.0 or later.
    • Corresponding VM-Series Plugin with PAN-OS 11.2.0 or later.
    • To license Intelligent Traffic Offload with PAN-OS 11.2.0 or later and the corresponding VM-Series plugin, create a Software NGFW deployment profile for 11.2.0 and above, with a minimum of 6 vCPUs and the Intelligent Traffic Offload service. The profile can include other security services.
    • You can use ITO with VM-Flex from 6 vCPUs up to 64 vCPUs.

Intelligent Traffic Offload: Interfaces

An Intelligent Traffic Offload deployment connects three types of interfaces:
  • PAN-OS virtual interfaces:
    • eth0: management interface
    • eth1, eth2: dataplane
    • eth3: HA interface
    • eth4: gRPC interface
  • BlueField DPU physical interfaces (created from the host OS).
  • Host physical interfaces for the BlueField-2 DPU 100GbE ports or the BlueField-3 DPU 200GbE ports (created from the host OS).
You connect the PAN-OS interfaces to the BlueField DPU through SR-IOV virtual functions (VFs) you create on the physical host.
In the following figure, the two BlueField DPU ports are shown as Physical Functions PF0 and PF1. These PFs can be observed from the host side as enp4s0f0 and enp4s0f1, and are divided into multiple VFs for SR-IOV functionality.
  • The first VF for each PF must be the data port (eth1:pf0vf0).
  • An additional VF is required for the control channel for the gRPC client/server interface (eth4:pf0vf1).
  • VFs from the host side are as follows:
    • The first VF on each PF (enp4s0f0v0 and enp4s0f1v0 on the host) is represented as pf0vf0 and pf1vf0 on the BlueField DPU and is used for data.
    • The second VF on PF0 (enp4s0f0v1 on the host) is represented as pf0vf1 and is used for gRPC control traffic.
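To confirm the VF layout from the host side, you can list the VFs under each physical function. These are standard Linux commands; the interface names are the examples used above.
$ ip link show enp4s0f0          # lists the PF and its VFs (vf 0, vf 1)
$ ip link show enp4s0f1
$ lspci | grep -i "virtual function"     # shows the SR-IOV VFs on the PCI bus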

Intelligent Traffic Offload: Scalability

The current NVIDIA BlueField-2 DPU scalability limitations are as follows:
  • Session table capacity: 500,000 sessions
  • Connections per second: 3,500
  • Offload rate: ~90 Gbps for 1500-byte packets on a BlueField-2 100GbE DPU
The current NVIDIA BlueField-3 DPU scalability limitations are as follows:
  • Session table capacity: 500,000 sessions
  • Connections per second: 3,500
  • Offload rate: ~160 Gbps for 1500-byte packets on a BlueField-3 200GbE DPU
If offload traffic to the BlueField DPU exceeds 3,500 sessions per second, or the offload session table is full, traffic still flows through the VM-Series firewall and is inspected. When the rate drops below 3,500 sessions per second, intelligent traffic offload to the BlueField DPU resumes.
Active/Passive HA is supported for the VM-Series firewalls running on physical hosts with identical configurations.
Intelligent Traffic Offload does not support the accelerated aging session setting.
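To gauge how close you are to these limits, check the firewall session statistics. show session info is a standard PAN-OS operational command; the exact labels in its output (for example, the connection establish rate) can vary slightly by release.
admin@PA-VM> show session info
admin@PA-VM> show session info | match rate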

Intelligent Traffic Offload: High Availability

Active/Passive HA is supported in both the vWire and Layer 3 modes of deployment for a pair of VM-Series firewalls.
  • The firewalls must be installed on physical hosts with the BlueField-2 DPU configured as specified in Intelligent Traffic Offload: Requirements.
  • For the HA2 interface (see the figures in Active Packet Flow and Passive Packet Flow), use the same Mellanox interface (cx-3, cx-4, cx-5, or cx-6) on both hosts.
    The HA interfaces can be configured on any other vendor NIC supported within software firewalls.
  • Optionally, to support traffic switching, the hosts must be on separate VLANs so you can use VLAN tags to select the primary.
Intelligent Traffic Offload on VM-Series in HA focuses on VM-Series firewall availability. Each firewall maintains a session table, and each BlueField DPU maintains a flow table. The HA configuration synchronizes the active session table, ensuring it is mirrored to the passive firewall at runtime. The session table stores both sessions that require inspection and sessions that are marked for offload.
HA uses the PAN-OS interface eth3, which is on a NIC on the VM-Series firewall. Eth3 is used to select the active firewall and to synchronize the VM-Series firewall session tables on the active/passive pair.

Active Packet Flow

The following diagram steps through the active packet flow for an HA configuration in a vWire mode of deployment that uses an optional VLAN configuration. The HA configuration and packet flow remain similar for a Layer 3 mode of deployment.
  1. Packet is sent from the client application to the network switch.
  2. The packet arrives at the switch port that is programmed to add a VLAN 100 tag to the packets.
  3. The tagged packets can only go to Port Pa1 as the interface for port Ps1 is down because that firewall is in passive mode.
  4. The packet arrives at port Pa1 and VLAN 100 is removed from the packet and the packet is delivered to the firewall eth1.
  5. The firewall is running in vWire mode so the packet is processed by the firewall and then sent out eth2.
    If the firewall is running in L3 mode, it will find the packet’s next hop and its MAC address. The firewall will then update the DPU and SmartNIC through gRPC with the new destination MAC and VLAN, if needed.
  6. The packet arrives at port Pa2 and VLAN 200 is added.
  7. The packet is sent out port Pa2 and can only be delivered to port Ps because the other VLAN 200 port Ps2 is down.
  8. The packet arrives at port Ps and the VLAN 200 tag is removed.
  9. The packet is sent out port Ps with no VLAN tag.
  10. The packet is delivered to the server.

Failover Event

A failover event occurs when there is either a notification from the active VM-Series firewall or the passive firewall detects that the active is not responding. When this happens the network connections to ports Pa1 and Pa2 go down and the network connections to ports Ps1 and Ps2 become active.

Passive Packet Flow

When the VM-Series firewall is in the passive state, the BlueField DPU on the passive member is live but does not pass traffic until there is a failover and the co-located VM-Series firewall becomes active. The following diagram steps through the passive packet flow for an HA configuration that uses an optional VLAN configuration.
  1. The packet is sent from the client application to the network switch.
  2. The packet arrives at the switch port that is programmed to add a VLAN 100 tag to the packets.
  3. The tagged packets can only go to port Ps1 because the interface for port Pa1 is down and this firewall has now moved from passive to active.
  4. The packet arrives at port Ps1 and VLAN 100 is removed from the packet and the packet is delivered to the firewall eth1.
  5. The firewall is running in vWire mode so the packet is processed by the firewall and then sent out eth2.
    If the firewall is running in L3 mode, it will find the packet’s next hop and its MAC address. The firewall will then update the DPU and SmartNIC through gRPC with the new destination MAC and VLAN, if needed.
  6. The packet arrives at port Ps2 and VLAN 200 is added.
  7. The packet is sent out port Ps2 and can only be delivered to port Ps because the other VLAN 200 port Pa2 is down.
  8. The packet arrives at port Ps and the VLAN 200 tag is removed.
  9. The packet is sent out port Ps with no VLAN tag.
  10. The packet is delivered to the server.

Install the BlueField DPU

Install the BlueField DPU on the physical host before you install the VM-Series firewall:
  1. Install the BlueField-2 DPU or BlueField-3 DPU on the host machine as directed in the corresponding NVIDIA BlueField Ethernet DPU User Guide.
  2. Install the BlueField drivers as directed in the NVIDIA BlueField-3 DPU Software Quick Start Guide.

Install the VM-Series Firewall

The standard VM-Series firewall installation for KVM installs PAN-OS. Follow the installation steps in the following sections.

Enable Virtual Functions

As mentioned in Intelligent Traffic Offload: Interfaces, virtual functions (VFs) connect PAN-OS interfaces to the BlueField DPU.
The maximum number of virtual functions (VFs) per port is 2. You need a total of three: two for the data path and one for the management interface.
PAN-OS 11.2.0 and later supports NVIDIA BlueField-2 and BlueField-3 DPUs.
  1. Enable virtual functions on the host machine.
    • By default the BlueField DPU uses the first VF for the datapath, i.e. enp4s0f0v0 and enp4s0f1v0 in the following example.
    • The other VF, enp4s0f0v1, is used for the management interface for the service running on the BlueField card (not to be confused with the VM-Series firewall management interface).
    $ cat /sys/class/net/enp4s0f0/device/sriov_totalvfs
       8
    $ echo 2 > /sys/class/net/enp4s0f0/device/sriov_numvfs
    $ cat /sys/class/net/enp4s0f1/device/sriov_totalvfs
       8
    $ echo 2 > /sys/class/net/enp4s0f1/device/sriov_numvfs
  2. Allocate VFs to the VM-Series firewall from the KVM hypervisor.
    The Guest PAN-OS won’t boot unless VFs are allocated to the VM.
    1. Shut off the VM.
    2. On KVM use virt-manager to add VFs to the VM (a virsh-based alternative is sketched after these steps).
      • Select Add Hardware, select VF0 of PF1, and click Finish.
      • Select Add Hardware, select VF0 of PF0, and click Finish.
      • Select Add Hardware, select VF1 of PF0, and click Finish.
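As an alternative to virt-manager, you can attach the same VFs from the command line with virsh and a libvirt hostdev definition. This is a sketch only; the PCI address below is a placeholder that you must replace with the actual VF address reported by lspci, and the VM should be shut off when you attach with --config.
<!-- vf.xml: one hostdev entry per VF; replace the PCI address with your VF's address -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
  </source>
</hostdev>
$ virsh attach-device <vm-name> vf.xml --config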

Check the BlueField DPU System

The BlueField DPU first communicates with the host when the rshim driver is installed on the host. The rshim driver provides a tty interface (accessible through minicom) and a networking interface called tmfifo_net0. With the tmfifo_net0 interface you can SSH in to the BlueField DPU from the host. The rshim driver runs on the x86 hypervisor OS; the OFED installation installs the rshim driver by default.
  1. Log in to the host machine.
    $ ssh user@<host-ip-address>
    $ password: 
  2. If the host network interface for the Rshim driver does not have an IP address, you must create one.
    $ ip addr add dev tmfifo_net0 192.168.100.1/24
  3. From the host machine log in to the BlueField DPU subsystem.
    $ ssh ubuntu@192.168.100.2
    $ password: <fake-password>
    If this is your first login the system prompts you to replace the default password with a new password.
  4. Change the default password on the BlueField DPU.
    Log in to BlueField DPU with the initial username as ubuntu and the password ubuntu.
    Once you log in, the system prompts you to set up a new password.
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for ubuntu.
    Current password: *****
    New password:  *****
    Retype new password: *****
    passwd: password updated successfully
    Log out and log in with your new password.
  5. Check the software version.
    $ ofed_info -s
    This should return the following version or later:
    MLNX_OFED_LINUX-24.01-0.3.3
  6. Check that the BlueField DPU is in the correct mode.
    The correct mode is embedded CPU function ownership mode. See the Embedded CPU Function Ownership Mode documentation for instructions to check and configure the mode.
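    As a sketch of such a check from the host (the MST device name is an assumption based on the BlueField-2 device ID; follow the NVIDIA documentation above for the authoritative procedure):
    $ sudo mst start     # start the Mellanox Software Tools service to expose the MST device
    $ sudo mlxconfig -d /dev/mst/mt41686_pciconf0 query INTERNAL_CPU_MODEL
    A value of EMBEDDED_CPU(1) indicates embedded CPU function ownership mode.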

Install or Upgrade the BlueField Bootstream Software

Follow these steps to ensure you have the latest BlueField bootstream (BFB) software for the BlueField-2 DPU. The BFB includes the BlueField OS and other software such as drivers and network interfaces.
  1. Download the BFB package to the physical host for the BlueField-2 DPU.
    Get the latest version of the driver for the OS running on the DPU ARM cores from the NVIDIA website; you must accept the end-user license agreement to download.
  2. Install the BFB from the Rshim boot location on the physical host.
    The filename below (the string starting with DOCA and ending with .bfb) does not contain spaces. Enter the command on a single line.
    $ cat <packagename> > /dev/rshim0/boot
    For example, cat DOCA_2.6.0_BSP_4.6.0_Ubuntu_22.04-5.24-01.prod.bfb > /dev/rshim0/boot
  3. Log in to the BlueField DPU.
    Use the new password you created in Check the BlueField DPU System.
    $ ssh ubuntu@192.168.100.2
    $ password: 
  4. Apply the firmware upgrade on the BlueField DPU.
    Enter the following command on a single line.
    $ sudo /opt/mellanox/mlnx-fw-updater/firmware/mlxfwmanager_sriov_dis_aarch64_41686
  5. Power cycle the system.
    Log off the BlueField DPU and return to Linux host.
    $ ipmitool chassis power cycle
  6. Log in to the BlueField DPU.
    $ ssh ubuntu@192.168.100.2
    $ password: 
  7. Start the opof (open offload) service on the BlueField DPU. opof is a standalone service at this time.
    The VFs must exist before you start opof.
    $ opof_setup_highavail
    $ service opof restart
  8. Verify the opof service is running without issues.
    $ service opof status

Install or Upgrade the Debian Package

If the Debian package version is earlier than 1.0.4 you must upgrade.
  1. On the BlueField-2 DPU, check the version of the opof package.
    $ opof -v
    If it is earlier than 1.0.4, it must be upgraded.
  2. Add the NVIDIA repository for packages.
    $ cd /etc/apt/sources.list.d
    Enter each wget command all on one line. There are no spaces in the URLs:
    $ wget https://linux.mellanox.com/public/repo/doca/1.0/ubuntu20.04/doca.list
    $ wget -qO - https://linux.mellanox.com/public/repo/doca/1.0/ubuntu20.04/aarch64/GPG-KEY-Mellanox.pub | sudo apt-key add -
    $ apt update
  3. On the BlueField DPU check the Debian package in the repository.
    $ apt search opof
    Sorting... Done
    Full Text Search... Done
    opof/now 1.0.4 arm64 [installed,local]
      Nvidia Firewall Open Offload Daemon
  4. On the BlueField DPU, uninstall the obsolete Debian package.
    $ apt remove opof
  5. Install the new Debian package.
    $ apt install opof
  6. Set up and restart the opof service.
    $ opof_setup
    $ service opof restart
  7. Verify the opof service is running without issues.
    $ service opof status

Run Intelligent Traffic Offload

This solution requires a subscription to the Intelligent Traffic Offload service and a minimum of 8 physical cores on the physical host for the best performance and throughput. For example, on a 10 vCPU VM-Series firewall, PAN-OS by default allocates 2 cores for Intelligent Traffic Offload, 2 cores for management processes, and the remaining 4 cores for dataplane processing.
  • Set Up Intelligent Traffic Offload on the VM-Series Firewall
  • Set Up the Intelligent Traffic Offload Service on the BlueField-2 DPU
  • Start or Restart the Intelligent Traffic Offload Service
  • Get Service Status and Health

Set Up Intelligent Traffic Offload on the VM-Series Firewall

Follow these steps to enable Intelligent Traffic Offload on PAN-OS.
  1. Bring up the PAN-OS VM. This assumes that you already have a VM instance created and are restarting it.
    $ virsh start <vm-name>
  2. Use SSH to log in to the VM-Series firewall management interface.
    $ ssh admin@<panos-management-IP-address>
    $ admin@PA-VM>
  3. Verify that Intelligent Traffic Offload is installed and licensed.
    admin@PA-VM> show intelligent-traffic-offload
    Intelligent Traffic Offload:
      Configuration            : Enabled
      Operation Enabled        : True
      Min number packet        : 8
      Min Rate                 : 95
      TCP ageing               : 12-
      UDP ageing               : 20
    Configuration: Enabled means Intelligent Traffic Offload is licensed.
    Operation Enabled: True means you have rebooted a configured device.
  4. Enable Intelligent Traffic Offload.
    Use the following command to enable ITO.
    admin@PA-VM> set session offload yes
    You can also use set session offload no to disable the ITO without rebooting the system.
    Use the set session offload command without rebooting when you first enable the feature. However, if you choose to disable ITO at a later time, you must reboot. Additionally, if you choose to enable ITO after deploying your firewall, you must reboot. For non-DPU ITO or software cut-through environments rebooting is not necessary.
  5. Validate Intelligent Traffic Offload.
    admin@PA-VM> show session info | match offload
    Hardware session offloading:      True
    To view global counters, use the following command:
    admin@PA-VM> show counter global | match flow_offload

Set Up the Intelligent Traffic Offload Service on the BlueField DPU

The VM-Series firewall must be configured as described in Set Up Intelligent Traffic Offload on the VM-Series Firewall.
  1. From the host machine, log in to the BlueField DPU complex.
    $ ssh ubuntu@192.168.100.2
    $ password: <fake-password>
    $ ubuntu> sudo -i
  2. Set up the preliminary configuration in the BlueField-2 DPU OS.
    root@bf2SmartNIC:~# opof_setup_highavail
    [ INFO ] No num of hugepages specified, use 2048 
    [ INFO ] No gRPC port specified, use pf0vf1
    

Start or Restart the Intelligent Traffic Offload Service

The ITO service on the DPU typically starts automatically. To check the status, run the following command:
$ service opof status
If the opof service is not running, enter the following command to start the controller:
$ service opof start
To restart the service, run the following command:
$ service opof restart

Get Service Status and Health

Use opof to get the service status and health. Each command has its own command-line help, for example: $ opof -h
  • Query a session:
    $ opof query -i <session_id>
  • Query service offload statistics:
    $ opof stats

BlueField-2 DPU Troubleshooting

Use the following procedure to power cycle the system.
  1. To power cycle the system, log out of the BlueField-2 DPU and return to the Linux host OS.
    $ ipmitool chassis power cycle
  2. If the interfaces do not come up after the power cycle, log in to the BlueField-2 DPU and enter:
    $ /sbin/mlnx_bf_configure
  3. Return to the host OS and enter:
    $ sudo /etc/init.d/openibd restart

Intelligent Traffic Offload: Traffic Troubleshooting

Validate Traffic Flows

Data traffic can be generated from the client and consumed through the Intelligent Traffic Offload setup by a server. iperf3 can be used to generate traffic, as discussed in Run IPERF3 Tests. Once the traffic is initiated, the first few packets of the flow are sent to the PA-VM, which decides whether the flow needs to be offloaded.
An application override policy must be defined to identify flows for offload. A TCP flow sets the FIN/RST flag on a control packet and sends it to the PA-VM. When the PA-VM decides to offload the flow, use show session all to display the offloaded flows. Use show session id <flowID> to display information on the state of the flow. An offloaded flow has the state Offload: yes.
The flow counters are not updated while subsequent packets of the flow are in the offload state and are passing through the BlueField-2 DPU. Once the flow completes, the offload service triggers an age-out timer (TCP aging configured from the CLI). When the timer expires, the service collects the updated flow statistics and sends them to the VM-Series firewall. The firewall then updates its flow session counters, and show session id <flowID> returns the updated values.
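As an illustration, an application override rule for the iperf3 test traffic could look like the following sketch. The rule name, zone names, and application name are assumptions, and the exact CLI syntax can vary by PAN-OS release; the port matches the default iperf3 port 5201.
admin@PA-VM# set rulebase application-override rules offload-test from trust to untrust source any destination any
admin@PA-VM# set rulebase application-override rules offload-test protocol tcp port 5201 application <app-name>
admin@PA-VM# commit
After traffic starts, verify the offload state with the commands described above:
admin@PA-VM> show session all
admin@PA-VM> show session id <flowID>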

Session Counters

Use the following command to view session counters.
admin@PA-VM> show counter global | match flow_offload
The output columns for each counter are:
Counter Name | Value | Rate | Severity | Category | Aspect | Description.
  • Value—Number of occurrences since system start.
  • Rate—Frequency of counter change.
  • Severity—Info, Warning, Drop. Used for Tech Support.
  • Category—Flow (a component of a session).
  • Aspect—Offload for an entire flow.
Counter Name | Description
flow_offload_rcv_cpu | Number of packets received by CPU with session offloaded
flow_offload_session_update | Number of times the session needs to be updated
flow_offload_session_insert | Number of sessions inserted in the offload device
flow_offload_session_delete | Number of sessions deleted from offload device
flow_offload_delete_msg_failed | Number of del messages to GRPC that failed
flow_offload_add_msg_failed | Number of session messages to GRPC that failed
flow_offload_session_verify | Number of verify messages to the offload device
flow_offload_verify_msg_failed | Number of verify messages to GRPC that failed
flow_offload_update_session_stat | HW indicates flow age out
flow_offload_missing_session_stat | Cannot find session for stats
flow_offload_invalid_session | Offload invalid session ID
flow_offload_del_session_fail | Offload Delete invalid session
flow_offload_add_session_fail | Offload Add session failed
flow_offload_get_session_fail | Offload Get session failed
flow_offload_grpc_fail | Offload grpc call failed
flow_offload_active_session | Number of active offloaded sessions
flow_offload_aged_session | Number of aged out offloaded sessions
flow_offload_session | Number of offloaded sessions
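For example, to watch only the offload counters that have changed since the last query, combine the standard delta filter with the match filter:
admin@PA-VM> show counter global filter delta yes | match flow_offload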

Run IPERF3 Tests

iperf3 is a simple, optional application for generating traffic that is effective for data traffic tests. To run the server as a service, use iperf3 -s -D. By default the application expects packets on TCP/UDP destination port 5201, but the port can be changed (see the example after this list).
  • Single flow—For single iperf3 flows enter:
    iperf3 -c <server-ip-address> -t 60
  • Multiple flows—To initiate 20 concurrent flows for a 60 second duration, enter:
    iperf3 -c <ip of server> -P 20 -t 60
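If the default port is already in use, you can run the test on another port; -p is a standard iperf3 option and the port number below is only an example.
    # Server side: run as a daemon listening on port 5202
    iperf3 -s -D -p 5202
    # Client side: 20 parallel flows for 60 seconds against the custom port
    iperf3 -c <server-ip-address> -p 5202 -P 20 -t 60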

Validate Intelligent Traffic Offload Logs

You can use VM-Series firewall logs to validate the connectivity between the ITO client running on the firewall and the offload service on the BlueField DPU. The expected log output for a successful offload is as follows.
admin@auto-pavm> less mp-log pan_grpcd.log
[PD] dec free list 0xe0ff022000 
RS LIB INIT in DP! 
pan_fec_app_init: fec_data 0xe0feef1088, maxentries 120 
[FEC] enc free list 0xe0feef1100, dec free list 0xe0feef10b8 
Creating dp grpc ring buf
Initializing dp grpc ring buf
Mapping flow data memory
Found offload parameters
Heart beat found 1
Established connection to offload device

OPOF Troubleshooting

You can also view the offload service logs to validate connectivity:
root@linux:~# service opof status
● opof.service - Nvidia Firewall Intelligent Traffic Offload Daemon
   Loaded: loaded (/etc/systemd/system/opof.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2021-05-21 18:40:38 UTC; 3h 48min ago
    Docs: file:/opt/mellanox/opof/README.md
  Process: 163906 ExecStartPre=/usr/sbin/opof_pre_check (code=exited, status=0/SUCCESS)
  Main PID: 163922 (nv_opof)
   Tasks: 30 (limit: 19085)
   Memory: 50.7M
   CGroup: /system.slice/opof.service
       └─163922 /usr/sbin/nv_opof -n 1 -a 0000:03:00.0,representor=[0] -a 0000:03:00.1,representor=[0]
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:03:00.0 (socket 0)
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL:  Invalid NUMA socket, default to 0
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:03:00.0 (socket 0)
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL:  Invalid NUMA socket, default to 0
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:03:00.1 (socket 0)
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL:  Invalid NUMA socket, default to 0
May 21 18:40:38 localhost.localdomain nv_opof[163922]: EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:03:00.1 (socket 0)
May 21 18:40:39 localhost.localdomain nv_opof[163922]: EAL: No legacy callbacks, legacy socket not created
May 21 18:40:39 localhost.localdomain nv_opof[163922]: EAL: No legacy callbacks, legacy socket not created
May 21 18:40:42 localhost.localdomain nv_opof[163922]: Server listening on: 169.254.33.51:3443
The logs show that the Intelligent Traffic Offload service is communicating with the VM-Series firewall (PA-VM) over the listening server IP address, and you can see the VFs along with other details of the DPDK parameters. Also attached is a log from the addition of a TCP flow that is offloaded.