In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
a private buffer pool in local memory to minimize remote memory access.
The configuration of packet buffer pools should take into account the underlying physical memory architecture in terms of DIMMs,
channels and ranks.
The application must ensure that appropriate parameters are given at memory pool creation time.
See :ref:`Mempool Library <Mempool_Library>`.
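For illustration, the sketch below creates one mbuf pool per NUMA socket with rte_pktmbuf_pool_create(), so that each lcore can be given a pool backed by its local memory; the helper name, pool size and cache size are assumptions of this example, not values mandated by the library.

.. code-block:: c

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Illustrative sizing only; real values depend on ring and burst sizes. */
    #define NB_MBUF    8192
    #define CACHE_SIZE 256

    /* Create one packet buffer pool per NUMA socket so that each lcore
     * can use a pool allocated from its local memory.
     */
    static struct rte_mempool *
    create_pool_for_socket(int socket_id)
    {
        char name[RTE_MEMPOOL_NAMESIZE];

        snprintf(name, sizeof(name), "mbuf_pool_socket_%d", socket_id);
        return rte_pktmbuf_pool_create(name, NB_MBUF, CACHE_SIZE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket_id);
    }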
Design Principles
-----------------
The API and architecture of the Ethernet* PMDs are designed with the following guidelines in mind.
PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global policies,
or worse prevent such policies from being applied.
For instance, both the receive and transmit functions of a PMD have a maximum number of packets/descriptors to poll.
This allows a run-to-completion processing stack to statically fix or
to dynamically adapt its overall behavior through different global loop policies, such as:
* Receive, process immediately and transmit packets one at a time in a piecemeal fashion.
* Receive as many packets as possible, then process all received packets, transmitting them immediately.
* Receive a given maximum number of packets, process the received packets, accumulate them and finally send all accumulated packets to transmit.
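As an illustration of the second policy above, the following minimal sketch uses rte_eth_rx_burst() and rte_eth_tx_burst(); the loop structure, burst size and port/queue identifiers are assumptions of this example rather than a prescribed design.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32  /* illustrative burst size */

    /* Simplified run-to-completion loop: receive a burst, then transmit
     * the received packets immediately. Port and queue identifiers are
     * placeholders.
     */
    static void
    lcore_forward_loop(uint16_t rx_port, uint16_t tx_port, uint16_t queue_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(rx_port, queue_id, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* ... per-packet processing would go here ... */

            uint16_t nb_tx = rte_eth_tx_burst(tx_port, queue_id, bufs, nb_rx);

            /* Free any packets the transmit queue could not accept. */
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }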
To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
* Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one function.
* Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware features (prefetch data in cache, use of NIC head/tail registers)
to minimize the number of CPU cycles per packet, for example by avoiding unnecessary read memory accesses to ring transmit descriptors,
or by systematically using arrays of pointers that exactly fit cache line boundaries and sizes.
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
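For instance, a replenish step might rely on the bulk allocator rte_pktmbuf_alloc_bulk() as sketched below; the batch size and helper function are illustrative, and the actual descriptor programming is driver specific.

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define REPLENISH_BURST 32  /* illustrative replenish batch size */

    /* Refill part of a receive ring with a single bulk allocation instead
     * of one rte_pktmbuf_alloc() call per descriptor.
     */
    static int
    replenish_rx_ring(struct rte_mempool *mp)
    {
        struct rte_mbuf *bufs[REPLENISH_BURST];

        if (rte_pktmbuf_alloc_bulk(mp, bufs, REPLENISH_BURST) != 0)
            return -1;  /* pool exhausted; retry later */

        for (unsigned int i = 0; i < REPLENISH_BURST; i++) {
            rte_iova_t dma_addr = rte_mbuf_data_iova_default(bufs[i]);
            /* Write dma_addr into the i-th free RX descriptor (driver specific). */
            (void)dma_addr;
        }
        return 0;
    }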
Port Ownership
~~~~~~~~~~~~~~
Ethernet device ports can be owned by a single DPDK entity (application, library, PMD, process, etc.).
The ownership mechanism is controlled by ethdev APIs, which allow DPDK entities to set, remove and get a port owner.
This prevents an Ethernet port from being managed by more than one entity at a time.
.. note::

    It is the responsibility of the DPDK entity to set the port owner before using the port and to synchronize port usage between different threads or processes.
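A possible use of the ethdev ownership API is sketched below; the helper function and owner name are illustrative, and error handling is reduced to a minimum.

.. code-block:: c

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Claim exclusive ownership of a port before configuring or polling it. */
    static int
    claim_port(uint16_t port_id)
    {
        struct rte_eth_dev_owner owner;
        int ret;

        ret = rte_eth_dev_owner_new(&owner.id);
        if (ret != 0)
            return ret;

        snprintf(owner.name, sizeof(owner.name), "my_app");

        return rte_eth_dev_owner_set(port_id, &owner);
    }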
The configuration of each NIC port includes the following operations:
* Allocate PCI resources
* Reset the hardware (issue a Global Reset) to a well-known default state
* Set up the PHY and the link
* Initialize statistics counters
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
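For example, RSS can be requested at port configuration time roughly as follows; the queue counts and selected hash fields are assumptions of this sketch, and the macro names are those of recent DPDK releases (older releases use ETH_MQ_RX_RSS and ETH_RSS_IP).

.. code-block:: c

    #include <rte_ethdev.h>

    /* Enable RSS when configuring a port. Values are illustrative. */
    static int
    configure_port_with_rss(uint16_t port_id, uint16_t nb_rx_queues,
                            uint16_t nb_tx_queues)
    {
        struct rte_eth_conf port_conf = {
            .rxmode = {
                .mq_mode = RTE_ETH_MQ_RX_RSS,
            },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_key = NULL,            /* use the driver default key */
                    .rss_hf  = RTE_ETH_RSS_IP,  /* hash on IPv4/IPv6 headers */
                },
            },
        };

        return rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues,
                                     &port_conf);
    }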
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
All device features that can be started or stopped "on the fly" (that is, without stopping the device) do not require the PMD API to export dedicated functions for this purpose.
All that is required is the mapping address of the device PCI registers to implement the configuration of these features in specific functions outside of the drivers.
For this purpose,
the PMD API exports a function that provides all the information associated with a device that can be used to set up a given device feature outside of the driver.
This includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI device registers, and the name of the driver.
The main advantage of this approach is that it gives complete freedom on the choice of the API used to configure, to start, and to stop such features.
As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Gigabit Ethernet Controller and
the Intel® 82599 10 Gigabit Ethernet Controller in the testpmd application.
Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in the same way.
Ethernet* flow control (pause frame) can be configured on the individual port.
Refer to the testpmd source code for details.
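A sketch of such a flow control configuration is shown below, using rte_eth_dev_flow_ctrl_get() and rte_eth_dev_flow_ctrl_set(); the pause-time value is illustrative and device dependent, and the enum names are those of recent DPDK releases (older releases use RTE_FC_FULL).

.. code-block:: c

    #include <rte_ethdev.h>

    /* Enable full (RX and TX) IEEE 802.3x flow control on one port. */
    static int
    enable_pause_frames(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        /* Start from the current settings reported by the driver. */
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret != 0)
            return ret;

        fc_conf.mode = RTE_ETH_FC_FULL;
        fc_conf.pause_time = 0x680;   /* illustrative value */
        fc_conf.autoneg = 1;

        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }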
Also, L4 (UDP/TCP/SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly. See `Hardware Offload`_ for details.
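A minimal sketch of such a per-packet setup is given below, assuming an untunneled IPv4/TCP packet; the flag names are those of recent DPDK releases (older releases use the PKT_TX_* names) and some driver requirements, such as the pseudo-header checksum, may differ.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    /* Request IPv4 and TCP checksum offload for a single outgoing packet. */
    static void
    request_tx_checksum_offload(struct rte_mbuf *m)
    {
        struct rte_ipv4_hdr *ip_hdr;
        struct rte_tcp_hdr *tcp_hdr;

        m->l2_len = sizeof(struct rte_ether_hdr);
        m->l3_len = sizeof(struct rte_ipv4_hdr);
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                       RTE_MBUF_F_TX_TCP_CKSUM;

        ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
        tcp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
                                          m->l2_len + m->l3_len);

        /* Hardware fills in the IP checksum; many NICs expect the TCP
         * pseudo-header checksum to be pre-filled by software.
         */
        ip_hdr->hdr_checksum = 0;
        tcp_hdr->cksum = rte_ipv4_phdr_cksum(ip_hdr, m->ol_flags);
    }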
Each transmit queue is independently configured with the following information:
* The number of descriptors of the transmit ring
* The socket identifier used to identify the appropriate DMA memory zone from which to allocate the transmit ring in NUMA architectures
* The values of the Prefetch, Host and Write-Back threshold registers of the transmit queue
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adapter should be checked to see if it has written back descriptors.
A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
* The *minimum* RS bit threshold. The minimum number of transmit descriptors to use before setting the Report Status (RS) bit in the transmit descriptor.
Note that this parameter may only be valid for Intel 10 GbE network adapters.
The RS bit is set on the last descriptor used to transmit a packet if the number of descriptors used since the last RS bit setting,
up to the first descriptor used to transmit the packet, exceeds the transmit RS bit threshold (tx_rs_thresh).
In short, this parameter controls which transmit descriptors are written back to host memory by the network adapter.
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
* tx_rs_thresh must be greater than 0.
* tx_rs_thresh must be less than the size of the ring minus 2.
* tx_rs_thresh must be less than or equal to tx_free_thresh.
* tx_free_thresh must be greater than 0.
* tx_free_thresh must be less than the size of the ring minus 3.
* For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater than 1.
One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence the maximum threshold constraints.
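A hedged sketch of a transmit queue setup using these parameters is shown below; the ring size and threshold values are illustrative, and in practice the defaults reported by rte_eth_dev_info_get() are usually a better starting point.

.. code-block:: c

    #include <rte_ethdev.h>

    #define NB_TXD 1024  /* illustrative ring size */

    /* Set up one transmit queue with explicit threshold values matching the
     * parameters described above.
     */
    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id, unsigned int socket_id)
    {
        struct rte_eth_txconf txconf = {
            .tx_thresh = {
                .pthresh = 36,
                .hthresh = 0,
                .wthresh = 0,   /* must be 0 when tx_rs_thresh > 1 */
            },
            .tx_rs_thresh   = 32,
            .tx_free_thresh = 32,
        };

        return rte_eth_tx_queue_setup(port_id, queue_id, NB_TXD,
                                      socket_id, &txconf);
    }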
.. note::

    When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
Of course, this function can be invoked in parallel by different logical cores on different RX queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
built on top of their corresponding lock-free functions of the PMD API.
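For example, a shared TX queue could be protected with an application-defined spinlock as sketched below; the wrapper function and lock are assumptions of this example, not part of the PMD API.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_spinlock.h>

    /* Application-defined lock protecting one TX queue shared by several lcores. */
    static rte_spinlock_t tx_queue_lock = RTE_SPINLOCK_INITIALIZER;

    /* Lock-aware wrapper built on top of the lock-free rte_eth_tx_burst(). */
    static inline uint16_t
    locked_tx_burst(uint16_t port_id, uint16_t queue_id,
                    struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
        uint16_t sent;

        rte_spinlock_lock(&tx_queue_lock);
        sent = rte_eth_tx_burst(port_id, queue_id, tx_pkts, nb_pkts);
        rte_spinlock_unlock(&tx_queue_lock);

        return sent;
    }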
Generic Packet Representation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A packet is represented by an rte_mbuf structure, which is a generic metadata structure containing all necessary housekeeping information.
This includes fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.
The rte_mbuf data structure includes specific fields to represent, in a generic way, the offload features provided by network controllers.
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
The mbuf structure is fully described in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
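As an illustration, an application might check the receive offload information filled in by the PMD as follows; the helper function is hypothetical and the flag names are those of recent DPDK releases (older releases use the PKT_RX_* names).

.. code-block:: c

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Inspect offload status bits filled in by the PMD receive function. */
    static int
    rx_packet_ok(const struct rte_mbuf *m)
    {
        /* Drop packets whose IP or L4 checksum was reported bad by the NIC. */
        if (m->ol_flags & (RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD))
            return 0;

        /* If the NIC stripped a VLAN tag, the TCI is available in the mbuf. */
        if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
            printf("VLAN TCI: 0x%04x\n", m->vlan_tci);

        return 1;
    }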
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK API Reference*.