@@ -192,6 +192,33 @@ is reused to control the delay of the timer, while
 ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
 before NAPI gives up and goes back to using hardware IRQs.
 
+The above parameters can also be set on a per-NAPI basis using netlink via
+netdev-genl. When used with netlink and configured on a per-NAPI basis, the
+parameters mentioned above use hyphens instead of underscores:
+``gro-flush-timeout`` and ``napi-defer-hard-irqs``.
+
+Per-NAPI configuration can be done programmatically in a user application
+or by using a script included in the kernel source tree:
+``tools/net/ynl/cli.py``.
+
+For example, using the script:
+
+.. code-block:: bash
+
+  $ kernel-source/tools/net/ynl/cli.py \
+            --spec Documentation/netlink/specs/netdev.yaml \
+            --do napi-set \
+            --json='{"id": 345,
+                     "defer-hard-irqs": 111,
+                     "gro-flush-timeout": 11111}'
+
+Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
+via netdev-genl. There is no global sysfs parameter for this value.
+
+``irq-suspend-timeout`` is used to determine how long an application can
+completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
+which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.
+
 .. _poll:
 
 Busy polling
@@ -207,6 +234,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
 ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
 also exists.
 
+epoll-based busy polling
+------------------------
+
+It is possible to trigger packet processing directly from calls to
+``epoll_wait``. In order to use this feature, a user application must ensure
+all file descriptors which are added to an epoll context have the same NAPI ID.
+
+If the application uses a dedicated acceptor thread, the application can obtain
+the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
+distribute that file descriptor to a worker thread. The worker thread would add
+the file descriptor to its epoll context. This would ensure each worker thread
+has an epoll context with FDs that have the same NAPI ID.
+
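+A minimal sketch of this acceptor-side step (assuming Linux headers that
+define SO_INCOMING_NAPI_ID; the helper name is illustrative and error
+handling is elided):
+
+.. code-block:: c
+
+  #include <sys/socket.h>
+
+  /* Return the NAPI ID of an accepted connection, or 0 if none is
+   * associated yet, so the FD can be handed to the worker thread
+   * whose epoll context serves that NAPI ID.
+   */
+  static unsigned int get_incoming_napi_id(int conn_fd)
+  {
+      unsigned int napi_id = 0;
+      socklen_t len = sizeof(napi_id);
+
+      if (getsockopt(conn_fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
+                     &napi_id, &len) == -1)
+          return 0;
+
+      return napi_id;
+  }
+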
+Alternatively, if the application uses SO_REUSEPORT, a BPF or eBPF program can
+be inserted to distribute incoming connections to threads such that each thread
+is only given incoming connections with the same NAPI ID. Care must be taken to
+handle cases where a system may have multiple NICs.
+
+In order to enable busy polling, there are two choices:
+
+1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to
+   busy loop waiting for events. This is a system-wide setting and will cause
+   all epoll-based applications to busy poll when they call epoll_wait. This
+   may not be desirable as many applications may not need to busy poll.
+
+2. Applications using recent kernels can issue an ioctl on the epoll context
+   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``)
+   ``struct epoll_params``, which user programs can define as follows:
+
+.. code-block:: c
+
+  struct epoll_params {
+      uint32_t busy_poll_usecs;
+      uint16_t busy_poll_budget;
+      uint8_t prefer_busy_poll;
+
+      /* pad the struct to a multiple of 64bits */
+      uint8_t __pad;
+  };
+
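+As a sketch of the second option (assuming a libc and kernel recent enough
+to expose ``EPIOCSPARAMS`` and ``struct epoll_params`` through
+``<sys/epoll.h>``; the helper name and the chosen values are illustrative
+only):
+
+.. code-block:: c
+
+  #include <string.h>
+  #include <sys/epoll.h>
+  #include <sys/ioctl.h>
+
+  static int enable_busy_poll(int epoll_fd)
+  {
+      struct epoll_params params;
+
+      /* zero the struct so the __pad field is 0, as the kernel expects */
+      memset(&params, 0, sizeof(params));
+      params.busy_poll_usecs = 200;  /* busy poll for up to 200 usec */
+      params.busy_poll_budget = 8;   /* packets to attempt per poll */
+      params.prefer_busy_poll = 1;   /* see the IRQ mitigation section */
+
+      /* apply the parameters to this epoll context */
+      return ioctl(epoll_fd, EPIOCSPARAMS, &params);
+  }
+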
 IRQ mitigation
 ---------------
 
@@ -222,12 +289,111 @@ Such applications can pledge to the kernel that they will perform a busy
 polling operation periodically, and the driver should keep the device IRQs
 permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
 socket option. To avoid system misbehavior the pledge is revoked
-if ``gro_flush_timeout`` passes without any busy poll call.
+if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
+busy polling applications, the ``prefer_busy_poll`` field of ``struct
+epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
+enable this mode. See the above section for more details.
 
 The NAPI budget for busy polling is lower than the default (which makes
 sense given the low latency intention of normal busy polling). This is
 not the case with IRQ mitigation, however, so the budget can be adjusted
-with the ``SO_BUSY_POLL_BUDGET`` socket option.
+with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
+applications, the ``busy_poll_budget`` field can be adjusted to the desired
+value in ``struct epoll_params`` and set on a specific epoll context using the
+``EPIOCSPARAMS`` ioctl. See the above section for more details.
+
+It is important to note that choosing a large value for ``gro_flush_timeout``
+will defer IRQs to allow for better batch processing, but will induce latency
+when the system is not fully loaded. Choosing a small value for
+``gro_flush_timeout`` can cause device IRQs and softirq processing to
+interfere with a user application which is attempting to busy poll. This value
+should be chosen carefully with these tradeoffs in mind. epoll-based busy
+polling applications may be able to mitigate how much user processing happens
+by choosing an appropriate value for ``maxevents``.
+
+Users may want to consider an alternate approach, IRQ suspension, to help deal
+with these tradeoffs.
+
+IRQ suspension
+--------------
+
+IRQ suspension is a mechanism wherein device IRQs are masked while epoll
+triggers NAPI packet processing.
+
+While application calls to epoll_wait successfully retrieve events, the kernel
+will defer the IRQ suspension timer. If the kernel does not retrieve any events
+while busy polling (for example, because network traffic levels subsided), IRQ
+suspension is disabled and the IRQ mitigation strategies described above are
+engaged.
+
+This allows users to balance CPU consumption with network processing
+efficiency.
+
+To use this mechanism:
+
+1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
+   maximum time (in nanoseconds) the application can have its IRQs suspended.
+   This is done using netlink, as described above. This timeout serves as a
+   safety mechanism to restart IRQ driver interrupt processing if the
+   application has stalled. This value should be chosen so that it covers the
+   amount of time the user application needs to process data from its call to
+   epoll_wait, noting that applications can control how much data they
+   retrieve by setting ``maxevents`` when calling epoll_wait.
+
+2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
+   and ``napi_defer_hard_irqs`` can be set to low values. They will be used
+   to defer IRQs after busy poll has found no data.
+
+3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
+   the ``EPIOCSPARAMS`` ioctl as described above.
+
+4. The application uses epoll as described above to trigger NAPI packet
+   processing (a sketch of such an event loop follows this list).
+
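+A minimal sketch of step 4, where ``process_event`` stands in for
+hypothetical, application-defined handling:
+
+.. code-block:: c
+
+  #include <sys/epoll.h>
+
+  #define MAX_EVENTS 64
+
+  void process_event(struct epoll_event *ev);  /* application-defined */
+
+  static void event_loop(int epoll_fd)
+  {
+      struct epoll_event events[MAX_EVENTS];
+
+      for (;;) {
+          /* maxevents bounds how much data each cycle retrieves and
+           * therefore how long IRQs stay suspended per cycle */
+          int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
+
+          for (int i = 0; i < n; i++)
+              process_event(&events[i]);
+      }
+  }
+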
+As mentioned above, as long as subsequent calls to epoll_wait return events to
+userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
+allows the application to process data without interference.
+
+Once a call to epoll_wait results in no events being found, IRQ suspension is
+automatically disabled and the ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` mitigation mechanisms take over.
+
+It is expected that ``irq-suspend-timeout`` will be set to a value much larger
+than ``gro_flush_timeout``, as ``irq-suspend-timeout`` should suspend IRQs for
+the duration of one userland processing cycle.
+
+While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
+``gro_flush_timeout`` to use IRQ suspension, their use is strongly
+recommended.
+
+IRQ suspension causes the system to alternate between polling mode and
+irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
+overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
+epoll finds no events, the settings of ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` determine the next step.
+
+There are essentially three possible loops for network processing and
+packet delivery:
+
+1) hardirq -> softirq -> napi poll; basic interrupt delivery
+2) timer -> softirq -> napi poll; deferred irq processing
+3) epoll -> busy-poll -> napi poll; busy looping
+
+Loop 2 can take control from Loop 1, if ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` are set.
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2
+and 3 "wrestle" with each other for control.
+
+During busy periods, ``irq-suspend-timeout`` is used as the timer in Loop 2,
+which essentially tilts network processing in favour of Loop 3.
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3
+cannot take control from Loop 1.
+
+Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
+the recommended usage, because otherwise setting ``irq-suspend-timeout``
+might not have any discernible effect.
 
 .. _threaded:
 