
Commit c34edc1

Author: CKI KWF Bot (committed)

Merge: net: IRQ suspension

MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-10/-/merge_requests/1588
JIRA: https://issues.redhat.com/browse/RHEL-77189

Tested using the selftest in the series. See the associated Jira ticket for
details about the feature.

Signed-off-by: Antoine Tenart <atenart@redhat.com>
Approved-by: Davide Caratti <dcaratti@redhat.com>
Approved-by: Xin Long <lxin@redhat.com>
Approved-by: CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com>
Merged-by: CKI GitLab Kmaint Pipeline Bot <26919896-cki-kmaint-pipeline-bot@users.noreply.gitlab.com>

2 parents e8d35ce + fe2c7a9 commit c34edc1

File tree

15 files changed: +809, -7 lines changed

Documentation/netlink/specs/netdev.yaml

Lines changed: 7 additions & 0 deletions
@@ -267,6 +267,11 @@ attribute-sets:
           the end of a NAPI cycle. This may add receive latency in exchange
           for reducing the number of frames processed by the network stack.
         type: uint
+      -
+        name: irq-suspend-timeout
+        doc: The timeout, in nanoseconds, of how long to suspend irq
+          processing, if event polling finds events
+        type: uint
   -
     name: queue
     attributes:
@@ -657,6 +662,7 @@ operations:
             - pid
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
       dump:
         request:
           attributes:
@@ -708,6 +714,7 @@ operations:
             - id
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
 
 kernel-family:
   headers: [ "linux/list.h"]

Documentation/networking/napi.rst

Lines changed: 168 additions & 2 deletions
@@ -192,6 +192,33 @@ is reused to control the delay of the timer, while
 ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
 before NAPI gives up and goes back to using hardware IRQs.
 
+The above parameters can also be set on a per-NAPI basis using netlink via
+netdev-genl. When used with netlink and configured on a per-NAPI basis, the
+parameters mentioned above use hyphens instead of underscores:
+``gro-flush-timeout`` and ``napi-defer-hard-irqs``.
+
+Per-NAPI configuration can be done programmatically in a user application
+or by using a script included in the kernel source tree:
+``tools/net/ynl/cli.py``.
+
+For example, using the script:
+
+.. code-block:: bash
+
+  $ kernel-source/tools/net/ynl/cli.py \
+            --spec Documentation/netlink/specs/netdev.yaml \
+            --do napi-set \
+            --json='{"id": 345,
+                     "defer-hard-irqs": 111,
+                     "gro-flush-timeout": 11111}'
+
+Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
+via netdev-genl. There is no global sysfs parameter for this value.
+
+``irq-suspend-timeout`` is used to determine how long an application can
+completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
+which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.
+
 .. _poll:
 
 Busy polling
@@ -207,6 +234,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
 ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
 also exists.
 
+epoll-based busy polling
+------------------------
+
+It is possible to trigger packet processing directly from calls to
+``epoll_wait``. In order to use this feature, a user application must ensure
+all file descriptors which are added to an epoll context have the same NAPI ID.
+
+If the application uses a dedicated acceptor thread, the application can obtain
+the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
+distribute that file descriptor to a worker thread. The worker thread would add
+the file descriptor to its epoll context. This would ensure each worker thread
+has an epoll context with FDs that have the same NAPI ID.
+
+Alternatively, if the application uses SO_REUSEPORT, a BPF or eBPF program can
+be inserted to distribute incoming connections to threads such that each thread
+is only given incoming connections with the same NAPI ID. Care must be taken to
+carefully handle cases where a system may have multiple NICs.
+
+In order to enable busy polling, there are two choices:
+
+1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to busy
+   loop waiting for events. This is a system-wide setting and will cause all
+   epoll-based applications to busy poll when they call epoll_wait. This may
+   not be desirable as many applications may not have the need to busy poll.
+
+2. Applications using recent kernels can issue an ioctl on the epoll context
+   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
+   epoll_params``, which user programs can define as follows:
+
+.. code-block:: c
+
+  struct epoll_params {
+      uint32_t busy_poll_usecs;
+      uint16_t busy_poll_budget;
+      uint8_t prefer_busy_poll;
+
+      /* pad the struct to a multiple of 64bits */
+      uint8_t __pad;
+  };
+
 IRQ mitigation
 ---------------

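The acceptor-thread pattern described in the hunk above reads the NAPI ID of each accepted connection with SO_INCOMING_NAPI_ID. A minimal userspace sketch of that step follows; the function name and error handling are illustrative, and SO_INCOMING_NAPI_ID is assumed to be exposed by the system's socket headers (it comes from the kernel socket UAPI):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Ask the kernel which NAPI instance delivered this connection, so the
     * acceptor thread can hand the fd to the worker whose epoll context only
     * contains fds with the same NAPI ID.
     */
    static int get_incoming_napi_id(int conn_fd, unsigned int *napi_id)
    {
            socklen_t len = sizeof(*napi_id);

            if (getsockopt(conn_fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                           napi_id, &len) < 0) {
                    perror("getsockopt(SO_INCOMING_NAPI_ID)");
                    return -1;
            }
            return 0;
    }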
@@ -222,12 +289,111 @@ Such applications can pledge to the kernel that they will perform a busy
 polling operation periodically, and the driver should keep the device IRQs
 permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
 socket option. To avoid system misbehavior the pledge is revoked
-if ``gro_flush_timeout`` passes without any busy poll call.
+if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
+busy polling applications, the ``prefer_busy_poll`` field of ``struct
+epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
+enable this mode. See the above section for more details.
 
 The NAPI budget for busy polling is lower than the default (which makes
 sense given the low latency intention of normal busy polling). This is
 not the case with IRQ mitigation, however, so the budget can be adjusted
-with the ``SO_BUSY_POLL_BUDGET`` socket option.
+with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
+applications, the ``busy_poll_budget`` field can be adjusted to the desired value
+in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
+ioctl. See the above section for more details.
+
+It is important to note that choosing a large value for ``gro_flush_timeout``
+will defer IRQs to allow for better batch processing, but will induce latency
+when the system is not fully loaded. Choosing a small value for
+``gro_flush_timeout`` can cause device IRQs and softirq processing to interfere
+with the user application which is attempting to busy poll. This value
+should be chosen carefully with these tradeoffs in mind. epoll-based busy
+polling applications may be able to mitigate how much user processing happens
+by choosing an appropriate value for ``maxevents``.
+
+Users may want to consider an alternate approach, IRQ suspension, to help deal
+with these tradeoffs.
+
+IRQ suspension
+--------------
+
+IRQ suspension is a mechanism wherein device IRQs are masked while epoll
+triggers NAPI packet processing.
+
+While application calls to epoll_wait successfully retrieve events, the kernel will
+defer the IRQ suspension timer. If the kernel does not retrieve any events
+while busy polling (for example, because network traffic levels subsided), IRQ
+suspension is disabled and the IRQ mitigation strategies described above are
+engaged.
+
+This allows users to balance CPU consumption with network processing
+efficiency.
+
+To use this mechanism:
+
+1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
+   maximum time (in nanoseconds) the application can have its IRQs
+   suspended. This is done using netlink, as described above. This timeout
+   serves as a safety mechanism to restart IRQ driver interrupt processing if
+   the application has stalled. This value should be chosen so that it covers
+   the amount of time the user application needs to process data from its
+   call to epoll_wait, noting that applications can control how much data
+   they retrieve by setting ``maxevents`` when calling epoll_wait.
+
+2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
+   and ``napi_defer_hard_irqs`` can be set to low values. They will be used
+   to defer IRQs after busy poll has found no data.
+
+3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
+   the ``EPIOCSPARAMS`` ioctl as described above.
+
+4. The application uses epoll as described above to trigger NAPI packet
+   processing.
+
+As mentioned above, as long as subsequent calls to epoll_wait return events to
+userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
+allows the application to process data without interference.
+
+Once a call to epoll_wait results in no events being found, IRQ suspension is
+automatically disabled and the ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` mitigation mechanisms take over.
+
+It is expected that ``irq-suspend-timeout`` will be set to a value much larger
+than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
+the duration of one userland processing cycle.
+
+While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
+``gro_flush_timeout`` to use IRQ suspension, their use is strongly
+recommended.
+
+IRQ suspension causes the system to alternate between polling mode and
+irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
+overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
+epoll finds no events, the setting of ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` determines the next step.
+
+There are essentially three possible loops for network processing and
+packet delivery:
+
+1) hardirq -> softirq -> napi poll; basic interrupt delivery
+2) timer -> softirq -> napi poll; deferred irq processing
+3) epoll -> busy-poll -> napi poll; busy looping
+
+Loop 2 can take control from Loop 1, if ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` are set.
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2
+and 3 "wrestle" with each other for control.
+
+During busy periods, ``irq-suspend-timeout`` is used as the timer in Loop 2,
+which essentially tilts network processing in favour of Loop 3.
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3
+cannot take control from Loop 1.
+
+Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
+the recommended usage, because otherwise setting ``irq-suspend-timeout``
+might not have any discernible effect.
 
 .. _threaded:

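Item 2 of the epoll section above and the ``prefer_busy_poll``/``busy_poll_budget`` text in the last hunk describe per-epoll-context configuration through ``EPIOCSPARAMS``. A minimal sketch of that configuration step, assuming kernel UAPI headers recent enough to provide ``struct epoll_params`` and ``EPIOCSPARAMS`` in <linux/eventpoll.h>; the usecs and budget values are illustrative, not recommendations:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/eventpoll.h>    /* struct epoll_params, EPIOCSPARAMS */

    /* Enable preferred busy polling on an existing epoll fd. Together with a
     * per-NAPI irq-suspend-timeout configured over netlink, prefer_busy_poll
     * is the userspace side of the IRQ suspension setup described above.
     */
    static int enable_prefer_busy_poll(int epoll_fd)
    {
            struct epoll_params params = {
                    .busy_poll_usecs  = 64,   /* illustrative value */
                    .busy_poll_budget = 64,   /* illustrative value */
                    .prefer_busy_poll = 1,    /* required for IRQ suspension */
                    .__pad            = 0,
            };

            if (ioctl(epoll_fd, EPIOCSPARAMS, &params) < 0) {
                    perror("ioctl(EPIOCSPARAMS)");
                    return -1;
            }
            return 0;
    }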
fs/eventpoll.c

Lines changed: 34 additions & 2 deletions
@@ -420,7 +420,9 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
 
 static bool ep_busy_loop_on(struct eventpoll *ep)
 {
-        return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
+        return !!READ_ONCE(ep->busy_poll_usecs) ||
+               READ_ONCE(ep->prefer_busy_poll) ||
+               net_busy_loop_on();
 }
 
 static bool ep_busy_loop_end(void *p, unsigned long start_time)
@@ -455,6 +457,8 @@ static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
                  * it back in when we have moved a socket with a valid NAPI
                  * ID onto the ready list.
                  */
+                if (prefer_busy_poll)
+                        napi_resume_irqs(napi_id);
                 ep->napi_id = 0;
                 return false;
         }
@@ -538,6 +542,22 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
         }
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+        unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+        if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+                napi_suspend_irqs(napi_id);
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+        unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+        if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+                napi_resume_irqs(napi_id);
+}
+
 #else
 
 static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock)
@@ -555,6 +575,14 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
         return -EOPNOTSUPP;
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 /*
@@ -786,6 +814,7 @@ static bool ep_refcount_dec_and_test(struct eventpoll *ep)
 
 static void ep_free(struct eventpoll *ep)
 {
+        ep_resume_napi_irqs(ep);
         mutex_destroy(&ep->mtx);
         free_uid(ep->user);
         wakeup_source_unregister(ep->ws);
@@ -2003,8 +2032,11 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
                  * trying again in search of more luck.
                  */
                 res = ep_send_events(ep, events, maxevents);
-                if (res)
+                if (res) {
+                        if (res > 0)
+                                ep_suspend_napi_irqs(ep);
                         return res;
+                }
         }
 
         if (timed_out)

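The ep_poll() hunk above is the kernel half of the contract: ep_suspend_napi_irqs() runs only when ep_send_events() has actually handed events to userspace, so device IRQs stay masked exactly as long as epoll_wait() keeps returning work. The userspace half is just an ordinary epoll loop; a minimal sketch, with the epoll fd setup and the event handler assumed to exist elsewhere:

    #include <sys/epoll.h>

    #define MAX_EVENTS 64   /* bounds per-wakeup work, see maxevents above */

    void process_event(struct epoll_event *ev);  /* application handler, defined elsewhere */

    /* While epoll_wait() keeps returning events, the kernel keeps deferring
     * the irq-suspend-timeout timer and device IRQs stay masked; an empty
     * return hands control back to the gro_flush_timeout /
     * napi_defer_hard_irqs mitigation path.
     */
    static void event_loop(int epoll_fd)
    {
            struct epoll_event events[MAX_EVENTS];

            for (;;) {
                    int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);

                    if (n < 0)
                            continue;   /* error handling (EINTR etc.) elided */

                    for (int i = 0; i < n; i++)
                            process_event(&events[i]);
            }
    }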
include/linux/netdevice.h

Lines changed: 2 additions & 0 deletions
@@ -349,6 +349,7 @@ struct gro_list {
  */
 struct napi_config {
         u64 gro_flush_timeout;
+        u64 irq_suspend_timeout;
         u32 defer_hard_irqs;
         unsigned int napi_id;
 };
@@ -385,6 +386,7 @@ struct napi_struct {
         struct hrtimer timer;
         struct task_struct *thread;
         unsigned long gro_flush_timeout;
+        unsigned long irq_suspend_timeout;
         u32 defer_hard_irqs;
         /* control-path-only fields follow */
         struct list_head dev_list;

include/net/busy_poll.h

Lines changed: 3 additions & 0 deletions
@@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
                         bool (*loop_end)(void *, unsigned long),
                         void *loop_end_arg, bool prefer_busy_poll, u16 budget);
 
+void napi_suspend_irqs(unsigned int napi_id);
+void napi_resume_irqs(unsigned int napi_id);
+
 #else /* CONFIG_NET_RX_BUSY_POLL */
 static inline unsigned long net_busy_loop_on(void)
 {

include/uapi/linux/netdev.h

Lines changed: 1 addition & 0 deletions
@@ -127,6 +127,7 @@ enum {
         NETDEV_A_NAPI_PID,
         NETDEV_A_NAPI_DEFER_HARD_IRQS,
         NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+        NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 
         __NETDEV_A_NAPI_MAX,
         NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)

net/core/dev.c

Lines changed: 39 additions & 0 deletions
@@ -6595,6 +6595,43 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_suspend_irqs(unsigned int napi_id)
+{
+        struct napi_struct *napi;
+
+        rcu_read_lock();
+        napi = napi_by_id(napi_id);
+        if (napi) {
+                unsigned long timeout = napi_get_irq_suspend_timeout(napi);
+
+                if (timeout)
+                        hrtimer_start(&napi->timer, ns_to_ktime(timeout),
+                                      HRTIMER_MODE_REL_PINNED);
+        }
+        rcu_read_unlock();
+}
+
+void napi_resume_irqs(unsigned int napi_id)
+{
+        struct napi_struct *napi;
+
+        rcu_read_lock();
+        napi = napi_by_id(napi_id);
+        if (napi) {
+                /* If irq_suspend_timeout is set to 0 between the call to
+                 * napi_suspend_irqs and now, the original value still
+                 * determines the safety timeout as intended and napi_watchdog
+                 * will resume irq processing.
+                 */
+                if (napi_get_irq_suspend_timeout(napi)) {
+                        local_bh_disable();
+                        napi_schedule(napi);
+                        local_bh_enable();
+                }
+        }
+        rcu_read_unlock();
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static void __napi_hash_add_with_id(struct napi_struct *napi,
@@ -6760,6 +6797,7 @@ static void napi_restore_config(struct napi_struct *n)
 {
         n->defer_hard_irqs = n->config->defer_hard_irqs;
         n->gro_flush_timeout = n->config->gro_flush_timeout;
+        n->irq_suspend_timeout = n->config->irq_suspend_timeout;
         /* a NAPI ID might be stored in the config, if so use it. if not, use
          * napi_hash_add to generate one for us. It will be saved to the config
          * in napi_disable.
@@ -6774,6 +6812,7 @@ static void napi_save_config(struct napi_struct *n)
 {
         n->config->defer_hard_irqs = n->defer_hard_irqs;
         n->config->gro_flush_timeout = n->gro_flush_timeout;
+        n->config->irq_suspend_timeout = n->irq_suspend_timeout;
         n->config->napi_id = n->napi_id;
         napi_hash_del(n);
 }

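napi_suspend_irqs() and napi_resume_irqs() above rely on napi_get_irq_suspend_timeout(), which is defined in one of the files of this merge not shown here (likely net/core/dev.h, next to the other per-NAPI accessors). Presumably it is just a READ_ONCE() of the field added to struct napi_struct; a sketch under that assumption, not the verbatim helper:

    /* Sketch of the accessor used above: read the per-NAPI suspend timeout
     * (nanoseconds) that napi_suspend_irqs() arms the NAPI hrtimer with;
     * 0 means IRQ suspension is not in use.
     */
    static inline unsigned long napi_get_irq_suspend_timeout(const struct napi_struct *n)
    {
            return READ_ONCE(n->irq_suspend_timeout);
    }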