
Commit 7d45251

edumazet authored and kuba-moo committed
Revert "net: group sk_backlog and sk_receive_queue"
This reverts commit 4effb33. This was a benefit for the UDP flood case, which was later greatly improved with commits 6471658 ("udp: use skb_attempt_defer_free()") and b650bf0 ("udp: remove busylock and add per NUMA queues").

Apparently the blamed commit added a regression for RAW sockets, possibly because they do not use the dual RX queue strategy that UDP has. sock_queue_rcv_skb_reason() and RAW recvmsg() compete for sk_receive_queue and sk_rmem_alloc changes, and having them in the same cache line reduces performance.

Fixes: 4effb33 ("net: group sk_backlog and sk_receive_queue")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202509281326.f605b4eb-lkp@intel.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: David Ahern <dsahern@kernel.org>
Cc: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250929182112.824154-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
1 parent 9dd4e02 commit 7d45251

1 file changed

include/net/sock.h (1 addition & 1 deletion)
@@ -395,6 +395,7 @@ struct sock {
 
 	atomic_t		sk_drops;
 	__s32			sk_peek_off;
+	struct sk_buff_head	sk_error_queue;
 	struct sk_buff_head	sk_receive_queue;
 	/*
 	 * The backlog queue is special, it is always used with
@@ -412,7 +413,6 @@ struct sock {
 	} sk_backlog;
 #define sk_rmem_alloc sk_backlog.rmem_alloc
 
-	struct sk_buff_head	sk_error_queue;
 	__cacheline_group_end(sock_write_rx);
 
 	__cacheline_group_begin(sock_read_rx);
