
Commit a94f096

SnailSploit authored and Sasha Levin committed
io_uring/zcrx: fix user_ref race between scrub and refill paths
[ Upstream commit 003049b ]

The io_zcrx_put_niov_uref() function uses a non-atomic check-then-decrement pattern (atomic_read followed by a separate atomic_dec) to manipulate user_refs. This is serialized against other callers by rq_lock, but io_zcrx_scrub() modifies the same counter with atomic_xchg() WITHOUT holding rq_lock.

On SMP systems, the following race exists:

  CPU0 (refill, holds rq_lock)         CPU1 (scrub, no rq_lock)
  ----------------------------         ------------------------
  put_niov_uref:
    atomic_read(uref) -> 1
    // window opens
                                       atomic_xchg(uref, 0) -> 1
                                       return_niov_freelist(niov) [PUSH #1]
    // window closes
    atomic_dec(uref) -> wraps to -1
    returns true
  return_niov(niov)
    return_niov_freelist(niov) [PUSH #2: DOUBLE-FREE]

The same niov is pushed to the freelist twice, causing free_count to exceed nr_iovs. Subsequent freelist pushes then perform an out-of-bounds write (a u32 value) past the kvmalloc'd freelist array into the adjacent slab object.

Fix this by replacing the non-atomic read-then-dec in io_zcrx_put_niov_uref() with an atomic_try_cmpxchg() loop that atomically tests and decrements user_refs. This makes the operation safe against a concurrent atomic_xchg() from scrub without requiring scrub to acquire rq_lock.

Fixes: 34a3e60 ("io_uring/zcrx: implement zerocopy receive pp memory provider")
Cc: stable@vger.kernel.org
Signed-off-by: Kai Aizen <kai@snailsploit.com>
[pavel: removed a warning and a comment]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
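To make the window concrete, here is a minimal userspace sketch of the broken pre-fix pattern (assumptions: this is not kernel code, it uses C11 <stdatomic.h> plus pthreads, and refill_side()/scrub_side() are hypothetical names standing in for the two paths). With an unlucky interleaving, the separate load and decrement let the counter wrap to -1 after scrub's exchange, mirroring the double-push above:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int uref = 1;	/* stands in for the per-niov user_refs counter */

/* Broken pattern: check, then decrement, as two separate atomics. */
static void *refill_side(void *arg)
{
	(void)arg;
	if (atomic_load(&uref) != 0)		/* window opens after this load */
		atomic_fetch_sub(&uref, 1);	/* may run after scrub's exchange */
	return NULL;
}

/* Scrub side: unconditionally steals all remaining references. */
static void *scrub_side(void *arg)
{
	(void)arg;
	atomic_exchange(&uref, 0);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, refill_side, NULL);
	pthread_create(&b, NULL, scrub_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* The racy interleaving ends with uref == -1 instead of 0. */
	printf("uref = %d\n", atomic_load(&uref));
	return 0;
}

The window is narrow, so a single run will usually print 0; the point is that nothing in the two-step pattern forbids the -1 outcome, and in the kernel that wrap is what turns into the second freelist push.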
1 parent 581a8f0 commit a94f096

1 file changed

Lines changed: 7 additions & 3 deletions


io_uring/zcrx.c

@@ -336,10 +336,14 @@ static inline atomic_t *io_get_user_counter(struct net_iov *niov)
 static bool io_zcrx_put_niov_uref(struct net_iov *niov)
 {
 	atomic_t *uref = io_get_user_counter(niov);
+	int old;
+
+	old = atomic_read(uref);
+	do {
+		if (unlikely(old == 0))
+			return false;
+	} while (!atomic_try_cmpxchg(uref, &old, old - 1));
 
-	if (unlikely(!atomic_read(uref)))
-		return false;
-	atomic_dec(uref);
 	return true;
 }
 