From patchwork Thu Jan 23 18:45:25 2025
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13948542
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: krisman@suse.de, Jens Axboe
Subject: [PATCH 1/3] io_uring/uring_cmd: cleanup struct io_uring_cmd_data layout
Date: Thu, 23 Jan 2025 11:45:25 -0700
Message-ID: <20250123184754.555270-2-axboe@kernel.dk>
In-Reply-To: <20250123184754.555270-1-axboe@kernel.dk>
References: <20250123184754.555270-1-axboe@kernel.dk>

A few spots in uring_cmd assume that the copied SQEs are always at the
start of the structure, and hence use req->async_data and the struct
itself interchangeably. Clean that up and use the proper indices.
Signed-off-by: Jens Axboe
---
 io_uring/uring_cmd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 3993c9339ac7..6a63ec4b5445 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -192,8 +192,8 @@ static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 		return 0;
 	}

-	memcpy(req->async_data, sqe, uring_sqe_size(req->ctx));
-	ioucmd->sqe = req->async_data;
+	memcpy(cache->sqes, sqe, uring_sqe_size(req->ctx));
+	ioucmd->sqe = cache->sqes;
 	return 0;
 }

@@ -260,7 +260,7 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 		struct io_uring_cmd_data *cache = req->async_data;

 		if (ioucmd->sqe != (void *) cache)
-			memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
+			memcpy(cache->sqes, ioucmd->sqe, uring_sqe_size(req->ctx));
 		return -EAGAIN;
 	} else if (ret == -EIOCBQUEUED) {
 		return -EIOCBQUEUED;

From patchwork Thu Jan 23 18:45:26 2025
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13948543
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: krisman@suse.de, Jens Axboe
Subject: [PATCH 2/3] io_uring: get rid of alloc cache init_once handling
Date: Thu, 23 Jan 2025 11:45:26 -0700
Message-ID: <20250123184754.555270-3-axboe@kernel.dk>
In-Reply-To: <20250123184754.555270-1-axboe@kernel.dk>
References: <20250123184754.555270-1-axboe@kernel.dk>

init_once is called when an object doesn't come from the cache, and
hence needs initial clearing of certain members.
While the whole struct could get cleared by memset() in that case, a
few of the cache members are large enough that this may cause
unnecessary overhead if the caches used aren't large enough to satisfy
the workload. For those cases, some churn of kmalloc+kfree is to be
expected.

Ensure that the 3 users that need clearing put the members they need
cleared at the start of the struct, and wrap the rest of the struct in
a struct group so the offset is known.

While at it, improve the interaction with KASAN such that when/if KASAN
writes to members inside the struct that should be retained over
caching, it won't trip over itself. For rw and net, the retaining of
the iovec over caching is disabled if KASAN is enabled. A helper will
free and clear those members in that case.

Signed-off-by: Jens Axboe
Reviewed-by: Gabriel Krisman Bertazi
---
 include/linux/io_uring/cmd.h   |  2 +-
 include/linux/io_uring_types.h |  3 ++-
 io_uring/alloc_cache.h         | 43 +++++++++++++++++++++++++++-------
 io_uring/futex.c               |  4 ++--
 io_uring/io_uring.c            | 12 ++++++----
 io_uring/io_uring.h            |  5 ++--
 io_uring/net.c                 | 28 +++++-----------------
 io_uring/net.h                 | 20 +++++++++-------
 io_uring/poll.c                |  2 +-
 io_uring/rw.c                  | 27 +++++----------------
 io_uring/rw.h                  | 27 ++++++++++++---------
 io_uring/uring_cmd.c           | 11 ++-------
 12 files changed, 91 insertions(+), 93 deletions(-)

diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index a3ce553413de..abd0c8bd950b 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -19,8 +19,8 @@ struct io_uring_cmd {
 };

 struct io_uring_cmd_data {
-	struct io_uring_sqe	sqes[2];
 	void			*op_data;
+	struct io_uring_sqe	sqes[2];
 };

 static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 623d8e798a11..3def525a1da3 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -222,7 +222,8 @@ struct io_alloc_cache {
 	void			**entries;
 	unsigned int		nr_cached;
 	unsigned int		max_cached;
-	size_t			elem_size;
+	unsigned int		elem_size;
+	unsigned int		init_clear;
 };

 struct io_ring_ctx {
diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index a3a8cfec32ce..cca96aff3277 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -6,6 +6,19 @@
  */
 #define IO_ALLOC_CACHE_MAX	128

+#if defined(CONFIG_KASAN)
+static inline void io_alloc_cache_kasan(struct iovec **iov, int *nr)
+{
+	kfree(*iov);
+	*iov = NULL;
+	*nr = 0;
+}
+#else
+static inline void io_alloc_cache_kasan(struct iovec **iov, int *nr)
+{
+}
+#endif
+
 static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
 				      void *entry)
 {
@@ -23,35 +36,47 @@ static inline void *io_alloc_cache_get(struct io_alloc_cache *cache)
 	if (cache->nr_cached) {
 		void *entry = cache->entries[--cache->nr_cached];

+		/*
+		 * If KASAN is enabled, always clear the initial bytes that
+		 * must be zeroed post alloc, in case any of them overlap
+		 * with KASAN storage.
+		 */
+#if defined(CONFIG_KASAN)
 		kasan_mempool_unpoison_object(entry, cache->elem_size);
+		if (cache->init_clear)
+			memset(entry, 0, cache->init_clear);
+#endif
 		return entry;
 	}
 	return NULL;
 }

-static inline void *io_cache_alloc(struct io_alloc_cache *cache, gfp_t gfp,
-				   void (*init_once)(void *obj))
+static inline void *io_cache_alloc(struct io_alloc_cache *cache, gfp_t gfp)
 {
-	if (unlikely(!cache->nr_cached)) {
-		void *obj = kmalloc(cache->elem_size, gfp);
+	void *obj;

-		if (obj && init_once)
-			init_once(obj);
+	obj = io_alloc_cache_get(cache);
+	if (obj)
 		return obj;
-	}
-	return io_alloc_cache_get(cache);
+
+	obj = kmalloc(cache->elem_size, gfp);
+	if (obj && cache->init_clear)
+		memset(obj, 0, cache->init_clear);
+	return obj;
 }

 /* returns false if the cache was initialized properly */
 static inline bool io_alloc_cache_init(struct io_alloc_cache *cache,
-				       unsigned max_nr, size_t size)
+				       unsigned max_nr, unsigned int size,
+				       unsigned int init_bytes)
 {
 	cache->entries = kvmalloc_array(max_nr, sizeof(void *), GFP_KERNEL);
 	if (cache->entries) {
 		cache->nr_cached = 0;
 		cache->max_cached = max_nr;
 		cache->elem_size = size;
+		cache->init_clear = init_bytes;
 		return false;
 	}
 	return true;
diff --git a/io_uring/futex.c b/io_uring/futex.c
index 30139cc150f2..3159a2b7eeca 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -36,7 +36,7 @@ struct io_futex_data {
 bool io_futex_cache_init(struct io_ring_ctx *ctx)
 {
 	return io_alloc_cache_init(&ctx->futex_cache, IO_FUTEX_ALLOC_CACHE_MAX,
-				sizeof(struct io_futex_data));
+				sizeof(struct io_futex_data), 0);
 }

 void io_futex_cache_free(struct io_ring_ctx *ctx)
@@ -320,7 +320,7 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
 	}

 	io_ring_submit_lock(ctx, issue_flags);
-	ifd = io_cache_alloc(&ctx->futex_cache, GFP_NOWAIT, NULL);
+	ifd = io_cache_alloc(&ctx->futex_cache, GFP_NOWAIT);
 	if (!ifd) {
 		ret = -ENOMEM;
 		goto done_unlock;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7bfbc7c22367..263e504be4a8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -315,16 +315,18 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_LIST_HEAD(&ctx->cq_overflow_list);
 	INIT_LIST_HEAD(&ctx->io_buffers_cache);
 	ret = io_alloc_cache_init(&ctx->apoll_cache, IO_POLL_ALLOC_CACHE_MAX,
-			    sizeof(struct async_poll));
+			    sizeof(struct async_poll), 0);
 	ret |= io_alloc_cache_init(&ctx->netmsg_cache, IO_ALLOC_CACHE_MAX,
-			    sizeof(struct io_async_msghdr));
+			    sizeof(struct io_async_msghdr),
+			    offsetof(struct io_async_msghdr, clear));
 	ret |= io_alloc_cache_init(&ctx->rw_cache, IO_ALLOC_CACHE_MAX,
-			    sizeof(struct io_async_rw));
+			    sizeof(struct io_async_rw),
+			    offsetof(struct io_async_rw, clear));
 	ret |= io_alloc_cache_init(&ctx->uring_cache, IO_ALLOC_CACHE_MAX,
-			    sizeof(struct io_uring_cmd_data));
+			    sizeof(struct io_uring_cmd_data), 0);
 	spin_lock_init(&ctx->msg_lock);
 	ret |= io_alloc_cache_init(&ctx->msg_cache, IO_ALLOC_CACHE_MAX,
-			    sizeof(struct io_kiocb));
+			    sizeof(struct io_kiocb), 0);
 	ret |= io_futex_cache_init(ctx);
 	if (ret)
 		goto free_ref;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index f65e3f3ede51..67adbb3c1bf5 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -226,10 +226,9 @@ static inline void io_req_set_res(struct io_kiocb *req, s32 res, u32 cflags)
 }

 static inline void *io_uring_alloc_async_data(struct io_alloc_cache *cache,
-					      struct io_kiocb *req,
-					      void (*init_once)(void *obj))
+					      struct io_kiocb *req)
 {
-	req->async_data = io_cache_alloc(cache, GFP_KERNEL, init_once);
+	req->async_data = io_cache_alloc(cache, GFP_KERNEL);
 	if (req->async_data)
 		req->flags |= REQ_F_ASYNC_DATA;
 	return req->async_data;
diff --git a/io_uring/net.c b/io_uring/net.c
index 85f55fbc25c9..41eef286f8b9 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -137,7 +137,6 @@ static void io_netmsg_iovec_free(struct io_async_msghdr *kmsg)
 static void io_netmsg_recycle(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_async_msghdr *hdr = req->async_data;
-	struct iovec *iov;

 	/* can't recycle, ensure we free the iovec if we have one */
 	if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
@@ -146,39 +145,25 @@ static void io_netmsg_recycle(struct io_kiocb *req, unsigned int issue_flags)
 	}

 	/* Let normal cleanup path reap it if we fail adding to the cache */
-	iov = hdr->free_iov;
+	io_alloc_cache_kasan(&hdr->free_iov, &hdr->free_iov_nr);
 	if (io_alloc_cache_put(&req->ctx->netmsg_cache, hdr)) {
-		if (iov)
-			kasan_mempool_poison_object(iov);
 		req->async_data = NULL;
 		req->flags &= ~REQ_F_ASYNC_DATA;
 	}
 }

-static void io_msg_async_data_init(void *obj)
-{
-	struct io_async_msghdr *hdr = (struct io_async_msghdr *)obj;
-
-	hdr->free_iov = NULL;
-	hdr->free_iov_nr = 0;
-}
-
 static struct io_async_msghdr *io_msg_alloc_async(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_async_msghdr *hdr;

-	hdr = io_uring_alloc_async_data(&ctx->netmsg_cache, req,
-					io_msg_async_data_init);
+	hdr = io_uring_alloc_async_data(&ctx->netmsg_cache, req);
 	if (!hdr)
 		return NULL;

 	/* If the async data was cached, we might have an iov cached inside. */
-	if (hdr->free_iov) {
-		kasan_mempool_unpoison_object(hdr->free_iov,
-				hdr->free_iov_nr * sizeof(struct iovec));
+	if (hdr->free_iov)
 		req->flags |= REQ_F_NEED_CLEANUP;
-	}
 	return hdr;
 }

@@ -1813,11 +1798,10 @@ void io_netmsg_cache_free(const void *entry)
 {
 	struct io_async_msghdr *kmsg = (struct io_async_msghdr *) entry;

-	if (kmsg->free_iov) {
-		kasan_mempool_unpoison_object(kmsg->free_iov,
-				kmsg->free_iov_nr * sizeof(struct iovec));
+#if !defined(CONFIG_KASAN)
+	if (kmsg->free_iov)
 		io_netmsg_iovec_free(kmsg);
-	}
+#endif
 	kfree(kmsg);
 }
 #endif
diff --git a/io_uring/net.h b/io_uring/net.h
index 52bfee05f06a..b804c2b36e60 100644
--- a/io_uring/net.h
+++ b/io_uring/net.h
@@ -5,16 +5,20 @@

 struct io_async_msghdr {
 #if defined(CONFIG_NET)
-	struct iovec			fast_iov;
-	/* points to an allocated iov, if NULL we use fast_iov instead */
 	struct iovec			*free_iov;
+	/* points to an allocated iov, if NULL we use fast_iov instead */
 	int				free_iov_nr;
-	int				namelen;
-	__kernel_size_t			controllen;
-	__kernel_size_t			payloadlen;
-	struct sockaddr __user		*uaddr;
-	struct msghdr			msg;
-	struct sockaddr_storage		addr;
+	struct_group(clear,
+		int			namelen;
+		struct iovec		fast_iov;
+		__kernel_size_t		controllen;
+		__kernel_size_t		payloadlen;
+		struct sockaddr __user	*uaddr;
+		struct msghdr		msg;
+		struct sockaddr_storage	addr;
+	);
+#else
+	struct_group(clear);
 #endif
 };
diff --git a/io_uring/poll.c b/io_uring/poll.c
index cc01c40b43d3..356474c66f32 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -650,7 +650,7 @@ static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
 		kfree(apoll->double_poll);
 	} else {
 		if (!(issue_flags & IO_URING_F_UNLOCKED))
-			apoll = io_cache_alloc(&ctx->apoll_cache, GFP_ATOMIC, NULL);
+			apoll = io_cache_alloc(&ctx->apoll_cache, GFP_ATOMIC);
 		else
 			apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
 		if (!apoll)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index a9a2733be842..991ecfbea88e 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -158,16 +158,13 @@ static void io_rw_iovec_free(struct io_async_rw *rw)
 static void io_rw_recycle(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_async_rw *rw = req->async_data;
-	struct iovec *iov;

 	if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
 		io_rw_iovec_free(rw);
 		return;
 	}

-	iov = rw->free_iovec;
+	io_alloc_cache_kasan(&rw->free_iovec, &rw->free_iov_nr);
 	if (io_alloc_cache_put(&req->ctx->rw_cache, rw)) {
-		if (iov)
-			kasan_mempool_poison_object(iov);
 		req->async_data = NULL;
 		req->flags &= ~REQ_F_ASYNC_DATA;
 	}
@@ -208,27 +205,16 @@ static void io_req_rw_cleanup(struct io_kiocb *req, unsigned int issue_flags)
 	}
 }

-static void io_rw_async_data_init(void *obj)
-{
-	struct io_async_rw *rw = (struct io_async_rw *)obj;
-
-	rw->free_iovec = NULL;
-	rw->bytes_done = 0;
-}
-
 static int io_rw_alloc_async(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_async_rw *rw;

-	rw = io_uring_alloc_async_data(&ctx->rw_cache, req, io_rw_async_data_init);
+	rw = io_uring_alloc_async_data(&ctx->rw_cache, req);
 	if (!rw)
 		return -ENOMEM;

-	if (rw->free_iovec) {
-		kasan_mempool_unpoison_object(rw->free_iovec,
-				rw->free_iov_nr * sizeof(struct iovec));
+	if (rw->free_iovec)
 		req->flags |= REQ_F_NEED_CLEANUP;
-	}
 	rw->bytes_done = 0;
 	return 0;
 }
@@ -1323,10 +1309,9 @@ void io_rw_cache_free(const void *entry)
 {
 	struct io_async_rw *rw = (struct io_async_rw *) entry;

-	if (rw->free_iovec) {
-		kasan_mempool_unpoison_object(rw->free_iovec,
-				rw->free_iov_nr * sizeof(struct iovec));
+#if !defined(CONFIG_KASAN)
+	if (rw->free_iovec)
 		io_rw_iovec_free(rw);
-	}
+#endif
 	kfree(rw);
 }
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 2d7656bd268d..eaa59bd64870 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -9,19 +9,24 @@ struct io_meta_state {

 struct io_async_rw {
 	size_t				bytes_done;
-	struct iov_iter			iter;
-	struct iov_iter_state		iter_state;
-	struct iovec			fast_iov;
 	struct iovec			*free_iovec;
-	int				free_iov_nr;
-	/* wpq is for buffered io, while meta fields are used with direct io */
-	union {
-		struct wait_page_queue		wpq;
-		struct {
-			struct uio_meta			meta;
-			struct io_meta_state		meta_state;
+	struct_group(clear,
+		struct iov_iter			iter;
+		struct iov_iter_state		iter_state;
+		struct iovec			fast_iov;
+		int				free_iov_nr;
+		/*
+		 * wpq is for buffered io, while meta fields are used with
+		 * direct io
+		 */
+		union {
+			struct wait_page_queue		wpq;
+			struct {
+				struct uio_meta			meta;
+				struct io_meta_state		meta_state;
+			};
 		};
-	};
+	);
 };

 int io_prep_read_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 6a63ec4b5445..1f6a82128b47 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -168,23 +168,16 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, u64 res2,
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_done);

-static void io_uring_cmd_init_once(void *obj)
-{
-	struct io_uring_cmd_data *data = obj;
-
-	data->op_data = NULL;
-}
-
 static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 				   const struct io_uring_sqe *sqe)
 {
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
 	struct io_uring_cmd_data *cache;

-	cache = io_uring_alloc_async_data(&req->ctx->uring_cache, req,
-					  io_uring_cmd_init_once);
+	cache = io_uring_alloc_async_data(&req->ctx->uring_cache, req);
 	if (!cache)
 		return -ENOMEM;
+	cache->op_data = NULL;

 	if (!(req->flags & REQ_F_FORCE_ASYNC)) {
 		/* defer memcpy until we need it */

From patchwork Thu Jan 23 18:45:27 2025
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13948544
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: krisman@suse.de, Jens Axboe
Subject: [PATCH 3/3] io_uring/alloc_cache: get rid of _nocache() helper
Date: Thu, 23 Jan 2025 11:45:27 -0700
Message-ID: <20250123184754.555270-4-axboe@kernel.dk>
In-Reply-To: <20250123184754.555270-1-axboe@kernel.dk>
References: <20250123184754.555270-1-axboe@kernel.dk>

Just allow passing in NULL for the cache, if the type in question
doesn't have a cache associated with it.

Signed-off-by: Jens Axboe
---
 io_uring/io_uring.h | 18 +++++++-----------
 io_uring/timeout.c  |  2 +-
 io_uring/waitid.c   |  2 +-
 3 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 67adbb3c1bf5..ab619e63ef39 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -228,18 +228,14 @@ static inline void io_req_set_res(struct io_kiocb *req, s32 res, u32 cflags)
 static inline void *io_uring_alloc_async_data(struct io_alloc_cache *cache,
 					      struct io_kiocb *req)
 {
-	req->async_data = io_cache_alloc(cache, GFP_KERNEL);
-	if (req->async_data)
-		req->flags |= REQ_F_ASYNC_DATA;
-	return req->async_data;
-}
+	if (cache) {
+		req->async_data = io_cache_alloc(cache, GFP_KERNEL);
+	} else {
+		const struct io_issue_def *def = &io_issue_defs[req->opcode];

-static inline void *io_uring_alloc_async_data_nocache(struct io_kiocb *req)
-{
-	const struct io_issue_def *def = &io_issue_defs[req->opcode];
-
-	WARN_ON_ONCE(!def->async_size);
-	req->async_data = kmalloc(def->async_size, GFP_KERNEL);
+		WARN_ON_ONCE(!def->async_size);
+		req->async_data = kmalloc(def->async_size, GFP_KERNEL);
+	}
 	if (req->async_data)
 		req->flags |= REQ_F_ASYNC_DATA;
 	return req->async_data;
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 2bd7e0a317bb..48fc8cf70784 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -544,7 +544,7 @@ static int __io_timeout_prep(struct io_kiocb *req,
 	if (WARN_ON_ONCE(req_has_async_data(req)))
 		return -EFAULT;

-	data = io_uring_alloc_async_data_nocache(req);
+	data = io_uring_alloc_async_data(NULL, req);
 	if (!data)
 		return -ENOMEM;
 	data->req = req;
diff --git a/io_uring/waitid.c b/io_uring/waitid.c
index 6778c0ee76c4..853e97a7b0ec 100644
--- a/io_uring/waitid.c
+++ b/io_uring/waitid.c
@@ -303,7 +303,7 @@ int io_waitid(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_waitid_async *iwa;
 	int ret;

-	iwa = io_uring_alloc_async_data_nocache(req);
+	iwa = io_uring_alloc_async_data(NULL, req);
 	if (!iwa)
 		return -ENOMEM;