From patchwork Mon Feb 12 21:38:47 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554075
Date: Mon, 12 Feb 2024 13:38:47 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-2-surenb@google.com>
X-Mailing-List: linux-modules@vger.kernel.org
Subject: [PATCH v3 01/35] lib/string_helpers: Add flags param to string_get_size()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

The new flags parameter allows controlling:

 - whether or not the units suffix is separated by a space, for
   compatibility with sort -h;
 - whether or not to append a "B" suffix - we're not always printing
   bytes.
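
An illustrative sketch of how the flags combine (not part of the patch;
the exact output strings are assumed from the behavior described above):

	char buf[16];

	/* Default: decimal units, space separator, "B" suffix. */
	string_get_size(8192, 1, 0, buf, sizeof(buf));	/* e.g. "8.19 kB" */

	/* Binary units. */
	string_get_size(8192, 1, STRING_SIZE_BASE2,
			buf, sizeof(buf));		/* e.g. "8.00 KiB" */

	/* No space before the unit, for `sort -h` compatibility. */
	string_get_size(8192, 1, STRING_SIZE_BASE2 | STRING_SIZE_NOSPACE,
			buf, sizeof(buf));		/* e.g. "8.00KiB" */

	/* Counting non-byte objects: drop the "B" suffix. */
	string_get_size(8192, 1, STRING_SIZE_NOBYTES,
			buf, sizeof(buf));		/* e.g. "8.19 k" */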
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Andy Shevchenko
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: "Noralf Trønnes"
Cc: Jens Axboe
Reviewed-by: Kees Cook
---
 arch/powerpc/mm/book3s64/radix_pgtable.c       |  2 +-
 drivers/block/virtio_blk.c                     |  4 ++--
 drivers/gpu/drm/gud/gud_drv.c                  |  2 +-
 drivers/mmc/core/block.c                       |  4 ++--
 drivers/mtd/spi-nor/debugfs.c                  |  6 ++---
 .../ethernet/chelsio/cxgb4/cxgb4_debugfs.c     |  4 ++--
 drivers/scsi/sd.c                              |  8 +++----
 include/linux/string_helpers.h                 | 11 +++++-----
 lib/string_helpers.c                           | 22 ++++++++++++++-----
 lib/test-string_helpers.c                      |  4 ++--
 mm/hugetlb.c                                   |  8 +++----
 11 files changed, 42 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c6a4ac766b2b..27aa5a083ff0 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -260,7 +260,7 @@ print_mapping(unsigned long start, unsigned long end, unsigned long size, bool e
 	if (end <= start)
 		return;
 
-	string_get_size(size, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 
 	pr_info("Mapped 0x%016lx-0x%016lx with %s pages%s\n", start, end, buf,
 		exec ? " (exec)" : "");
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 2bf14a0e2815..94fba7f57079 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -934,9 +934,9 @@ static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
 	nblocks = DIV_ROUND_UP_ULL(capacity, queue_logical_block_size(q) >> 9);
 
 	string_get_size(nblocks, queue_logical_block_size(q),
-			STRING_UNITS_2, cap_str_2, sizeof(cap_str_2));
+			STRING_SIZE_BASE2, cap_str_2, sizeof(cap_str_2));
 	string_get_size(nblocks, queue_logical_block_size(q),
-			STRING_UNITS_10, cap_str_10, sizeof(cap_str_10));
+			0, cap_str_10, sizeof(cap_str_10));
 
 	dev_notice(&vdev->dev,
 		   "[%s] %s%llu %d-byte logical blocks (%s/%s)\n",
diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
index 9d7bf8ee45f1..6b1748e1f666 100644
--- a/drivers/gpu/drm/gud/gud_drv.c
+++ b/drivers/gpu/drm/gud/gud_drv.c
@@ -329,7 +329,7 @@ static int gud_stats_debugfs(struct seq_file *m, void *data)
 	struct gud_device *gdrm = to_gud_device(entry->dev);
 	char buf[10];
 
-	string_get_size(gdrm->bulk_len, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(gdrm->bulk_len, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 	seq_printf(m, "Max buffer size: %s\n", buf);
 	seq_printf(m, "Number of errors:  %u\n", gdrm->stats_num_errors);
 
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 32d49100dff5..1cded1e9aca4 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2557,7 +2557,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 
 	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
 
-	string_get_size((u64)size, 512, STRING_UNITS_2,
+	string_get_size((u64)size, 512, STRING_SIZE_BASE2,
 			cap_str, sizeof(cap_str));
 	pr_info("%s: %s %s %s%s\n",
 		md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
@@ -2753,7 +2753,7 @@ static int mmc_blk_alloc_rpmb_part(struct mmc_card *card,
 
 	list_add(&rpmb->node, &md->rpmbs);
 
-	string_get_size((u64)size, 512, STRING_UNITS_2,
+	string_get_size((u64)size, 512, STRING_SIZE_BASE2,
 			cap_str, sizeof(cap_str));
 
 	pr_info("%s: %s %s %s, chardev (%d:%d)\n",
diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c
index 2dbda6b6938a..f6c3ca430df1 100644
--- a/drivers/mtd/spi-nor/debugfs.c
+++ b/drivers/mtd/spi-nor/debugfs.c
@@ -85,7 +85,7 @@ static int spi_nor_params_show(struct seq_file *s, void *data)
 
 	seq_printf(s, "name\t\t%s\n", info->name);
 	seq_printf(s, "id\t\t%*ph\n", SPI_NOR_MAX_ID_LEN, nor->id);
-	string_get_size(params->size, 1, STRING_UNITS_2, buf, sizeof(buf));
+	string_get_size(params->size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 	seq_printf(s, "size\t\t%s\n", buf);
 	seq_printf(s, "write size\t%u\n", params->writesize);
 	seq_printf(s, "page size\t%u\n", params->page_size);
@@ -130,14 +130,14 @@ static int spi_nor_params_show(struct seq_file *s, void *data)
 		struct spi_nor_erase_type *et = &erase_map->erase_type[i];
 
 		if (et->size) {
-			string_get_size(et->size, 1, STRING_UNITS_2, buf,
+			string_get_size(et->size, 1, STRING_SIZE_BASE2, buf,
 					sizeof(buf));
 			seq_printf(s, " %02x (%s) [%d]\n", et->opcode, buf, i);
 		}
 	}
 
 	if (!(nor->flags & SNOR_F_NO_OP_CHIP_ERASE)) {
-		string_get_size(params->size, 1, STRING_UNITS_2, buf, sizeof(buf));
+		string_get_size(params->size, 1, STRING_SIZE_BASE2, buf, sizeof(buf));
 		seq_printf(s, " %02x (%s)\n", nor->params->die_erase_opcode, buf);
 	}
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index 14e0d989c3ba..7d5fbebd36fc 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
@@ -3457,8 +3457,8 @@ static void mem_region_show(struct seq_file *seq, const char *name,
 {
 	char buf[40];
 
-	string_get_size((u64)to - from + 1, 1, STRING_UNITS_2, buf,
-			sizeof(buf));
+	string_get_size((u64)to - from + 1, 1, STRING_SIZE_BASE2,
+			buf, sizeof(buf));
 
 	seq_printf(seq, "%-15s %#x-%#x [%s]\n", name, from, to, buf);
 }
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 0833b3e6aa6e..e23bcb1d1ffa 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2731,10 +2731,10 @@ sd_print_capacity(struct scsi_disk *sdkp,
 	if (!sdkp->first_scan && old_capacity == sdkp->capacity)
 		return;
 
-	string_get_size(sdkp->capacity, sector_size,
-			STRING_UNITS_2, cap_str_2, sizeof(cap_str_2));
-	string_get_size(sdkp->capacity, sector_size,
-			STRING_UNITS_10, cap_str_10, sizeof(cap_str_10));
+	string_get_size(sdkp->capacity, sector_size, STRING_SIZE_BASE2,
+			cap_str_2, sizeof(cap_str_2));
+	string_get_size(sdkp->capacity, sector_size, 0,
+			cap_str_10, sizeof(cap_str_10));
 
 	sd_printk(KERN_NOTICE, sdkp,
 		  "%llu %d-byte logical blocks: (%s/%s)\n",
diff --git a/include/linux/string_helpers.h b/include/linux/string_helpers.h
index 58fb1f90eda5..a54467d891db 100644
--- a/include/linux/string_helpers.h
+++ b/include/linux/string_helpers.h
@@ -17,14 +17,13 @@ static inline bool string_is_terminated(const char *s, int len)
 	return memchr(s, '\0', len) ? true : false;
 }
 
-/* Descriptions of the types of units to
- * print in */
-enum string_size_units {
-	STRING_UNITS_10,	/* use powers of 10^3 (standard SI) */
-	STRING_UNITS_2,		/* use binary powers of 2^10 */
+enum string_size_flags {
+	STRING_SIZE_BASE2	= (1 << 0),
+	STRING_SIZE_NOSPACE	= (1 << 1),
+	STRING_SIZE_NOBYTES	= (1 << 2),
 };
 
-int string_get_size(u64 size, u64 blk_size, enum string_size_units units,
+int string_get_size(u64 size, u64 blk_size, enum string_size_flags flags,
 		    char *buf, int len);
 
 int parse_int_array_user(const char __user *from, size_t count, int **array);
 
diff --git a/lib/string_helpers.c b/lib/string_helpers.c
index 7713f73e66b0..a5d7d1caed70 100644
--- a/lib/string_helpers.c
+++ b/lib/string_helpers.c
@@ -19,11 +19,17 @@
 #include <linux/string.h>
 #include <linux/string_helpers.h>
 
+enum string_size_units {
+	STRING_UNITS_10,	/* use powers of 10^3 (standard SI) */
+	STRING_UNITS_2,		/* use binary powers of 2^10 */
+};
+
 /**
  * string_get_size - get the size in the specified units
  * @size:	The size to be converted in blocks
  * @blk_size:	Size of the block (use 1 for size in bytes)
- * @units:	units to use (powers of 1000 or 1024)
+ * @flags:	units to use (powers of 1000 or 1024), whether to include space
+ *		separator
  * @buf:	buffer to format to
  * @len:	length of buffer
  *
@@ -34,14 +40,16 @@
  * Return value: number of characters of output that would have been written
  * (which may be greater than len, if output was truncated).
  */
-int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
+int string_get_size(u64 size, u64 blk_size, enum string_size_flags flags,
 		    char *buf, int len)
 {
+	enum string_size_units units = flags & STRING_SIZE_BASE2
+		? STRING_UNITS_2 : STRING_UNITS_10;
 	static const char *const units_10[] = {
-		"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"
+		"", "k", "M", "G", "T", "P", "E", "Z", "Y"
 	};
 	static const char *const units_2[] = {
-		"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"
+		"", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"
 	};
 	static const char *const *const units_str[] = {
 		[STRING_UNITS_10] = units_10,
@@ -128,8 +136,10 @@ int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
 	else
 		unit = units_str[units][i];
 
-	return snprintf(buf, len, "%u%s %s", (u32)size,
-			tmp, unit);
+	return snprintf(buf, len, "%u%s%s%s%s", (u32)size, tmp,
+			(flags & STRING_SIZE_NOSPACE) ? "" : " ",
+			unit,
+			(flags & STRING_SIZE_NOBYTES) ? "" : "B");
 }
 EXPORT_SYMBOL(string_get_size);
 
diff --git a/lib/test-string_helpers.c b/lib/test-string_helpers.c
index 9a68849a5d55..0b01ffca96fb 100644
--- a/lib/test-string_helpers.c
+++ b/lib/test-string_helpers.c
@@ -507,8 +507,8 @@ static __init void __test_string_get_size(const u64 size, const u64 blk_size,
 	char buf10[string_get_size_maxbuf];
 	char buf2[string_get_size_maxbuf];
 
-	string_get_size(size, blk_size, STRING_UNITS_10, buf10, sizeof(buf10));
-	string_get_size(size, blk_size, STRING_UNITS_2, buf2, sizeof(buf2));
+	string_get_size(size, blk_size, 0, buf10, sizeof(buf10));
+	string_get_size(size, blk_size, STRING_SIZE_BASE2, buf2, sizeof(buf2));
 
 	test_string_get_size_check("STRING_UNITS_10", exp_result10, buf10,
 				   size, blk_size);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ed1581b670d4..26a8028e4bb7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3475,7 +3475,7 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 	if (i == h->max_huge_pages_node[nid])
 		return;
 
-	string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+	string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 	pr_warn("HugeTLB: allocating %u of page size %s failed node%d. Only allocated %lu hugepages.\n",
 		h->max_huge_pages_node[nid], buf, nid, i);
 	h->max_huge_pages -= (h->max_huge_pages_node[nid] - i);
@@ -3561,7 +3561,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	if (i < h->max_huge_pages) {
 		char buf[32];
 
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
 			h->max_huge_pages, buf, i);
 		h->max_huge_pages = i;
@@ -3607,7 +3607,7 @@ static void __init report_hugepages(void)
 	for_each_hstate(h) {
 		char buf[32];
 
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		string_get_size(huge_page_size(h), 1, STRING_SIZE_BASE2, buf, 32);
 		pr_info("HugeTLB: registered %s page size, pre-allocated %ld pages\n",
 			buf, h->free_huge_pages);
 		pr_info("HugeTLB: %d KiB vmemmap can be freed for a %s page\n",
@@ -4527,7 +4527,7 @@ static int __init hugetlb_init(void)
 		char buf[32];
 
 		string_get_size(huge_page_size(&default_hstate),
-				1, STRING_UNITS_2, buf, 32);
+				1, STRING_SIZE_BASE2, buf, 32);
 		pr_warn("HugeTLB: Ignoring hugepages=%lu associated with %s page size\n",
 			default_hstate.max_huge_pages, buf);
 		pr_warn("HugeTLB: Using hugepages=%lu for number of default huge pages\n",

From patchwork Mon Feb 12 21:38:48 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554076
Date: Mon, 12 Feb 2024 13:38:48 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-3-surenb@google.com>
X-Mailing-List: linux-modules@vger.kernel.org
Subject: [PATCH v3 02/35] scripts/kallsyms: Always include __start and __stop symbols
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

These symbols are used to denote section boundaries: by always including
them we can unify loading sections from modules with loading built-in
sections, which leads to some significant cleanup.
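
For background, an illustrative sketch (not part of the patch) of the
__start_/__stop_ convention: the linker generates these symbols for a
custom section, and code can walk every entry placed in it. All names
below are hypothetical:

	struct my_entry { int val; };		/* hypothetical entry type */
	static void process_entry(struct my_entry *e);	/* hypothetical */

	/* Provided by the linker for a section named "my_entries": */
	extern struct my_entry __start_my_entries[];
	extern struct my_entry __stop_my_entries[];

	static void walk_my_entries(void)
	{
		struct my_entry *e;

		/* The two boundary symbols delimit the section like an array. */
		for (e = __start_my_entries; e < __stop_my_entries; e++)
			process_entry(e);
	}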
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Kees Cook
---
 scripts/kallsyms.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 653b92f6d4c8..47978efe4797 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -204,6 +204,11 @@ static int symbol_in_range(const struct sym_entry *s,
 	return 0;
 }
 
+static bool string_starts_with(const char *s, const char *prefix)
+{
+	return strncmp(s, prefix, strlen(prefix)) == 0;
+}
+
 static int symbol_valid(const struct sym_entry *s)
 {
 	const char *name = sym_name(s);
@@ -211,6 +216,14 @@ static int symbol_valid(const struct sym_entry *s)
 	/* if --all-symbols is not specified, then symbols outside the text
 	 * and inittext sections are discarded */
 	if (!all_symbols) {
+		/*
+		 * Symbols starting with __start and __stop are used to denote
+		 * section boundaries, and should always be included:
+		 */
+		if (string_starts_with(name, "__start_") ||
+		    string_starts_with(name, "__stop_"))
+			return 1;
+
 		if (symbol_in_range(s, text_ranges,
 				    ARRAY_SIZE(text_ranges)) == 0)
 			return 0;

From patchwork Mon Feb 12 21:38:49 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554077
Date: Mon, 12 Feb 2024 13:38:49 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-4-surenb@google.com>
X-Mailing-List: linux-modules@vger.kernel.org
Subject: [PATCH v3 03/35] fs: Convert alloc_inode_sb() to a macro
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

We're introducing alloc tagging, which tracks memory allocations by
callsite. Converting alloc_inode_sb() to a macro means allocations will
be tracked by its caller, which is a bit more useful.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Alexander Viro
Reviewed-by: Kees Cook
---
 include/linux/fs.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index ed5966a70495..7794b4182bac 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3013,11 +3013,7 @@ int setattr_should_drop_sgid(struct mnt_idmap *idmap,
  * This must be used for allocating filesystems specific inodes to set
  * up the inode reclaim context correctly.
  */
-static inline void *
-alloc_inode_sb(struct super_block *sb, struct kmem_cache *cache, gfp_t gfp)
-{
-	return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp);
-}
+#define alloc_inode_sb(_sb, _cache, _gfp) kmem_cache_alloc_lru(_cache, &_sb->s_inode_lru, _gfp)
 
 extern void __insert_inode_hash(struct inode *, unsigned long hashval);
 static inline void insert_inode_hash(struct inode *inode)
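
To illustrate the point of the macro conversion (a sketch under assumed
names, not from the patch): a filesystem's inode allocation now expands
at the filesystem's own call site, so a per-callsite allocation tag
recorded inside kmem_cache_alloc_lru() attributes the memory to the
filesystem rather than to the old static inline in fs.h. All myfs_*
names are hypothetical:

	struct myfs_inode_info {
		unsigned long flags;
		struct inode vfs_inode;
	};
	static struct kmem_cache *myfs_inode_cachep;

	static struct inode *myfs_alloc_inode(struct super_block *sb)
	{
		struct myfs_inode_info *ei;

		/* As a macro, alloc_inode_sb() expands right here, so a
		 * codetag inside kmem_cache_alloc_lru() records this file
		 * and line instead of fs.h. */
		ei = alloc_inode_sb(sb, myfs_inode_cachep, GFP_KERNEL);
		if (!ei)
			return NULL;
		return &ei->vfs_inode;
	}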
From patchwork Mon Feb 12 21:38:50 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554078
Date: Mon, 12 Feb 2024 13:38:50 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-5-surenb@google.com>
X-Mailing-List: linux-modules@vger.kernel.org
Subject: [PATCH v3 04/35] mm: enumerate all gfp flags
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce GFP bits enumeration to let the compiler track the number of
used bits (which depends on the config options) instead of hardcoding
them. That simplifies __GFP_BITS_SHIFT calculation.

Suggested-by: Petr Tesařík
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Kees Cook
---
 include/linux/gfp_types.h | 90 +++++++++++++++++++++++++++------------
 1 file changed, 62 insertions(+), 28 deletions(-)

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 1b6053da8754..868c8fb1bbc1 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -21,44 +21,78 @@ typedef unsigned int __bitwise gfp_t;
 * include/trace/events/mmflags.h and tools/perf/builtin-kmem.c
 */
 
+enum {
+	___GFP_DMA_BIT,
+	___GFP_HIGHMEM_BIT,
+	___GFP_DMA32_BIT,
+	___GFP_MOVABLE_BIT,
+	___GFP_RECLAIMABLE_BIT,
+	___GFP_HIGH_BIT,
+	___GFP_IO_BIT,
+	___GFP_FS_BIT,
+	___GFP_ZERO_BIT,
+	___GFP_UNUSED_BIT,	/* 0x200u unused */
+	___GFP_DIRECT_RECLAIM_BIT,
+	___GFP_KSWAPD_RECLAIM_BIT,
+	___GFP_WRITE_BIT,
+	___GFP_NOWARN_BIT,
+	___GFP_RETRY_MAYFAIL_BIT,
+	___GFP_NOFAIL_BIT,
+	___GFP_NORETRY_BIT,
+	___GFP_MEMALLOC_BIT,
+	___GFP_COMP_BIT,
+	___GFP_NOMEMALLOC_BIT,
+	___GFP_HARDWALL_BIT,
+	___GFP_THISNODE_BIT,
+	___GFP_ACCOUNT_BIT,
+	___GFP_ZEROTAGS_BIT,
+#ifdef CONFIG_KASAN_HW_TAGS
+	___GFP_SKIP_ZERO_BIT,
+	___GFP_SKIP_KASAN_BIT,
+#endif
+#ifdef CONFIG_LOCKDEP
+	___GFP_NOLOCKDEP_BIT,
+#endif
+	___GFP_LAST_BIT
+};
+
 /* Plain integer GFP bitmasks. Do not use this directly. */
-#define ___GFP_DMA		0x01u
-#define ___GFP_HIGHMEM		0x02u
-#define ___GFP_DMA32		0x04u
-#define ___GFP_MOVABLE		0x08u
-#define ___GFP_RECLAIMABLE	0x10u
-#define ___GFP_HIGH		0x20u
-#define ___GFP_IO		0x40u
-#define ___GFP_FS		0x80u
-#define ___GFP_ZERO		0x100u
+#define ___GFP_DMA		BIT(___GFP_DMA_BIT)
+#define ___GFP_HIGHMEM		BIT(___GFP_HIGHMEM_BIT)
+#define ___GFP_DMA32		BIT(___GFP_DMA32_BIT)
+#define ___GFP_MOVABLE		BIT(___GFP_MOVABLE_BIT)
+#define ___GFP_RECLAIMABLE	BIT(___GFP_RECLAIMABLE_BIT)
+#define ___GFP_HIGH		BIT(___GFP_HIGH_BIT)
+#define ___GFP_IO		BIT(___GFP_IO_BIT)
+#define ___GFP_FS		BIT(___GFP_FS_BIT)
+#define ___GFP_ZERO		BIT(___GFP_ZERO_BIT)
 /* 0x200u unused */
-#define ___GFP_DIRECT_RECLAIM	0x400u
-#define ___GFP_KSWAPD_RECLAIM	0x800u
-#define ___GFP_WRITE		0x1000u
-#define ___GFP_NOWARN		0x2000u
-#define ___GFP_RETRY_MAYFAIL	0x4000u
-#define ___GFP_NOFAIL		0x8000u
-#define ___GFP_NORETRY		0x10000u
-#define ___GFP_MEMALLOC		0x20000u
-#define ___GFP_COMP		0x40000u
-#define ___GFP_NOMEMALLOC	0x80000u
-#define ___GFP_HARDWALL		0x100000u
-#define ___GFP_THISNODE		0x200000u
-#define ___GFP_ACCOUNT		0x400000u
-#define ___GFP_ZEROTAGS	0x800000u
+#define ___GFP_DIRECT_RECLAIM	BIT(___GFP_DIRECT_RECLAIM_BIT)
+#define ___GFP_KSWAPD_RECLAIM	BIT(___GFP_KSWAPD_RECLAIM_BIT)
+#define ___GFP_WRITE		BIT(___GFP_WRITE_BIT)
+#define ___GFP_NOWARN		BIT(___GFP_NOWARN_BIT)
+#define ___GFP_RETRY_MAYFAIL	BIT(___GFP_RETRY_MAYFAIL_BIT)
+#define ___GFP_NOFAIL		BIT(___GFP_NOFAIL_BIT)
+#define ___GFP_NORETRY		BIT(___GFP_NORETRY_BIT)
+#define ___GFP_MEMALLOC	BIT(___GFP_MEMALLOC_BIT)
+#define ___GFP_COMP		BIT(___GFP_COMP_BIT)
+#define ___GFP_NOMEMALLOC	BIT(___GFP_NOMEMALLOC_BIT)
+#define ___GFP_HARDWALL	BIT(___GFP_HARDWALL_BIT)
+#define ___GFP_THISNODE	BIT(___GFP_THISNODE_BIT)
+#define ___GFP_ACCOUNT		BIT(___GFP_ACCOUNT_BIT)
+#define ___GFP_ZEROTAGS	BIT(___GFP_ZEROTAGS_BIT)
 #ifdef CONFIG_KASAN_HW_TAGS
-#define ___GFP_SKIP_ZERO	0x1000000u
-#define ___GFP_SKIP_KASAN	0x2000000u
+#define ___GFP_SKIP_ZERO	BIT(___GFP_SKIP_ZERO_BIT)
+#define ___GFP_SKIP_KASAN	BIT(___GFP_SKIP_KASAN_BIT)
 #else
 #define ___GFP_SKIP_ZERO	0
 #define ___GFP_SKIP_KASAN	0
 #endif
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x4000000u
+#define ___GFP_NOLOCKDEP	BIT(___GFP_NOLOCKDEP_BIT)
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
-/* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
 * Physical address zone modifiers (see linux/mmzone.h - low four bits)
@@ -249,7 +283,7 @@ typedef unsigned int __bitwise gfp_t;
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT ___GFP_LAST_BIT
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
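
The underlying pattern, reduced to a toy example (illustrative only, not
kernel code): an enum counts the bits that are actually configured in, a
trailing *_LAST_BIT member records how many exist, and the shift and
mask are derived from it instead of being hardcoded:

	enum {
		FLAG_A_BIT,
		FLAG_B_BIT,
	#ifdef CONFIG_EXTRA_FLAG	/* hypothetical config option */
		FLAG_EXTRA_BIT,
	#endif
		FLAG_LAST_BIT		/* always one past the last bit */
	};

	#define FLAG_A		BIT(FLAG_A_BIT)
	#define FLAG_B		BIT(FLAG_B_BIT)
	/* Tracks the enum automatically as bits come and go: */
	#define FLAGS_SHIFT	FLAG_LAST_BIT
	#define FLAGS_MASK	((1 << FLAGS_SHIFT) - 1)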
From patchwork Mon Feb 12 21:38:51 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554079
Date: Mon, 12 Feb 2024 13:38:51 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-6-surenb@google.com>
X-Mailing-List: linux-modules@vger.kernel.org
Subject: [PATCH v3 05/35] mm: introduce slabobj_ext to support slab object extensions
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Currently slab pages can store only vectors of obj_cgroup pointers in
page->memcg_data.
Introduce a slabobj_ext structure to allow more data to be stored for
each slab object. Wrap obj_cgroup into slabobj_ext to support current
functionality while allowing slabobj_ext to be extended in the future.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Kees Cook
---
 include/linux/memcontrol.h | 20 ++++++---
 include/linux/mm_types.h   |  4 +-
 init/Kconfig               |  4 ++
 mm/kfence/core.c           | 14 +++---
 mm/kfence/kfence.h         |  4 +-
 mm/memcontrol.c            | 56 +++--------------------
 mm/page_owner.c            |  2 +-
 mm/slab.h                  | 92 +++++++++++++++++++++++++++---------
 mm/slab_common.c           | 48 ++++++++++++++++++++
 mm/slub.c                  | 64 +++++++++++++-------------
 10 files changed, 189 insertions(+), 119 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20ff87f8e001..eb1dc181e412 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -348,8 +348,8 @@ struct mem_cgroup {
 extern struct mem_cgroup *root_mem_cgroup;
 
 enum page_memcg_data_flags {
-	/* page->memcg_data is a pointer to an objcgs vector */
-	MEMCG_DATA_OBJCGS = (1UL << 0),
+	/* page->memcg_data is a pointer to an slabobj_ext vector */
+	MEMCG_DATA_OBJEXTS = (1UL << 0),
 	/* page has been accounted as a non-slab kernel page */
 	MEMCG_DATA_KMEM = (1UL << 1),
 	/* the next bit after the last actual flag */
@@ -387,7 +387,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
 
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -408,7 +408,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -505,7 +505,7 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio)
 	 */
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
-	if (memcg_data & MEMCG_DATA_OBJCGS)
+	if (memcg_data & MEMCG_DATA_OBJEXTS)
 		return NULL;
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
@@ -551,7 +551,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
-	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
 
@@ -1633,6 +1633,14 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */
 
+/*
+ * Extended information for slab objects stored as an array in page->memcg_data
+ * if MEMCG_DATA_OBJEXTS is set.
+ */
+struct slabobj_ext {
+	struct obj_cgroup *objcg;
+} __aligned(8);
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8b611e13153e..9ff97f4e74c5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -169,7 +169,7 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 	unsigned long memcg_data;
 #endif
 
@@ -306,7 +306,7 @@ struct folio {
 		};
 		atomic_t _mapcount;
 		atomic_t _refcount;
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 		unsigned long memcg_data;
 #endif
 #if defined(WANT_PAGE_VIRTUAL)
diff --git a/init/Kconfig b/init/Kconfig
index deda3d14135b..8ca5285108be 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -949,10 +949,14 @@ config CGROUP_FAVOR_DYNMODS
 
 	  Say N if unsure.
 
+config SLAB_OBJ_EXT
+	bool
+
 config MEMCG
 	bool "Memory controller"
 	select PAGE_COUNTER
 	select EVENTFD
+	select SLAB_OBJ_EXT
 	help
 	  Provides control over the memory footprint of tasks in a cgroup.
 
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 8350f5c06f2e..964b8482275b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -595,9 +595,9 @@ static unsigned long kfence_init_pool(void)
 			continue;
 
 		__folio_set_slab(slab_folio(slab));
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = (unsigned long)&kfence_metadata_init[i / 2 - 1].objcg |
-				   MEMCG_DATA_OBJCGS;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
+				 MEMCG_DATA_OBJEXTS;
 #endif
 	}
 
@@ -645,8 +645,8 @@ static unsigned long kfence_init_pool(void)
 		if (!i || (i % 2))
 			continue;
 
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = 0;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = 0;
 #endif
 		__folio_clear_slab(slab_folio(slab));
 	}
@@ -1139,8 +1139,8 @@ void __kfence_free(void *addr)
 {
 	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
 
-#ifdef CONFIG_MEMCG
-	KFENCE_WARN_ON(meta->objcg);
+#ifdef CONFIG_MEMCG_KMEM
+	KFENCE_WARN_ON(meta->obj_exts.objcg);
 #endif
 	/*
 	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index f46fbb03062b..084f5f36e8e7 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -97,8 +97,8 @@ struct kfence_metadata {
 	struct kfence_track free_track;
 	/* For updating alloc_covered on frees. */
 	u32 alloc_stack_hash;
-#ifdef CONFIG_MEMCG
-	struct obj_cgroup *objcg;
+#ifdef CONFIG_MEMCG_KMEM
+	struct slabobj_ext obj_exts;
 #endif
 };
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1ed40f9d3a27..7021639d2a6f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2977,13 +2977,6 @@ void mem_cgroup_commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-/*
- * The allocated objcg pointers array is not accounted directly.
- * Moreover, it should not come from DMA buffer and is not readily
- * reclaimable. So those GFP bits should be masked off.
- */
-#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | \
-				 __GFP_ACCOUNT | __GFP_NOFAIL)
 
 /*
 * mod_objcg_mlstate() may be called with irq enabled, so
@@ -3003,62 +2996,27 @@ static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
-int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
-			     gfp_t gfp, bool new_slab)
-{
-	unsigned int objects = objs_per_slab(s, slab);
-	unsigned long memcg_data;
-	void *vec;
-
-	gfp &= ~OBJCGS_CLEAR_MASK;
-	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
-			   slab_nid(slab));
-	if (!vec)
-		return -ENOMEM;
-
-	memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
-	if (new_slab) {
-		/*
-		 * If the slab is brand new and nobody can yet access its
-		 * memcg_data, no synchronization is required and memcg_data can
-		 * be simply assigned.
-		 */
-		slab->memcg_data = memcg_data;
-	} else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
-		/*
-		 * If the slab is already in use, somebody can allocate and
-		 * assign obj_cgroups in parallel. In this case the existing
-		 * objcg vector should be reused.
-		 */
-		kfree(vec);
-		return 0;
-	}
-
-	kmemleak_not_leak(vec);
-	return 0;
-}
-
 static __always_inline
 struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 {
 	/*
 	 * Slab objects are accounted individually, not per-page.
 	 * Memcg membership data for each individual object is saved in
-	 * slab->memcg_data.
+	 * slab->obj_exts.
 	 */
 	if (folio_test_slab(folio)) {
-		struct obj_cgroup **objcgs;
+		struct slabobj_ext *obj_exts;
 		struct slab *slab;
 		unsigned int off;
 
 		slab = folio_slab(folio);
-		objcgs = slab_objcgs(slab);
-		if (!objcgs)
+		obj_exts = slab_obj_exts(slab);
+		if (!obj_exts)
 			return NULL;
 
 		off = obj_to_index(slab->slab_cache, slab, p);
-		if (objcgs[off])
-			return obj_cgroup_memcg(objcgs[off]);
+		if (obj_exts[off].objcg)
+			return obj_cgroup_memcg(obj_exts[off].objcg);
 
 		return NULL;
 	}
@@ -3066,7 +3024,7 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 	/*
 	 * folio_memcg_check() is used here, because in theory we can encounter
 	 * a folio where the slab flag has been cleared already, but
-	 * slab->memcg_data has not been freed yet
+	 * slab->obj_exts has not been freed yet
 	 * folio_memcg_check() will guarantee that a proper memory
 	 * cgroup pointer or NULL will be returned.
 	 */
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 5634e5d890f8..262aa7d25f40 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -377,7 +377,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
 	if (!memcg_data)
 		goto out_unlock;
 
-	if (memcg_data & MEMCG_DATA_OBJCGS)
+	if (memcg_data & MEMCG_DATA_OBJEXTS)
 		ret += scnprintf(kbuf + ret, count - ret,
 				"Slab cache page\n");
 
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..436a126486b5 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -87,8 +87,8 @@ struct slab {
 	unsigned int __unused;
 
 	atomic_t __page_refcount;
-#ifdef CONFIG_MEMCG
-	unsigned long memcg_data;
+#ifdef CONFIG_SLAB_OBJ_EXT
+	unsigned long obj_exts;
 #endif
 };
 
@@ -97,8 +97,8 @@ struct slab {
 SLAB_MATCH(flags, __page_flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
-#ifdef CONFIG_MEMCG
-SLAB_MATCH(memcg_data, memcg_data);
+#ifdef CONFIG_SLAB_OBJ_EXT
+SLAB_MATCH(memcg_data, obj_exts);
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));
 
@@ -541,42 +541,90 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
 	return false;
 }
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_SLAB_OBJ_EXT
+
 /*
- * slab_objcgs - get the object cgroups vector associated with a slab
+ * slab_obj_exts - get the pointer to the slab object extension vector
+ * associated with a slab.
 * @slab: a pointer to the slab struct
 *
- * Returns a pointer to the object cgroups vector associated with the slab,
+ * Returns a pointer to the object extension vector associated with the slab,
 * or NULL if no such vector has been associated yet.
*/ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) { - unsigned long memcg_data = READ_ONCE(slab->memcg_data); + unsigned long obj_exts = READ_ONCE(slab->obj_exts); - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), +#ifdef CONFIG_MEMCG + VM_BUG_ON_PAGE(obj_exts && !(obj_exts & MEMCG_DATA_OBJEXTS), slab_page(slab)); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); + VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK); +#else + return (struct slabobj_ext *)obj_exts; +#endif } -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, - gfp_t gfp, bool new_slab); -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, - enum node_stat_item idx, int nr); -#else /* CONFIG_MEMCG_KMEM */ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab); + +static inline bool need_slab_obj_ext(void) +{ + /* + * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally + * inside memcg_slab_post_alloc_hook. No other users for now. + */ + return false; +} + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + struct slab *slab; + + if (!p) + return NULL; + + if (!need_slab_obj_ext()) + return NULL; + + slab = virt_to_slab(p); + if (!slab_obj_exts(slab) && + WARN(alloc_slab_obj_exts(slab, s, flags, false), + "%s, %s: Failed to create slab extension vector!\n", + __func__, s->name)) + return NULL; + + return slab_obj_exts(slab) + obj_to_index(s, slab, p); +} + +#else /* CONFIG_SLAB_OBJ_EXT */ + +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) { return NULL; } -static inline int memcg_alloc_slab_cgroups(struct slab *slab, - struct kmem_cache *s, gfp_t gfp, - bool new_slab) +static inline int alloc_slab_obj_exts(struct slab *slab, + struct kmem_cache *s, gfp_t gfp, + bool new_slab) { return 0; } -#endif /* CONFIG_MEMCG_KMEM */ + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + return NULL; +} + +#endif /* CONFIG_SLAB_OBJ_EXT */ + +#ifdef CONFIG_MEMCG_KMEM +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, + enum node_stat_item idx, int nr); +#endif size_t __ksize(const void *objp); diff --git a/mm/slab_common.c b/mm/slab_common.c index 238293b1dbe1..6bfa1810da5e 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -201,6 +201,54 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align, return NULL; } +#ifdef CONFIG_SLAB_OBJ_EXT +/* + * The allocated objcg pointers array is not accounted directly. + * Moreover, it should not come from DMA buffer and is not readily + * reclaimable. So those GFP bits should be masked off. 
+ */ +#define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | \ + __GFP_ACCOUNT | __GFP_NOFAIL) + +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab) +{ + unsigned int objects = objs_per_slab(s, slab); + unsigned long obj_exts; + void *vec; + + gfp &= ~OBJCGS_CLEAR_MASK; + vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, + slab_nid(slab)); + if (!vec) + return -ENOMEM; + + obj_exts = (unsigned long)vec; +#ifdef CONFIG_MEMCG + obj_exts |= MEMCG_DATA_OBJEXTS; +#endif + if (new_slab) { + /* + * If the slab is brand new and nobody can yet access its + * obj_exts, no synchronization is required and obj_exts can + * be simply assigned. + */ + slab->obj_exts = obj_exts; + } else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) { + /* + * If the slab is already in use, somebody can allocate and + * assign slabobj_exts in parallel. In this case the existing + * objcg vector should be reused. + */ + kfree(vec); + return 0; + } + + kmemleak_not_leak(vec); + return 0; +} +#endif /* CONFIG_SLAB_OBJ_EXT */ + static struct kmem_cache *create_cache(const char *name, unsigned int object_size, unsigned int align, slab_flags_t flags, unsigned int useroffset, diff --git a/mm/slub.c b/mm/slub.c index 2ef88bbf56a3..1eb1050814aa 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -683,10 +683,10 @@ static inline bool __slab_update_freelist(struct kmem_cache *s, struct slab *sla if (s->flags & __CMPXCHG_DOUBLE) { ret = __update_freelist_fast(slab, freelist_old, counters_old, - freelist_new, counters_new); + freelist_new, counters_new); } else { ret = __update_freelist_slow(slab, freelist_old, counters_old, - freelist_new, counters_new); + freelist_new, counters_new); } if (likely(ret)) return true; @@ -710,13 +710,13 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab, if (s->flags & __CMPXCHG_DOUBLE) { ret = __update_freelist_fast(slab, freelist_old, counters_old, - freelist_new, counters_new); + freelist_new, counters_new); } else { unsigned long flags; local_irq_save(flags); ret = __update_freelist_slow(slab, freelist_old, counters_old, - freelist_new, counters_new); + freelist_new, counters_new); local_irq_restore(flags); } if (likely(ret)) @@ -1881,13 +1881,25 @@ static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s) NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B; } -#ifdef CONFIG_MEMCG_KMEM -static inline void memcg_free_slab_cgroups(struct slab *slab) +#ifdef CONFIG_SLAB_OBJ_EXT +static inline void free_slab_obj_exts(struct slab *slab) +{ + struct slabobj_ext *obj_exts; + + obj_exts = slab_obj_exts(slab); + if (!obj_exts) + return; + + kfree(obj_exts); + slab->obj_exts = 0; +} +#else +static inline void free_slab_obj_exts(struct slab *slab) { - kfree(slab_objcgs(slab)); - slab->memcg_data = 0; } +#endif +#ifdef CONFIG_MEMCG_KMEM static inline size_t obj_full_size(struct kmem_cache *s) { /* @@ -1966,15 +1978,15 @@ static void __memcg_slab_post_alloc_hook(struct kmem_cache *s, if (likely(p[i])) { slab = virt_to_slab(p[i]); - if (!slab_objcgs(slab) && - memcg_alloc_slab_cgroups(slab, s, flags, false)) { + if (!slab_obj_exts(slab) && + alloc_slab_obj_exts(slab, s, flags, false)) { obj_cgroup_uncharge(objcg, obj_full_size(s)); continue; } off = obj_to_index(s, slab, p[i]); obj_cgroup_get(objcg); - slab_objcgs(slab)[off] = objcg; + slab_obj_exts(slab)[off].objcg = objcg; mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), obj_full_size(s)); } else { @@ -1995,18 +2007,18 @@ void memcg_slab_post_alloc_hook(struct 
kmem_cache *s, struct obj_cgroup *objcg, static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p, int objects, - struct obj_cgroup **objcgs) + struct slabobj_ext *obj_exts) { for (int i = 0; i < objects; i++) { struct obj_cgroup *objcg; unsigned int off; off = obj_to_index(s, slab, p[i]); - objcg = objcgs[off]; + objcg = obj_exts[off].objcg; if (!objcg) continue; - objcgs[off] = NULL; + obj_exts[off].objcg = NULL; obj_cgroup_uncharge(objcg, obj_full_size(s)); mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), -obj_full_size(s)); @@ -2018,16 +2030,16 @@ static __fastpath_inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p, int objects) { - struct obj_cgroup **objcgs; + struct slabobj_ext *obj_exts; if (!memcg_kmem_online()) return; - objcgs = slab_objcgs(slab); - if (likely(!objcgs)) + obj_exts = slab_obj_exts(slab); + if (likely(!obj_exts)) return; - __memcg_slab_free_hook(s, slab, p, objects, objcgs); + __memcg_slab_free_hook(s, slab, p, objects, obj_exts); } static inline @@ -2038,15 +2050,6 @@ void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects, obj_cgroup_uncharge(objcg, objects * obj_full_size(s)); } #else /* CONFIG_MEMCG_KMEM */ -static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr) -{ - return NULL; -} - -static inline void memcg_free_slab_cgroups(struct slab *slab) -{ -} - static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru, struct obj_cgroup **objcgp, @@ -2314,7 +2317,7 @@ static __always_inline void account_slab(struct slab *slab, int order, struct kmem_cache *s, gfp_t gfp) { if (memcg_kmem_online() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_slab_cgroups(slab, s, gfp, true); + alloc_slab_obj_exts(slab, s, gfp, true); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << order); @@ -2323,8 +2326,7 @@ static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { - if (memcg_kmem_online()) - memcg_free_slab_cgroups(slab); + free_slab_obj_exts(slab); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); @@ -3775,6 +3777,7 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg, unsigned int orig_size) { unsigned int zero_size = s->object_size; + struct slabobj_ext *obj_exts; bool kasan_init = init; size_t i; gfp_t init_flags = flags & gfp_allowed_mask; @@ -3817,6 +3820,7 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg, kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, init_flags); kmsan_slab_alloc(s, p[i], init_flags); + obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]); } memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
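[Editor's note] The central concurrency pattern in this patch is the lazy, lock-free installation of the per-slab extension vector in alloc_slab_obj_exts(): racing installers each allocate a candidate vector, one wins a single cmpxchg, and the losers free their copy and adopt the winner's. Below is a minimal user-space sketch of that pattern using C11 atomics; fake_slab, obj_ext and get_or_alloc_exts are invented names for illustration, not kernel API.

/* Sketch only: models the cmpxchg-based lazy install, not kernel code. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct obj_ext { void *owner; };              /* stand-in for slabobj_ext */

struct fake_slab {
	_Atomic uintptr_t obj_exts;           /* 0 until a vector is installed */
	unsigned int objects;
};

static struct obj_ext *get_or_alloc_exts(struct fake_slab *slab)
{
	uintptr_t old = atomic_load(&slab->obj_exts);

	if (old)                              /* fast path: already installed */
		return (struct obj_ext *)old;

	struct obj_ext *vec = calloc(slab->objects, sizeof(*vec));
	if (!vec)
		return NULL;

	uintptr_t expected = 0;
	if (!atomic_compare_exchange_strong(&slab->obj_exts, &expected,
					    (uintptr_t)vec)) {
		free(vec);                    /* lost the race: adopt winner's */
		return (struct obj_ext *)expected;
	}
	return vec;
}

int main(void)
{
	struct fake_slab s = { .objects = 8 };

	printf("exts at %p\n", (void *)get_or_alloc_exts(&s));
	printf("exts at %p (same vector)\n", (void *)get_or_alloc_exts(&s));
	free((void *)atomic_load(&s.obj_exts));
	return 0;
}

The losing path can kfree()/free() its vector safely because it was never published: no other CPU can hold a reference to an unpublished allocation.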
From patchwork Mon Feb 12 21:38:52 2024
Date: Mon, 12 Feb 2024 13:38:52 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-7-surenb@google.com>
Subject: [PATCH v3 06/35] mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext creation
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce __GFP_NO_OBJ_EXT flag in order to prevent recursive allocations when allocating slabobj_ext on a slab. Signed-off-by: Suren Baghdasaryan Reviewed-by: Kees Cook --- include/linux/gfp_types.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h index 868c8fb1bbc1..e36e168d8cfd 100644 --- a/include/linux/gfp_types.h +++ b/include/linux/gfp_types.h @@ -52,6 +52,9 @@ enum { #endif #ifdef CONFIG_LOCKDEP ___GFP_NOLOCKDEP_BIT, +#endif +#ifdef CONFIG_SLAB_OBJ_EXT + ___GFP_NO_OBJ_EXT_BIT, #endif ___GFP_LAST_BIT }; @@ -93,6 +96,11 @@ enum { #else #define ___GFP_NOLOCKDEP 0 #endif +#ifdef CONFIG_SLAB_OBJ_EXT +#define ___GFP_NO_OBJ_EXT BIT(___GFP_NO_OBJ_EXT_BIT) +#else +#define ___GFP_NO_OBJ_EXT 0 +#endif /* * Physical address zone modifiers (see linux/mmzone.h - low four bits) @@ -133,12 +141,15 @@ enum { * node with no fallbacks or placement policy enforcements. * * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg. + * + * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.
*/ #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) #define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) #define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) #define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE) #define __GFP_ACCOUNT ((__force gfp_t)___GFP_ACCOUNT) +#define __GFP_NO_OBJ_EXT ((__force gfp_t)___GFP_NO_OBJ_EXT) /** * DOC: Watermark modifiers
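[Editor's note] The hunk above follows a common kernel idiom: when CONFIG_SLAB_OBJ_EXT is off, ___GFP_NO_OBJ_EXT collapses to 0, so OR-ing it in or testing it stays legal and compiles to nothing. A stand-alone sketch of the idiom; CONFIG_FEATURE_X and the FLAG_* names are made up for the example.

/* Sketch only: conditional flag bit that degrades to 0 when configured out. */
#include <stdio.h>

#define BIT(n) (1UL << (n))

enum {
	FLAG_A_BIT,
	FLAG_B_BIT,
#ifdef CONFIG_FEATURE_X            /* hypothetical config switch */
	FLAG_X_BIT,
#endif
	FLAG_LAST_BIT
};

#define FLAG_A BIT(FLAG_A_BIT)
#define FLAG_B BIT(FLAG_B_BIT)
#ifdef CONFIG_FEATURE_X
#define FLAG_X BIT(FLAG_X_BIT)
#else
#define FLAG_X 0UL                 /* feature off: setting it is a no-op */
#endif

int main(void)
{
	unsigned long flags = FLAG_A | FLAG_X;   /* legal either way */

	printf("X set? %s\n", (flags & FLAG_X) ? "yes" : "no");
	return 0;
}

Because the disabled flag is 0 rather than undefined, no caller needs its own #ifdef around uses of the flag.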
+tupkrv4dy+r0oolt4hQg/RtZ0tsCzea9ADmN++7N3GGBi3fiw1Wza8ela4yGfx8DdkD yroVNOr/VXYbL33wgynPm0GQuUPt++eCXWFLHZzvjppQ1gEr6LFW90VX0TD/lbginN2R REd2GltS58uy8ee5Zq+SwKPt6kT16BgxfsXLdDNHnhFusIY6nAN8tt7nrs2a522WM/Kn uaIw== X-Forwarded-Encrypted: i=1; AJvYcCWzY4lCGearyTOgzoY7cJe9tga/+OF3kdcdQ+QfLnnR1CtCjreqPCoxwQj00El/hO0PSM0wxR4XwbT92T4VENdZ4sWbiiMNSznj7qmYhw== X-Gm-Message-State: AOJu0YzVo7EiGLaLfk/AsIcJCyxbiQAy07lmknVwjOwD0z+WHZp4tlS3 95phfnOmRJa7Mccu4qfqfLVec5+f2QXKgjVxVRoH0X50vESYMMWjLLmMmzYGjaDhEpXx45+TBmf RLw== X-Google-Smtp-Source: AGHT+IFIMBcV3x/W8fhRczbwea2w1q1gcT3ETfM1GWq+zDEkeIxrI9ytOpFdbEX8EDgRr+EtsMkcZCh6eRk= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:b848:2b3f:be49:9cbc]) (user=surenb job=sendgmr) by 2002:a25:53c3:0:b0:dcb:c4d3:6e07 with SMTP id h186-20020a2553c3000000b00dcbc4d36e07mr382686ybb.5.1707773987608; Mon, 12 Feb 2024 13:39:47 -0800 (PST) Date: Mon, 12 Feb 2024 13:38:53 -0800 In-Reply-To: <20240212213922.783301-1-surenb@google.com> Precedence: bulk X-Mailing-List: linux-modules@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240212213922.783301-1-surenb@google.com> X-Mailer: git-send-email 2.43.0.687.g38aa6559b0-goog Message-ID: <20240212213922.783301-8-surenb@google.com> Subject: [PATCH v3 07/35] mm/slab: introduce SLAB_NO_OBJ_EXT to avoid obj_ext creation From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org Slab extension objects can't be allocated before slab infrastructure is initialized. Some caches, like kmem_cache and kmem_cache_node, are created before slab infrastructure is initialized. Objects from these caches can't have extension objects. Introduce SLAB_NO_OBJ_EXT slab flag to mark these caches and avoid creating extensions for objects allocated from these slabs. 
Signed-off-by: Suren Baghdasaryan Reviewed-by: Kees Cook --- include/linux/slab.h | 7 +++++++ mm/slub.c | 5 +++-- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index b5f5ee8308d0..3ac2fc830f0f 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -164,6 +164,13 @@ #endif #define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */ +#ifdef CONFIG_SLAB_OBJ_EXT +/* Slab created using create_boot_cache */ +#define SLAB_NO_OBJ_EXT ((slab_flags_t __force)0x20000000U) +#else +#define SLAB_NO_OBJ_EXT 0 +#endif + /* * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests. * diff --git a/mm/slub.c b/mm/slub.c index 1eb1050814aa..9fd96238ed39 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -5650,7 +5650,8 @@ void __init kmem_cache_init(void) node_set(node, slab_nodes); create_boot_cache(kmem_cache_node, "kmem_cache_node", - sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0); + sizeof(struct kmem_cache_node), + SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0); hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI); @@ -5660,7 +5661,7 @@ void __init kmem_cache_init(void) create_boot_cache(kmem_cache, "kmem_cache", offsetof(struct kmem_cache, node) + nr_node_ids * sizeof(struct kmem_cache_node *), - SLAB_HWCACHE_ALIGN, 0, 0); + SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0); kmem_cache = bootstrap(&boot_kmem_cache); kmem_cache_node = bootstrap(&boot_kmem_cache_node);
From patchwork Mon Feb 12 21:38:54 2024
Date: Mon, 12 Feb 2024 13:38:54 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-9-surenb@google.com>
Subject: [PATCH v3 08/35] mm: prevent slabobj_ext allocations for slabobj_ext and kmem_cache objects
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Use __GFP_NO_OBJ_EXT to prevent recursions when allocating slabobj_ext objects. Also prevent slabobj_ext allocations for kmem_cache objects. Signed-off-by: Suren Baghdasaryan Reviewed-by: Kees Cook --- mm/slab.h | 6 ++++++ mm/slab_common.c | 2 ++ 2 files changed, 8 insertions(+) diff --git a/mm/slab.h b/mm/slab.h index 436a126486b5..f4ff635091e4 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -589,6 +589,12 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) if (!need_slab_obj_ext()) return NULL; + if (s->flags & SLAB_NO_OBJ_EXT) + return NULL; + + if (flags & __GFP_NO_OBJ_EXT) + return NULL; + slab = virt_to_slab(p); if (!slab_obj_exts(slab) && WARN(alloc_slab_obj_exts(slab, s, flags, false), diff --git a/mm/slab_common.c b/mm/slab_common.c index 6bfa1810da5e..83fec2dd2e2d 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -218,6 +218,8 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, void *vec; gfp &= ~OBJCGS_CLEAR_MASK; + /* Prevent recursive extension vector allocation */ + gfp |= __GFP_NO_OBJ_EXT; vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, slab_nid(slab)); if (!vec)
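[Editor's note] With this patch the series has three layered opt-outs before an extension vector is ever created: the global need_slab_obj_ext() switch, the per-cache SLAB_NO_OBJ_EXT flag, and the per-allocation __GFP_NO_OBJ_EXT flag. A user-space sketch of that guard chain follows; the names are stand-ins, not the kernel interfaces.

/* Sketch only: layered vetoes that keep extension allocation from recursing. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_NO_EXT (1u << 0)    /* per-cache veto, like SLAB_NO_OBJ_EXT */
#define ALLOC_NO_EXT (1u << 1)    /* per-call veto, like __GFP_NO_OBJ_EXT */

struct cache { const char *name; unsigned int flags; };

static bool exts_enabled = true;  /* like need_slab_obj_ext() */

static bool wants_ext(const struct cache *c, unsigned int alloc_flags)
{
	if (!exts_enabled)
		return false;     /* no registered user of extensions */
	if (c->flags & CACHE_NO_EXT)
		return false;     /* boot caches opt out permanently */
	if (alloc_flags & ALLOC_NO_EXT)
		return false;     /* the vector allocation itself opts out */
	return true;
}

int main(void)
{
	struct cache boot = { "kmem_cache", CACHE_NO_EXT };
	struct cache plain = { "some_cache", 0 };

	printf("%d %d %d\n",
	       wants_ext(&boot, 0),             /* 0: cache opted out */
	       wants_ext(&plain, ALLOC_NO_EXT), /* 0: caller opted out */
	       wants_ext(&plain, 0));           /* 1: extension created */
	return 0;
}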
From patchwork Mon Feb 12 21:38:55 2024
Date: Mon, 12 Feb 2024 13:38:55 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-10-surenb@google.com>
Subject: [PATCH v3 09/35] slab: objext: introduce objext_flags as extension to page_memcg_data_flags
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce objext_flags to store additional objext flags unrelated to memcg. Signed-off-by: Suren Baghdasaryan Reviewed-by: Kees Cook --- include/linux/memcontrol.h | 29 ++++++++++++++++++++++------- mm/slab.h | 4 +--- 2 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index eb1dc181e412..f3584e98b640 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -356,7 +356,22 @@ enum page_memcg_data_flags { __NR_MEMCG_DATA_FLAGS = (1UL << 2), }; -#define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) +#define __FIRST_OBJEXT_FLAG __NR_MEMCG_DATA_FLAGS + +#else /* CONFIG_MEMCG */ + +#define __FIRST_OBJEXT_FLAG (1UL << 0) + +#endif /* CONFIG_MEMCG */ + +enum objext_flags { + /* the next bit after the last actual flag */ + __NR_OBJEXTS_FLAGS = __FIRST_OBJEXT_FLAG, +}; + +#define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) + +#ifdef CONFIG_MEMCG static inline bool folio_memcg_kmem(struct folio *folio); @@ -390,7 +405,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -411,7 +426,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); - return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct obj_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -468,11 +483,11 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -511,11 +526,11 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } static inline struct mem_cgroup *page_memcg_check(struct page *page) diff --git a/mm/slab.h b/mm/slab.h index f4ff635091e4..77cf7474fe46 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -560,10 +560,8 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) slab_page(slab)); VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); - return (struct slabobj_ext *)(obj_exts &
~MEMCG_DATA_FLAGS_MASK); -#else - return (struct slabobj_ext *)obj_exts; #endif + return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK); } int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
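[Editor's note] What OBJEXTS_FLAGS_MASK formalizes is plain low-bit pointer tagging: the extension vector is at least word-aligned, so its bottom bits are free to carry flags, and every reader masks them off before dereferencing. A self-contained user-space sketch of the technique; the TAG_* names are invented for the example.

/* Sketch only: flags stored in the alignment bits of a pointer. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_OBJEXTS (1UL << 0)     /* like MEMCG_DATA_OBJEXTS */
#define TAG_KMEM    (1UL << 1)     /* like MEMCG_DATA_KMEM */
#define TAG_MASK    (TAG_OBJEXTS | TAG_KMEM)

struct ext { int x; };

int main(void)
{
	struct ext *vec = malloc(4 * sizeof(*vec));

	/* malloc alignment guarantees the low tag bits start out clear */
	assert(((uintptr_t)vec & TAG_MASK) == 0);

	uintptr_t word = (uintptr_t)vec | TAG_OBJEXTS;  /* pointer + flags */

	printf("tagged as objexts? %s\n",
	       (word & TAG_OBJEXTS) ? "yes" : "no");

	struct ext *back = (struct ext *)(word & ~TAG_MASK); /* strip tags */
	back[0].x = 42;
	printf("roundtrip ok: %d\n", back[0].x);
	free(back);
	return 0;
}

The patch's point is that the mask no longer belongs to memcg alone: __FIRST_OBJEXT_FLAG starts where the memcg bits end, so extension-specific flags can be added without CONFIG_MEMCG.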
From patchwork Mon Feb 12 21:38:56 2024
Date: Mon, 12 Feb 2024 13:38:56 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-11-surenb@google.com>
Subject: [PATCH v3 10/35] lib: code tagging framework
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Add basic infrastructure to support code tagging which stores tag common information consisting of the module name, function, file name and line number. Provide functions to register a new code tag type and navigate between code tags.
Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 71 ++++++++++++++ lib/Kconfig.debug | 4 + lib/Makefile | 1 + lib/codetag.c | 199 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 275 insertions(+) create mode 100644 include/linux/codetag.h create mode 100644 lib/codetag.c diff --git a/include/linux/codetag.h b/include/linux/codetag.h new file mode 100644 index 000000000000..a9d7adecc2a5 --- /dev/null +++ b/include/linux/codetag.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * code tagging framework + */ +#ifndef _LINUX_CODETAG_H +#define _LINUX_CODETAG_H + +#include + +struct codetag_iterator; +struct codetag_type; +struct seq_buf; +struct module; + +/* + * An instance of this structure is created in a special ELF section at every + * code location being tagged. At runtime, the special section is treated as + * an array of these. + */ +struct codetag { + unsigned int flags; /* used in later patches */ + unsigned int lineno; + const char *modname; + const char *function; + const char *filename; +} __aligned(8); + +union codetag_ref { + struct codetag *ct; +}; + +struct codetag_range { + struct codetag *start; + struct codetag *stop; +}; + +struct codetag_module { + struct module *mod; + struct codetag_range range; +}; + +struct codetag_type_desc { + const char *section; + size_t tag_size; +}; + +struct codetag_iterator { + struct codetag_type *cttype; + struct codetag_module *cmod; + unsigned long mod_id; + struct codetag *ct; +}; + +#define CODE_TAG_INIT { \ + .modname = KBUILD_MODNAME, \ + .function = __func__, \ + .filename = __FILE__, \ + .lineno = __LINE__, \ + .flags = 0, \ +} + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock); +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype); +struct codetag *codetag_next_ct(struct codetag_iterator *iter); + +void codetag_to_text(struct seq_buf *out, struct codetag *ct); + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc); + +#endif /* _LINUX_CODETAG_H */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 975a07f9f1cc..0be2d00c3696 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -968,6 +968,10 @@ config DEBUG_STACKOVERFLOW If in doubt, say "N". 
+config CODE_TAGGING + bool + select KALLSYMS + source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" source "lib/Kconfig.kmsan" diff --git a/lib/Makefile b/lib/Makefile index 6b09731d8e61..6b48b22fdfac 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -235,6 +235,7 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \ of-reconfig-notifier-error-inject.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o +obj-$(CONFIG_CODE_TAGGING) += codetag.o lib-$(CONFIG_GENERIC_BUG) += bug.o obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o diff --git a/lib/codetag.c b/lib/codetag.c new file mode 100644 index 000000000000..7708f8388e55 --- /dev/null +++ b/lib/codetag.c @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include +#include + +struct codetag_type { + struct list_head link; + unsigned int count; + struct idr mod_idr; + struct rw_semaphore mod_lock; /* protects mod_idr */ + struct codetag_type_desc desc; +}; + +static DEFINE_MUTEX(codetag_lock); +static LIST_HEAD(codetag_types); + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock) +{ + if (lock) + down_read(&cttype->mod_lock); + else + up_read(&cttype->mod_lock); +} + +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype) +{ + struct codetag_iterator iter = { + .cttype = cttype, + .cmod = NULL, + .mod_id = 0, + .ct = NULL, + }; + + return iter; +} + +static inline struct codetag *get_first_module_ct(struct codetag_module *cmod) +{ + return cmod->range.start < cmod->range.stop ? cmod->range.start : NULL; +} + +static inline +struct codetag *get_next_module_ct(struct codetag_iterator *iter) +{ + struct codetag *res = (struct codetag *) + ((char *)iter->ct + iter->cttype->desc.tag_size); + + return res < iter->cmod->range.stop ? res : NULL; +} + +struct codetag *codetag_next_ct(struct codetag_iterator *iter) +{ + struct codetag_type *cttype = iter->cttype; + struct codetag_module *cmod; + struct codetag *ct; + + lockdep_assert_held(&cttype->mod_lock); + + if (unlikely(idr_is_empty(&cttype->mod_idr))) + return NULL; + + ct = NULL; + while (true) { + cmod = idr_find(&cttype->mod_idr, iter->mod_id); + + /* If module was removed move to the next one */ + if (!cmod) + cmod = idr_get_next_ul(&cttype->mod_idr, + &iter->mod_id); + + /* Exit if no more modules */ + if (!cmod) + break; + + if (cmod != iter->cmod) { + iter->cmod = cmod; + ct = get_first_module_ct(cmod); + } else + ct = get_next_module_ct(iter); + + if (ct) + break; + + iter->mod_id++; + } + + iter->ct = ct; + return ct; +} + +void codetag_to_text(struct seq_buf *out, struct codetag *ct) +{ + seq_buf_printf(out, "%s:%u module:%s func:%s", + ct->filename, ct->lineno, + ct->modname, ct->function); +} + +static inline size_t range_size(const struct codetag_type *cttype, + const struct codetag_range *range) +{ + return ((char *)range->stop - (char *)range->start) / + cttype->desc.tag_size; +} + +static void *get_symbol(struct module *mod, const char *prefix, const char *name) +{ + char buf[64]; + int res; + + res = snprintf(buf, sizeof(buf), "%s%s", prefix, name); + if (WARN_ON(res < 1 || res > sizeof(buf))) + return NULL; + + return mod ? 
+ (void *)find_kallsyms_symbol_value(mod, buf) : + (void *)kallsyms_lookup_name(buf); +} + +static struct codetag_range get_section_range(struct module *mod, + const char *section) +{ + return (struct codetag_range) { + get_symbol(mod, "__start_", section), + get_symbol(mod, "__stop_", section), + }; +} + +static int codetag_module_init(struct codetag_type *cttype, struct module *mod) +{ + struct codetag_range range; + struct codetag_module *cmod; + int err; + + range = get_section_range(mod, cttype->desc.section); + if (!range.start || !range.stop) { + pr_warn("Failed to load code tags of type %s from the module %s\n", + cttype->desc.section, + mod ? mod->name : "(built-in)"); + return -EINVAL; + } + + /* Ignore empty ranges */ + if (range.start == range.stop) + return 0; + + BUG_ON(range.start > range.stop); + + cmod = kmalloc(sizeof(*cmod), GFP_KERNEL); + if (unlikely(!cmod)) + return -ENOMEM; + + cmod->mod = mod; + cmod->range = range; + + down_write(&cttype->mod_lock); + err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL); + if (err >= 0) + cttype->count += range_size(cttype, &range); + up_write(&cttype->mod_lock); + + if (err < 0) { + kfree(cmod); + return err; + } + + return 0; +} + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc) +{ + struct codetag_type *cttype; + int err; + + BUG_ON(desc->tag_size <= 0); + + cttype = kzalloc(sizeof(*cttype), GFP_KERNEL); + if (unlikely(!cttype)) + return ERR_PTR(-ENOMEM); + + cttype->desc = *desc; + idr_init(&cttype->mod_idr); + init_rwsem(&cttype->mod_lock); + + err = codetag_module_init(cttype, NULL); + if (unlikely(err)) { + kfree(cttype); + return ERR_PTR(err); + } + + mutex_lock(&codetag_lock); + list_add_tail(&cttype->link, &codetag_types); + mutex_unlock(&codetag_lock); + + return cttype; +}
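[Editor's note] The framework leans on a linker feature: objects placed in a section whose name is a valid C identifier are bounded by automatically generated __start_<section>/__stop_<section> symbols, which is exactly what get_symbol() resolves above. A compilable user-space sketch of the same mechanism, assuming GCC or Clang on an ELF target; demo_tags and all identifiers are invented for the example.

/* Sketch only: per-call-site records collected into one ELF section. */
#include <stdio.h>

struct demo_tag {
	unsigned int lineno;
	const char *function;
	const char *filename;
};

#define CONCAT_(a, b) a##b
#define CONCAT(a, b) CONCAT_(a, b)

/* Drop one tag instance into the "demo_tags" section at this code location. */
#define DEMO_TAG()							\
	static struct demo_tag						\
	__attribute__((used, section("demo_tags"), aligned(8)))	\
	CONCAT(_demo_tag_, __LINE__) = {				\
		.lineno = __LINE__,					\
		.function = __func__,					\
		.filename = __FILE__,					\
	}

/* Linker-provided bounds of the section, no registration code needed. */
extern struct demo_tag __start_demo_tags[];
extern struct demo_tag __stop_demo_tags[];

static void foo(void) { DEMO_TAG(); }
static void bar(void) { DEMO_TAG(); }

int main(void)
{
	foo();	/* calls only silence -Wunused; the tags exist at link time */
	bar();

	for (struct demo_tag *ct = __start_demo_tags;
	     ct < __stop_demo_tags; ct++)
		printf("%s:%u func:%s\n", ct->filename, ct->lineno,
		       ct->function);
	return 0;
}

Module support in the next patch is the same idea applied per .ko image, with the per-module bounds fetched via find_kallsyms_symbol_value() instead of the built-in symbols.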
From patchwork Mon Feb 12 21:38:57 2024
Date: Mon, 12 Feb 2024 13:38:57 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-12-surenb@google.com>
Subject: [PATCH v3 11/35] lib: code tagging module support
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Add support for code tagging from dynamically loaded modules.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/codetag.h | 12 +++++++++
 kernel/module/main.c    |  4 +++
 lib/codetag.c           | 58 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/include/linux/codetag.h b/include/linux/codetag.h
index a9d7adecc2a5..386733e89b31 100644
--- a/include/linux/codetag.h
+++ b/include/linux/codetag.h
@@ -42,6 +42,10 @@ struct codetag_module {
 struct codetag_type_desc {
 	const char *section;
 	size_t tag_size;
+	void (*module_load)(struct codetag_type *cttype,
+			    struct codetag_module *cmod);
+	void (*module_unload)(struct codetag_type *cttype,
+			      struct codetag_module *cmod);
 };
 
 struct codetag_iterator {
@@ -68,4 +72,12 @@ void codetag_to_text(struct seq_buf *out, struct codetag *ct);
 struct codetag_type *
 codetag_register_type(const struct codetag_type_desc *desc);
 
+#ifdef CONFIG_CODE_TAGGING
+void codetag_load_module(struct module *mod);
+void codetag_unload_module(struct module *mod);
+#else
+static inline void codetag_load_module(struct module *mod) {}
+static inline void codetag_unload_module(struct module *mod) {}
+#endif
+
 #endif /* _LINUX_CODETAG_H */
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 36681911c05a..f400ba076cc7 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -56,6 +56,7 @@
 #include
 #include
 #include
+#include <linux/codetag.h>
 #include
 #include
 #include "internal.h"
@@ -1242,6 +1243,7 @@ static void free_module(struct module *mod)
 {
 	trace_module_free(mod);
 
+	codetag_unload_module(mod);
 	mod_sysfs_teardown(mod);
 
 	/*
@@ -2978,6 +2980,8 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	/* Get rid of temporary copy. */
 	free_copy(info, flags);
 
+	codetag_load_module(mod);
+
 	/* Done! */
 	trace_module_load(mod);
diff --git a/lib/codetag.c b/lib/codetag.c
index 7708f8388e55..4ea57fb37346 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -108,15 +108,20 @@ static inline size_t range_size(const struct codetag_type *cttype,
 static void *get_symbol(struct module *mod, const char *prefix, const char *name)
 {
 	char buf[64];
+	void *ret;
 	int res;
 
 	res = snprintf(buf, sizeof(buf), "%s%s", prefix, name);
 	if (WARN_ON(res < 1 || res > sizeof(buf)))
 		return NULL;
 
-	return mod ?
+	preempt_disable();
+	ret = mod ?
 		(void *)find_kallsyms_symbol_value(mod, buf) :
 		(void *)kallsyms_lookup_name(buf);
+	preempt_enable();
+
+	return ret;
 }
 
 static struct codetag_range get_section_range(struct module *mod,
@@ -157,8 +162,11 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
 
 	down_write(&cttype->mod_lock);
 	err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL);
-	if (err >= 0)
+	if (err >= 0) {
 		cttype->count += range_size(cttype, &range);
+		if (cttype->desc.module_load)
+			cttype->desc.module_load(cttype, cmod);
+	}
 	up_write(&cttype->mod_lock);
 
 	if (err < 0) {
@@ -197,3 +205,49 @@ codetag_register_type(const struct codetag_type_desc *desc)
 
 	return cttype;
 }
+
+void codetag_load_module(struct module *mod)
+{
+	struct codetag_type *cttype;
+
+	if (!mod)
+		return;
+
+	mutex_lock(&codetag_lock);
+	list_for_each_entry(cttype, &codetag_types, link)
+		codetag_module_init(cttype, mod);
+	mutex_unlock(&codetag_lock);
+}
+
+void codetag_unload_module(struct module *mod)
+{
+	struct codetag_type *cttype;
+
+	if (!mod)
+		return;
+
+	mutex_lock(&codetag_lock);
+	list_for_each_entry(cttype, &codetag_types, link) {
+		struct codetag_module *found = NULL;
+		struct codetag_module *cmod;
+		unsigned long mod_id, tmp;
+
+		down_write(&cttype->mod_lock);
+		idr_for_each_entry_ul(&cttype->mod_idr, cmod, tmp, mod_id) {
+			if (cmod->mod && cmod->mod == mod) {
+				found = cmod;
+				break;
+			}
+		}
+		if (found) {
+			if (cttype->desc.module_unload)
+				cttype->desc.module_unload(cttype, cmod);
+
+			cttype->count -= range_size(cttype, &cmod->range);
+			idr_remove(&cttype->mod_idr, mod_id);
+			kfree(cmod);
+		}
+		up_write(&cttype->mod_lock);
+	}
+	mutex_unlock(&codetag_lock);
+}
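(Aside, not part of the patch: a minimal sketch of how a consumer of this API would use the new hooks. struct my_tag, the "my_tags" section and the callback bodies are hypothetical; only codetag_type_desc, codetag_register_type() and the module_load/module_unload callbacks come from this series.)

	struct my_tag {
		struct codetag ct;	/* must embed the generic codetag */
		/* per-callsite payload would go here */
	};

	static void my_module_load(struct codetag_type *cttype,
				   struct codetag_module *cmod)
	{
		/* initialize state for the tags in cmod->range */
	}

	static void my_module_unload(struct codetag_type *cttype,
				     struct codetag_module *cmod)
	{
		/* tear down that state before the module goes away */
	}

	static struct codetag_type *my_cttype;

	static int __init my_tags_init(void)
	{
		const struct codetag_type_desc desc = {
			.section	= "my_tags",
			.tag_size	= sizeof(struct my_tag),
			.module_load	= my_module_load,
			.module_unload	= my_module_unload,
		};

		my_cttype = codetag_register_type(&desc);
		return PTR_ERR_OR_ZERO(my_cttype);
	}

Once registered, codetag_load_module() and codetag_unload_module() above invoke these callbacks for every module that carries a "my_tags" section.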
From patchwork Mon Feb 12 21:38:58 2024
Date: Mon, 12 Feb 2024 13:38:58 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-13-surenb@google.com>
Subject: [PATCH v3 12/35] lib: prevent module unloading if memory is not freed
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Skip freeing a module's data sections if there are still non-zero
allocation tags, because otherwise, once those allocations are freed,
accessing their code tags would be a use-after-free.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/codetag.h |  6 +++---
 kernel/module/main.c    | 23 +++++++++++++++--------
 lib/codetag.c           | 11 ++++++++---
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/include/linux/codetag.h b/include/linux/codetag.h
index 386733e89b31..d98e4c8e86f0 100644
--- a/include/linux/codetag.h
+++ b/include/linux/codetag.h
@@ -44,7 +44,7 @@ struct codetag_type_desc {
 	size_t tag_size;
 	void (*module_load)(struct codetag_type *cttype,
 			    struct codetag_module *cmod);
-	void (*module_unload)(struct codetag_type *cttype,
+	bool (*module_unload)(struct codetag_type *cttype,
 			      struct codetag_module *cmod);
 };
 
@@ -74,10 +74,10 @@ codetag_register_type(const struct codetag_type_desc *desc);
 
 #ifdef CONFIG_CODE_TAGGING
 void codetag_load_module(struct module *mod);
-void codetag_unload_module(struct module *mod);
+bool codetag_unload_module(struct module *mod);
 #else
 static inline void codetag_load_module(struct module *mod) {}
-static inline void codetag_unload_module(struct module *mod) {}
+static inline bool codetag_unload_module(struct module *mod) { return true; }
 #endif
 
 #endif /* _LINUX_CODETAG_H */
diff --git a/kernel/module/main.c b/kernel/module/main.c
index f400ba076cc7..658b631e76ad 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1211,15 +1211,19 @@ static void *module_memory_alloc(unsigned int size, enum mod_mem_type type)
 	return module_alloc(size);
 }
 
-static void module_memory_free(void *ptr, enum mod_mem_type type)
+static void module_memory_free(void *ptr, enum mod_mem_type type,
+			       bool unload_codetags)
 {
+	if (!unload_codetags && mod_mem_type_is_core_data(type))
+		return;
+
 	if (mod_mem_use_vmalloc(type))
 		vfree(ptr);
 	else
 		module_memfree(ptr);
 }
 
-static void free_mod_mem(struct module *mod)
+static void free_mod_mem(struct module *mod, bool unload_codetags)
 {
 	for_each_mod_mem_type(type) {
 		struct module_memory *mod_mem = &mod->mem[type];
@@ -1230,20 +1234,23 @@ static void free_mod_mem(struct module *mod)
 		/* Free lock-classes; relies on the preceding sync_rcu(). */
 		lockdep_free_key_range(mod_mem->base, mod_mem->size);
 		if (mod_mem->size)
-			module_memory_free(mod_mem->base, type);
+			module_memory_free(mod_mem->base, type,
+					   unload_codetags);
 	}
 
 	/* MOD_DATA hosts mod, so free it at last */
 	lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size);
-	module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA);
+	module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA, unload_codetags);
 }
 
 /* Free a module, remove from lists, etc. */
 static void free_module(struct module *mod)
 {
+	bool unload_codetags;
+
 	trace_module_free(mod);
 
-	codetag_unload_module(mod);
+	unload_codetags = codetag_unload_module(mod);
 	mod_sysfs_teardown(mod);
 
 	/*
@@ -1285,7 +1292,7 @@ static void free_module(struct module *mod)
 	kfree(mod->args);
 	percpu_modfree(mod);
 
-	free_mod_mem(mod);
+	free_mod_mem(mod, unload_codetags);
 }
 
 void *__symbol_get(const char *symbol)
@@ -2298,7 +2305,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	return 0;
 out_enomem:
 	for (t--; t >= 0; t--)
-		module_memory_free(mod->mem[t].base, t);
+		module_memory_free(mod->mem[t].base, t, true);
 	return ret;
 }
 
@@ -2428,7 +2435,7 @@ static void module_deallocate(struct module *mod, struct load_info *info)
 	percpu_modfree(mod);
 	module_arch_freeing_init(mod);
 
-	free_mod_mem(mod);
+	free_mod_mem(mod, true);
 }
 
 int __weak module_finalize(const Elf_Ehdr *hdr,
diff --git a/lib/codetag.c b/lib/codetag.c
index 4ea57fb37346..0ad4ea66c769 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 struct codetag_type {
 	struct list_head link;
@@ -219,12 +220,13 @@ void codetag_load_module(struct module *mod)
 	mutex_unlock(&codetag_lock);
 }
 
-void codetag_unload_module(struct module *mod)
+bool codetag_unload_module(struct module *mod)
 {
 	struct codetag_type *cttype;
+	bool unload_ok = true;
 
 	if (!mod)
-		return;
+		return true;
 
 	mutex_lock(&codetag_lock);
 	list_for_each_entry(cttype, &codetag_types, link) {
@@ -241,7 +243,8 @@ void codetag_unload_module(struct module *mod)
 		}
 		if (found) {
 			if (cttype->desc.module_unload)
-				cttype->desc.module_unload(cttype, cmod);
+				if (!cttype->desc.module_unload(cttype, cmod))
+					unload_ok = false;
 
 			cttype->count -= range_size(cttype, &cmod->range);
 			idr_remove(&cttype->mod_idr, mod_id);
@@ -250,4 +253,6 @@ void codetag_unload_module(struct module *mod)
 		up_write(&cttype->mod_lock);
 	}
 	mutex_unlock(&codetag_lock);
+
+	return unload_ok;
}
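(Aside, not part of the patch: with the bool return, a tag type can now veto freeing of the module's core data sections. A sketch under that contract, where my_tags_still_in_use() is a hypothetical predicate:)

	static bool my_module_unload(struct codetag_type *cttype,
				     struct codetag_module *cmod)
	{
		/*
		 * Returning false makes module_memory_free() skip the
		 * module's core data sections, so tags still referenced by
		 * live objects do not become dangling pointers.
		 */
		return !my_tags_still_in_use(cmod);
	}

The next patch implements exactly this policy for allocation tags in alloc_tag_module_unload().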
From patchwork Mon Feb 12 21:38:59 2024
Date: Mon, 12 Feb 2024 13:38:59 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-14-surenb@google.com>
Subject: [PATCH v3 13/35] lib: add allocation tagging support for memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Introduce CONFIG_MEM_ALLOC_PROFILING, which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with a /proc/allocinfo interface that outputs allocation tag information
when the feature is enabled.

CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.

Memory allocation profiling can be enabled or disabled at runtime using
the /proc/sys/vm/mem_profiling sysctl when
CONFIG_MEM_ALLOC_PROFILING_DEBUG=n. CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
enables memory allocation profiling by default.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 Documentation/admin-guide/sysctl/vm.rst |  16 +++
 Documentation/filesystems/proc.rst      |  28 +++++
 include/asm-generic/codetag.lds.h       |  14 +++
 include/asm-generic/vmlinux.lds.h       |   3 +
 include/linux/alloc_tag.h               | 133 ++++++++++++++++++++
 include/linux/sched.h                   |  24 ++++
 lib/Kconfig.debug                       |  25 ++++
 lib/Makefile                            |   2 +
 lib/alloc_tag.c                         | 158 ++++++++++++++++++++++++
 scripts/module.lds.S                    |   7 ++
 10 files changed, 410 insertions(+)
 create mode 100644 include/asm-generic/codetag.lds.h
 create mode 100644 include/linux/alloc_tag.h
 create mode 100644 lib/alloc_tag.c

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index c59889de122b..a214719492ea 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
 - legacy_va_layout
 - lowmem_reserve_ratio
 - max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
 - memory_failure_early_kill
 - memory_failure_recovery
 - min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocation.
 
 The default value is 65530.
 
+
+mem_profiling
+=============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
 memory_failure_early_kill:
 ==========================
 
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 104c6d047d9b..40d6d18308e4 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
 ============ ===============================================================
 File         Content
 ============ ===============================================================
+ allocinfo    Memory allocations profiling information
  apm          Advanced power management info
  bootconfig   Kernel command line obtained from boot config,
               and, if there were kernel parameters from the
@@ -953,6 +954,33 @@ also be allocatable although a lot of filesystem metadata may have to be
 reclaimed to achieve this.
 
+
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module and the function calling the allocation. The number of bytes
+allocated at each location is reported.
+
+Example output.
+
+::
+
+    > cat /proc/allocinfo
+
+      153MiB     mm/slub.c:1826 module:slub func:alloc_slab_page
+     6.08MiB     mm/slab_common.c:950 module:slab_common func:_kmalloc_order
+     5.09MiB     mm/memcontrol.c:2814 module:memcontrol func:alloc_slab_obj_exts
+     4.54MiB     mm/page_alloc.c:5777 module:page_alloc func:alloc_pages_exact
+     1.32MiB     include/asm-generic/pgalloc.h:63 module:pgtable func:__pte_alloc_one
+     1.16MiB     fs/xfs/xfs_log_priv.h:700 module:xfs func:xlog_kvmalloc
+     1.00MiB     mm/swap_cgroup.c:48 module:swap_cgroup func:swap_cgroup_prepare
+      734KiB     fs/xfs/kmem.c:20 module:xfs func:kmem_alloc
+      640KiB     kernel/rcu/tree.c:3184 module:tree func:fill_page_cache_func
+      640KiB     drivers/char/virtio_console.c:452 module:virtio_console func:alloc_buf
+    ...
+
+
 meminfo
 ~~~~~~~
 
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h
new file mode 100644
index 000000000000..64f536b80380
--- /dev/null
+++ b/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name)	\
+	. = ALIGN(8);			\
+	__start_##_name = .;		\
+	KEEP(*(_name))			\
+	__stop_##_name = .;
+
+#define CODETAG_SECTIONS()		\
+	SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 5dd3a61d673d..c9997dc50c50 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
  * [__nosave_begin, __nosave_end] for the nosave data
  */
 
+#include <asm-generic/codetag.lds.h>
+
 #ifndef LOAD_OFFSET
 #define LOAD_OFFSET 0
 #endif
@@ -366,6 +368,7 @@
 	. = ALIGN(8);						\
 	BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes)	\
 	BOUNDED_SECTION_BY(__dyndbg, ___dyndbg)			\
+	CODETAG_SECTIONS()					\
 	LIKELY_PROFILE()					\
 	BRANCH_PROFILE()					\
 	TRACE_PRINTKS()						\
diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
new file mode 100644
index 000000000000..cf55a149fa84
--- /dev/null
+++ b/include/linux/alloc_tag.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct alloc_tag_counters {
+	u64 bytes;
+	u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. Embedded codetag utilizes codetag framework.
+ */
+struct alloc_tag {
+	struct codetag ct;
+	struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+	return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag, _old)				\
+	static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+	static struct alloc_tag _alloc_tag __used __aligned(8)		\
+	__section("alloc_tags") = {					\
+		.ct = CODE_TAG_INIT,					\
+		.counters = &_alloc_tag_cntr };				\
+	struct alloc_tag * __maybe_unused _old = alloc_tag_save(&_alloc_tag)
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+			 mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+	return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+				   &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+	struct alloc_tag_counters v = { 0, 0 };
+	struct alloc_tag_counters *counter;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		counter = per_cpu_ptr(tag->counters, cpu);
+		v.bytes += counter->bytes;
+		v.calls += counter->calls;
+	}
+
+	return v;
+}
+
+static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+	struct alloc_tag *tag;
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+#endif
+	if (!ref || !ref->ct)
+		return;
+
+	tag = ct_to_alloc_tag(ref->ct);
+
+	this_cpu_sub(tag->counters->bytes, bytes);
+	this_cpu_dec(tag->counters->calls);
+
+	ref->ct = NULL;
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+	__alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes)
+{
+	__alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN_ONCE(ref && ref->ct,
+		  "alloc_tag was not cleared (got tag for %s:%u)\n",
+		  ref->ct->filename, ref->ct->lineno);
+
+	WARN_ONCE(!tag, "current->alloc_tag not set");
+#endif
+	if (!ref || !tag)
+		return;
+
+	ref->ct = &tag->ct;
+	this_cpu_add(tag->counters->bytes, bytes);
+	this_cpu_inc(tag->counters->calls);
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag, _old)
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+				 size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffe8f618ab86..da68a10517c8 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
 	unsigned int			flags;
 	unsigned int			ptrace;
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	struct alloc_tag		*alloc_tag;
+#endif
+
 #ifdef CONFIG_SMP
 	int				on_cpu;
 	struct __call_single_node	wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
 	struct task_group		*sched_task_group;
 #endif
 
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * Clamp values requested for a scheduling entity.
@@ -2183,4 +2188,23 @@ static inline int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }
 
 extern void sched_set_stop_task(int cpu, struct task_struct *stop);
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+	swap(current->alloc_tag, tag);
+	return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+	WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+	current->alloc_tag = old;
+}
+#else
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag) { return NULL; }
+#define alloc_tag_restore(_tag, _old)
+#endif
+
 #endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 0be2d00c3696..78d258ca508f 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
 	bool
 	select KALLSYMS
 
+config MEM_ALLOC_PROFILING
+	bool "Enable memory allocation profiling"
+	default n
+	depends on PROC_FS
+	depends on !DEBUG_FORCE_WEAK_PER_CPU
+	select CODE_TAGGING
+	help
+	  Track allocation source code and record total allocation size
+	  initiated at that code location. The mechanism can be used to track
+	  memory leaks with a low performance and memory impact.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+	bool "Enable memory allocation profiling by default"
+	default y
+	depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+	bool "Memory allocation profiler debugging"
+	default n
+	depends on MEM_ALLOC_PROFILING
+	select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+	help
+	  Adds warnings with helpful error messages for memory allocation
+	  profiling.
+
 source "lib/Kconfig.kasan"
 source "lib/Kconfig.kfence"
 source "lib/Kconfig.kmsan"
diff --git a/lib/Makefile b/lib/Makefile
index 6b48b22fdfac..859112f09bf5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -236,6 +236,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 obj-$(CONFIG_CODE_TAGGING) += codetag.o
 
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
 lib-$(CONFIG_GENERIC_BUG) += bug.o
 
 obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
new file mode 100644
index 000000000000..4fc031f9cefd
--- /dev/null
+++ b/lib/alloc_tag.c
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+			mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+	struct codetag_iterator *iter;
+	struct codetag *ct;
+	loff_t node = *pos;
+
+	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+	m->private = iter;
+	if (!iter)
+		return NULL;
+
+	codetag_lock_module_list(alloc_tag_cttype, true);
+	*iter = codetag_get_ct_iter(alloc_tag_cttype);
+	while ((ct = codetag_next_ct(iter)) != NULL && node)
+		node--;
+
+	return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+	struct codetag *ct = codetag_next_ct(iter);
+
+	(*pos)++;
+	if (!ct)
+		return NULL;
+
+	return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+	if (iter) {
+		codetag_lock_module_list(alloc_tag_cttype, false);
+		kfree(iter);
+	}
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+	struct alloc_tag *tag = ct_to_alloc_tag(ct);
+	struct alloc_tag_counters counter = alloc_tag_read(tag);
+	s64 bytes = counter.bytes;
+	char val[10], *p = val;
+
+	if (bytes < 0) {
+		*p++ = '-';
+		bytes = -bytes;
+	}
+
+	string_get_size(bytes, 1,
+			STRING_SIZE_BASE2|STRING_SIZE_NOSPACE,
+			p, val + ARRAY_SIZE(val) - p);
+
+	seq_buf_printf(out, "%8s %8llu ", val, counter.calls);
+	codetag_to_text(out, ct);
+	seq_buf_putc(out, ' ');
+	seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+	struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+	char *bufp;
+	size_t n = seq_get_buf(m, &bufp);
+	struct seq_buf buf;
+
+	seq_buf_init(&buf, bufp, n);
+	alloc_tag_to_text(&buf, iter->ct);
+	seq_commit(m, seq_buf_used(&buf));
+	return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+	.start	= allocinfo_start,
+	.next	= allocinfo_next,
+	.stop	= allocinfo_stop,
+	.show	= allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+	proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+				    struct codetag_module *cmod)
+{
+	struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+	struct alloc_tag_counters counter;
+	bool module_unused = true;
+	struct alloc_tag *tag;
+	struct codetag *ct;
+
+	for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+		if (iter.cmod != cmod)
+			continue;
+
+		tag = ct_to_alloc_tag(ct);
+		counter = alloc_tag_read(tag);
+
+		if (WARN(counter.bytes, "%s:%u module %s func:%s has %llu allocated at module unload",
+			 ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+			module_unused = false;
+	}
+
+	return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+	{
+		.procname	= "mem_profiling",
+		.data		= &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+		.mode		= 0444,
+#else
+		.mode		= 0644,
+#endif
+		.proc_handler	= proc_do_static_key,
+	},
+	{ }
+};
+
+static int __init alloc_tag_init(void)
+{
+	const struct codetag_type_desc desc = {
+		.section	= "alloc_tags",
+		.tag_size	= sizeof(struct alloc_tag),
+		.module_unload	= alloc_tag_module_unload,
+	};
+
+	alloc_tag_cttype = codetag_register_type(&desc);
+	if (IS_ERR_OR_NULL(alloc_tag_cttype))
+		return PTR_ERR(alloc_tag_cttype);
+
+	register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+	procfs_init();
+
+	return 0;
+}
+module_init(alloc_tag_init);
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index bf5bcf2836d8..45c67a0994f3 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -9,6 +9,8 @@
 #define DISCARD_EH_FRAME	*(.eh_frame)
 #endif
 
+#include <asm-generic/codetag.lds.h>
+
 SECTIONS {
 	/DISCARD/ : {
 		*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
 	.data : {
 		*(.data .data.[0-9a-zA-Z_]*)
 		*(.data..L*)
+		CODETAG_SECTIONS()
 	}
 
 	.rodata : {
 		*(.rodata .rodata.[0-9a-zA-Z_]*)
 		*(.rodata..L*)
 	}
+#else
+	.data : {
+		CODETAG_SECTIONS()
+	}
 #endif
 }
From patchwork Mon Feb 12 21:39:00 2024
Date: Mon, 12 Feb 2024 13:39:00 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-15-surenb@google.com>
Subject: [PATCH v3 14/35] lib: introduce support for page allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce helper functions to easily instrument page allocators by
storing, in a page_ext field, a pointer to the allocation tag associated
with the code that allocated the page.
Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/page_ext.h    |  1 -
 include/linux/pgalloc_tag.h | 73 +++++++++++++++++++++++++++++++++++++
 lib/Kconfig.debug           |  1 +
 lib/alloc_tag.c             | 17 +++++++++
 mm/mm_init.c                |  1 +
 mm/page_alloc.c             |  4 ++
 mm/page_ext.c               |  4 ++
 7 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/pgalloc_tag.h

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index be98564191e6..07e0656898f9 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -4,7 +4,6 @@
 
 #include
 #include
-#include
 
 struct pglist_data;
 
diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
new file mode 100644
index 000000000000..a060c26eb449
--- /dev/null
+++ b/include/linux/pgalloc_tag.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * page allocation tagging
+ */
+#ifndef _LINUX_PGALLOC_TAG_H
+#define _LINUX_PGALLOC_TAG_H
+
+#include <linux/alloc_tag.h>
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+#include <linux/page_ext.h>
+
+extern struct page_ext_operations page_alloc_tagging_ops;
+extern struct page_ext *page_ext_get(struct page *page);
+extern void page_ext_put(struct page_ext *page_ext);
+
+static inline union codetag_ref *codetag_ref_from_page_ext(struct page_ext *page_ext)
+{
+	return (void *)page_ext + page_alloc_tagging_ops.offset;
+}
+
+static inline struct page_ext *page_ext_from_codetag_ref(union codetag_ref *ref)
+{
+	return (void *)ref - page_alloc_tagging_ops.offset;
+}
+
+static inline union codetag_ref *get_page_tag_ref(struct page *page)
+{
+	if (page && mem_alloc_profiling_enabled()) {
+		struct page_ext *page_ext = page_ext_get(page);
+
+		if (page_ext)
+			return codetag_ref_from_page_ext(page_ext);
+	}
+	return NULL;
+}
+
+static inline void put_page_tag_ref(union codetag_ref *ref)
+{
+	page_ext_put(page_ext_from_codetag_ref(ref));
+}
+
+static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+				   unsigned int order)
+{
+	union codetag_ref *ref = get_page_tag_ref(page);
+
+	if (ref) {
+		alloc_tag_add(ref, task->alloc_tag, PAGE_SIZE << order);
+		put_page_tag_ref(ref);
+	}
+}
+
+static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
+{
+	union codetag_ref *ref = get_page_tag_ref(page);
+
+	if (ref) {
+		alloc_tag_sub(ref, PAGE_SIZE << order);
+		put_page_tag_ref(ref);
+	}
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+				   unsigned int order) {}
+static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_PGALLOC_TAG_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 78d258ca508f..7bbdb0ddb011 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -978,6 +978,7 @@ config MEM_ALLOC_PROFILING
 	depends on PROC_FS
 	depends on !DEBUG_FORCE_WEAK_PER_CPU
 	select CODE_TAGGING
+	select PAGE_EXTENSION
 	help
 	  Track allocation source code and record total allocation size
 	  initiated at that code location. The mechanism can be used to track
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 4fc031f9cefd..2d5226d9262d 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include <linux/page_ext.h>
 #include
 #include
 #include
@@ -124,6 +125,22 @@ static bool alloc_tag_module_unload(struct codetag_type *cttype,
 	return module_unused;
 }
 
+static __init bool need_page_alloc_tagging(void)
+{
+	return true;
+}
+
+static __init void init_page_alloc_tagging(void)
+{
+}
+
+struct page_ext_operations page_alloc_tagging_ops = {
+	.size	= sizeof(union codetag_ref),
+	.need	= need_page_alloc_tagging,
+	.init	= init_page_alloc_tagging,
+};
+EXPORT_SYMBOL(page_alloc_tagging_ops);
+
 static struct ctl_table memory_allocation_profiling_sysctls[] = {
 	{
 		.procname	= "mem_profiling",
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2c19f5515e36..e9ea2919d02d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/pgalloc_tag.h>
 #include
 #include
 #include
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 150d4f23b010..edb79a55a252 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include <linux/pgalloc_tag.h>
 #include
 #include "internal.h"
 #include "shuffle.h"
@@ -1100,6 +1101,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		/* Do not let hwpoison pages hit pcplists/buddy */
 		reset_page_owner(page, order);
 		page_table_check_free(page, order);
+		pgalloc_tag_sub(page, order);
 		return false;
 	}
 
@@ -1139,6 +1141,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
+	pgalloc_tag_sub(page, order);
 
 	if (!PageHighMem(page)) {
 		debug_check_no_locks_freed(page_address(page),
@@ -1532,6 +1535,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 
 	set_page_owner(page, order, gfp_flags);
 	page_table_check_alloc(page, order);
+	pgalloc_tag_add(page, current, order);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 4548fcc66d74..3c58fe8a24df 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/pgalloc_tag.h>
 
 /*
  * struct page extension
@@ -82,6 +83,9 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
 #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
 	&page_idle_ops,
 #endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	&page_alloc_tagging_ops,
+#endif
#ifdef CONFIG_PAGE_TABLE_CHECK
 	&page_table_check_ops,
 #endif
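(Aside, not part of the patch: how the new accessors compose when inspecting a page; my_print_page_tag() is hypothetical:)

	static void my_print_page_tag(struct page *page)
	{
		union codetag_ref *ref = get_page_tag_ref(page);

		/* NULL when profiling is disabled or there is no page_ext */
		if (ref) {
			if (ref->ct)
				pr_info("allocated at %s:%u\n",
					ref->ct->filename, ref->ct->lineno);
			/* pairs with the page_ext_get() inside get_page_tag_ref() */
			put_page_tag_ref(ref);
		}
	}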
From patchwork Mon Feb 12 21:39:01 2024
Date: Mon, 12 Feb 2024 13:39:01 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-16-surenb@google.com>
Subject: [PATCH v3 15/35] mm: percpu: increase PERCPU_MODULE_RESERVE to accommodate allocation tags
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
As each allocation tag generates a per-cpu variable, more space is
required to store them. Increase PERCPU_MODULE_RESERVE to provide
enough area. A better long-term solution would be to allocate this
memory dynamically.

Signed-off-by: Suren Baghdasaryan
Signed-off-by: Kent Overstreet
Cc: Peter Zijlstra
Cc: Tejun Heo
---
 include/linux/percpu.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 8c677f185901..62b5eb45bd89 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -14,7 +14,11 @@
 
 /* enough to cover all DEFINE_PER_CPUs in modules */
 #ifdef CONFIG_MODULES
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+#define PERCPU_MODULE_RESERVE	(8 << 12)
+#else
 #define PERCPU_MODULE_RESERVE	(8 << 10)
+#endif
 #else
 #define PERCPU_MODULE_RESERVE	0
 #endif
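(Aside, not part of the patch, for scale: each tagged callsite adds one per-cpu struct alloc_tag_counters, i.e. two u64s or 16 bytes per CPU. The old reserve of (8 << 10) = 8 KiB would therefore fit at most 512 tagged callsites in a module, and fewer in practice since other DEFINE_PER_CPUs share the reserve; the new (8 << 12) = 32 KiB accommodates roughly 2048.)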
From patchwork Mon Feb 12 21:39:02 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554090
Date: Mon, 12 Feb 2024 13:39:02 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-17-surenb@google.com>
Subject: [PATCH v3 16/35] change alloc_pages name in dma_map_ops to avoid name conflicts
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Once alloc_pages is redefined as a macro, every use of that name is replaced during preprocessing. Rename the conflicting dma_map_ops callback (and its callers) to alloc_pages_op so the preprocessor does not substitute it where that is not intended.

Signed-off-by: Suren Baghdasaryan
---
 arch/alpha/kernel/pci_iommu.c           | 2 +-
 arch/mips/jazz/jazzdma.c                | 2 +-
 arch/powerpc/kernel/dma-iommu.c         | 2 +-
 arch/powerpc/platforms/ps3/system-bus.c | 4 ++--
 arch/powerpc/platforms/pseries/vio.c    | 2 +-
 arch/x86/kernel/amd_gart_64.c           | 2 +-
 drivers/iommu/dma-iommu.c               | 2 +-
 drivers/parisc/ccio-dma.c               | 2 +-
 drivers/parisc/sba_iommu.c              | 2 +-
 drivers/xen/grant-dma-ops.c             | 2 +-
 drivers/xen/swiotlb-xen.c               | 2 +-
 include/linux/dma-map-ops.h             | 2 +-
 kernel/dma/mapping.c                    | 4 ++--
 13 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index c81183935e97..7fcf3e9b7103 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -929,7 +929,7 @@ const struct dma_map_ops alpha_pci_ops = {
 	.dma_supported = alpha_pci_supported,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
 EXPORT_SYMBOL(alpha_pci_ops);
diff --git a/arch/mips/jazz/jazzdma.c b/arch/mips/jazz/jazzdma.c
index eabddb89d221..c97b089b9902 100644
--- a/arch/mips/jazz/jazzdma.c
+++ b/arch/mips/jazz/jazzdma.c
@@ -617,7 +617,7 @@ const struct dma_map_ops jazz_dma_ops = {
 	.sync_sg_for_device = jazz_dma_sync_sg_for_device,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
 EXPORT_SYMBOL(jazz_dma_ops);
diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index 8920862ffd79..f0ae39e77e37 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -216,6 +216,6 @@ const struct dma_map_ops dma_iommu_ops = {
 	.get_required_mask = dma_iommu_get_required_mask,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platforms/ps3/system-bus.c
index d6b5f5ecd515..56dc6b29a3e7 100644
--- a/arch/powerpc/platforms/ps3/system-bus.c
+++ b/arch/powerpc/platforms/ps3/system-bus.c
@@ -695,7 +695,7 @@ static const struct dma_map_ops ps3_sb_dma_ops = {
 	.unmap_page = ps3_unmap_page,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
 
@@ -709,7 +709,7 @@ static const struct dma_map_ops ps3_ioc0_dma_ops = {
 	.unmap_page = ps3_unmap_page,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
index 2dc9cbc4bcd8..0c90fc4c3796 100644
--- a/arch/powerpc/platforms/pseries/vio.c
+++ b/arch/powerpc/platforms/pseries/vio.c
@@ -611,7 +611,7 @@ static const struct dma_map_ops vio_dma_mapping_ops = {
 	.get_required_mask = dma_iommu_get_required_mask,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 2ae98f754e59..c884deca839b 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -676,7 +676,7 @@ static const struct dma_map_ops gart_dma_ops = {
 	.get_sgtable = dma_common_get_sgtable,
 	.dma_supported = dma_direct_supported,
 	.get_required_mask = dma_direct_get_required_mask,
-	.alloc_pages = dma_direct_alloc_pages,
+	.alloc_pages_op = dma_direct_alloc_pages,
 	.free_pages = dma_direct_free_pages,
 };
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 50ccc4f1ef81..8a1f7f5d1bca 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1710,7 +1710,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.flags = DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc = iommu_dma_alloc,
 	.free = iommu_dma_free,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 	.alloc_noncontiguous = iommu_dma_alloc_noncontiguous,
 	.free_noncontiguous = iommu_dma_free_noncontiguous,
diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index 9ce0d20a6c58..feef537257d0 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -1022,7 +1022,7 @@ static const struct dma_map_ops ccio_ops = {
 	.map_sg = ccio_map_sg,
 	.unmap_sg = ccio_unmap_sg,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index 784037837f65..fc3863c09f83 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -1090,7 +1090,7 @@ static const struct dma_map_ops sba_ops = {
 	.map_sg = sba_map_sg,
 	.unmap_sg = sba_unmap_sg,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 76f6f26265a3..29257d2639db 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -282,7 +282,7 @@ static int xen_grant_dma_supported(struct device *dev, u64 mask)
 static const struct dma_map_ops xen_grant_dma_ops = {
 	.alloc = xen_grant_dma_alloc,
 	.free = xen_grant_dma_free,
-	.alloc_pages = xen_grant_dma_alloc_pages,
+	.alloc_pages_op = xen_grant_dma_alloc_pages,
 	.free_pages = xen_grant_dma_free_pages,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0e6c6c25d154..1c4ef5111651 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -403,7 +403,7 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.dma_supported = xen_swiotlb_dma_supported,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.alloc_pages = dma_common_alloc_pages,
+	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 	.max_mapping_size = swiotlb_max_mapping_size,
 };
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 4abc60f04209..9ee319851b5f 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -29,7 +29,7 @@ struct dma_map_ops {
 			unsigned long attrs);
 	void (*free)(struct device *dev, size_t size, void *vaddr,
 			dma_addr_t dma_handle, unsigned long attrs);
-	struct page *(*alloc_pages)(struct device *dev, size_t size,
+	struct page *(*alloc_pages_op)(struct device *dev, size_t size,
 			dma_addr_t *dma_handle, enum dma_data_direction dir,
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58db8fd70471..5e2d51e1cdf6 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -570,9 +570,9 @@ static struct page *__dma_alloc_pages(struct device *dev, size_t size,
 	size = PAGE_ALIGN(size);
 	if (dma_alloc_direct(dev, ops))
 		return dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
-	if (!ops->alloc_pages)
+	if (!ops->alloc_pages_op)
 		return NULL;
-	return ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+	return ops->alloc_pages_op(dev, size, dma_handle, dir, gfp);
 }
 
 struct page *dma_alloc_pages(struct device *dev, size_t size,
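The breakage this avoids is purely a preprocessor effect: a function-like macro expands wherever its name is followed by '(', including calls through a same-named struct member, where the expansion produces invalid code. A minimal userspace illustration (hypothetical names, not the kernel's DMA code):

    #include <stdio.h>

    static int real_alloc(int n) { return n; }

    /* Stand-in for a function-like alloc_pages() macro. */
    #define alloc_pages(n) real_alloc(n)

    struct ops {
            /* Renamed member, mirroring the patch: no token collision. */
            int (*alloc_pages_op)(int n);
    };

    int main(void)
    {
            struct ops my_ops = { .alloc_pages_op = real_alloc };

            /*
             * Had the member kept the name alloc_pages, the call
             *         my_ops.alloc_pages(3);
             * would preprocess to
             *         my_ops.real_alloc(3);
             * which is a syntax error. Declarations and designated
             * initializers are safe (the name is not followed by '('),
             * so only the member and its call sites had to be renamed.
             */
            printf("%d\n", my_ops.alloc_pages_op(3));
            return 0;
    }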
From patchwork Mon Feb 12 21:39:03 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554092
Date: Mon, 12 Feb 2024 13:39:03 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-18-surenb@google.com>
Subject: [PATCH v3 17/35] mm: enable page allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Redefine the page allocators to record allocation tags upon their invocation. Instrument post_alloc_hook and free_pages_prepare to modify the current allocation tag.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
Reviewed-by: Kees Cook
---
 include/linux/alloc_tag.h |  10 +++
 include/linux/gfp.h       | 126 ++++++++++++++++++++++++--------------
 include/linux/pagemap.h   |   9 ++-
 mm/compaction.c           |   7 ++-
 mm/filemap.c              |   6 +-
 mm/mempolicy.c            |  52 ++++++++--------
 mm/page_alloc.c           |  60 +++++++++---------
 7 files changed, 160 insertions(+), 110 deletions(-)

diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
index cf55a149fa84..6fa8a94d8bc1 100644
--- a/include/linux/alloc_tag.h
+++ b/include/linux/alloc_tag.h
@@ -130,4 +130,14 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
 
 #endif
 
+#define alloc_hooks(_do_alloc)				\
+({							\
+	typeof(_do_alloc) _res;				\
+	DEFINE_ALLOC_TAG(_alloc_tag, _old);		\
+							\
+	_res = _do_alloc;				\
+	alloc_tag_restore(&_alloc_tag, _old);		\
+	_res;						\
+})
+
 #endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..bc0fd5259b0b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -6,6 +6,8 @@
 
 #include
 #include
+#include
+#include
 
 struct vm_area_struct;
 struct mempolicy;
@@ -175,42 +177,46 @@ static inline void arch_free_page(struct page *page, int order) { }
 static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
 
-struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
+struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
-struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+#define __alloc_pages(...)	alloc_hooks(__alloc_pages_noprof(__VA_ARGS__))
+
+struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
+#define __folio_alloc(...)	alloc_hooks(__folio_alloc_noprof(__VA_ARGS__))
 
-unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
 				struct list_head *page_list,
 				struct page **page_array);
+#define __alloc_pages_bulk(...)	alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
 
-unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
+unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 				unsigned long nr_pages,
 				struct page **page_array);
+#define alloc_pages_bulk_array_mempolicy(...) \
+	alloc_hooks(alloc_pages_bulk_array_mempolicy_noprof(__VA_ARGS__))
 
 /* Bulk allocate order-0 pages */
-static inline unsigned long
-alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
-{
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
-}
+#define alloc_pages_bulk_list(_gfp, _nr_pages, _list) \
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL)
 
-static inline unsigned long
-alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
-{
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
-}
+#define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array) \
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array)
 
 static inline unsigned long
-alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array)
+alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
+				   struct page **page_array)
 {
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
+	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array);
 }
 
+#define alloc_pages_bulk_array_node(...) \
+	alloc_hooks(alloc_pages_bulk_array_node_noprof(__VA_ARGS__))
+
 static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
 {
 	gfp_t warn_gfp = gfp_mask & (__GFP_THISNODE|__GFP_NOWARN);
@@ -230,82 +236,104 @@ static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
  * online. For more general interface, see alloc_pages_node().
 */
 static inline struct page *
-__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
+__alloc_pages_node_noprof(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	warn_if_node_offline(nid, gfp_mask);
 
-	return __alloc_pages(gfp_mask, order, nid, NULL);
+	return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
 }
 
+#define __alloc_pages_node(...)	alloc_hooks(__alloc_pages_node_noprof(__VA_ARGS__))
+
 static inline
-struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
+struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	warn_if_node_offline(nid, gfp);
 
-	return __folio_alloc(gfp, order, nid, NULL);
+	return __folio_alloc_noprof(gfp, order, nid, NULL);
 }
 
+#define __folio_alloc_node(...)	alloc_hooks(__folio_alloc_node_noprof(__VA_ARGS__))
+
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
  * prefer the current CPU's closest node. Otherwise node must be valid and
  * online.
 */
-static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
-					    unsigned int order)
+static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
+						   unsigned int order)
 {
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_node(nid, gfp_mask, order);
+	return __alloc_pages_node_noprof(nid, gfp_mask, order);
 }
 
+#define alloc_pages_node(...)	alloc_hooks(alloc_pages_node_noprof(__VA_ARGS__))
+
 #ifdef CONFIG_NUMA
-struct page *alloc_pages(gfp_t gfp, unsigned int order);
-struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
+struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
 		struct mempolicy *mpol, pgoff_t ilx, int nid);
-struct folio *folio_alloc(gfp_t gfp, unsigned int order);
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage);
 #else
-static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
+static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
 {
-	return alloc_pages_node(numa_node_id(), gfp_mask, order);
+	return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order);
 }
-static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+static inline struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
 		struct mempolicy *mpol, pgoff_t ilx, int nid)
 {
-	return alloc_pages(gfp, order);
+	return alloc_pages_noprof(gfp, order);
 }
-static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
+static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
 {
 	return __folio_alloc_node(gfp, order, numa_node_id());
 }
-#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
-	folio_alloc(gfp, order)
+#define vma_alloc_folio_noprof(gfp, order, vma, addr, hugepage) \
+	folio_alloc_noprof(gfp, order)
 #endif
+
+#define alloc_pages(...)	alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
+#define alloc_pages_mpol(...)	alloc_hooks(alloc_pages_mpol_noprof(__VA_ARGS__))
+#define folio_alloc(...)	alloc_hooks(folio_alloc_noprof(__VA_ARGS__))
+#define vma_alloc_folio(...)	alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__))
+
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
-static inline struct page *alloc_page_vma(gfp_t gfp,
+
+static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 		struct vm_area_struct *vma, unsigned long addr)
 {
-	struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false);
+	struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr, false);
 
 	return &folio->page;
 }
+#define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
+
+extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
+#define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
-extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
-extern unsigned long get_zeroed_page(gfp_t gfp_mask);
+extern unsigned long get_zeroed_page_noprof(gfp_t gfp_mask);
+#define get_zeroed_page(...)	alloc_hooks(get_zeroed_page_noprof(__VA_ARGS__))
+
+void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) __alloc_size(1);
+#define alloc_pages_exact(...)	alloc_hooks(alloc_pages_exact_noprof(__VA_ARGS__))
 
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1);
 void free_pages_exact(void *virt, size_t size);
-__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2);
 
-#define __get_free_page(gfp_mask) \
-	__get_free_pages((gfp_mask), 0)
+__meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2);
+#define alloc_pages_exact_nid(...) \
+	alloc_hooks(alloc_pages_exact_nid_noprof(__VA_ARGS__))
+
+#define __get_free_page(gfp_mask) \
+	__get_free_pages((gfp_mask), 0)
 
-#define __get_dma_pages(gfp_mask, order) \
-	__get_free_pages((gfp_mask) | GFP_DMA, (order))
+#define __get_dma_pages(gfp_mask, order) \
+	__get_free_pages((gfp_mask) | GFP_DMA, (order))
 
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
@@ -357,10 +385,14 @@ extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
 /* The below functions must be run on a range from a single zone. */
-extern int alloc_contig_range(unsigned long start, unsigned long end,
+extern int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 			      unsigned migratetype, gfp_t gfp_mask);
-extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
-				       int nid, nodemask_t *nodemask);
+#define alloc_contig_range(...)	alloc_hooks(alloc_contig_range_noprof(__VA_ARGS__))
+
+extern struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
+					      int nid, nodemask_t *nodemask);
+#define alloc_contig_pages(...)	alloc_hooks(alloc_contig_pages_noprof(__VA_ARGS__))
+
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..35636e67e2e1 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -542,14 +542,17 @@ static inline void *detach_page_private(struct page *page)
 #endif
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
 #else
-static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
-	return folio_alloc(gfp, order);
+	return folio_alloc_noprof(gfp, order);
 }
 #endif
 
+#define filemap_alloc_folio(...) \
+	alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))
+
 static inline struct page *__page_cache_alloc(gfp_t gfp)
 {
 	return &filemap_alloc_folio(gfp, 0)->page;
diff --git a/mm/compaction.c b/mm/compaction.c
index 4add68d40e8d..f4c0e682c979 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1781,7 +1781,7 @@ static void isolate_freepages(struct compact_control *cc)
  * This is a migrate-callback that "allocates" freepages by taking pages
  * from the isolated freelists in the block we are migrating to.
 */
-static struct folio *compaction_alloc(struct folio *src, unsigned long data)
+static struct folio *compaction_alloc_noprof(struct folio *src, unsigned long data)
 {
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
@@ -1800,6 +1800,11 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	return dst;
 }
 
+static struct folio *compaction_alloc(struct folio *src, unsigned long data)
+{
+	return alloc_hooks(compaction_alloc_noprof(src, data));
+}
+
 /*
  * This is a migrate-callback that "frees" freepages back to the isolated
  * freelist. All pages on the freelist are from the same zone, so there is no
diff --git a/mm/filemap.c b/mm/filemap.c
index 750e779c23db..e51e474545ad 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -957,7 +957,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct folio *folio;
@@ -972,9 +972,9 @@ struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
 		return folio;
 	}
 
-	return folio_alloc(gfp, order);
+	return folio_alloc_noprof(gfp, order);
 }
-EXPORT_SYMBOL(filemap_alloc_folio);
+EXPORT_SYMBOL(filemap_alloc_folio_noprof);
 #endif
 
 /*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 10a590ee1c89..c329d00b975f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2070,15 +2070,15 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 	 */
 	preferred_gfp = gfp | __GFP_NOWARN;
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
-	page = __alloc_pages(preferred_gfp, order, nid, nodemask);
+	page = __alloc_pages_noprof(preferred_gfp, order, nid, nodemask);
 	if (!page)
-		page = __alloc_pages(gfp, order, nid, NULL);
+		page = __alloc_pages_noprof(gfp, order, nid, NULL);
 
 	return page;
 }
 
 /**
- * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
+ * alloc_pages_mpol_noprof - Allocate pages according to NUMA mempolicy.
  * @gfp: GFP flags.
  * @order: Order of the page allocation.
  * @pol: Pointer to the NUMA mempolicy.
@@ -2087,7 +2087,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
  *
  * Return: The page on success or NULL if allocation fails.
 */
-struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
 		struct mempolicy *pol, pgoff_t ilx, int nid)
 {
 	nodemask_t *nodemask;
@@ -2117,7 +2117,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
 		 * First, try to allocate THP only on local node, but
 		 * don't reclaim unnecessarily, just compact.
 		 */
-		page = __alloc_pages_node(nid,
+		page = __alloc_pages_node_noprof(nid,
 			gfp | __GFP_THISNODE | __GFP_NORETRY, order);
 		if (page || !(gfp & __GFP_DIRECT_RECLAIM))
 			return page;
@@ -2130,7 +2130,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
 		}
 	}
 
-	page = __alloc_pages(gfp, order, nid, nodemask);
+	page = __alloc_pages_noprof(gfp, order, nid, nodemask);
 
 	if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
 		/* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
@@ -2146,7 +2146,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
 }
 
 /**
- * vma_alloc_folio - Allocate a folio for a VMA.
+ * vma_alloc_folio_noprof - Allocate a folio for a VMA.
  * @gfp: GFP flags.
  * @order: Order of the folio.
  * @vma: Pointer to VMA.
@@ -2161,7 +2161,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
 *
 * Return: The folio on success or NULL if allocation fails.
 */
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned long addr, bool hugepage)
 {
 	struct mempolicy *pol;
@@ -2169,15 +2169,15 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 	struct page *page;
 
 	pol = get_vma_policy(vma, addr, order, &ilx);
-	page = alloc_pages_mpol(gfp | __GFP_COMP, order,
-				pol, ilx, numa_node_id());
+	page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order,
+				       pol, ilx, numa_node_id());
 	mpol_cond_put(pol);
 	return page_rmappable_folio(page);
 }
-EXPORT_SYMBOL(vma_alloc_folio);
+EXPORT_SYMBOL(vma_alloc_folio_noprof);
 
 /**
- * alloc_pages - Allocate pages.
+ * alloc_pages_noprof - Allocate pages.
 * @gfp: GFP flags.
 * @order: Power of two of number of pages to allocate.
 *
@@ -2190,7 +2190,7 @@ EXPORT_SYMBOL(vma_alloc_folio);
 * flags are used.
 * Return: The page on success or NULL if allocation fails.
 */
-struct page *alloc_pages(gfp_t gfp, unsigned int order)
+struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
 {
 	struct mempolicy *pol = &default_policy;
 
@@ -2201,16 +2201,16 @@ struct page *alloc_pages(gfp_t gfp, unsigned int order)
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
 
-	return alloc_pages_mpol(gfp, order,
-				pol, NO_INTERLEAVE_INDEX, numa_node_id());
+	return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
+				       numa_node_id());
 }
-EXPORT_SYMBOL(alloc_pages);
+EXPORT_SYMBOL(alloc_pages_noprof);
 
-struct folio *folio_alloc(gfp_t gfp, unsigned int order)
+struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
 {
-	return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
+	return page_rmappable_folio(alloc_pages_noprof(gfp | __GFP_COMP, order));
 }
-EXPORT_SYMBOL(folio_alloc);
+EXPORT_SYMBOL(folio_alloc_noprof);
 
 static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 		struct mempolicy *pol, unsigned long nr_pages,
@@ -2229,13 +2229,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 
 	for (i = 0; i < nodes; i++) {
 		if (delta) {
-			nr_allocated = __alloc_pages_bulk(gfp,
+			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node + 1, NULL,
 					page_array);
 			delta--;
 		} else {
-			nr_allocated = __alloc_pages_bulk(gfp,
+			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node, NULL, page_array);
 		}
@@ -2257,11 +2257,11 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp = gfp | __GFP_NOWARN;
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
-	nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes,
+	nr_allocated = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes,
 					   nr_pages, NULL, page_array);
 
 	if (nr_allocated < nr_pages)
-		nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL,
+		nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL,
 				nr_pages - nr_allocated, NULL,
 				page_array + nr_allocated);
 
 	return nr_allocated;
@@ -2273,7 +2273,7 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 * It can accelerate memory allocation especially interleaving
 * allocate memory.
 */
-unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
+unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 		unsigned long nr_pages, struct page **page_array)
 {
 	struct mempolicy *pol = &default_policy;
@@ -2293,8 +2293,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 		nid = numa_node_id();
 	nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
 
-	return __alloc_pages_bulk(gfp, nid, nodemask,
-				  nr_pages, NULL, page_array);
+	return alloc_pages_bulk_noprof(gfp, nid, nodemask,
+				       nr_pages, NULL, page_array);
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edb79a55a252..58c0e8b948a4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4380,7 +4380,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 *
 * Returns the number of pages on the list or array.
 */
-unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
 			struct list_head *page_list,
 			struct page **page_array)
@@ -4516,7 +4516,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	pcp_trylock_finish(UP_flags);
 
 failed:
-	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
+	page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);
 	if (page) {
 		if (page_list)
 			list_add(&page->lru, page_list);
@@ -4527,13 +4527,13 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	goto out;
 }
-EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
+EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
 
 /*
 * This is the 'heart' of the zoned buddy allocator.
 */
-struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
-							nodemask_t *nodemask)
+struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order,
+				  int preferred_nid, nodemask_t *nodemask)
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -4595,38 +4595,38 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	return page;
 }
-EXPORT_SYMBOL(__alloc_pages);
+EXPORT_SYMBOL(__alloc_pages_noprof);
 
-struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask)
 {
-	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
+	struct page *page = __alloc_pages_noprof(gfp | __GFP_COMP, order,
 			preferred_nid, nodemask);
 	return page_rmappable_folio(page);
 }
-EXPORT_SYMBOL(__folio_alloc);
+EXPORT_SYMBOL(__folio_alloc_noprof);
 
 /*
 * Common helper functions. Never use with __GFP_HIGHMEM because the returned
 * address cannot represent highmem pages. Use alloc_pages and then kmap if
 * you need to access high mem.
 */
-unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order)
+unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *page;
 
-	page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order);
+	page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order);
 	if (!page)
 		return 0;
 	return (unsigned long) page_address(page);
 }
-EXPORT_SYMBOL(__get_free_pages);
+EXPORT_SYMBOL(get_free_pages_noprof);
 
-unsigned long get_zeroed_page(gfp_t gfp_mask)
+unsigned long get_zeroed_page_noprof(gfp_t gfp_mask)
 {
-	return __get_free_page(gfp_mask | __GFP_ZERO);
+	return get_free_pages_noprof(gfp_mask | __GFP_ZERO, 0);
 }
-EXPORT_SYMBOL(get_zeroed_page);
+EXPORT_SYMBOL(get_zeroed_page_noprof);
 
 /**
 * __free_pages - Free pages allocated with alloc_pages().
@@ -4818,7 +4818,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 }
 
 /**
- * alloc_pages_exact - allocate an exact number physically-contiguous pages.
+ * alloc_pages_exact_noprof - allocate an exact number physically-contiguous pages.
 * @size: the number of bytes to allocate
 * @gfp_mask: GFP flags for the allocation, must not contain __GFP_COMP
 *
@@ -4832,7 +4832,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 *
 * Return: pointer to the allocated area or %NULL in case of error.
 */
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
+void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	unsigned long addr;
@@ -4840,13 +4840,13 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	addr = __get_free_pages(gfp_mask, order);
+	addr = get_free_pages_noprof(gfp_mask, order);
 	return make_alloc_exact(addr, order, size);
 }
-EXPORT_SYMBOL(alloc_pages_exact);
+EXPORT_SYMBOL(alloc_pages_exact_noprof);
 
 /**
- * alloc_pages_exact_nid - allocate an exact number of physically-contiguous
+ * alloc_pages_exact_nid_noprof - allocate an exact number of physically-contiguous
 * pages on a node.
 * @nid: the preferred node ID where memory should be allocated
 * @size: the number of bytes to allocate
@@ -4857,7 +4857,7 @@ EXPORT_SYMBOL(alloc_pages_exact);
 *
 * Return: pointer to the allocated area or %NULL in case of error.
 */
-void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
+void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	struct page *p;
@@ -4865,7 +4865,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	p = alloc_pages_node(nid, gfp_mask, order);
+	p = alloc_pages_node_noprof(nid, gfp_mask, order);
 	if (!p)
 		return NULL;
 	return make_alloc_exact((unsigned long)page_address(p), order, size);
@@ -6283,7 +6283,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 }
 
 /**
- * alloc_contig_range() -- tries to allocate given range of pages
+ * alloc_contig_range_noprof() -- tries to allocate given range of pages
 * @start: start PFN to allocate
 * @end: one-past-the-last PFN to allocate
 * @migratetype: migratetype of the underlying pageblocks (either
@@ -6303,7 +6303,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 * pages which PFN is in [start, end) are allocated for the caller and
 * need to be freed with free_contig_range().
 */
-int alloc_contig_range(unsigned long start, unsigned long end,
+int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 		unsigned migratetype, gfp_t gfp_mask)
 {
 	unsigned long outer_start, outer_end;
@@ -6427,15 +6427,15 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	undo_isolate_page_range(start, end, migratetype);
 	return ret;
 }
-EXPORT_SYMBOL(alloc_contig_range);
+EXPORT_SYMBOL(alloc_contig_range_noprof);
 
 static int __alloc_contig_pages(unsigned long start_pfn,
 				unsigned long nr_pages, gfp_t gfp_mask)
 {
 	unsigned long end_pfn = start_pfn + nr_pages;
 
-	return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
-				  gfp_mask);
+	return alloc_contig_range_noprof(start_pfn, end_pfn, MIGRATE_MOVABLE,
+					 gfp_mask);
 }
 
 static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
@@ -6470,7 +6470,7 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 }
 
 /**
- * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
+ * alloc_contig_pages_noprof() -- tries to find and allocate contiguous range of pages
 * @nr_pages: Number of contiguous pages to allocate
 * @gfp_mask: GFP mask to limit search and used during compaction
 * @nid: Target node
@@ -6490,8 +6490,8 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 *
 * Return: pointer to contiguous pages on success, or NULL if not successful.
 */
-struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
-				int nid, nodemask_t *nodemask)
+struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
+				       int nid, nodemask_t *nodemask)
 {
 	unsigned long ret, pfn, flags;
 	struct zonelist *zonelist;
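The core of this patch is the alloc_hooks() wrapper added to include/linux/alloc_tag.h above: the familiar allocator names become macros, each call site installs a tag, and the real work moves to a _noprof twin. A compilable userspace sketch of the same shape (GNU C statement expressions; a plain string stands in for the kernel's codetag machinery):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the per-task "current allocation tag". */
    static const char *current_tag = "none";

    static void *my_alloc_noprof(size_t size)
    {
            printf("allocating %zu bytes for tag %s\n", size, current_tag);
            return malloc(size);
    }

    /* Same shape as the kernel's alloc_hooks(): install a tag derived
     * from the call site, run the real allocation, restore the old tag,
     * and yield the result. */
    #define STR2(x) #x
    #define STR(x) STR2(x)
    #define alloc_hooks(_do_alloc)                          \
    ({                                                      \
            const char *_old = current_tag;                 \
            current_tag = __FILE__ ":" STR(__LINE__);       \
            typeof(_do_alloc) _res = _do_alloc;             \
            current_tag = _old;                             \
            _res;                                           \
    })

    /* Call sites keep their familiar name via a one-line macro: */
    #define my_alloc(...) alloc_hooks(my_alloc_noprof(__VA_ARGS__))

    int main(void)
    {
            void *p = my_alloc(128);  /* reports this file:line as the allocator */
            free(p);
            return 0;
    }

This also shows why every allocator gains a _noprof twin: internal callers that must not re-tag an allocation (for example folio_alloc_noprof() calling alloc_pages_noprof() in the hunks above) stay on the _noprof names, while external call sites go through the hooked macro.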
From patchwork Mon Feb 12 21:39:04 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554091
Date: Mon, 12 Feb 2024 13:39:04 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-19-surenb@google.com>
Subject: [PATCH v3 18/35] mm: create new codetag references during page splitting
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
When a high-order page is split into smaller ones, each newly split page should get its own codetag reference. The original codetag is reused for these pages, but each new reference is recorded as a 0-byte allocation because the original codetag already accounts for the whole high-order allocation.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/pgalloc_tag.h | 30 ++++++++++++++++++++++++++++++
 mm/huge_memory.c            |  2 ++
 mm/page_alloc.c             |  2 ++
 3 files changed, 34 insertions(+)

diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index a060c26eb449..0174aff5e871 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -62,11 +62,41 @@ static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
 	}
 }
 
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+{
+	int i;
+	struct page_ext *page_ext;
+	union codetag_ref *ref;
+	struct alloc_tag *tag;
+
+	if (!mem_alloc_profiling_enabled())
+		return;
+
+	page_ext = page_ext_get(page);
+	if (unlikely(!page_ext))
+		return;
+
+	ref = codetag_ref_from_page_ext(page_ext);
+	if (!ref->ct)
+		goto out;
+
+	tag = ct_to_alloc_tag(ref->ct);
+	page_ext = page_ext_next(page_ext);
+	for (i = 1; i < nr; i++) {
+		/* New reference with 0 bytes accounted */
+		alloc_tag_add(codetag_ref_from_page_ext(page_ext), tag, 0);
+		page_ext = page_ext_next(page_ext);
+	}
+out:
+	page_ext_put(page_ext);
+}
+
 #else /* CONFIG_MEM_ALLOC_PROFILING */
 
 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
 				   unsigned int order) {}
 static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..86daae671319 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -2899,6 +2900,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
+	pgalloc_tag_split(head, nr);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 58c0e8b948a4..4bc5b4720fee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2621,6 +2621,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
+	pgalloc_tag_split(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
@@ -4806,6 +4807,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		struct page *last = page + nr;
 
 		split_page_owner(page, 1 << order);
+		pgalloc_tag_split(page, 1 << order);
 		split_page_memcg(page, 1 << order);
 		while (page < --last)
 			set_page_refcounted(last);
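The 0-byte references created in pgalloc_tag_split() preserve the accounting invariant: bytes are counted once, yet every page of the split ends up holding a live reference. A toy model of that bookkeeping (userspace C with simplified counters, not the kernel's codetag types):

    #include <stdio.h>

    struct toy_tag { long bytes; long refs; };

    static void tag_ref_add(struct toy_tag *tag, long bytes)
    {
            tag->refs++;
            tag->bytes += bytes;
    }

    int main(void)
    {
            struct toy_tag tag = { 0, 0 };
            unsigned int order = 3, nr = 1u << order;  /* split an order-3 page */
            long page_size = 4096;

            /* Original high-order allocation: one ref, full size accounted. */
            tag_ref_add(&tag, (long)nr * page_size);

            /* Split: pages 1..nr-1 get references to the same tag with
             * 0 bytes each, so the byte total stays correct. */
            for (unsigned int i = 1; i < nr; i++)
                    tag_ref_add(&tag, 0);

            printf("refs=%ld bytes=%ld\n", tag.refs, tag.bytes);  /* refs=8 bytes=32768 */
            return 0;
    }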
From patchwork Mon Feb 12 21:39:05 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554093
Date: Mon, 12 Feb 2024 13:39:05 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-20-surenb@google.com>
Subject: [PATCH v3 19/35] mm/page_ext: enable early_page_ext when CONFIG_MEM_ALLOC_PROFILING_DEBUG=y
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

For all page allocations to be tagged, page_ext has to be initialized before the first page allocation. Unless early_page_ext is enabled, early tasks allocate their stacks from the page allocator before alloc_node_page_ext() initializes the page_ext area, so these allocations generate a warning when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled. Enable early_page_ext whenever CONFIG_MEM_ALLOC_PROFILING_DEBUG=y to ensure page_ext is initialized before any page allocation. This carries all the downsides of early_page_ext, such as a possibly longer boot time, so it is enabled only for debugging with CONFIG_MEM_ALLOC_PROFILING_DEBUG and not universally for CONFIG_MEM_ALLOC_PROFILING.
Signed-off-by: Suren Baghdasaryan
---
 mm/page_ext.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3c58fe8a24df..e7d8f1a5589e 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -95,7 +95,16 @@ unsigned long page_ext_size;

 static unsigned long total_usage;

+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+/*
+ * To ensure correct allocation tagging for pages, page_ext should be available
+ * before the first page allocation. Otherwise early task stacks will be
+ * allocated before page_ext initialization and missing tags will be flagged.
+ */
+bool early_page_ext __meminitdata = true;
+#else
 bool early_page_ext __meminitdata;
+#endif

 static int __init setup_early_page_ext(char *str)
 {
 	early_page_ext = true;
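The initialization-order hazard this patch closes can be modeled in a few lines. The following is an illustrative userspace sketch, not kernel code; the variable names mirror the ones in the patch but the logic is a toy.

/* Illustrative model of the init-order hazard; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

static bool page_ext_ready;	/* set by page_ext initialization */
static bool early_page_ext;	/* boot option, defaulted on under DEBUG */

static void alloc_page_tagged(const char *site)
{
	if (!page_ext_ready)
		printf("WARN: allocation at %s gets no tag\n", site);
	else
		printf("allocation at %s tagged\n", site);
}

int main(void)
{
	early_page_ext = true;			/* as if CONFIG_..._DEBUG=y */

	if (early_page_ext)
		page_ext_ready = true;		/* page_ext set up first */

	alloc_page_tagged("early task stack");	/* tagged, no warning */

	if (!early_page_ext)
		page_ext_ready = true;		/* the normal, later init */

	alloc_page_tagged("ordinary boot allocation");
	return 0;
}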
From patchwork Mon Feb 12 21:39:06 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554094
Date: Mon, 12 Feb 2024 13:39:06 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-21-surenb@google.com>
Subject: [PATCH v3 20/35] lib: add codetag reference into slabobj_ext
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

To store a code tag for every slab object, a codetag reference is embedded into slabobj_ext when CONFIG_MEM_ALLOC_PROFILING=y.
Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
---
 include/linux/memcontrol.h | 5 +++++
 lib/Kconfig.debug          | 1 +
 mm/slab.h                  | 4 ++++
 3 files changed, 10 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index f3584e98b640..2b010316016c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1653,7 +1653,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
  * if MEMCG_DATA_OBJEXTS is set.
  */
 struct slabobj_ext {
+#ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup *objcg;
+#endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	union codetag_ref ref;
+#endif
 } __aligned(8);

 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 7bbdb0ddb011..9ecfcdb54417 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -979,6 +979,7 @@ config MEM_ALLOC_PROFILING
 	depends on !DEBUG_FORCE_WEAK_PER_CPU
 	select CODE_TAGGING
 	select PAGE_EXTENSION
+	select SLAB_OBJ_EXT
 	help
 	  Track allocation source code and record total allocation size
 	  initiated at that code location. The mechanism can be used to track
diff --git a/mm/slab.h b/mm/slab.h
index 77cf7474fe46..224a4b2305fb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -569,6 +569,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,

 static inline bool need_slab_obj_ext(void)
 {
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	if (mem_alloc_profiling_enabled())
+		return true;
+#endif
 	/*
 	 * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally
 	 * inside memcg_slab_post_alloc_hook. No other users for now.
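A per-object extension vector like slabobj_ext is found by computing the object's index within its slab. The sketch below shows that indexing arithmetic in userspace; the struct and helper names are illustrative stand-ins, not the mm/slab.h API.

/* Userspace sketch of per-object extension-vector indexing; not kernel code. */
#include <stdio.h>
#include <stdlib.h>

struct obj_ext { void *objcg; void *codetag_ref; };	/* one entry per object */

struct toy_slab {
	char *base;		/* address of the first object */
	size_t obj_size;	/* the cache's object size */
	unsigned int objects;	/* objects per slab */
	struct obj_ext *exts;	/* extension vector, allocated on demand */
};

static struct obj_ext *obj_ext_of(struct toy_slab *s, void *obj)
{
	size_t index = ((char *)obj - s->base) / s->obj_size;

	return &s->exts[index];	/* same arithmetic obj_to_index() performs */
}

int main(void)
{
	struct toy_slab s = { .obj_size = 64, .objects = 8 };

	s.base = malloc(s.obj_size * s.objects);
	s.exts = calloc(s.objects, sizeof(*s.exts));

	void *obj = s.base + 3 * s.obj_size;	/* the 4th object in the slab */

	obj_ext_of(&s, obj)->codetag_ref = (void *)0x1;	/* attach a tag ref */
	printf("ext slot %zu holds the tag\n",
	       (size_t)(obj_ext_of(&s, obj) - s.exts));

	free(s.exts);
	free(s.base);
	return 0;
}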
header.b="G+8/C5Ql" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-dcbfe1a42a4so1072366276.2 for ; Mon, 12 Feb 2024 13:40:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1707774018; x=1708378818; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=TxFmyXLHYuBt1ZmWD3OCbH5m3f76lMQsgX1qwMm0SnA=; b=G+8/C5QlJHXwh0LEd9ckH1UAUUHHgMvWgNCS/zt70qKJ3P7g1gEuos6UfEZTGS34X/ VvwWHlwsTqFv7U34C7UfcgphH7zvLx+wYRJlGQR9bXomi7dMBBYmR5Ioqj631FZR5OlX b/34+hA7kMqL0VX1IQemIEZNroHYbcs2eStvCtnVdml1+w5TAJkQr2Jmk07wMr7H60SU UfIkQBcEAJJFQXaC3swdL+1gGVLK2lqufipd25e8A/uyEbgn3WPjZYz4c+TQV6Iysctt VQBECAJdysCXHRCp2ceiX8M6L6mMQiA8AvdgxBWQaadnCa0U62VGzul3QWOKLJ053EDX iXIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1707774018; x=1708378818; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TxFmyXLHYuBt1ZmWD3OCbH5m3f76lMQsgX1qwMm0SnA=; b=rlOtZrCHDuqt6w2bPwRjZkQ9PNObYEicQLTS5DETWDPlL//d+0whhb7tSvoTdzXUet amvngXAEBKHbjCH3MkI70Q3rh1J5/jmU344A2Jw2yNLEUVMPMTORN+o9H3T8qVChS36/ PYYIIy0Y9ek9cHhL4Piq3vRRLvQqeamD1iGBfNu18qAOPGFjCzKv21m3L698i3ZIfwf8 Jl6lbae0ktcpGvlLtUeKeUPRP0VrwJzqp0A8pQvKw2WZvFIMFnaq8w2rQf1acwfFBna+ GazXp9mS9laEPau4b358f4uHcd4PO0VJDBlMU+hPFJ9oP5v0jNAekStn7Zhq9qi1yZNB INAA== X-Forwarded-Encrypted: i=1; AJvYcCUAocspxUW4EhUEmt8EEqQSvHreeG+Ws12SHhKvLLOdIkXsfe3SMZQ8ouyZU89a5A3z4g3ux6k+mlGJhcSotz4Wkgozzj7g5nC2JCgkSw== X-Gm-Message-State: AOJu0Yx/cQPrkHwnLSk+rl9M5GgfLlNYLXHanPpK4NzzOs0GwdVrCHoU J/dPb8Iw7S+w9kNx3WbcssZZ1jHId6D9qxLq9+eC7s2nCM7XLOwz/zcbsGEehf60W0+6DFdb0ae kRA== X-Google-Smtp-Source: AGHT+IGBONdcVlXEEGFa18cuaG6bv+Dg1NI+YmaYVRfFeBb82PPmmVaHOveRp5zx3fRCNj/22b8sYtrJmjo= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:b848:2b3f:be49:9cbc]) (user=surenb job=sendgmr) by 2002:a05:6902:2291:b0:dcc:5a91:aee9 with SMTP id dn17-20020a056902229100b00dcc5a91aee9mr85473ybb.7.1707774018612; Mon, 12 Feb 2024 13:40:18 -0800 (PST) Date: Mon, 12 Feb 2024 13:39:07 -0800 In-Reply-To: <20240212213922.783301-1-surenb@google.com> Precedence: bulk X-Mailing-List: linux-modules@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240212213922.783301-1-surenb@google.com> X-Mailer: git-send-email 2.43.0.687.g38aa6559b0-goog Message-ID: <20240212213922.783301-22-surenb@google.com> Subject: [PATCH v3 21/35] mm/slab: add allocation accounting into slab allocation and free paths From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, 
Account slab allocations using the codetag reference embedded into slabobj_ext.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
Reviewed-by: Kees Cook
---
 mm/slab.h | 26 ++++++++++++++++++++++++++
 mm/slub.c |  5 +++++
 2 files changed, 31 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 224a4b2305fb..c4bd0d5348cb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -629,6 +629,32 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)

 #endif /* CONFIG_SLAB_OBJ_EXT */

+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+					void **p, int objects)
+{
+	struct slabobj_ext *obj_exts;
+	int i;
+
+	obj_exts = slab_obj_exts(slab);
+	if (!obj_exts)
+		return;
+
+	for (i = 0; i < objects; i++) {
+		unsigned int off = obj_to_index(s, slab, p[i]);
+
+		alloc_tag_sub(&obj_exts[off].ref, s->size);
+	}
+}
+
+#else
+
+static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+					void **p, int objects) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
 #ifdef CONFIG_MEMCG_KMEM
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
diff --git a/mm/slub.c b/mm/slub.c
index 9fd96238ed39..f4d5794c1e86 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3821,6 +3821,11 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 					s->flags, init_flags);
 		kmsan_slab_alloc(s, p[i], init_flags);
 		obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+		/* obj_exts can be allocated for other reasons */
+		if (likely(obj_exts) && mem_alloc_profiling_enabled())
+			alloc_tag_add(&obj_exts->ref, current->alloc_tag, s->size);
+#endif
 	}

 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
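The two hooks pair up: the allocation path charges s->size to the caller's tag via alloc_tag_add(), and alloc_tagging_slab_free_hook() subtracts the same amount per object on free, including bulk frees. A compact userspace model of that invariant (illustrative names, not the kernel API):

/* Illustrative model of charge/uncharge pairing on the slab paths. */
#include <assert.h>
#include <stdio.h>

struct tag { long bytes; };

static void charge(struct tag *t, long size)   { t->bytes += size; }
static void uncharge(struct tag *t, long size) { t->bytes -= size; }

int main(void)
{
	struct tag site = {0};
	const long obj_size = 128;	/* stands in for s->size */
	int n = 16;			/* a batch of objects */

	for (int i = 0; i < n; i++)	/* slab_post_alloc_hook() side */
		charge(&site, obj_size);

	for (int i = 0; i < n; i++)	/* alloc_tagging_slab_free_hook() side */
		uncharge(&site, obj_size);

	assert(site.bytes == 0);	/* every alloc matched by a free */
	printf("accounting balanced\n");
	return 0;
}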
From patchwork Mon Feb 12 21:39:08 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554096
Date: Mon, 12 Feb 2024 13:39:08 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-23-surenb@google.com>
Subject: [PATCH v3 22/35] mm/slab: enable slab allocation tagging for kmalloc and friends
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Redefine kmalloc, krealloc, kzalloc, kcalloc, etc. to record allocations and deallocations done by these functions.

Signed-off-by: Suren Baghdasaryan
Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
Reviewed-by: Kees Cook
---
 include/linux/fortify-string.h |   5 +-
 include/linux/slab.h           | 177 ++++++++++++++++-----------------
 include/linux/string.h         |   4 +-
 mm/slab_common.c               |   6 +-
 mm/slub.c                      |  54 +++++-----
 mm/util.c                      |  20 ++--
 6 files changed, 134 insertions(+), 132 deletions(-)

diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index 89a6888f2f9e..55f66bd8a366 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -697,9 +697,9 @@ __FORTIFY_INLINE void *memchr_inv(const void * const POS0 p, int c, size_t size)
 	return __real_memchr_inv(p, c, size);
 }

-extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup)
+extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup_noprof)
 								    __realloc_size(2);
-__FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp)
+__FORTIFY_INLINE void *kmemdup_noprof(const void * const POS0 p, size_t size, gfp_t gfp)
 {
 	const size_t p_size = __struct_size(p);

@@ -709,6 +709,7 @@ __FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp
 		fortify_panic(__func__);
 	return __real_kmemdup(p, size, gfp);
 }
+#define kmemdup(...)	alloc_hooks(kmemdup_noprof(__VA_ARGS__))

 /**
  * strcpy - Copy a string into another string buffer
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 3ac2fc830f0f..910473e07e86 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -230,7 +230,10 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __realloc_size(2);
+void * __must_check krealloc_noprof(const void *objp, size_t new_size,
+				    gfp_t flags) __realloc_size(2);
+#define krealloc(...)			alloc_hooks(krealloc_noprof(__VA_ARGS__))
+
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
 size_t __ksize(const void *objp);
@@ -482,7 +485,10 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)

-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+#include <linux/alloc_tag.h>
+
+void *__kmalloc_noprof(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+#define __kmalloc(...)			alloc_hooks(__kmalloc_noprof(__VA_ARGS__))

 /**
  * kmem_cache_alloc - Allocate an object
@@ -494,9 +500,14 @@ void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_siz
  *
  * Return: pointer to the new object or %NULL in case of error
  */
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) __assume_slab_alignment __malloc;
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags) __assume_slab_alignment __malloc;
+void *kmem_cache_alloc_noprof(struct kmem_cache *cachep,
+			      gfp_t flags) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc(...)		alloc_hooks(kmem_cache_alloc_noprof(__VA_ARGS__))
+
+void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
+			    gfp_t gfpflags) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_lru(...)	alloc_hooks(kmem_cache_alloc_lru_noprof(__VA_ARGS__))
+
 void kmem_cache_free(struct kmem_cache *s, void *objp);

 /*
@@ -507,29 +518,40 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
  * Note that interrupts must be enabled when calling these functions.
  */
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
+
+int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
+#define kmem_cache_alloc_bulk(...)	alloc_hooks(kmem_cache_alloc_bulk_noprof(__VA_ARGS__))

 static __always_inline void kfree_bulk(size_t size, void **p)
 {
 	kmem_cache_free_bulk(NULL, size, p);
 }

-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
+void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
 							 __alloc_size(1);
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-									  __malloc;
+#define __kmalloc_node(...)		alloc_hooks(__kmalloc_node_noprof(__VA_ARGS__))

-void *kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
+void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
+				   int node) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
+
+void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t flags, size_t size)
 		    __assume_kmalloc_alignment __alloc_size(3);

-void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-			 int node, size_t size) __assume_kmalloc_alignment
+void *kmalloc_node_trace_noprof(struct kmem_cache *s, gfp_t gfpflags,
+		int node, size_t size) __assume_kmalloc_alignment
 						__alloc_size(4);
-void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+#define kmalloc_trace(...)		alloc_hooks(kmalloc_trace_noprof(__VA_ARGS__))
+
+#define kmalloc_node_trace(...)		alloc_hooks(kmalloc_node_trace_noprof(__VA_ARGS__))
+
+void *kmalloc_large_noprof(size_t size, gfp_t flags) __assume_page_alignment
 					      __alloc_size(1);
+#define kmalloc_large(...)		alloc_hooks(kmalloc_large_noprof(__VA_ARGS__))

-void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
+void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node) __assume_page_alignment
 					     __alloc_size(1);
+#define kmalloc_large_node(...)		alloc_hooks(kmalloc_large_node_noprof(__VA_ARGS__))

 /**
  * kmalloc - allocate kernel memory
@@ -585,37 +607,39 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
  *	Try really hard to succeed the allocation but fail
  *	eventually.
  */
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size) && size) {
 		unsigned int index;

 		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
+			return kmalloc_large_noprof(size, flags);

 		index = kmalloc_index(size);
-		return kmalloc_trace(
+		return kmalloc_trace_noprof(
 				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
 				flags, size);
 	}
-	return __kmalloc(size, flags);
+	return __kmalloc_noprof(size, flags);
 }
+#define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))

-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) && size) {
 		unsigned int index;

 		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
+			return kmalloc_large_node_noprof(size, flags, node);

 		index = kmalloc_index(size);
-		return kmalloc_node_trace(
+		return kmalloc_node_trace_noprof(
 				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
 				flags, node, size);
 	}
-	return __kmalloc_node(size, flags, node);
+	return __kmalloc_node_noprof(size, flags, node);
 }
+#define kmalloc_node(...)		alloc_hooks(kmalloc_node_noprof(__VA_ARGS__))

 /**
  * kmalloc_array - allocate memory for an array.
@@ -623,16 +647,17 @@ static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_
  * @size: element size.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kmalloc_array_noprof(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;

 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc(bytes, flags);
-	return __kmalloc(bytes, flags);
+		return kmalloc_noprof(bytes, flags);
+	return kmalloc_noprof(bytes, flags);
 }
+#define kmalloc_array(...)		alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))

 /**
  * krealloc_array - reallocate memory for an array.
@@ -641,18 +666,19 @@ static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_
  * @new_size: new size of a single member of the array
  * @flags: the type of memory to allocate (see kmalloc)
  */
-static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p,
-						       size_t new_n,
-						       size_t new_size,
-						       gfp_t flags)
+static inline __realloc_size(2, 3) void * __must_check krealloc_array_noprof(void *p,
+								size_t new_n,
+								size_t new_size,
+								gfp_t flags)
 {
 	size_t bytes;

 	if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
 		return NULL;

-	return krealloc(p, bytes, flags);
+	return krealloc_noprof(p, bytes, flags);
 }
+#define krealloc_array(...)		alloc_hooks(krealloc_array_noprof(__VA_ARGS__))

 /**
  * kcalloc - allocate memory for an array. The memory is set to zero.
@@ -660,16 +686,12 @@ static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p,
  * @size: element size.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags)
-{
-	return kmalloc_array(n, size, flags | __GFP_ZERO);
-}
+#define kcalloc(_n, _size, _flags)	kmalloc_array(_n, _size, (_flags) | __GFP_ZERO)

-void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
 				  unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
+#define kmalloc_node_track_caller(...)	\
+	alloc_hooks(kmalloc_node_track_caller_noprof(__VA_ARGS__, _RET_IP_))

 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
@@ -679,11 +701,9 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
  * allocator where we care about the real place the memory allocation
  * request comes from.
  */
-#define kmalloc_track_caller(size, flags) \
-	__kmalloc_node_track_caller(size, flags, \
-				    NUMA_NO_NODE, _RET_IP_)
+#define kmalloc_track_caller(...)	kmalloc_node_track_caller(__VA_ARGS__, NUMA_NO_NODE)

-static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
+static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags,
 							  int node)
 {
 	size_t bytes;
@@ -691,75 +711,52 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size,
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc_node(bytes, flags, node);
-	return __kmalloc_node(bytes, flags, node);
+		return kmalloc_node_noprof(bytes, flags, node);
+	return __kmalloc_node_noprof(bytes, flags, node);
 }
+#define kmalloc_array_node(...)		alloc_hooks(kmalloc_array_node_noprof(__VA_ARGS__))

-static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
-{
-	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
-}
+#define kcalloc_node(_n, _size, _flags, _node)	\
+	kmalloc_array_node(_n, _size, (_flags) | __GFP_ZERO, _node)

 /*
  * Shortcuts
  */
-static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
-{
-	return kmem_cache_alloc(k, flags | __GFP_ZERO);
-}
+#define kmem_cache_zalloc(_k, _flags)	kmem_cache_alloc(_k, (_flags)|__GFP_ZERO)

 /**
  * kzalloc - allocate memory. The memory is set to zero.
  * @size: how many bytes of memory are required.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags)
+static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
 {
-	return kmalloc(size, flags | __GFP_ZERO);
+	return kmalloc_noprof(size, flags | __GFP_ZERO);
 }
+#define kzalloc(...)				alloc_hooks(kzalloc_noprof(__VA_ARGS__))
+#define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)

-/**
- * kzalloc_node - allocate zeroed memory from a particular memory node.
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @node: memory node from which to allocate
- */
-static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node)
-{
-	return kmalloc_node(size, flags | __GFP_ZERO, node);
-}
+extern void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node) __alloc_size(1);
+#define kvmalloc_node(...)			alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))

-extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1);
-static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags)
-{
-	return kvmalloc_node(size, flags, NUMA_NO_NODE);
-}
-static inline __alloc_size(1) void *kvzalloc_node(size_t size, gfp_t flags, int node)
-{
-	return kvmalloc_node(size, flags | __GFP_ZERO, node);
-}
-static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
-{
-	return kvmalloc(size, flags | __GFP_ZERO);
-}
+#define kvmalloc(_size, _flags)			kvmalloc_node(_size, _flags, NUMA_NO_NODE)
+#define kvzalloc(_size, _flags)			kvmalloc(_size, _flags|__GFP_ZERO)

-static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
-{
-	size_t bytes;
-
-	if (unlikely(check_mul_overflow(n, size, &bytes)))
-		return NULL;
+#define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, _flags|__GFP_ZERO, _node)

-	return kvmalloc(bytes, flags);
-}
+#define kvmalloc_array(_n, _size, _flags)						\
+({											\
+	size_t _bytes;									\
+											\
+	!check_mul_overflow(_n, _size, &_bytes) ? kvmalloc(_bytes, _flags) : NULL;	\
+})

-static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)
-{
-	return kvmalloc_array(n, size, flags | __GFP_ZERO);
-}
+#define kvcalloc(_n, _size, _flags)		kvmalloc_array(_n, _size, _flags|__GFP_ZERO)

-extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+extern void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 		      __realloc_size(3);
+#define kvrealloc(...)				alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
+
 extern void kvfree(const void *addr);
 DEFINE_FREE(kvfree, void *, if (_T) kvfree(_T))
diff --git a/include/linux/string.h b/include/linux/string.h
index ab148d8dbfc1..14e4fb4340f4 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -214,7 +214,9 @@ extern void kfree_const(const void *x);
 extern char *kstrdup(const char *s, gfp_t gfp) __malloc;
 extern const char *kstrdup_const(const char *s, gfp_t gfp);
 extern char *kstrndup(const char *s, size_t len, gfp_t gfp);
-extern void *kmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
+extern void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
+#define kmemdup(...)	alloc_hooks(kmemdup_noprof(__VA_ARGS__))
+
 extern void *kvmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
 extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 83fec2dd2e2d..21b0b9e9cd9e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1234,7 +1234,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 		return (void *)p;
 	}

-	ret = kmalloc_track_caller(new_size, flags);
+	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
@@ -1258,7 +1258,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 *
 * Return: pointer to the allocated memory or %NULL in case of error
 */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
+void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
 {
 	void *ret;

@@ -1273,7 +1273,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
 	return ret;
 }
-EXPORT_SYMBOL(krealloc);
+EXPORT_SYMBOL(krealloc_noprof);

 /**
  * kfree_sensitive - Clear sensitive information in memory before freeing
diff --git a/mm/slub.c b/mm/slub.c
index f4d5794c1e86..9ea03d6e9c9d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3871,7 +3871,7 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	return object;
 }

-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
+void *kmem_cache_alloc_noprof(struct kmem_cache *s, gfp_t gfpflags)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE, _RET_IP_,
 				    s->object_size);
@@ -3880,9 +3880,9 @@ void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc);
+EXPORT_SYMBOL(kmem_cache_alloc_noprof);

-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
+void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 			   gfp_t gfpflags)
 {
 	void *ret = slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, _RET_IP_,
@@ -3892,10 +3892,10 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
+EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);

 /**
- * kmem_cache_alloc_node - Allocate an object on the specified node
+ * kmem_cache_alloc_node_noprof - Allocate an object on the specified node
  * @s: The cache to allocate from.
  * @gfpflags: See kmalloc().
  * @node: node number of the target node.
@@ -3907,7 +3907,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_lru);
 *
 * Return: pointer to the new object or %NULL in case of error
 */
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_,
 				    s->object_size);
@@ -3915,7 +3915,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);

 /*
  * To avoid unnecessary overhead, we pass through large allocation requests
@@ -3932,7 +3932,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 		flags = kmalloc_fix_flags(flags);

 	flags |= __GFP_COMP;
-	folio = (struct folio *)alloc_pages_node(node, flags, order);
+	folio = (struct folio *)alloc_pages_node_noprof(node, flags, order);
 	if (folio) {
 		ptr = folio_address(folio);
 		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
@@ -3947,7 +3947,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return ptr;
 }

-void *kmalloc_large(size_t size, gfp_t flags)
+void *kmalloc_large_noprof(size_t size, gfp_t flags)
 {
 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);

@@ -3955,9 +3955,9 @@ void *kmalloc_large(size_t size, gfp_t flags)
 		      flags, NUMA_NO_NODE);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_large);
+EXPORT_SYMBOL(kmalloc_large_noprof);

-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
 {
 	void *ret = __kmalloc_large_node(size, flags, node);

@@ -3965,7 +3965,7 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 		      flags, node);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_large_node);
+EXPORT_SYMBOL(kmalloc_large_node_noprof);

 static __always_inline
 void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
@@ -3992,26 +3992,26 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
 	return ret;
 }

-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	return __do_kmalloc_node(size, flags, node, _RET_IP_);
 }
-EXPORT_SYMBOL(__kmalloc_node);
+EXPORT_SYMBOL(__kmalloc_node_noprof);

-void *__kmalloc(size_t size, gfp_t flags)
+void *__kmalloc_noprof(size_t size, gfp_t flags)
 {
 	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
-EXPORT_SYMBOL(__kmalloc);
+EXPORT_SYMBOL(__kmalloc_noprof);

-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
+void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
+				       int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, flags, node, caller);
 }
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
+EXPORT_SYMBOL(kmalloc_node_track_caller_noprof);

-void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
+void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE,
 				    _RET_IP_, size);
@@ -4021,9 +4021,9 @@ void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_trace);
+EXPORT_SYMBOL(kmalloc_trace_noprof);

-void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
+void *kmalloc_node_trace_noprof(struct kmem_cache *s, gfp_t gfpflags,
 			 int node, size_t size)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
@@ -4033,7 +4033,7 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_node_trace);
+EXPORT_SYMBOL(kmalloc_node_trace_noprof);

 static noinline void free_to_partial_list(
 	struct kmem_cache *s, struct slab *slab,
@@ -4304,6 +4304,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, &object, 1);
+	alloc_tagging_slab_free_hook(s, slab, &object, 1);

 	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
 		do_slab_free(s, slab, object, object, 1, addr);
@@ -4314,6 +4315,7 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 		    void *tail, void **p, int cnt, unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, p, cnt);
+	alloc_tagging_slab_free_hook(s, slab, p, cnt);

 	/*
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
@@ -4640,8 +4642,8 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 #endif /* CONFIG_SLUB_TINY */

 /* Note that interrupts must be enabled when calling this function. */
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
-			  void **p)
+int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
+				 void **p)
 {
 	int i;
 	struct obj_cgroup *objcg = NULL;
@@ -4669,7 +4671,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	return i;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_bulk);
+EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof);

 /*
diff --git a/mm/util.c b/mm/util.c
index 5a6a9802583b..291f7945190f 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -115,7 +115,7 @@ char *kstrndup(const char *s, size_t max, gfp_t gfp)
 EXPORT_SYMBOL(kstrndup);

 /**
- * kmemdup - duplicate region of memory
+ * kmemdup_noprof - duplicate region of memory
 *
 * @src: memory region to duplicate
 * @len: memory region length
@@ -124,16 +124,16 @@ EXPORT_SYMBOL(kstrndup);
 * Return: newly allocated copy of @src or %NULL in case of error,
 * result is physically contiguous. Use kfree() to free.
 */
-void *kmemdup(const void *src, size_t len, gfp_t gfp)
+void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp)
 {
 	void *p;

-	p = kmalloc_track_caller(len, gfp);
+	p = kmalloc_node_track_caller_noprof(len, gfp, NUMA_NO_NODE, _RET_IP_);
 	if (p)
 		memcpy(p, src, len);
 	return p;
 }
-EXPORT_SYMBOL(kmemdup);
+EXPORT_SYMBOL(kmemdup_noprof);

 /**
  * kvmemdup - duplicate region of memory
@@ -577,7 +577,7 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 EXPORT_SYMBOL(vm_mmap);

 /**
- * kvmalloc_node - attempt to allocate physically contiguous memory, but upon
+ * kvmalloc_node_noprof - attempt to allocate physically contiguous memory, but upon
  * failure, fall back to non-contiguous (vmalloc) allocation.
  * @size: size of the request.
  * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
@@ -592,7 +592,7 @@ EXPORT_SYMBOL(vm_mmap);
 *
 * Return: pointer to the allocated memory of %NULL in case of failure
 */
-void *kvmalloc_node(size_t size, gfp_t flags, int node)
+void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	gfp_t kmalloc_flags = flags;
 	void *ret;
@@ -614,7 +614,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 		kmalloc_flags &= ~__GFP_NOFAIL;
 	}

-	ret = kmalloc_node(size, kmalloc_flags, node);
+	ret = kmalloc_node_noprof(size, kmalloc_flags, node);

 	/*
 	 * It doesn't really make sense to fallback to vmalloc for sub page
@@ -643,7 +643,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 			node, __builtin_return_address(0));
 }
-EXPORT_SYMBOL(kvmalloc_node);
+EXPORT_SYMBOL(kvmalloc_node_noprof);

 /**
  * kvfree() - Free memory.
@@ -682,7 +682,7 @@ void kvfree_sensitive(const void *addr, size_t len)
 }
 EXPORT_SYMBOL(kvfree_sensitive);

-void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 {
 	void *newp;

@@ -695,7 +695,7 @@ void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
 	kvfree(p);
 	return newp;
 }
-EXPORT_SYMBOL(kvrealloc);
+EXPORT_SYMBOL(kvrealloc_noprof);

 /**
  * __vmalloc_array - allocate memory for a virtually contiguous array.
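The core trick in the patch above is mechanical: each allocator entry point foo() is renamed foo_noprof(), and foo becomes a macro expanding through alloc_hooks(), so the tag is attributed to the macro's expansion site rather than somewhere inside the allocator. The userspace sketch below mimics only that renaming-and-wrapping shape; my_alloc, my_alloc_noprof, and my_hooks are made-up names, and the kernel's alloc_hooks() installs a static per-call-site alloc_tag instead of printing.

/* Sketch of the _noprof/alloc_hooks() wrapping pattern; made-up names. */
#include <stdio.h>
#include <stdlib.h>

/* the real worker, equivalent to a *_noprof() function */
static void *my_alloc_noprof(size_t size)
{
	return malloc(size);
}

/*
 * Statement-expression wrapper (a GCC extension, as in the kernel):
 * __FILE__/__LINE__ expand at the caller, so each use site is recorded.
 */
#define my_hooks(expr)					\
({							\
	printf("alloc at %s:%d\n", __FILE__, __LINE__);	\
	(expr);						\
})

/* the public name becomes a macro over the _noprof variant */
#define my_alloc(size)	my_hooks(my_alloc_noprof(size))

int main(void)
{
	void *p = my_alloc(64);	/* attributed to this file and line */

	free(p);
	return 0;
}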
From patchwork Mon Feb 12 21:39:09 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554097
Date: Mon, 12 Feb 2024 13:39:09 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-24-surenb@google.com>
Subject: [PATCH v3 23/35] mm/slub: Mark slab_free_freelist_hook() __always_inline
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet <kent.overstreet@linux.dev>

It seems we need to be more forceful with the compiler on this one.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 9ea03d6e9c9d..4d480784942e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2124,7 +2124,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 	return !kasan_slab_free(s, x, init);
 }
 
-static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
 					    void **head, void **tail,
 					    int *cnt)
 {
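The distinction being relied on here: "static inline" is only a request, and the optimizer may still emit an out-of-line copy of the hook, while the always_inline function attribute (which the kernel's __always_inline macro expands to) removes that discretion. Below is a minimal standalone sketch of the difference; this is an editorial illustration that compiles with gcc or clang, not code from the patch.

    #include <stdio.h>

    /* A hint: the optimizer may still emit an out-of-line copy. */
    static inline int hook_hinted(int x)
    {
            return x + 1;
    }

    /* Forced: the attribute obliges the compiler to inline every call. */
    static inline __attribute__((always_inline)) int hook_forced(int x)
    {
            return x + 1;
    }

    int main(void)
    {
            /* Behavior is identical; only the generated code may differ. */
            printf("%d %d\n", hook_hinted(1), hook_forced(1));
            return 0;
    }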
From patchwork Mon Feb 12 21:39:10 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13554098
Date: Mon, 12 Feb 2024 13:39:10 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-25-surenb@google.com>
Subject: [PATCH v3 24/35] mempool: Hook up to memory allocation profiling
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

From: Kent Overstreet <kent.overstreet@linux.dev>
This adds hooks to mempools so that mempool-backed allocations are
annotated at the correct source line and show up correctly in
/sys/kernel/debug/allocations. Various inline functions are converted to
wrappers so that we can invoke alloc_hooks() in fewer places.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mempool.h | 73 ++++++++++++++++++++---------------------
 mm/mempool.c            | 36 ++++++++------------
 2 files changed, 49 insertions(+), 60 deletions(-)

diff --git a/include/linux/mempool.h b/include/linux/mempool.h
index 7be1e32e6d42..69e65ca515ee 100644
--- a/include/linux/mempool.h
+++ b/include/linux/mempool.h
@@ -5,6 +5,8 @@
 #ifndef _LINUX_MEMPOOL_H
 #define _LINUX_MEMPOOL_H
 
+#include <linux/sched.h>
+#include <linux/alloc_tag.h>
 #include <linux/wait.h>
 #include <linux/compiler.h>
@@ -39,18 +41,32 @@ void mempool_exit(mempool_t *pool);
 int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		      mempool_free_t *free_fn, void *pool_data,
 		      gfp_t gfp_mask, int node_id);
-int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
+
+int mempool_init_noprof(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		 mempool_free_t *free_fn, void *pool_data);
+#define mempool_init(...)						\
+	alloc_hooks(mempool_init_noprof(__VA_ARGS__))
 
 extern mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data);
-extern mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
+
+extern mempool_t *mempool_create_node_noprof(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data,
 			gfp_t gfp_mask, int nid);
+#define mempool_create_node(...)					\
+	alloc_hooks(mempool_create_node_noprof(__VA_ARGS__))
+
+#define mempool_create(_min_nr, _alloc_fn, _free_fn, _pool_data)	\
+	mempool_create_node(_min_nr, _alloc_fn, _free_fn, _pool_data,	\
+			    GFP_KERNEL, NUMA_NO_NODE)
 
 extern int mempool_resize(mempool_t *pool, int new_min_nr);
 extern void mempool_destroy(mempool_t *pool);
-extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc;
+
+extern void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask) __malloc;
+#define mempool_alloc(...)						\
+	alloc_hooks(mempool_alloc_noprof(__VA_ARGS__))
+
 extern void *mempool_alloc_preallocated(mempool_t *pool) __malloc;
 extern void mempool_free(void *element, mempool_t *pool);
 
@@ -62,19 +78,10 @@ extern void mempool_free(void *element, mempool_t *pool);
 void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data);
 void mempool_free_slab(void *element, void *pool_data);
 
-static inline int
-mempool_init_slab_pool(mempool_t *pool, int min_nr, struct kmem_cache *kc)
-{
-	return mempool_init(pool, min_nr, mempool_alloc_slab,
-			    mempool_free_slab, (void *) kc);
-}
-
-static inline mempool_t *
-mempool_create_slab_pool(int min_nr, struct kmem_cache *kc)
-{
-	return mempool_create(min_nr, mempool_alloc_slab, mempool_free_slab,
-			      (void *) kc);
-}
+#define mempool_init_slab_pool(_pool, _min_nr, _kc)			\
+	mempool_init(_pool, (_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc))
+#define mempool_create_slab_pool(_min_nr, _kc)				\
+	mempool_create((_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc))
 
 /*
  * a mempool_alloc_t and a mempool_free_t to kmalloc and kfree the
@@ -83,17 +90,12 @@ mempool_create_slab_pool(int min_nr, struct kmem_cache *kc)
 void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data);
 void mempool_kfree(void *element, void *pool_data);
 
-static inline int mempool_init_kmalloc_pool(mempool_t *pool, int min_nr, size_t size)
-{
-	return mempool_init(pool, min_nr, mempool_kmalloc,
-			    mempool_kfree, (void *) size);
-}
-
-static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size)
-{
-	return mempool_create(min_nr, mempool_kmalloc, mempool_kfree,
-			      (void *) size);
-}
+#define mempool_init_kmalloc_pool(_pool, _min_nr, _size)		\
+	mempool_init(_pool, (_min_nr), mempool_kmalloc, mempool_kfree,	\
+		     (void *)(unsigned long)(_size))
+#define mempool_create_kmalloc_pool(_min_nr, _size)			\
+	mempool_create((_min_nr), mempool_kmalloc, mempool_kfree,	\
+		       (void *)(unsigned long)(_size))
 
 /*
  * A mempool_alloc_t and mempool_free_t for a simple page allocator that
@@ -102,16 +104,11 @@ static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size)
 void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data);
 void mempool_free_pages(void *element, void *pool_data);
 
-static inline int mempool_init_page_pool(mempool_t *pool, int min_nr, int order)
-{
-	return mempool_init(pool, min_nr, mempool_alloc_pages,
-			    mempool_free_pages, (void *)(long)order);
-}
-
-static inline mempool_t *mempool_create_page_pool(int min_nr, int order)
-{
-	return mempool_create(min_nr, mempool_alloc_pages, mempool_free_pages,
-			      (void *)(long)order);
-}
+#define mempool_init_page_pool(_pool, _min_nr, _order)			\
+	mempool_init(_pool, (_min_nr), mempool_alloc_pages,		\
+		     mempool_free_pages, (void *)(long)(_order))
+#define mempool_create_page_pool(_min_nr, _order)			\
+	mempool_create((_min_nr), mempool_alloc_pages,			\
+		       mempool_free_pages, (void *)(long)(_order))
 
 #endif /* _LINUX_MEMPOOL_H */
diff --git a/mm/mempool.c b/mm/mempool.c
index dbbf0e9fb424..c47ff883cf36 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -240,17 +240,17 @@ EXPORT_SYMBOL(mempool_init_node);
 *
 * Return: %0 on success, negative error code otherwise.
 */
-int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
-		 mempool_free_t *free_fn, void *pool_data)
+int mempool_init_noprof(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
+			mempool_free_t *free_fn, void *pool_data)
 {
 	return mempool_init_node(pool, min_nr, alloc_fn, free_fn,
 				 pool_data, GFP_KERNEL, NUMA_NO_NODE);
 }
-EXPORT_SYMBOL(mempool_init);
+EXPORT_SYMBOL(mempool_init_noprof);
 
 /**
- * mempool_create - create a memory pool
+ * mempool_create_node - create a memory pool
 * @min_nr:    the minimum number of elements guaranteed to be
 *             allocated for this pool.
 * @alloc_fn:  user-defined element-allocation function.
@@ -265,17 +265,9 @@ EXPORT_SYMBOL(mempool_init);
 *
 * Return: pointer to the created memory pool object or %NULL on error.
 */
-mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-			  mempool_free_t *free_fn, void *pool_data)
-{
-	return mempool_create_node(min_nr, alloc_fn, free_fn, pool_data,
-				   GFP_KERNEL, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(mempool_create);
-
-mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
-			       mempool_free_t *free_fn, void *pool_data,
-			       gfp_t gfp_mask, int node_id)
+mempool_t *mempool_create_node_noprof(int min_nr, mempool_alloc_t *alloc_fn,
+				      mempool_free_t *free_fn, void *pool_data,
+				      gfp_t gfp_mask, int node_id)
 {
 	mempool_t *pool;
 
@@ -291,7 +283,7 @@ mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
 
 	return pool;
 }
-EXPORT_SYMBOL(mempool_create_node);
+EXPORT_SYMBOL(mempool_create_node_noprof);
 
 /**
 * mempool_resize - resize an existing memory pool
@@ -374,7 +366,7 @@ int mempool_resize(mempool_t *pool, int new_min_nr)
 EXPORT_SYMBOL(mempool_resize);
 
 /**
- * mempool_alloc - allocate an element from a specific memory pool
+ * mempool_alloc_noprof - allocate an element from a specific memory pool
 * @pool:      pointer to the memory pool which was allocated via
 *             mempool_create().
 * @gfp_mask:  the usual allocation bitmask.
@@ -387,7 +379,7 @@ EXPORT_SYMBOL(mempool_resize);
 *
 * Return: pointer to the allocated element or %NULL on error.
 */
-void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
+void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask)
 {
 	void *element;
 	unsigned long flags;
@@ -454,7 +446,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 	finish_wait(&pool->wait, &wait);
 	goto repeat_alloc;
 }
-EXPORT_SYMBOL(mempool_alloc);
+EXPORT_SYMBOL(mempool_alloc_noprof);
 
 /**
 * mempool_alloc_preallocated - allocate an element from preallocated elements
@@ -562,7 +554,7 @@ void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data)
 {
 	struct kmem_cache *mem = pool_data;
 	VM_BUG_ON(mem->ctor);
-	return kmem_cache_alloc(mem, gfp_mask);
+	return kmem_cache_alloc_noprof(mem, gfp_mask);
 }
 EXPORT_SYMBOL(mempool_alloc_slab);
 
@@ -580,7 +572,7 @@ EXPORT_SYMBOL(mempool_free_slab);
 void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
 {
 	size_t size = (size_t)pool_data;
-	return kmalloc(size, gfp_mask);
+	return kmalloc_noprof(size, gfp_mask);
 }
 EXPORT_SYMBOL(mempool_kmalloc);
 
@@ -597,7 +589,7 @@ EXPORT_SYMBOL(mempool_kfree);
 void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data)
 {
 	int order = (int)(long)pool_data;
-	return alloc_pages(gfp_mask, order);
+	return alloc_pages_noprof(gfp_mask, order);
 }
 EXPORT_SYMBOL(mempool_alloc_pages);
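What this buys mempool users: because mempool_alloc() and the pool constructors are now macros, the alloc_hooks() wrapper expands at the caller, so the allocation is charged to the caller's file and line rather than to mm/mempool.c. A hypothetical driver-side sketch (struct and function names invented for illustration; not from the series):

    #include <linux/mempool.h>

    struct io_req {
            int id;
    };

    static mempool_t *req_pool;

    static int req_pool_setup(void)
    {
            /* 16 guaranteed elements of sizeof(struct io_req), via kmalloc. */
            req_pool = mempool_create_kmalloc_pool(16, sizeof(struct io_req));
            return req_pool ? 0 : -ENOMEM;
    }

    static struct io_req *req_get(void)
    {
            /* Expands to alloc_hooks(mempool_alloc_noprof(...)): the codetag
             * for this callsite is installed before the real allocator runs. */
            return mempool_alloc(req_pool, GFP_NOIO);
    }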
From patchwork Mon Feb 12 21:39:11 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13554099
Date: Mon, 12 Feb 2024 13:39:11 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-26-surenb@google.com>
Subject: [PATCH v3 25/35] xfs: Memory allocation profiling fixups
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
From: Kent Overstreet <kent.overstreet@linux.dev>

This adds an alloc_hooks() wrapper around kmem_alloc(), so that we can
have allocations accounted to the proper callsite.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/xfs/kmem.c |  4 ++--
 fs/xfs/kmem.h | 10 ++++------
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c
index c557a030acfe..9aa57a4e2478 100644
--- a/fs/xfs/kmem.c
+++ b/fs/xfs/kmem.c
@@ -8,7 +8,7 @@
 #include "xfs_trace.h"
 
 void *
-kmem_alloc(size_t size, xfs_km_flags_t flags)
+kmem_alloc_noprof(size_t size, xfs_km_flags_t flags)
 {
 	int	retries = 0;
 	gfp_t	lflags = kmem_flags_convert(flags);
@@ -17,7 +17,7 @@ kmem_alloc(size_t size, xfs_km_flags_t flags)
 	trace_kmem_alloc(size, flags, _RET_IP_);
 
 	do {
-		ptr = kmalloc(size, lflags);
+		ptr = kmalloc_noprof(size, lflags);
 		if (ptr || (flags & KM_MAYFAIL))
 			return ptr;
 		if (!(++retries % 100))
diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
index b987dc2c6851..c4cf1dc2a7af 100644
--- a/fs/xfs/kmem.h
+++ b/fs/xfs/kmem.h
@@ -6,6 +6,7 @@
 #ifndef __XFS_SUPPORT_KMEM_H__
 #define __XFS_SUPPORT_KMEM_H__
 
+#include <linux/alloc_tag.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/mm.h>
@@ -56,18 +57,15 @@ kmem_flags_convert(xfs_km_flags_t flags)
 	return lflags;
 }
 
-extern void *kmem_alloc(size_t, xfs_km_flags_t);
 static inline void  kmem_free(const void *ptr)
 {
 	kvfree(ptr);
 }
 
+extern void *kmem_alloc_noprof(size_t, xfs_km_flags_t);
+#define kmem_alloc(...)			alloc_hooks(kmem_alloc_noprof(__VA_ARGS__))
 
-static inline void *
-kmem_zalloc(size_t size, xfs_km_flags_t flags)
-{
-	return kmem_alloc(size, flags | KM_ZERO);
-}
+#define kmem_zalloc(_size, _flags)	kmem_alloc((_size), (_flags) | KM_ZERO)
 
 /*
  * Zone interfaces
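The wrapper pattern used in this and the surrounding patches always has the same shape: a _noprof function does the real work, and a macro of the original name runs it under a per-callsite tag. The sketch below is simplified from the alloc_hooks_pcpu() definition quoted in patch 28 later in this series; the real alloc_hooks() additionally folds in error-injection handling, so treat this as an approximation rather than the exact kernel macro.

    /* Simplified sketch of the wrapper; details differ in the real
     * include/linux/alloc_tag.h. */
    #define alloc_hooks(_do_alloc)                                      \
    ({                                                                  \
            typeof(_do_alloc) _res;                                     \
            DEFINE_ALLOC_TAG(_alloc_tag, _old); /* static tag for this file:line */ \
                                                                        \
            _res = _do_alloc;   /* allocator sees the tag via current */ \
            alloc_tag_restore(&_alloc_tag, _old);                       \
            _res;                                                       \
    })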
From patchwork Mon Feb 12 21:39:12 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13554100
Date: Mon, 12 Feb 2024 13:39:12 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-27-surenb@google.com>
Subject: [PATCH v3 26/35] mm: percpu: Introduce pcpuobj_ext
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
From: Kent Overstreet <kent.overstreet@linux.dev>

Upcoming alloc tagging patches require a place to stash per-allocation
metadata. We already do this when memcg is enabled, so this patch
generalizes the obj_cgroup * vector in struct pcpu_chunk by creating a
pcpuobj_ext type, which we will be adding to in an upcoming patch -
similarly to the previous slabobj_ext patch.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: linux-mm@kvack.org
---
 mm/percpu-internal.h | 19 +++++++++++++++++--
 mm/percpu.c          | 30 +++++++++++++++---------------
 2 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index cdd0aa597a81..e62d582f4bf3 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -32,6 +32,16 @@ struct pcpu_block_md {
 	int			nr_bits;	/* total bits responsible for */
 };
 
+struct pcpuobj_ext {
+#ifdef CONFIG_MEMCG_KMEM
+	struct obj_cgroup	*cgroup;
+#endif
+};
+
+#ifdef CONFIG_MEMCG_KMEM
+#define NEED_PCPUOBJ_EXT
+#endif
+
 struct pcpu_chunk {
 #ifdef CONFIG_PERCPU_STATS
 	int			nr_alloc;	/* # of allocations */
@@ -64,8 +74,8 @@ struct pcpu_chunk {
 	int			end_offset;	/* additional area required to
 						   have the region end page
 						   aligned */
-#ifdef CONFIG_MEMCG_KMEM
-	struct obj_cgroup	**obj_cgroups;	/* vector of object cgroups */
+#ifdef NEED_PCPUOBJ_EXT
+	struct pcpuobj_ext	*obj_exts;	/* vector of object cgroups */
 #endif
 
 	int			nr_pages;	/* # of pages served by this chunk */
@@ -74,6 +84,11 @@ struct pcpu_chunk {
 	unsigned long		populated[];	/* populated bitmap */
 };
 
+static inline bool need_pcpuobj_ext(void)
+{
+	return !mem_cgroup_kmem_disabled();
+}
+
 extern spinlock_t pcpu_lock;
 
 extern struct list_head *pcpu_chunk_lists;
diff --git a/mm/percpu.c b/mm/percpu.c
index 4e11fc1e6def..2e5edaad9cc3 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1392,9 +1392,9 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
 		panic("%s: Failed to allocate %zu bytes\n", __func__,
 		      alloc_size);
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef NEED_PCPUOBJ_EXT
 	/* first chunk is free to use */
-	chunk->obj_cgroups = NULL;
+	chunk->obj_exts = NULL;
 #endif
 	pcpu_init_md_blocks(chunk);
 
@@ -1463,12 +1463,12 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp)
 	if (!chunk->md_blocks)
 		goto md_blocks_fail;
 
-#ifdef CONFIG_MEMCG_KMEM
-	if (!mem_cgroup_kmem_disabled()) {
-		chunk->obj_cgroups =
+#ifdef NEED_PCPUOBJ_EXT
+	if (need_pcpuobj_ext()) {
+		chunk->obj_exts =
 			pcpu_mem_zalloc(pcpu_chunk_map_bits(chunk) *
-					sizeof(struct obj_cgroup *), gfp);
-		if (!chunk->obj_cgroups)
+					sizeof(struct pcpuobj_ext), gfp);
+		if (!chunk->obj_exts)
 			goto objcg_fail;
 	}
 #endif
@@ -1480,7 +1480,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp)
 
 	return chunk;
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef NEED_PCPUOBJ_EXT
 objcg_fail:
 	pcpu_mem_free(chunk->md_blocks);
 #endif
@@ -1498,8 +1498,8 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk)
 {
 	if (!chunk)
 		return;
-#ifdef CONFIG_MEMCG_KMEM
-	pcpu_mem_free(chunk->obj_cgroups);
+#ifdef NEED_PCPUOBJ_EXT
+	pcpu_mem_free(chunk->obj_exts);
 #endif
 	pcpu_mem_free(chunk->md_blocks);
 	pcpu_mem_free(chunk->bound_map);
@@ -1646,9 +1646,9 @@ static void pcpu_memcg_post_alloc_hook(struct obj_cgroup *objcg,
 	if (!objcg)
 		return;
 
-	if (likely(chunk && chunk->obj_cgroups)) {
+	if (likely(chunk && chunk->obj_exts)) {
 		obj_cgroup_get(objcg);
-		chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = objcg;
+		chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = objcg;
 
 		rcu_read_lock();
 		mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
@@ -1663,13 +1663,13 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
 {
 	struct obj_cgroup *objcg;
 
-	if (unlikely(!chunk->obj_cgroups))
+	if (unlikely(!chunk->obj_exts))
 		return;
 
-	objcg = chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT];
+	objcg = chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup;
 	if (!objcg)
 		return;
-	chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL;
+	chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = NULL;
 
 	obj_cgroup_uncharge(objcg, pcpu_obj_full_size(size));
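As the hunks above show, obj_exts is indexed by the allocation's chunk offset scaled down by PCPU_MIN_ALLOC_SHIFT, i.e. one extension slot per minimum-size allocation unit. A hypothetical helper, not part of the patch, making that lookup explicit:

    /* Illustration only: locate the pcpuobj_ext slot for an allocation
     * that starts at byte offset @off within @chunk. */
    static inline struct pcpuobj_ext *pcpu_obj_ext(struct pcpu_chunk *chunk,
                                                   int off)
    {
            if (!chunk->obj_exts)
                    return NULL;    /* no memcg accounting for this chunk */
            return &chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT];
    }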
From patchwork Mon Feb 12 21:39:13 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13554101
Date: Mon, 12 Feb 2024 13:39:13 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-28-surenb@google.com>
Subject: [PATCH v3 27/35] mm: percpu: Add codetag reference into pcpuobj_ext
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
From: Kent Overstreet <kent.overstreet@linux.dev>

To store a codetag for every per-cpu allocation, a codetag reference is
embedded into pcpuobj_ext when CONFIG_MEM_ALLOC_PROFILING=y. Hooks to use
the newly introduced codetag are added.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/percpu-internal.h | 11 +++++++++--
 mm/percpu.c          | 26 ++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index e62d582f4bf3..7e42f0ca3b7b 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -36,9 +36,12 @@ struct pcpuobj_ext {
 #ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup	*cgroup;
 #endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	union codetag_ref	tag;
+#endif
 };
 
-#ifdef CONFIG_MEMCG_KMEM
+#if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MEM_ALLOC_PROFILING)
 #define NEED_PCPUOBJ_EXT
 #endif
 
@@ -86,7 +89,11 @@ struct pcpu_chunk {
 
 static inline bool need_pcpuobj_ext(void)
 {
-	return !mem_cgroup_kmem_disabled();
+	if (IS_ENABLED(CONFIG_MEM_ALLOC_PROFILING))
+		return true;
+	if (!mem_cgroup_kmem_disabled())
+		return true;
+	return false;
 }
 
 extern spinlock_t pcpu_lock;
diff --git a/mm/percpu.c b/mm/percpu.c
index 2e5edaad9cc3..578531ea1f43 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1699,6 +1699,32 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off,
+				      size_t size)
+{
+	if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts)) {
+		alloc_tag_add(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag,
+			      current->alloc_tag, size);
+	}
+}
+
+static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
+{
+	if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts))
+		alloc_tag_sub_noalloc(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag, size);
+}
+#else
+static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off,
+				      size_t size)
+{
+}
+
+static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
+{
+}
+#endif
+
 /**
 * pcpu_alloc - the percpu allocator
 * @size: size of area to allocate in bytes
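How the two sides meet: the alloc_hooks() family makes the callsite's static tag reachable through current->alloc_tag, and the hooks above copy it into the pcpuobj_ext slot while adjusting the tag's counters. A standalone sketch of that bookkeeping, with the real percpu/atomic counters simplified to plain fields (illustration only; the actual definitions live in include/linux/alloc_tag.h):

    #include <stddef.h>

    struct alloc_tag_counters { size_t bytes; size_t calls; };
    struct alloc_tag { struct alloc_tag_counters counters; };
    union codetag_ref { struct alloc_tag *ct; };

    /* What alloc_tag_add() amounts to: point the object's ref at the
     * callsite's tag and charge the bytes to that tag. */
    static void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
                              size_t bytes)
    {
            ref->ct = tag;
            tag->counters.bytes += bytes;
            tag->counters.calls++;
    }

    /* On free, the same tag is found again through the ref and credited. */
    static void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
    {
            ref->ct->counters.bytes -= bytes;
            ref->ct->counters.calls--;
            ref->ct = NULL;
    }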
From patchwork Mon Feb 12 21:39:14 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13554102
Date: Mon, 12 Feb 2024 13:39:14 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-29-surenb@google.com>
Subject: [PATCH v3 28/35] mm: percpu: enable per-cpu allocation tagging
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

Redefine __alloc_percpu, __alloc_percpu_gfp and __alloc_reserved_percpu
to record allocations and deallocations done by these functions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/alloc_tag.h | 15 +++++++++
 include/linux/percpu.h    | 23 +++++++++-----
 mm/percpu.c               | 64 +++++----------------------------------
 3 files changed, 38 insertions(+), 64 deletions(-)

diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
index 6fa8a94d8bc1..3fe51e67e231 100644
--- a/include/linux/alloc_tag.h
+++ b/include/linux/alloc_tag.h
@@ -140,4 +140,19 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
 	_res;								\
 })
 
+/*
+ * workaround for a sparse bug: it complains about res_type_to_err() when
+ * typeof(_do_alloc) is a __percpu pointer, but gcc won't let us add a separate
+ * __percpu case to res_type_to_err():
+ */
+#define alloc_hooks_pcpu(_do_alloc)					\
+({									\
+	typeof(_do_alloc) _res;						\
+	DEFINE_ALLOC_TAG(_alloc_tag, _old);				\
+									\
+	_res = _do_alloc;						\
+	alloc_tag_restore(&_alloc_tag, _old);				\
+	_res;								\
+})
+
 #endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 62b5eb45bd89..eb4eb264136f 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_PERCPU_H
 #define __LINUX_PERCPU_H
 
+#include <linux/alloc_tag.h>
 #include <linux/mmdebug.h>
 #include <linux/preempt.h>
 #include <linux/smp.h>
@@ -9,6 +10,7 @@
 #include <linux/pfn.h>
 #include <linux/init.h>
 #include <linux/cleanup.h>
+#include <linux/sched.h>
 
 #include <asm/percpu.h>
@@ -125,7 +127,6 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
 				pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn);
 #endif
 
-extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align) __alloc_size(1);
 extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
 extern bool is_kernel_percpu_address(unsigned long addr);
 
@@ -133,14 +134,16 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 extern void __init setup_per_cpu_areas(void);
 #endif
 
-extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
-extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
-extern void free_percpu(void __percpu *__pdata);
+extern void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
+					gfp_t gfp) __alloc_size(1);
 extern size_t pcpu_alloc_size(void __percpu *__pdata);
 
-DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))
-
-extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
+#define __alloc_percpu_gfp(_size, _align, _gfp)				\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, false, _gfp))
+#define __alloc_percpu(_size, _align)					\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, false, GFP_KERNEL))
+#define __alloc_reserved_percpu(_size, _align)				\
+	alloc_hooks_pcpu(pcpu_alloc_noprof(_size, _align, true, GFP_KERNEL))
 
 #define alloc_percpu_gfp(type, gfp)					\
 	(typeof(type) __percpu *)__alloc_percpu_gfp(sizeof(type),	\
@@ -149,6 +152,12 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
 	(typeof(type) __percpu *)__alloc_percpu(sizeof(type),		\
 						__alignof__(type))
 
+extern void free_percpu(void __percpu *__pdata);
+
+DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))
+
+extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
+
 extern unsigned long pcpu_nr_pages(void);
 
 #endif /* __LINUX_PERCPU_H */
diff --git a/mm/percpu.c b/mm/percpu.c
index 578531ea1f43..2badcc5e0e71 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1726,7 +1726,7 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s
 #endif
 
 /**
- * pcpu_alloc - the percpu allocator
+ * pcpu_alloc_noprof - the percpu allocator
 * @size: size of area to allocate in bytes
 * @align: alignment of area (max PAGE_SIZE)
 *
 * @reserved: allocate from the reserved chunk if available
@@ -1740,7 +1740,7 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s
 * RETURNS:
 * Percpu pointer to the allocated area on success, NULL on failure.
 */
-static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
+void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
 				 gfp_t gfp)
 {
 	gfp_t pcpu_gfp;
@@ -1907,6 +1907,8 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
 
 	pcpu_memcg_post_alloc_hook(objcg, chunk, off, size);
 
+	pcpu_alloc_tag_alloc_hook(chunk, off, size);
+
 	return ptr;
 
 fail_unlock:
@@ -1935,61 +1937,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
 	return NULL;
 }
-
-/**
- * __alloc_percpu_gfp - allocate dynamic percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- * @gfp: allocation flags
- *
- * Allocate zero-filled percpu area of @size bytes aligned at @align. If
- * @gfp doesn't contain %GFP_KERNEL, the allocation doesn't block and can
- * be called from any context but is a lot more likely to fail. If @gfp
- * has __GFP_NOWARN then no warning will be triggered on invalid or failed
- * allocation requests.
- *
- * RETURNS:
- * Percpu pointer to the allocated area on success, NULL on failure.
- */
-void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp)
-{
-	return pcpu_alloc(size, align, false, gfp);
-}
-EXPORT_SYMBOL_GPL(__alloc_percpu_gfp);
-
-/**
- * __alloc_percpu - allocate dynamic percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- *
- * Equivalent to __alloc_percpu_gfp(size, align, %GFP_KERNEL).
- */
-void __percpu *__alloc_percpu(size_t size, size_t align)
-{
-	return pcpu_alloc(size, align, false, GFP_KERNEL);
-}
-EXPORT_SYMBOL_GPL(__alloc_percpu);
-
-/**
- * __alloc_reserved_percpu - allocate reserved percpu area
- * @size: size of area to allocate in bytes
- * @align: alignment of area (max PAGE_SIZE)
- *
- * Allocate zero-filled percpu area of @size bytes aligned at @align
- * from reserved percpu area if arch has set it up; otherwise,
- * allocation is served from the same dynamic area. Might sleep.
- * Might trigger writeouts.
- *
- * CONTEXT:
- * Does GFP_KERNEL allocation.
- *
- * RETURNS:
- * Percpu pointer to the allocated area on success, NULL on failure.
- */
-void __percpu *__alloc_reserved_percpu(size_t size, size_t align)
-{
-	return pcpu_alloc(size, align, true, GFP_KERNEL);
-}
+EXPORT_SYMBOL_GPL(pcpu_alloc_noprof);
 
 /**
 * pcpu_balance_free - manage the amount of free chunks
@@ -2328,6 +2276,8 @@ void free_percpu(void __percpu *ptr)
 
 	spin_lock_irqsave(&pcpu_lock, flags);
 	size = pcpu_free_area(chunk, off);
 
+	pcpu_alloc_tag_free_hook(chunk, off, size);
+
 	pcpu_memcg_free_hook(chunk, off, size);
 
 	/*
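Callers do not change: alloc_percpu() and friends keep their signatures and simply expand through the new macros. A hypothetical user (names invented for illustration, not from the patch):

    #include <linux/percpu.h>

    struct my_stats {
            u64 hits;
            u64 misses;
    };

    static struct my_stats __percpu *counters;

    static int stats_init(void)
    {
            /* With this patch, alloc_percpu(struct my_stats) expands to
             * alloc_hooks_pcpu(pcpu_alloc_noprof(sizeof(struct my_stats),
             *                  __alignof__(struct my_stats), false,
             *                  GFP_KERNEL)),
             * so the percpu area is tagged with this file and line. */
            counters = alloc_percpu(struct my_stats);
            return counters ? 0 : -ENOMEM;
    }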
Date: Mon, 12 Feb 2024 13:39:15 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-30-surenb@google.com>
Subject: [PATCH v3 29/35] mm: vmalloc: Enable memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

This wraps all external vmalloc allocation functions with the alloc_hooks() wrapper, and switches internal allocations to _noprof variants where appropriate, for the new memory allocation profiling feature.
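The shape of the change can be condensed into a stand-alone sketch (simplified and illustrative only; the real alloc_hooks() macro, added earlier in this series, also defines a static per-call-site alloc_tag and accounts the allocation against it):

/*
 * Simplified sketch of the _noprof/alloc_hooks() convention; the real
 * macro installs a per-call-site alloc_tag around the allocation.
 */
#define alloc_hooks(_do_alloc)					\
({								\
	typeof(_do_alloc) _res;					\
	/* real code: make this call site's alloc_tag current */	\
	_res = _do_alloc;					\
	/* real code: restore the previous tag */		\
	_res;							\
})

/* The old entry point keeps doing the work, under a _noprof name... */
void *vmalloc_noprof(unsigned long size);

/* ...and the public name becomes a macro, so every caller gets tagged. */
#define vmalloc(...) alloc_hooks(vmalloc_noprof(__VA_ARGS__))

Existing callers keep writing vmalloc(size) and are tagged transparently; only internal code that must not re-account an allocation spells out the _noprof variant.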
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- drivers/staging/media/atomisp/pci/hmm/hmm.c | 2 +- include/linux/vmalloc.h | 60 ++++++++++---- kernel/kallsyms_selftest.c | 2 +- mm/util.c | 24 +++--- mm/vmalloc.c | 88 ++++++++++----------- 5 files changed, 103 insertions(+), 73 deletions(-) diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c index bb12644fd033..3e2899ad8517 100644 --- a/drivers/staging/media/atomisp/pci/hmm/hmm.c +++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c @@ -205,7 +205,7 @@ static ia_css_ptr __hmm_alloc(size_t bytes, enum hmm_bo_type type, } dev_dbg(atomisp_dev, "pages: 0x%08x (%zu bytes), type: %d, vmalloc %p\n", - bo->start, bytes, type, vmalloc); + bo->start, bytes, type, vmalloc_noprof); return bo->start; diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index c720be70c8dd..106d78e75606 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -2,6 +2,8 @@ #ifndef _LINUX_VMALLOC_H #define _LINUX_VMALLOC_H +#include +#include #include #include #include @@ -137,26 +139,54 @@ extern unsigned long vmalloc_nr_pages(void); static inline unsigned long vmalloc_nr_pages(void) { return 0; } #endif -extern void *vmalloc(unsigned long size) __alloc_size(1); -extern void *vzalloc(unsigned long size) __alloc_size(1); -extern void *vmalloc_user(unsigned long size) __alloc_size(1); -extern void *vmalloc_node(unsigned long size, int node) __alloc_size(1); -extern void *vzalloc_node(unsigned long size, int node) __alloc_size(1); -extern void *vmalloc_32(unsigned long size) __alloc_size(1); -extern void *vmalloc_32_user(unsigned long size) __alloc_size(1); -extern void *__vmalloc(unsigned long size, gfp_t gfp_mask) __alloc_size(1); -extern void *__vmalloc_node_range(unsigned long size, unsigned long align, +extern void *vmalloc_noprof(unsigned long size) __alloc_size(1); +#define vmalloc(...) alloc_hooks(vmalloc_noprof(__VA_ARGS__)) + +extern void *vzalloc_noprof(unsigned long size) __alloc_size(1); +#define vzalloc(...) alloc_hooks(vzalloc_noprof(__VA_ARGS__)) + +extern void *vmalloc_user_noprof(unsigned long size) __alloc_size(1); +#define vmalloc_user(...) alloc_hooks(vmalloc_user_noprof(__VA_ARGS__)) + +extern void *vmalloc_node_noprof(unsigned long size, int node) __alloc_size(1); +#define vmalloc_node(...) alloc_hooks(vmalloc_node_noprof(__VA_ARGS__)) + +extern void *vzalloc_node_noprof(unsigned long size, int node) __alloc_size(1); +#define vzalloc_node(...) alloc_hooks(vzalloc_node_noprof(__VA_ARGS__)) + +extern void *vmalloc_32_noprof(unsigned long size) __alloc_size(1); +#define vmalloc_32(...) alloc_hooks(vmalloc_32_noprof(__VA_ARGS__)) + +extern void *vmalloc_32_user_noprof(unsigned long size) __alloc_size(1); +#define vmalloc_32_user(...) alloc_hooks(vmalloc_32_user_noprof(__VA_ARGS__)) + +extern void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1); +#define __vmalloc(...) alloc_hooks(__vmalloc_noprof(__VA_ARGS__)) + +extern void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align, unsigned long start, unsigned long end, gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags, int node, const void *caller) __alloc_size(1); -void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask, +#define __vmalloc_node_range(...) 
alloc_hooks(__vmalloc_node_range_noprof(__VA_ARGS__)) + +void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_mask, int node, const void *caller) __alloc_size(1); -void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) __alloc_size(1); +#define __vmalloc_node(...) alloc_hooks(__vmalloc_node_noprof(__VA_ARGS__)) + +void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1); +#define vmalloc_huge(...) alloc_hooks(vmalloc_huge_noprof(__VA_ARGS__)) + +extern void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2); +#define __vmalloc_array(...) alloc_hooks(__vmalloc_array_noprof(__VA_ARGS__)) + +extern void *vmalloc_array_noprof(size_t n, size_t size) __alloc_size(1, 2); +#define vmalloc_array(...) alloc_hooks(vmalloc_array_noprof(__VA_ARGS__)) + +extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2); +#define __vcalloc(...) alloc_hooks(__vcalloc_noprof(__VA_ARGS__)) -extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2); -extern void *vmalloc_array(size_t n, size_t size) __alloc_size(1, 2); -extern void *__vcalloc(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2); -extern void *vcalloc(size_t n, size_t size) __alloc_size(1, 2); +extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2); +#define vcalloc(...) alloc_hooks(vcalloc_noprof(__VA_ARGS__)) extern void vfree(const void *addr); extern void vfree_atomic(const void *addr); diff --git a/kernel/kallsyms_selftest.c b/kernel/kallsyms_selftest.c index b4cac76ea5e9..3ea9be364e32 100644 --- a/kernel/kallsyms_selftest.c +++ b/kernel/kallsyms_selftest.c @@ -82,7 +82,7 @@ static struct test_item test_items[] = { ITEM_FUNC(kallsyms_test_func_static), ITEM_FUNC(kallsyms_test_func), ITEM_FUNC(kallsyms_test_func_weak), - ITEM_FUNC(vmalloc), + ITEM_FUNC(vmalloc_noprof), ITEM_FUNC(vfree), #ifdef CONFIG_KALLSYMS_ALL ITEM_DATA(kallsyms_test_var_bss_static), diff --git a/mm/util.c b/mm/util.c index 291f7945190f..19c90036d3cc 100644 --- a/mm/util.c +++ b/mm/util.c @@ -639,7 +639,7 @@ void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node) * about the resulting pointer, and cannot play * protection games. */ - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END, flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP, node, __builtin_return_address(0)); } @@ -698,12 +698,12 @@ void *kvrealloc_noprof(const void *p, size_t oldsize, size_t newsize, gfp_t flag EXPORT_SYMBOL(kvrealloc_noprof); /** - * __vmalloc_array - allocate memory for a virtually contiguous array. + * __vmalloc_array_noprof - allocate memory for a virtually contiguous array. * @n: number of elements. * @size: element size. * @flags: the type of memory to allocate (see kmalloc). */ -void *__vmalloc_array(size_t n, size_t size, gfp_t flags) +void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) { size_t bytes; @@ -711,18 +711,18 @@ void *__vmalloc_array(size_t n, size_t size, gfp_t flags) return NULL; return __vmalloc(bytes, flags); } -EXPORT_SYMBOL(__vmalloc_array); +EXPORT_SYMBOL(__vmalloc_array_noprof); /** - * vmalloc_array - allocate memory for a virtually contiguous array. + * vmalloc_array_noprof - allocate memory for a virtually contiguous array. * @n: number of elements. * @size: element size. 
*/ -void *vmalloc_array(size_t n, size_t size) +void *vmalloc_array_noprof(size_t n, size_t size) { return __vmalloc_array(n, size, GFP_KERNEL); } -EXPORT_SYMBOL(vmalloc_array); +EXPORT_SYMBOL(vmalloc_array_noprof); /** * __vcalloc - allocate and zero memory for a virtually contiguous array. @@ -730,22 +730,22 @@ EXPORT_SYMBOL(vmalloc_array); * @size: element size. * @flags: the type of memory to allocate (see kmalloc). */ -void *__vcalloc(size_t n, size_t size, gfp_t flags) +void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) { return __vmalloc_array(n, size, flags | __GFP_ZERO); } -EXPORT_SYMBOL(__vcalloc); +EXPORT_SYMBOL(__vcalloc_noprof); /** - * vcalloc - allocate and zero memory for a virtually contiguous array. + * vcalloc_noprof - allocate and zero memory for a virtually contiguous array. * @n: number of elements. * @size: element size. */ -void *vcalloc(size_t n, size_t size) +void *vcalloc_noprof(size_t n, size_t size) { return __vmalloc_array(n, size, GFP_KERNEL | __GFP_ZERO); } -EXPORT_SYMBOL(vcalloc); +EXPORT_SYMBOL(vcalloc_noprof); struct anon_vma *folio_anon_vma(struct folio *folio) { diff --git a/mm/vmalloc.c b/mm/vmalloc.c index d12a17fc0c17..5239f2c9ecae 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -3025,12 +3025,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid, * but mempolicy wants to alloc memory by interleaving. */ if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE) - nr = alloc_pages_bulk_array_mempolicy(bulk_gfp, + nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp, nr_pages_request, pages + nr_allocated); else - nr = alloc_pages_bulk_array_node(bulk_gfp, nid, + nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid, nr_pages_request, pages + nr_allocated); @@ -3060,9 +3060,9 @@ vm_area_alloc_pages(gfp_t gfp, int nid, break; if (nid == NUMA_NO_NODE) - page = alloc_pages(alloc_gfp, order); + page = alloc_pages_noprof(alloc_gfp, order); else - page = alloc_pages_node(nid, alloc_gfp, order); + page = alloc_pages_node_noprof(nid, alloc_gfp, order); if (unlikely(!page)) { if (!nofail) break; @@ -3119,10 +3119,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, /* Please note that the recursion is strictly bounded. 
*/ if (array_size > PAGE_SIZE) { - area->pages = __vmalloc_node(array_size, 1, nested_gfp, node, + area->pages = __vmalloc_node_noprof(array_size, 1, nested_gfp, node, area->caller); } else { - area->pages = kmalloc_node(array_size, nested_gfp, node); + area->pages = kmalloc_node_noprof(array_size, nested_gfp, node); } if (!area->pages) { @@ -3205,7 +3205,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, } /** - * __vmalloc_node_range - allocate virtually contiguous memory + * __vmalloc_node_range_noprof - allocate virtually contiguous memory * @size: allocation size * @align: desired alignment * @start: vm area range start @@ -3232,7 +3232,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, * * Return: the address of the area or %NULL on failure */ -void *__vmalloc_node_range(unsigned long size, unsigned long align, +void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align, unsigned long start, unsigned long end, gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags, int node, const void *caller) @@ -3361,7 +3361,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, } /** - * __vmalloc_node - allocate virtually contiguous memory + * __vmalloc_node_noprof - allocate virtually contiguous memory * @size: allocation size * @align: desired alignment * @gfp_mask: flags for the page level allocator @@ -3379,10 +3379,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, * * Return: pointer to the allocated memory or %NULL on error */ -void *__vmalloc_node(unsigned long size, unsigned long align, +void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_mask, int node, const void *caller) { - return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, align, VMALLOC_START, VMALLOC_END, gfp_mask, PAGE_KERNEL, 0, node, caller); } /* @@ -3391,15 +3391,15 @@ void *__vmalloc_node(unsigned long size, unsigned long align, * than that. 
*/ #ifdef CONFIG_TEST_VMALLOC_MODULE -EXPORT_SYMBOL_GPL(__vmalloc_node); +EXPORT_SYMBOL_GPL(__vmalloc_node_noprof); #endif -void *__vmalloc(unsigned long size, gfp_t gfp_mask) +void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask) { - return __vmalloc_node(size, 1, gfp_mask, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, gfp_mask, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(__vmalloc); +EXPORT_SYMBOL(__vmalloc_noprof); /** * vmalloc - allocate virtually contiguous memory @@ -3413,12 +3413,12 @@ EXPORT_SYMBOL(__vmalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc(unsigned long size) +void *vmalloc_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_KERNEL, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc); +EXPORT_SYMBOL(vmalloc_noprof); /** * vmalloc_huge - allocate virtually contiguous memory, allow huge pages @@ -3432,16 +3432,16 @@ EXPORT_SYMBOL(vmalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) +void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) { - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END, gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL_GPL(vmalloc_huge); +EXPORT_SYMBOL_GPL(vmalloc_huge_noprof); /** - * vzalloc - allocate virtually contiguous memory with zero fill + * vzalloc_noprof - allocate virtually contiguous memory with zero fill * @size: allocation size * * Allocate enough pages to cover @size from the page level @@ -3453,12 +3453,12 @@ EXPORT_SYMBOL_GPL(vmalloc_huge); * * Return: pointer to the allocated memory or %NULL on error */ -void *vzalloc(unsigned long size) +void *vzalloc_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_KERNEL | __GFP_ZERO, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL | __GFP_ZERO, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vzalloc); +EXPORT_SYMBOL(vzalloc_noprof); /** * vmalloc_user - allocate zeroed virtually contiguous memory for userspace @@ -3469,17 +3469,17 @@ EXPORT_SYMBOL(vzalloc); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_user(unsigned long size) +void *vmalloc_user_noprof(unsigned long size) { - return __vmalloc_node_range(size, SHMLBA, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, SHMLBA, VMALLOC_START, VMALLOC_END, GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL, VM_USERMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_user); +EXPORT_SYMBOL(vmalloc_user_noprof); /** - * vmalloc_node - allocate memory on a specific node + * vmalloc_node_noprof - allocate memory on a specific node * @size: allocation size * @node: numa node * @@ -3491,15 +3491,15 @@ EXPORT_SYMBOL(vmalloc_user); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_node(unsigned long size, int node) +void *vmalloc_node_noprof(unsigned long size, int node) { - return __vmalloc_node(size, 1, GFP_KERNEL, node, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL, node, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_node); +EXPORT_SYMBOL(vmalloc_node_noprof); /** - * vzalloc_node - allocate memory on a specific node with zero fill + * vzalloc_node_noprof - allocate memory on a specific node with zero fill * @size: 
allocation size * @node: numa node * @@ -3509,12 +3509,12 @@ EXPORT_SYMBOL(vmalloc_node); * * Return: pointer to the allocated memory or %NULL on error */ -void *vzalloc_node(unsigned long size, int node) +void *vzalloc_node_noprof(unsigned long size, int node) { - return __vmalloc_node(size, 1, GFP_KERNEL | __GFP_ZERO, node, + return __vmalloc_node_noprof(size, 1, GFP_KERNEL | __GFP_ZERO, node, __builtin_return_address(0)); } -EXPORT_SYMBOL(vzalloc_node); +EXPORT_SYMBOL(vzalloc_node_noprof); #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32) #define GFP_VMALLOC32 (GFP_DMA32 | GFP_KERNEL) @@ -3529,7 +3529,7 @@ EXPORT_SYMBOL(vzalloc_node); #endif /** - * vmalloc_32 - allocate virtually contiguous memory (32bit addressable) + * vmalloc_32_noprof - allocate virtually contiguous memory (32bit addressable) * @size: allocation size * * Allocate enough 32bit PA addressable pages to cover @size from the @@ -3537,15 +3537,15 @@ EXPORT_SYMBOL(vzalloc_node); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_32(unsigned long size) +void *vmalloc_32_noprof(unsigned long size) { - return __vmalloc_node(size, 1, GFP_VMALLOC32, NUMA_NO_NODE, + return __vmalloc_node_noprof(size, 1, GFP_VMALLOC32, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_32); +EXPORT_SYMBOL(vmalloc_32_noprof); /** - * vmalloc_32_user - allocate zeroed virtually contiguous 32bit memory + * vmalloc_32_user_noprof - allocate zeroed virtually contiguous 32bit memory * @size: allocation size * * The resulting memory area is 32bit addressable and zeroed so it can be @@ -3553,14 +3553,14 @@ EXPORT_SYMBOL(vmalloc_32); * * Return: pointer to the allocated memory or %NULL on error */ -void *vmalloc_32_user(unsigned long size) +void *vmalloc_32_user_noprof(unsigned long size) { - return __vmalloc_node_range(size, SHMLBA, VMALLOC_START, VMALLOC_END, + return __vmalloc_node_range_noprof(size, SHMLBA, VMALLOC_START, VMALLOC_END, GFP_VMALLOC32 | __GFP_ZERO, PAGE_KERNEL, VM_USERMAP, NUMA_NO_NODE, __builtin_return_address(0)); } -EXPORT_SYMBOL(vmalloc_32_user); +EXPORT_SYMBOL(vmalloc_32_user_noprof); /* * Atomically zero bytes in the iterator. 
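One subtlety is worth noting before the next patch: because the public allocator names are now function-like macros, code that takes an allocator's address must name the real symbol. A minimal illustration (hypothetical caller, not part of the series):

static void *(*alloc_fn)(unsigned long);

static void pick_allocator(void)
{
	/*
	 * "alloc_fn = vmalloc;" would no longer link: vmalloc is only a
	 * function-like macro now, and no function of that name exists.
	 * This is why the kallsyms selftest entry and the atomisp
	 * dev_dbg() argument above were switched to the _noprof symbol.
	 */
	alloc_fn = vmalloc_noprof;
}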
From patchwork Mon Feb 12 21:39:16 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554104
Date: Mon, 12 Feb 2024 13:39:16 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-31-surenb@google.com>
Subject: [PATCH v3 30/35] rhashtable: Plumb through alloc tag
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

This gives better memory allocation profiling results; rhashtable allocations will be accounted to the code that initialized the rhashtable.
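Condensed from the diff that follows, the plumbing amounts to making the saved tag current around each internal allocation (names as in the patch; the function body is simplified):

/*
 * The tag captured at rhashtable_init() time is installed around internal
 * allocations, so bucket tables are charged to the code that created the
 * table rather than to lib/rhashtable.c itself.
 */
static struct bucket_table *bucket_table_alloc_sketch(struct rhashtable *ht,
						      size_t nbuckets, gfp_t gfp)
{
	struct bucket_table *tbl;
	struct alloc_tag *old = rhashtable_alloc_tag_save(ht);

	tbl = kvmalloc_node_noprof(struct_size(tbl, buckets, nbuckets),
				   gfp | __GFP_ZERO, NUMA_NO_NODE);

	rhashtable_alloc_tag_restore(ht, old);
	return tbl;
}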
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/rhashtable-types.h | 11 +++++-- lib/rhashtable.c | 52 +++++++++++++++++++++++++------- 2 files changed, 50 insertions(+), 13 deletions(-) diff --git a/include/linux/rhashtable-types.h b/include/linux/rhashtable-types.h index b6f3797277ff..015c8298bebc 100644 --- a/include/linux/rhashtable-types.h +++ b/include/linux/rhashtable-types.h @@ -9,6 +9,7 @@ #ifndef _LINUX_RHASHTABLE_TYPES_H #define _LINUX_RHASHTABLE_TYPES_H +#include #include #include #include @@ -88,6 +89,9 @@ struct rhashtable { struct mutex mutex; spinlock_t lock; atomic_t nelems; +#ifdef CONFIG_MEM_ALLOC_PROFILING + struct alloc_tag *alloc_tag; +#endif }; /** @@ -127,9 +131,12 @@ struct rhashtable_iter { bool end_of_table; }; -int rhashtable_init(struct rhashtable *ht, +int rhashtable_init_noprof(struct rhashtable *ht, const struct rhashtable_params *params); -int rhltable_init(struct rhltable *hlt, +#define rhashtable_init(...) alloc_hooks(rhashtable_init_noprof(__VA_ARGS__)) + +int rhltable_init_noprof(struct rhltable *hlt, const struct rhashtable_params *params); +#define rhltable_init(...) alloc_hooks(rhltable_init_noprof(__VA_ARGS__)) #endif /* _LINUX_RHASHTABLE_TYPES_H */ diff --git a/lib/rhashtable.c b/lib/rhashtable.c index 6ae2ba8e06a2..b62116f332b8 100644 --- a/lib/rhashtable.c +++ b/lib/rhashtable.c @@ -63,6 +63,27 @@ EXPORT_SYMBOL_GPL(lockdep_rht_bucket_is_held); #define ASSERT_RHT_MUTEX(HT) #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING +static inline void rhashtable_alloc_tag_init(struct rhashtable *ht) +{ + ht->alloc_tag = current->alloc_tag; +} + +static inline struct alloc_tag *rhashtable_alloc_tag_save(struct rhashtable *ht) +{ + return alloc_tag_save(ht->alloc_tag); +} + +static inline void rhashtable_alloc_tag_restore(struct rhashtable *ht, struct alloc_tag *old) +{ + alloc_tag_restore(ht->alloc_tag, old); +} +#else +#define rhashtable_alloc_tag_init(ht) +static inline struct alloc_tag *rhashtable_alloc_tag_save(struct rhashtable *ht) { return NULL; } +#define rhashtable_alloc_tag_restore(ht, old) +#endif + static inline union nested_table *nested_table_top( const struct bucket_table *tbl) { @@ -130,7 +151,7 @@ static union nested_table *nested_table_alloc(struct rhashtable *ht, if (ntbl) return ntbl; - ntbl = kzalloc(PAGE_SIZE, GFP_ATOMIC); + ntbl = kmalloc_noprof(PAGE_SIZE, GFP_ATOMIC|__GFP_ZERO); if (ntbl && leaf) { for (i = 0; i < PAGE_SIZE / sizeof(ntbl[0]); i++) @@ -157,7 +178,7 @@ static struct bucket_table *nested_bucket_table_alloc(struct rhashtable *ht, size = sizeof(*tbl) + sizeof(tbl->buckets[0]); - tbl = kzalloc(size, gfp); + tbl = kmalloc_noprof(size, gfp|__GFP_ZERO); if (!tbl) return NULL; @@ -180,8 +201,10 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht, size_t size; int i; static struct lock_class_key __key; + struct alloc_tag * __maybe_unused old = rhashtable_alloc_tag_save(ht); - tbl = kvzalloc(struct_size(tbl, buckets, nbuckets), gfp); + tbl = kvmalloc_node_noprof(struct_size(tbl, buckets, nbuckets), + gfp|__GFP_ZERO, NUMA_NO_NODE); size = nbuckets; @@ -190,6 +213,8 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht, nbuckets = 0; } + rhashtable_alloc_tag_restore(ht, old); + if (tbl == NULL) return NULL; @@ -975,7 +1000,7 @@ static u32 rhashtable_jhash2(const void *key, u32 length, u32 seed) } /** - * rhashtable_init - initialize a new hash table + * rhashtable_init_noprof - initialize a new hash table * @ht: hash table to be initialized * @params: configuration 
parameters * @@ -1016,7 +1041,7 @@ static u32 rhashtable_jhash2(const void *key, u32 length, u32 seed) * .obj_hashfn = my_hash_fn, * }; */ -int rhashtable_init(struct rhashtable *ht, +int rhashtable_init_noprof(struct rhashtable *ht, const struct rhashtable_params *params) { struct bucket_table *tbl; @@ -1031,6 +1056,8 @@ int rhashtable_init(struct rhashtable *ht, spin_lock_init(&ht->lock); memcpy(&ht->p, params, sizeof(*params)); + rhashtable_alloc_tag_init(ht); + if (params->min_size) ht->p.min_size = roundup_pow_of_two(params->min_size); @@ -1076,26 +1103,26 @@ int rhashtable_init(struct rhashtable *ht, return 0; } -EXPORT_SYMBOL_GPL(rhashtable_init); +EXPORT_SYMBOL_GPL(rhashtable_init_noprof); /** - * rhltable_init - initialize a new hash list table + * rhltable_init_noprof - initialize a new hash list table * @hlt: hash list table to be initialized * @params: configuration parameters * * Initializes a new hash list table. * - * See documentation for rhashtable_init. + * See documentation for rhashtable_init_noprof. */ -int rhltable_init(struct rhltable *hlt, const struct rhashtable_params *params) +int rhltable_init_noprof(struct rhltable *hlt, const struct rhashtable_params *params) { int err; - err = rhashtable_init(&hlt->ht, params); + err = rhashtable_init_noprof(&hlt->ht, params); hlt->ht.rhlist = true; return err; } -EXPORT_SYMBOL_GPL(rhltable_init); +EXPORT_SYMBOL_GPL(rhltable_init_noprof); static void rhashtable_free_one(struct rhashtable *ht, struct rhash_head *obj, void (*free_fn)(void *ptr, void *arg), @@ -1222,6 +1249,7 @@ struct rhash_lock_head __rcu **rht_bucket_nested_insert( unsigned int index = hash & ((1 << tbl->nest) - 1); unsigned int size = tbl->size >> tbl->nest; union nested_table *ntbl; + struct alloc_tag * __maybe_unused old = rhashtable_alloc_tag_save(ht); ntbl = nested_table_top(tbl); hash >>= tbl->nest; @@ -1236,6 +1264,8 @@ struct rhash_lock_head __rcu **rht_bucket_nested_insert( size <= (1 << shift)); } + rhashtable_alloc_tag_restore(ht, old); + if (!ntbl) return NULL;
From patchwork Mon Feb 12 21:39:17 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554105
Date: Mon, 12 Feb 2024 13:39:17 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-32-surenb@google.com>
Subject: [PATCH v3 31/35] lib: add memory allocations report in show_mem()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Include allocations in show_mem reports.

Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 2 ++ lib/alloc_tag.c | 38 ++++++++++++++++++++++++++++++++++++++ mm/show_mem.c | 15 +++++++++++++++ 3 files changed, 55 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 3fe51e67e231..0a5973c4ad77 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -30,6 +30,8 @@ struct alloc_tag { #ifdef CONFIG_MEM_ALLOC_PROFILING +void alloc_tags_show_mem_report(struct seq_buf *s); + static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct) { return container_of(ct, struct alloc_tag, ct); diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c index 2d5226d9262d..54312c213860 100644 --- a/lib/alloc_tag.c +++ b/lib/alloc_tag.c @@ -96,6 +96,44 @@ static const struct seq_operations allocinfo_seq_op = { .show = allocinfo_show, }; +void alloc_tags_show_mem_report(struct seq_buf *s) +{ + struct codetag_iterator iter; + struct codetag *ct; + struct { + struct codetag *tag; + size_t bytes; + } tags[10], n; + unsigned int i, nr = 0; + + codetag_lock_module_list(alloc_tag_cttype, true); + iter = codetag_get_ct_iter(alloc_tag_cttype); + while ((ct = codetag_next_ct(&iter))) { + struct alloc_tag_counters counter = alloc_tag_read(ct_to_alloc_tag(ct)); + + n.tag = ct; + n.bytes = counter.bytes; + + for (i = 0; i < nr; i++) + if (n.bytes > tags[i].bytes) + break; + + if (i < ARRAY_SIZE(tags)) { + nr -= nr == ARRAY_SIZE(tags); + memmove(&tags[i + 1], + &tags[i], + sizeof(tags[0]) * (nr - i)); + nr++; + tags[i] = n; + } + } + + for (i = 0; i < nr; i++) + alloc_tag_to_text(s, tags[i].tag); + + codetag_lock_module_list(alloc_tag_cttype, false); +} + static void __init procfs_init(void) { proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op); diff --git a/mm/show_mem.c b/mm/show_mem.c index 8dcfafbd283c..d514c15ca076 100644 --- a/mm/show_mem.c +++ b/mm/show_mem.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -423,4 +424,18 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx) #ifdef CONFIG_MEMORY_FAILURE printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages)); #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + { + struct seq_buf s; + char *buf = kmalloc(4096, GFP_ATOMIC); + + if (buf) { + printk("Memory allocations:\n"); + seq_buf_init(&s, buf, 4096); + alloc_tags_show_mem_report(&s); + printk("%s", buf); +
kfree(buf); + } } +#endif }
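As an aside, the top-ten selection in alloc_tags_show_mem_report() above is a small bounded insertion sort; restated as stand-alone C for clarity (illustrative only, same logic as the kernel loop):

#include <stddef.h>
#include <string.h>

struct entry { const void *tag; size_t bytes; };

/* Keep the cap largest entries, ordered by bytes, dropping the
 * smallest one when the array is already full. */
static void insert_top(struct entry *top, unsigned int *nr,
		       unsigned int cap, struct entry n)
{
	unsigned int i;

	for (i = 0; i < *nr; i++)		/* first slot we out-rank */
		if (n.bytes > top[i].bytes)
			break;

	if (i >= cap)				/* smaller than all kept */
		return;

	*nr -= (*nr == cap);			/* make room if full */
	memmove(&top[i + 1], &top[i], sizeof(top[0]) * (*nr - i));
	top[i] = n;
	(*nr)++;
}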
From patchwork Mon Feb 12 21:39:18 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554106
Date: Mon, 12 Feb 2024 13:39:18 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-33-surenb@google.com>
Subject: [PATCH v3 32/35] codetag: debug: skip objext checking when it's for objext itself
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

objext objects are created with the __GFP_NO_OBJ_EXT flag and therefore have no corresponding objext themselves (otherwise we would get infinite recursion). When these objects are freed their codetag will be empty, and when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled this leads to false warnings. Introduce the special codetag value CODETAG_EMPTY to mark allocations which intentionally lack a codetag, avoiding these warnings. Set objext codetags to CODETAG_EMPTY before freeing to indicate that the codetag is expected to be empty.
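In one self-contained sketch, the sentinel works like this (condensed from the diff below; the normal accounting path is elided):

/* CODETAG_EMPTY is a non-NULL, non-pointer sentinel meaning "this
 * reference is intentionally untagged, do not warn about it". */
#define CODETAG_EMPTY ((void *)1)

static inline bool is_codetag_empty(union codetag_ref *ref)
{
	return ref->ct == CODETAG_EMPTY;
}

static inline void __alloc_tag_sub_sketch(union codetag_ref *ref, size_t bytes)
{
	if (!ref || !ref->ct)
		return;
	if (is_codetag_empty(ref)) {	/* expected-empty: clear quietly */
		ref->ct = NULL;
		return;
	}
	/* ...normal per-tag byte/call accounting happens here... */
}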
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 26 ++++++++++++++++++++++++++ mm/slab.h | 25 +++++++++++++++++++++++++ mm/slab_common.c | 1 + mm/slub.c | 8 ++++++++ 4 files changed, 60 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 0a5973c4ad77..1f3207097b03 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -77,6 +77,27 @@ static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag) return v; } +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +#define CODETAG_EMPTY (void *)1 + +static inline bool is_codetag_empty(union codetag_ref *ref) +{ + return ref->ct == CODETAG_EMPTY; +} + +static inline void set_codetag_empty(union codetag_ref *ref) +{ + if (ref) + ref->ct = CODETAG_EMPTY; +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline bool is_codetag_empty(union codetag_ref *ref) { return false; } + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes) { struct alloc_tag *tag; @@ -87,6 +108,11 @@ static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes) if (!ref || !ref->ct) return; + if (is_codetag_empty(ref)) { + ref->ct = NULL; + return; + } + tag = ct_to_alloc_tag(ref->ct); this_cpu_sub(tag->counters->bytes, bytes); diff --git a/mm/slab.h b/mm/slab.h index c4bd0d5348cb..cf332a839bf4 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -567,6 +567,31 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, gfp_t gfp, bool new_slab); + +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) +{ + struct slabobj_ext *slab_exts; + struct slab *obj_exts_slab; + + obj_exts_slab = virt_to_slab(obj_exts); + slab_exts = slab_obj_exts(obj_exts_slab); + if (slab_exts) { + unsigned int offs = obj_to_index(obj_exts_slab->slab_cache, + obj_exts_slab, obj_exts); + /* codetag should be NULL */ + WARN_ON(slab_exts[offs].ref.ct); + set_codetag_empty(&slab_exts[offs].ref); + } +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {} + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline bool need_slab_obj_ext(void) { #ifdef CONFIG_MEM_ALLOC_PROFILING diff --git a/mm/slab_common.c b/mm/slab_common.c index 21b0b9e9cd9e..d5f75d04ced2 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -242,6 +242,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, * assign slabobj_exts in parallel. In this case the existing * objcg vector should be reused. */ + mark_objexts_empty(vec); kfree(vec); return 0; } diff --git a/mm/slub.c b/mm/slub.c index 4d480784942e..1136ff18b4fe 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1890,6 +1890,14 @@ static inline void free_slab_obj_exts(struct slab *slab) if (!obj_exts) return; + /* + * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its + * corresponding extension will be NULL. alloc_tag_sub() will throw a + * warning if slab has extensions but the extension of an object is + * NULL, therefore replace NULL with CODETAG_EMPTY to indicate that + * the extension for obj_exts is expected to be NULL. 
+ */ + mark_objexts_empty(obj_exts); kfree(obj_exts); slab->obj_exts = 0; }
From patchwork Mon Feb 12 21:39:19 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554107
Date: Mon, 12 Feb 2024 13:39:19 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-34-surenb@google.com>
Subject: [PATCH v3 33/35] codetag: debug: mark codetags for reserved pages as empty
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

To avoid debug warnings while freeing reserved pages, which were not allocated through the usual allocators, mark their codetags as empty before freeing.
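Both hunks below apply the same three-step pattern; gathered into one sketch (names as in the diff, wrapper function hypothetical):

/* Before a reserved page is handed to the buddy allocator, mark its tag
 * reference as intentionally empty so the debug checks stay quiet. */
static inline void clear_page_tag_sketch(struct page *page)
{
	union codetag_ref *ref = get_page_tag_ref(page);

	if (ref) {
		set_codetag_empty(ref);
		put_page_tag_ref(ref);
	}
}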
Signed-off-by: Suren Baghdasaryan Reviewed-by: Kees Cook --- include/linux/alloc_tag.h | 2 ++ include/linux/mm.h | 8 ++++++++ include/linux/pgalloc_tag.h | 2 ++ mm/mm_init.c | 9 +++++++++ 4 files changed, 21 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 1f3207097b03..102caf62c2a9 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -95,6 +95,7 @@ static inline void set_codetag_empty(union codetag_ref *ref) #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ static inline bool is_codetag_empty(union codetag_ref *ref) { return false; } +static inline void set_codetag_empty(union codetag_ref *ref) {} #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ @@ -155,6 +156,7 @@ static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {} static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {} static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes) {} +static inline void set_codetag_empty(union codetag_ref *ref) {} #endif diff --git a/include/linux/mm.h b/include/linux/mm.h index f5a97dec5169..ac1b661987ed 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -3112,6 +3113,13 @@ extern void reserve_bootmem_region(phys_addr_t start, /* Free the reserved page into the buddy system, so it gets managed. */ static inline void free_reserved_page(struct page *page) { + union codetag_ref *ref; + + ref = get_page_tag_ref(page); + if (ref) { + set_codetag_empty(ref); + put_page_tag_ref(ref); + } ClearPageReserved(page); init_page_count(page); __free_page(page); diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h index 0174aff5e871..ae9b0f359264 100644 --- a/include/linux/pgalloc_tag.h +++ b/include/linux/pgalloc_tag.h @@ -93,6 +93,8 @@ static inline void pgalloc_tag_split(struct page *page, unsigned int nr) #else /* CONFIG_MEM_ALLOC_PROFILING */ +static inline union codetag_ref *get_page_tag_ref(struct page *page) { return NULL; } +static inline void put_page_tag_ref(union codetag_ref *ref) {} static inline void pgalloc_tag_add(struct page *page, struct task_struct *task, unsigned int order) {} static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {} diff --git a/mm/mm_init.c b/mm/mm_init.c index e9ea2919d02d..f5386632fe86 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2566,6 +2566,7 @@ void __init set_dma_reserve(unsigned long new_dma_reserve) void __init memblock_free_pages(struct page *page, unsigned long pfn, unsigned int order) { + union codetag_ref *ref; if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) { int nid = early_pfn_to_nid(pfn); @@ -2578,6 +2579,14 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn, /* KMSAN will take care of these pages. 
 		 */
 		return;
 	}
+
+	/* pages were reserved and not allocated */
+	ref = get_page_tag_ref(page);
+	if (ref) {
+		set_codetag_empty(ref);
+		put_page_tag_ref(ref);
+	}
+
 	__free_pages_core(page, order);
 }

From patchwork Mon Feb 12 21:39:20 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554108
Date: Mon, 12 Feb 2024 13:39:20 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-35-surenb@google.com>
Subject: [PATCH v3 34/35] codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext allocations
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

If slabobj_ext vector allocation for a slab object fails, and later on
it succeeds for another object in the same slab, the slabobj_ext for the
original object will be NULL and will be flagged when
CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled.

Mark failed slabobj_ext vector allocations using a new objext_flags flag
stored in the lower bits of slab->obj_exts. When a new allocation
succeeds, mark all tag references in the same slabobj_ext vector as
empty, to avoid the warnings implemented by the
CONFIG_MEM_ALLOC_PROFILING_DEBUG checks.
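Because the slabobj_ext vector comes from kcalloc_node() and is
therefore at least word-aligned, its low bits are always zero and can
carry flags. A minimal sketch of the resulting encoding follows; the
helper names are made up for illustration and are not part of the
patch:

/*
 * Sketch of the low-bit encoding in slab->obj_exts.  The vector
 * pointer and the flags share one unsigned long; OBJEXTS_FLAGS_MASK
 * (see the memcontrol.h hunk below) selects the flag bits.
 */
static inline struct slabobj_ext *objexts_vec(unsigned long obj_exts)
{
	/* Mask off the flag bits to recover the vector pointer. */
	return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK);
}

static inline bool objexts_alloc_failed(unsigned long obj_exts)
{
	/* After a failed allocation only the flag is stored, no pointer. */
	return obj_exts & OBJEXTS_ALLOC_FAIL;
}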
Signed-off-by: Suren Baghdasaryan
---
 include/linux/memcontrol.h |  4 +++-
 mm/slab.h                  | 25 +++++++++++++++++++++++++
 mm/slab_common.c           | 22 +++++++++++++++-------
 3 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2b010316016c..f95241ca9052 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -365,8 +365,10 @@ enum page_memcg_data_flags {
 #endif /* CONFIG_MEMCG */
 
 enum objext_flags {
+	/* slabobj_ext vector failed to allocate */
+	OBJEXTS_ALLOC_FAIL = __FIRST_OBJEXT_FLAG,
 	/* the next bit after the last actual flag */
-	__NR_OBJEXTS_FLAGS = __FIRST_OBJEXT_FLAG,
+	__NR_OBJEXTS_FLAGS = (__FIRST_OBJEXT_FLAG << 1),
 };
 
 #define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1)
diff --git a/mm/slab.h b/mm/slab.h
index cf332a839bf4..7bb3900f83ef 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -586,9 +586,34 @@ static inline void mark_objexts_empty(struct slabobj_ext *obj_exts)
 	}
 }
 
+static inline void mark_failed_objexts_alloc(struct slab *slab)
+{
+	slab->obj_exts = OBJEXTS_ALLOC_FAIL;
+}
+
+static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
+			struct slabobj_ext *vec, unsigned int objects)
+{
+	/*
+	 * If vector previously failed to allocate then we have live
+	 * objects with no tag reference. Mark all references in this
+	 * vector as empty to avoid warnings later on.
+	 */
+	if (obj_exts & OBJEXTS_ALLOC_FAIL) {
+		unsigned int i;
+
+		for (i = 0; i < objects; i++)
+			set_codetag_empty(&vec[i].ref);
+	}
+}
+
 #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
 
 static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {}
+static inline void mark_failed_objexts_alloc(struct slab *slab) {}
+static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
+			struct slabobj_ext *vec, unsigned int objects) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d5f75d04ced2..489c7a8ba8f1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -214,29 +214,37 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 			gfp_t gfp, bool new_slab)
 {
 	unsigned int objects = objs_per_slab(s, slab);
-	unsigned long obj_exts;
-	void *vec;
+	unsigned long new_exts;
+	unsigned long old_exts;
+	struct slabobj_ext *vec;
 
 	gfp &= ~OBJCGS_CLEAR_MASK;
 	/* Prevent recursive extension vector allocation */
 	gfp |= __GFP_NO_OBJ_EXT;
 	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
 			   slab_nid(slab));
-	if (!vec)
+	if (!vec) {
+		/* Mark vectors which failed to allocate */
+		if (new_slab)
+			mark_failed_objexts_alloc(slab);
+
 		return -ENOMEM;
+	}
 
-	obj_exts = (unsigned long)vec;
+	new_exts = (unsigned long)vec;
 #ifdef CONFIG_MEMCG
-	obj_exts |= MEMCG_DATA_OBJEXTS;
+	new_exts |= MEMCG_DATA_OBJEXTS;
 #endif
+	old_exts = slab->obj_exts;
+	handle_failed_objexts_alloc(old_exts, vec, objects);
 	if (new_slab) {
 		/*
 		 * If the slab is brand new and nobody can yet access its
 		 * obj_exts, no synchronization is required and obj_exts can
 		 * be simply assigned.
 		 */
-		slab->obj_exts = obj_exts;
-	} else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) {
+		slab->obj_exts = new_exts;
+	} else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
 		/*
 		 * If the slab is already in use, somebody can allocate and
 		 * assign slabobj_exts in parallel. In this case the existing

From patchwork Mon Feb 12 21:39:21 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13554109
Date: Mon, 12 Feb 2024 13:39:21 -0800
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
References: <20240212213922.783301-1-surenb@google.com>
Message-ID: <20240212213922.783301-36-surenb@google.com>
Subject: [PATCH v3 35/35] MAINTAINERS: Add entries for code tagging and memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

From: Kent Overstreet

The new code & libraries added are being maintained - mark them as such.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Kees Cook
---
 MAINTAINERS | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 73d898383e51..6da139418775 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5210,6 +5210,13 @@ S:	Supported
 F:	Documentation/process/code-of-conduct-interpretation.rst
 F:	Documentation/process/code-of-conduct.rst
 
+CODE TAGGING
+M:	Suren Baghdasaryan
+M:	Kent Overstreet
+S:	Maintained
+F:	include/linux/codetag.h
+F:	lib/codetag.c
+
 COMEDI DRIVERS
 M:	Ian Abbott
 M:	H Hartley Sweeten
@@ -14056,6 +14063,15 @@ F:	mm/memblock.c
 F:	mm/mm_init.c
 F:	tools/testing/memblock/
 
+MEMORY ALLOCATION PROFILING
+M:	Suren Baghdasaryan
+M:	Kent Overstreet
+S:	Maintained
+F:	include/linux/alloc_tag.h
+F:	include/linux/codetag_ctx.h
+F:	lib/alloc_tag.c
+F:	lib/pgalloc_tag.c
+
 MEMORY CONTROLLER DRIVERS
 M:	Krzysztof Kozlowski
 L:	linux-kernel@vger.kernel.org