From patchwork Mon Feb 24 14:47:14 2025
Subject: [PATCH RFC 4/4] mm/page_alloc_test: Add smoke-test for page allocation
From: Brendan Jackman
To: Brendan Higgins, David Gow, Rae Moar, Andrew Morton, David Hildenbrand, Oscar Salvador
Cc: Lorenzo Stoakes, Vlastimil Babka, Michal Hocko, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Brendan Jackman, Yosry Ahmed
Date: Mon, 24 Feb 2025 14:47:14 +0000
Message-ID: <20250224-page-alloc-kunit-v1-4-d337bb440889@google.com>
In-Reply-To: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>
References: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>

This is the bare minimum to
illustrate what KUnit code would look like that covers the page
allocator. Even this trivial test illustrates a couple of nice things
that are possible when testing via KUnit:

1. We can directly assert that the correct zone was used. (Although
   note that, due to the simplistic setup, you can have any zone you
   like as long as it's ZONE_NORMAL.)

2. We can assert that a page got freed. It's probably pretty unlikely
   that we'd have a bug that actually causes a page to get leaked by
   the allocator, but it serves as a good example of the kind of
   assertions we can make by judiciously peeking at allocator
   internals.

Signed-off-by: Brendan Jackman
---
 mm/page_alloc_test.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 138 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc_test.c b/mm/page_alloc_test.c
index c6bcfcaf61b57ca35ad1b5fc48fd07d0402843bc..0c4effb151f4cd31ec6a696615a9b6ae4964b332 100644
--- a/mm/page_alloc_test.c
+++ b/mm/page_alloc_test.c
@@ -26,6 +26,139 @@
 	} \
 })
 
+#define EXPECT_WITHIN_ZONE(test, page, zone) ({				\
+	unsigned long pfn = page_to_pfn(page);				\
+	unsigned long start_pfn = zone->zone_start_pfn;			\
+	unsigned long end_pfn = start_pfn + zone->spanned_pages;	\
+									\
+	KUNIT_EXPECT_TRUE_MSG(test,					\
+			      pfn >= start_pfn && pfn < end_pfn,	\
+			      "Wanted PFN 0x%lx - 0x%lx, got 0x%lx",	\
+			      start_pfn, end_pfn, pfn);			\
+	KUNIT_EXPECT_PTR_EQ_MSG(test, page_zone(page), zone,		\
+				"Wanted %px (%s), got %px (%s)",	\
+				zone, zone->name, page_zone(page), page_zone(page)->name); \
+})
+
+static void action_nodemask_free(void *ctx)
+{
+	NODEMASK_FREE(ctx);
+}
+
+/*
+ * Call __alloc_pages_noprof with a nodemask containing only the nid.
+ *
+ * Never returns NULL.
+ */
+static inline struct page *alloc_pages_force_nid(struct kunit *test,
+						 gfp_t gfp, int order, int nid)
+{
+	NODEMASK_ALLOC(nodemask_t, nodemask, GFP_KERNEL);
+	struct page *page;
+
+	KUNIT_ASSERT_NOT_NULL(test, nodemask);
+	kunit_add_action(test, action_nodemask_free, nodemask);
+	nodes_clear(*nodemask);
+	node_set(nid, *nodemask);
+
+	page = __alloc_pages_noprof(gfp, order, nid, nodemask);
+	KUNIT_ASSERT_NOT_NULL(test, page);
+	return page;
+}
+
+static inline bool page_on_buddy_list(struct page *want_page, struct list_head *head)
+{
+	struct page *found_page;
+
+	list_for_each_entry(found_page, head, buddy_list) {
+		if (found_page == want_page)
+			return true;
+	}
+
+	return false;
+}
+
+/* Test case parameters that are independent of alloc order. */
+static const struct {
+	gfp_t gfp_flags;
+	enum zone_type want_zone;
+} alloc_fresh_gfps[] = {
+	/*
+	 * The way we currently set up the isolated node, everything ends up in
+	 * ZONE_NORMAL.
+	 */
+	{ .gfp_flags = GFP_KERNEL, .want_zone = ZONE_NORMAL },
+	{ .gfp_flags = GFP_ATOMIC, .want_zone = ZONE_NORMAL },
+	{ .gfp_flags = GFP_USER, .want_zone = ZONE_NORMAL },
+	{ .gfp_flags = GFP_DMA32, .want_zone = ZONE_NORMAL },
+};
+
+struct alloc_fresh_test_case {
+	int order;
+	int gfp_idx;
+};
+
+/* Generate test cases as the cross product of orders and alloc_fresh_gfps. */
+static const void *alloc_fresh_gen_params(const void *prev, char *desc)
+{
+	/* Static buffer to avoid allocations. */
+	static struct alloc_fresh_test_case tc;
+
+	if (!prev) {
+		/* First call */
+		tc.order = 0;
+		tc.gfp_idx = 0;
+		return &tc;
+	}
+
+	tc.gfp_idx++;
+	if (tc.gfp_idx >= ARRAY_SIZE(alloc_fresh_gfps)) {
+		tc.gfp_idx = 0;
+		tc.order++;
+	}
+	if (tc.order > MAX_PAGE_ORDER)
+		/* Finished. */
+		return NULL;
+
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE, "order %d %pGg\n",
+		 tc.order, &alloc_fresh_gfps[tc.gfp_idx].gfp_flags);
+	return &tc;
+}
+
+/* Smoke test: allocate from a node where everything is in a pristine state.
+ */
+static void test_alloc_fresh(struct kunit *test)
+{
+	const struct alloc_fresh_test_case *tc = test->param_value;
+	gfp_t gfp_flags = alloc_fresh_gfps[tc->gfp_idx].gfp_flags;
+	enum zone_type want_zone_type = alloc_fresh_gfps[tc->gfp_idx].want_zone;
+	struct zone *want_zone = &NODE_DATA(isolated_node)->node_zones[want_zone_type];
+	struct list_head *buddy_list;
+	struct per_cpu_pages *pcp;
+	struct page *page, *merged_page;
+	int cpu;
+
+	page = alloc_pages_force_nid(test, gfp_flags, tc->order, isolated_node);
+
+	EXPECT_WITHIN_ZONE(test, page, want_zone);
+
+	cpu = get_cpu();
+	__free_pages(page, tc->order);
+	pcp = per_cpu_ptr(want_zone->per_cpu_pageset, cpu);
+	put_cpu();
+
+	/*
+	 * Should end up back in the free area when drained. Because everything
+	 * is free, it should get buddy-merged up to the maximum order.
+	 */
+	drain_zone_pages(want_zone, pcp);
+	KUNIT_EXPECT_TRUE(test, PageBuddy(page));
+	KUNIT_EXPECT_EQ(test, buddy_order(page), MAX_PAGE_ORDER);
+	KUNIT_EXPECT_TRUE(test, list_empty(&pcp->lists[MIGRATE_UNMOVABLE]));
+	merged_page = pfn_to_page(round_down(page_to_pfn(page), 1 << MAX_PAGE_ORDER));
+	buddy_list = &want_zone->free_area[MAX_PAGE_ORDER].free_list[MIGRATE_UNMOVABLE];
+	KUNIT_EXPECT_TRUE(test, page_on_buddy_list(merged_page, buddy_list));
+}
+
 static void action_drain_pages_all(void *unused)
 {
 	int cpu;
@@ -144,7 +277,11 @@ static void depopulate_isolated_node(struct kunit_suite *suite)
 	WARN_ON(add_memory(0, start, size, MMOP_ONLINE));
 	WARN_ON(walk_memory_blocks(start, size, NULL, memory_block_online_cb));
 }
-static struct kunit_case test_cases[] = { {} };
+
+static struct kunit_case test_cases[] = {
+	KUNIT_CASE_PARAM(test_alloc_fresh, alloc_fresh_gen_params),
+	{}
+};
 
 struct kunit_suite page_alloc_test_suite = {
 	.name = "page_alloc",