From patchwork Mon Feb 24 14:47:11 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13988316
X-Patchwork-Delegate: brendanhiggins@google.com
Date: Mon, 24 Feb 2025 14:47:11 +0000
Message-ID: <20250224-page-alloc-kunit-v1-1-d337bb440889@google.com>
In-Reply-To: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>
Subject: [PATCH RFC 1/4] kunit: Allocate assertion data with GFP_ATOMIC
From: Brendan Jackman
To: Brendan Higgins, David Gow, Rae Moar, Andrew Morton, David Hildenbrand, Oscar Salvador
Cc: Lorenzo Stoakes, Vlastimil Babka, Michal Hocko, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Brendan Jackman, Yosry Ahmed

At present KUnit doesn't handle assertions happening in atomic contexts.
A later commit will add tests that make assertions with spinlocks held.
In preparation, switch to GFP_ATOMIC.

"Just use GFP_ATOMIC" is not a general solution to this kind of problem:
it dips into memory reserves, so it should only be used when truly
needed. However, for test code that is not expected to run on production
systems it seems tolerable, given that it avoids creating more complex
APIs.
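
(Illustration only, not code from this series: the sort of usage the later
page allocator tests introduce looks roughly like the sketch below, where an
expectation is made while a spinlock is held. struct foo and its lock are
invented for the example; the point is that a failing expectation inside the
locked region makes KUnit allocate assertion data and a string stream, and
that allocation must not sleep, hence GFP_ATOMIC.)

	/* Hypothetical sketch: an expectation made in atomic context. */
	static void example_expect_under_lock(struct kunit *test)
	{
		struct foo *foo = test->priv;	/* imaginary object under test */
		unsigned long flags;

		spin_lock_irqsave(&foo->lock, flags);
		/* A failure here must not trigger a sleeping allocation. */
		KUNIT_EXPECT_EQ(test, foo->refcount, 1);
		spin_unlock_irqrestore(&foo->lock, flags);
	}
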
Signed-off-by: Brendan Jackman
---
 lib/kunit/assert.c   | 2 +-
 lib/kunit/resource.c | 2 +-
 lib/kunit/test.c     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/kunit/assert.c b/lib/kunit/assert.c
index 867aa5c4bccf764757e190948b8e3a2439116786..f08656c5fb247b510c4215445cc307ed1205a96c 100644
--- a/lib/kunit/assert.c
+++ b/lib/kunit/assert.c
@@ -101,7 +101,7 @@ VISIBLE_IF_KUNIT bool is_literal(const char *text, long long value)
 	if (strlen(text) != len)
 		return false;
 
-	buffer = kmalloc(len+1, GFP_KERNEL);
+	buffer = kmalloc(len+1, GFP_ATOMIC);
 	if (!buffer)
 		return false;
 
diff --git a/lib/kunit/resource.c b/lib/kunit/resource.c
index f0209252b179f8b48d47ecc244c468ed80e23bdc..eac511af4f8d7843d58c4e3976c77a9c4def86a7 100644
--- a/lib/kunit/resource.c
+++ b/lib/kunit/resource.c
@@ -98,7 +98,7 @@ int kunit_add_action(struct kunit *test, void (*action)(void *), void *ctx)
 	KUNIT_ASSERT_NOT_NULL_MSG(test, action, "Tried to action a NULL function!");
 
-	action_ctx = kzalloc(sizeof(*action_ctx), GFP_KERNEL);
+	action_ctx = kzalloc(sizeof(*action_ctx), GFP_ATOMIC);
 	if (!action_ctx)
 		return -ENOMEM;
 
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 146d1b48a0965e8aaddb6162928f408bbb542645..08d0ff51bd85845a08b40cd3933dd588bd10bddf 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -279,7 +279,7 @@ static void kunit_fail(struct kunit *test, const struct kunit_loc *loc,
 	kunit_set_failure(test);
 
-	stream = kunit_alloc_string_stream(test, GFP_KERNEL);
+	stream = kunit_alloc_string_stream(test, GFP_ATOMIC);
 	if (IS_ERR(stream)) {
 		WARN(true,
 		     "Could not allocate stream to print failed assertion in %s:%d\n",

From patchwork Mon Feb 24 14:47:12 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13988317
Date: Mon, 24 Feb 2025 14:47:12 +0000
Message-ID: <20250224-page-alloc-kunit-v1-2-d337bb440889@google.com>
In-Reply-To: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>
Subject: [PATCH RFC 2/4] mm/page_alloc_test: Add empty KUnit boilerplate
From: Brendan Jackman
To: Brendan Higgins, David Gow, Rae Moar, Andrew Morton, David Hildenbrand, Oscar Salvador
Cc: Lorenzo Stoakes, Vlastimil Babka, Michal Hocko, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Brendan Jackman, Yosry Ahmed

Add the Kbuild plumbing to create a new KUnit suite. Create the suite,
with no tests inside it.
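
(Side note, not part of the patch: with the usual kunit_tool workflow the new,
currently empty suite can be built and run against mm/.kunitconfig roughly as
below. The exact flags are an assumption and may vary by tree; later patches
add NUMA and memory hotplug dependencies that the default UML configuration
does not satisfy.)

	# Build a kernel using mm/.kunitconfig and run the page_alloc suite.
	$ ./tools/testing/kunit/kunit.py run --kunitconfig=mm 'page_alloc*'
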
Signed-off-by: Brendan Jackman
---
 mm/.kunitconfig      |  2 ++
 mm/Kconfig           |  8 ++++++++
 mm/Makefile          |  2 ++
 mm/page_alloc_test.c | 21 +++++++++++++++++++++
 4 files changed, 33 insertions(+)

diff --git a/mm/.kunitconfig b/mm/.kunitconfig
new file mode 100644
index 0000000000000000000000000000000000000000..fcc28557fa1c1412b21f9dbddbf6a63adca6f2b4
--- /dev/null
+++ b/mm/.kunitconfig
@@ -0,0 +1,2 @@
+CONFIG_KUNIT=y
+CONFIG_PAGE_ALLOC_KUNIT_TEST=y
\ No newline at end of file
diff --git a/mm/Kconfig b/mm/Kconfig
index 1b501db064172cf54f1b1259893dfba676473c56..1fac51c536c66243a1321195a78eb40668386158 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1358,6 +1358,14 @@ config PT_RECLAIM
 
 	  Note: now only empty user PTE page table pages will be reclaimed.
 
+config PAGE_ALLOC_KUNIT_TEST
+	tristate "KUnit test for page allocator" if !KUNIT_ALL_TESTS
+	depends on KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Builds unit tests for page allocator.
+
+	  If unsure, say N.
 
 source "mm/damon/Kconfig"
 
diff --git a/mm/Makefile b/mm/Makefile
index 850386a67b3e0e3b543b27691a6512c448815697..7b8018e0e6510881fac6e4295fdd1472e38d743d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -61,6 +61,8 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
 page-alloc-y := page_alloc.o
 page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 
+obj-$(CONFIG_PAGE_ALLOC_KUNIT_TEST) += page_alloc_test.o
+
 # Give 'memory_hotplug' its own module-parameter namespace
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
diff --git a/mm/page_alloc_test.c b/mm/page_alloc_test.c
new file mode 100644
index 0000000000000000000000000000000000000000..377dfdd50a3c6928e15210cc87d5399c1db80da7
--- /dev/null
+++ b/mm/page_alloc_test.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+static struct kunit_case test_cases[] = { {} };
+
+static struct kunit_suite test_suite = {
+	.name = "page_alloc",
+	.test_cases = test_cases,
+};
+kunit_test_suite(test_suite);
+
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");

From patchwork Mon Feb 24 14:47:13 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13988318
Date: Mon, 24 Feb 2025 14:47:13 +0000
Message-ID: <20250224-page-alloc-kunit-v1-3-d337bb440889@google.com>
In-Reply-To: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>
Subject: [PATCH RFC 3/4] mm/page_alloc_test: Add logic to isolate a node for testing
From: Brendan Jackman
To: Brendan Higgins, David Gow, Rae Moar, Andrew Morton, David Hildenbrand, Oscar Salvador
Cc: Lorenzo Stoakes, Vlastimil Babka, Michal Hocko, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Brendan Jackman, Yosry Ahmed

In order to test the page allocator, we need an "instance" of the page
allocator that is not subject to unpredictable perturbation by the live
system.

The closest thing that we have to an "instance" of the allocator is a
NUMA node. So, introduce a new concept of an "isolated" node. This is an
extension of the existing concept of a "fake" node, with the addition
that nothing else in the system will touch it unless instructed to by
the test code.

The node is created during boot but has no memory and no CPUs attached.
It is not on any other node's fallback lists. Any code that pays general
attention to NODE_DATA in a way that might cause the page allocator's
data structures to be modified asynchronously to the test is enlightened
to ignore it via the node_isolated() helper.

Then, during initialization of the allocator test suite, hotplug out
some memory and plug it back into the isolated node. The node can then
be used for testing.

Because it's easy to miss code that needs enlightenment, which can lead
to confusing test behaviour, also add some defensive checks that try to
catch interference with the isolated node before the start of the test.

Signed-off-by: Brendan Jackman
---
 drivers/base/memory.c    |   5 +-
 include/linux/memory.h   |   4 ++
 include/linux/nodemask.h |  13 +++++
 kernel/kthread.c         |   3 +
 mm/.kunitconfig          |  10 +++-
 mm/Kconfig               |   2 +-
 mm/internal.h            |  11 ++++
 mm/memory_hotplug.c      |  26 ++++++---
 mm/numa_memblks.c        |  22 ++++++++
 mm/page_alloc.c          |  37 +++++++++++-
 mm/page_alloc_test.c     | 142 ++++++++++++++++++++++++++++++++++++++++++++++-
 11 files changed, 260 insertions(+), 15 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 348c5dbbfa68ad30d34b344ace1dd8deac0e1947..cdb893d7f13324862ee0943df080440d19fbd957 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -26,6 +26,8 @@
 #include
 #include
 
+#include
+
 #define MEMORY_CLASS_NAME "memory"
 
 static const char *const online_type_to_str[] = {
@@ -183,7 +185,7 @@ static inline unsigned long memblk_nr_poison(struct memory_block *mem)
 /*
  * Must acquire mem_hotplug_lock in write mode.
  */
-static int memory_block_online(struct memory_block *mem)
+VISIBLE_IF_KUNIT int memory_block_online(struct memory_block *mem)
 {
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
@@ -250,6 +252,7 @@ static int memory_block_online(struct memory_block *mem)
 	mem_hotplug_done();
 	return ret;
 }
+EXPORT_SYMBOL_IF_KUNIT(memory_block_online);
 
 /*
  * Must acquire mem_hotplug_lock in write mode.
diff --git a/include/linux/memory.h b/include/linux/memory.h index c0afee5d126ef65d420770e1f8669842c499c8de..99139a6e9c11a407a8d7bfb17b7bbe3d276048ff 100644 --- a/include/linux/memory.h +++ b/include/linux/memory.h @@ -177,6 +177,10 @@ int walk_dynamic_memory_groups(int nid, walk_memory_groups_func_t func, register_memory_notifier(&fn##_mem_nb); \ }) +#ifdef CONFIG_KUNIT +int memory_block_online(struct memory_block *mem); +#endif + #ifdef CONFIG_NUMA void memory_block_add_nid(struct memory_block *mem, int nid, enum meminit_context context); diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h index 9fd7a0ce9c1a7336df46f12622867e6786a5c0a9..6ea38963487e1fbb800eab69e5e6413aa17a8047 100644 --- a/include/linux/nodemask.h +++ b/include/linux/nodemask.h @@ -536,6 +536,19 @@ static __always_inline int node_random(const nodemask_t *maskp) #define for_each_node(node) for_each_node_state(node, N_POSSIBLE) #define for_each_online_node(node) for_each_node_state(node, N_ONLINE) + +#ifdef CONFIG_PAGE_ALLOC_KUNIT_TEST +/* + * An isolated node is a fake node for testing, that boots with no memory and no + * attached CPUs, and nothing should touch it except for test code. + */ +extern bool node_isolated(int node); +/* Only one isolated node is supported at present and it cannot be un-isolated. */ +extern void node_set_isolated(int node); +#else +static inline bool node_isolated(int node) { return false; } +#endif /* CONFIG_PAGE_ALLOC_KUNIT_TEST */ + /* * For nodemask scratch area. * NODEMASK_ALLOC(type, name) allocates an object with a specified type and diff --git a/kernel/kthread.c b/kernel/kthread.c index 5dc5b0d7238e85ad4074076e4036062c7bfcae74..93f65c5935cba8a59c7d3df2e36335130c3e1f71 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -9,6 +9,7 @@ */ #include #include +#include #include #include #include @@ -511,6 +512,8 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data), struct kthread_create_info *create = kmalloc(sizeof(*create), GFP_KERNEL); + VM_WARN_ON(node != NUMA_NO_NODE && node_isolated(node)); + if (!create) return ERR_PTR(-ENOMEM); create->threadfn = threadfn; diff --git a/mm/.kunitconfig b/mm/.kunitconfig index fcc28557fa1c1412b21f9dbddbf6a63adca6f2b4..4ff4e1654c3e9b364072d33bfffb3a2336825859 100644 --- a/mm/.kunitconfig +++ b/mm/.kunitconfig @@ -1,2 +1,10 @@ CONFIG_KUNIT=y -CONFIG_PAGE_ALLOC_KUNIT_TEST=y \ No newline at end of file +CONFIG_PAGE_ALLOC_KUNIT_TEST=y + +# Required for NUMA +CONFIG_SMP=y +# Used by tests to carve out fake node for isolating page_alloc data. +CONFIG_NUMA=y +CONFIG_NUMA_EMU=y +CONFIG_MEMORY_HOTPLUG=y +CONFIG_MEMORY_HOTREMOVE=y \ No newline at end of file diff --git a/mm/Kconfig b/mm/Kconfig index 1fac51c536c66243a1321195a78eb40668386158..64c3794120002a839f56e3feb284c6d5c2635f40 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1360,7 +1360,7 @@ config PT_RECLAIM config PAGE_ALLOC_KUNIT_TEST tristate "KUnit test for page allocator" if !KUNIT_ALL_TESTS - depends on KUNIT + depends on KUNIT && NUMA && MEMORY_HOTREMOVE default KUNIT_ALL_TESTS help Builds unit tests for page allocator. diff --git a/mm/internal.h b/mm/internal.h index 109ef30fee11f8b399f6bac42eab078cd51e01a5..9dbe5853b90b53ff261ba1b2fca12eabfda1a9de 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1545,5 +1545,16 @@ static inline bool reclaim_pt_is_enabled(unsigned long start, unsigned long end, } #endif /* CONFIG_PT_RECLAIM */ +#ifdef CONFIG_PAGE_ALLOC_KUNIT_TEST +/* + * Note that node_isolated() is separate, that's a "public API". 
But only + * test code needs to look up which node is isolated. + */ +extern int isolated_node; +#endif + +#ifdef CONFIG_KUNIT +void drain_pages(unsigned int cpu); +#endif #endif /* __MM_INTERNAL_H */ diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index e3655f07dd6e33efb3e811cab07f240649487441..968c23b6f347cf6a0c30d00cb556166b8df9c9c3 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1198,10 +1198,12 @@ int online_pages(unsigned long pfn, unsigned long nr_pages, arg.nr_pages = nr_pages; node_states_check_changes_online(nr_pages, zone, &arg); - ret = memory_notify(MEM_GOING_ONLINE, &arg); - ret = notifier_to_errno(ret); - if (ret) - goto failed_addition; + if (!node_isolated(nid)) { + ret = memory_notify(MEM_GOING_ONLINE, &arg); + ret = notifier_to_errno(ret); + if (ret) + goto failed_addition; + } /* * Fixup the number of isolated pageblocks before marking the sections @@ -1242,19 +1244,27 @@ int online_pages(unsigned long pfn, unsigned long nr_pages, /* reinitialise watermarks and update pcp limits */ init_per_zone_wmark_min(); - kswapd_run(nid); - kcompactd_run(nid); + /* + * Don't run daemons on the special test node, if that needs to be + * tested the test should run it. + */ + if (!node_isolated(nid)) { + kswapd_run(nid); + kcompactd_run(nid); + } writeback_set_ratelimit(); - memory_notify(MEM_ONLINE, &arg); + if (!node_isolated(nid)) + memory_notify(MEM_ONLINE, &arg); return 0; failed_addition: pr_debug("online_pages [mem %#010llx-%#010llx] failed\n", (unsigned long long) pfn << PAGE_SHIFT, (((unsigned long long) pfn + nr_pages) << PAGE_SHIFT) - 1); - memory_notify(MEM_CANCEL_ONLINE, &arg); + if (!node_isolated(nid)) + memory_notify(MEM_CANCEL_ONLINE, &arg); remove_pfn_range_from_zone(zone, pfn, nr_pages); return ret; } diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c index ff4054f4334dae42ee3b3668da18bba01dc3cd8b..190c879af2c779df1be448c45c43b0570bb6c308 100644 --- a/mm/numa_memblks.c +++ b/mm/numa_memblks.c @@ -7,6 +7,8 @@ #include #include +#include "internal.h" + int numa_distance_cnt; static u8 *numa_distance; @@ -371,6 +373,24 @@ static void __init numa_clear_kernel_node_hotplug(void) } } +#ifdef CONFIG_PAGE_ALLOC_KUNIT_TEST +static inline void make_isolated_node(void) +{ + int node; + + node = num_possible_nodes(); + if (!numa_valid_node(node)) { + pr_err("All node IDs used, can't fake another.\n"); + } else { + node_set(num_possible_nodes(), node_possible_map); + node_set_isolated(node); + } +} +#else +static inline void make_isolated_node(void) { } +#endif + + static int __init numa_register_meminfo(struct numa_meminfo *mi) { int i; @@ -381,6 +401,8 @@ static int __init numa_register_meminfo(struct numa_meminfo *mi) if (WARN_ON(nodes_empty(node_possible_map))) return -EINVAL; + make_isolated_node(); + for (i = 0; i < mi->nr_blks; i++) { struct numa_memblk *mb = &mi->blk[i]; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 579789600a3c7bfb7b0d847d51af702a9d4b139a..9472da738119589150db26126dfcf808e2dc9371 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -56,6 +56,7 @@ #include #include #include +#include #include "internal.h" #include "shuffle.h" #include "page_reporting.h" @@ -291,6 +292,26 @@ static bool __free_unaccepted(struct page *page); int page_group_by_mobility_disabled __read_mostly; +/* + * Test harness for KUnit - pick a node that we will never allocate from, except + * for in the page allocator tests. 
+ */ +#ifdef CONFIG_PAGE_ALLOC_KUNIT_TEST +int isolated_node = NUMA_NO_NODE; +EXPORT_SYMBOL(isolated_node); + +void node_set_isolated(int node) +{ + WARN_ON(isolated_node != NUMA_NO_NODE); + isolated_node = node; +} + +bool node_isolated(int node) +{ + return node == isolated_node; +} +#endif /* CONFIG_POAGE_ALLOC_KUNIT_TEST */ + #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT /* * During boot we initialize deferred pages on-demand, as needed, but once @@ -2410,7 +2431,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone) /* * Drain pcplists of all zones on the indicated processor. */ -static void drain_pages(unsigned int cpu) +VISIBLE_IF_KUNIT void drain_pages(unsigned int cpu) { struct zone *zone; @@ -2418,6 +2439,7 @@ static void drain_pages(unsigned int cpu) drain_pages_zone(cpu, zone); } } +EXPORT_SYMBOL_IF_KUNIT(drain_pages); /* * Spill all of this CPU's per-cpu pages back into the buddy allocator. @@ -5087,6 +5109,8 @@ int find_next_best_node(int node, nodemask_t *used_node_mask) } for_each_node_state(n, N_MEMORY) { + if (node_isolated(n)) + continue; /* Don't want a node to appear more than once */ if (node_isset(n, *used_node_mask)) @@ -5134,8 +5158,17 @@ static void build_zonelists_in_node_order(pg_data_t *pgdat, int *node_order, for (i = 0; i < nr_nodes; i++) { int nr_zones; + int other_nid = node_order[i]; + pg_data_t *node = NODE_DATA(other_nid); - pg_data_t *node = NODE_DATA(node_order[i]); + /* + * Never fall back to the isolated node. The isolated node has + * to be able to fall back to other nodes because that fallback + * is relied on for allocating data structures that describe the + * node. + */ + if (node_isolated(other_nid) && other_nid != pgdat->node_id) + continue; nr_zones = build_zonerefs_node(node, zonerefs); zonerefs += nr_zones; diff --git a/mm/page_alloc_test.c b/mm/page_alloc_test.c index 377dfdd50a3c6928e15210cc87d5399c1db80da7..c6bcfcaf61b57ca35ad1b5fc48fd07d0402843bc 100644 --- a/mm/page_alloc_test.c +++ b/mm/page_alloc_test.c @@ -3,19 +3,157 @@ #include #include #include +#include #include #include #include #include +#include "internal.h" + +#define EXPECT_PCPLIST_EMPTY(test, zone, cpu, pindex) ({ \ + struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu); \ + struct page *page; \ + \ + lockdep_assert_held(&pcp->lock); \ + page = list_first_entry_or_null( \ + &pcp->lists[pindex], struct page, pcp_list); \ + \ + if (page) { \ + KUNIT_FAIL(test, "PCPlist %d on CPU %d wasn't empty", i, cpu); \ + dump_page(page, "unexpectedly on pcplist"); \ + } \ +}) + +static void action_drain_pages_all(void *unused) +{ + int cpu; + + for_each_online_cpu(cpu) + drain_pages(cpu); +} + +/* Runs before each test. */ +static int test_init(struct kunit *test) +{ + struct zone *zone_normal; + int cpu; + + if (isolated_node == NUMA_NO_NODE) + kunit_skip(test, "No fake NUMA node ID allocated"); + + zone_normal = &NODE_DATA(isolated_node)->node_zones[ZONE_NORMAL]; + + /* + * Nothing except these tests should be allocating from the fake node so + * the pcplists should be empty. Obviously this is racy but at least it + * can probabilistically detect issues that would otherwise make for + * really confusing test results. 
+ */ + for_each_possible_cpu(cpu) { + struct per_cpu_pages *pcp = per_cpu_ptr(zone_normal->per_cpu_pageset, cpu); + unsigned long flags; + int i; + + spin_lock_irqsave(&pcp->lock, flags); + for (i = 0; i < ARRAY_SIZE(pcp->lists); i++) + EXPECT_PCPLIST_EMPTY(test, zone_normal, cpu, i); + spin_unlock_irqrestore(&pcp->lock, flags); + } + + /* Also ensure we don't leave a mess for the next test. */ + kunit_add_action(test, action_drain_pages_all, NULL); + + return 0; +} + +static int memory_block_online_cb(struct memory_block *mem, void *unused) +{ + return memory_block_online(mem); +} + +struct region { + int node; + unsigned long start; + unsigned long size; +}; + +/* + * Unplug some memory from a "real" node and plug it into the isolated node, for + * use during the tests. + */ +static int populate_isolated_node(struct kunit_suite *suite) +{ + struct zone *zone_movable = &NODE_DATA(0)->node_zones[ZONE_MOVABLE]; + phys_addr_t zone_start = zone_movable->zone_start_pfn << PAGE_SHIFT; + phys_addr_t zone_size = zone_movable->spanned_pages << PAGE_SHIFT; + unsigned long bs = memory_block_size_bytes(); + u64 start = round_up(zone_start, bs); + /* Plug a memory block if we can find it. */ + unsigned long size = round_down(min(zone_size, bs), bs); + int err; + + if (!size) { + pr_err("Couldn't find ZONE_MOVABLE block to offline\n"); + pr_err("Try setting/expanding movablecore=\n"); + return -1; + } + + err = offline_and_remove_memory(start, size); + if (err) { + pr_notice("Couldn't offline PFNs 0x%llx - 0x%llx\n", + start >> PAGE_SHIFT, (start + size) >> PAGE_SHIFT); + return err; + } + err = add_memory(isolated_node, start, size, MMOP_ONLINE); + if (err) { + pr_notice("Couldn't add PFNs 0x%llx - 0x%llx\n", + start >> PAGE_SHIFT, (start + size) >> PAGE_SHIFT); + goto add_and_online_memory; + } + err = walk_memory_blocks(start, size, NULL, memory_block_online_cb); + if (err) { + pr_notice("Couldn't online PFNs 0x%llx - 0x%llx\n", + start >> PAGE_SHIFT, (start + size) >> PAGE_SHIFT); + goto remove_memory; + } + + return 0; + +remove_memory: + if (WARN_ON(remove_memory(start, size))) + return err; +add_and_online_memory: + if (WARN_ON(add_memory(0, start, size, MMOP_ONLINE))) + return err; + WARN_ON(walk_memory_blocks(start, size, NULL, memory_block_online_cb)); + return err; +} + +static void depopulate_isolated_node(struct kunit_suite *suite) +{ + unsigned long start, size = memory_block_size_bytes(); + + if (suite->suite_init_err) + return; + + start = NODE_DATA(isolated_node)->node_start_pfn << PAGE_SHIFT; + + WARN_ON(remove_memory(start, size)); + WARN_ON(add_memory(0, start, size, MMOP_ONLINE)); + WARN_ON(walk_memory_blocks(start, size, NULL, memory_block_online_cb)); +} static struct kunit_case test_cases[] = { {} }; -static struct kunit_suite test_suite = { +struct kunit_suite page_alloc_test_suite = { .name = "page_alloc", .test_cases = test_cases, + .suite_init = populate_isolated_node, + .suite_exit = depopulate_isolated_node, + .init = test_init, }; -kunit_test_suite(test_suite); +kunit_test_suite(page_alloc_test_suite); MODULE_LICENSE("GPL"); MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING"); From patchwork Mon Feb 24 14:47:14 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13988319 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by 

Date: Mon, 24 Feb 2025 14:47:14 +0000
Message-ID: <20250224-page-alloc-kunit-v1-4-d337bb440889@google.com>
In-Reply-To: <20250224-page-alloc-kunit-v1-0-d337bb440889@google.com>
Subject: [PATCH RFC 4/4] mm/page_alloc_test: Add smoke-test for page allocation
From: Brendan Jackman
To: Brendan Higgins, David Gow, Rae Moar, Andrew Morton, David Hildenbrand, Oscar Salvador
Cc: Lorenzo Stoakes, Vlastimil Babka, Michal Hocko, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Brendan Jackman, Yosry Ahmed

This is the bare minimum to illustrate what KUnit code would look like
that covers the page allocator. Even this trivial test illustrates a
couple of nice things that are possible when testing via KUnit:

1. We can directly assert that the correct zone was used. (Although
   note that, due to the simplistic setup, you can have any zone you
   like as long as it's ZONE_NORMAL.)

2. We can assert that a page got freed. It's probably pretty unlikely
   that we'd have a bug that actually causes a page to get leaked by the
   allocator, but it serves as a good example of the kind of assertions
   we can make by judiciously peeking at allocator internals.

Signed-off-by: Brendan Jackman
---
 mm/page_alloc_test.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 138 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc_test.c b/mm/page_alloc_test.c
index c6bcfcaf61b57ca35ad1b5fc48fd07d0402843bc..0c4effb151f4cd31ec6a696615a9b6ae4964b332 100644
--- a/mm/page_alloc_test.c
+++ b/mm/page_alloc_test.c
@@ -26,6 +26,139 @@
 	} \
 })
 
+#define EXPECT_WITHIN_ZONE(test, page, zone) ({ \
+	unsigned long pfn = page_to_pfn(page); \
+	unsigned long start_pfn = zone->zone_start_pfn; \
+	unsigned long end_pfn = start_pfn + zone->spanned_pages; \
+	\
+	KUNIT_EXPECT_TRUE_MSG(test, \
+			      pfn >= start_pfn && pfn < end_pfn, \
+			      "Wanted PFN 0x%lx - 0x%lx, got 0x%lx", \
+			      start_pfn, end_pfn, pfn); \
+	KUNIT_EXPECT_PTR_EQ_MSG(test, page_zone(page), zone, \
+				"Wanted %px (%s), got %px (%s)", \
+				zone, zone->name, page_zone(page), page_zone(page)->name); \
+})
+
+static void action_nodemask_free(void *ctx)
+{
+	NODEMASK_FREE(ctx);
+}
+
+/*
+ * Call __alloc_pages_noprof with a nodemask containing only the nid.
+ *
+ * Never returns NULL.
+ */
+static inline struct page *alloc_pages_force_nid(struct kunit *test,
+						 gfp_t gfp, int order, int nid)
+{
+	NODEMASK_ALLOC(nodemask_t, nodemask, GFP_KERNEL);
+	struct page *page;
+
+	KUNIT_ASSERT_NOT_NULL(test, nodemask);
+	kunit_add_action(test, action_nodemask_free, &nodemask);
+	nodes_clear(*nodemask);
+	node_set(nid, *nodemask);
+
+	page = __alloc_pages_noprof(GFP_KERNEL, 0, nid, nodemask);
+	KUNIT_ASSERT_NOT_NULL(test, page);
+	return page;
+}
+
+static inline bool page_on_buddy_list(struct page *want_page, struct list_head *head)
+{
+	struct page *found_page;
+
+	list_for_each_entry(found_page, head, buddy_list) {
+		if (found_page == want_page)
+			return true;
+	}
+
+	return false;
+}
+
+/* Test case parameters that are independent of alloc order.
*/ +static const struct { + gfp_t gfp_flags; + enum zone_type want_zone; +} alloc_fresh_gfps[] = { + /* + * The way we currently set up the isolated node, everything ends up in + * ZONE_NORMAL. + */ + { .gfp_flags = GFP_KERNEL, .want_zone = ZONE_NORMAL }, + { .gfp_flags = GFP_ATOMIC, .want_zone = ZONE_NORMAL }, + { .gfp_flags = GFP_USER, .want_zone = ZONE_NORMAL }, + { .gfp_flags = GFP_DMA32, .want_zone = ZONE_NORMAL }, +}; + +struct alloc_fresh_test_case { + int order; + int gfp_idx; +}; + +/* Generate test cases as the cross product of orders and alloc_fresh_gfps. */ +static const void *alloc_fresh_gen_params(const void *prev, char *desc) +{ + /* Buffer to avoid allocations. */ + static struct alloc_fresh_test_case tc; + + if (!prev) { + /* First call */ + tc.order = 0; + tc.gfp_idx = 0; + return &tc; + } + + tc.gfp_idx++; + if (tc.gfp_idx >= ARRAY_SIZE(alloc_fresh_gfps)) { + tc.gfp_idx = 0; + tc.order++; + } + if (tc.order > MAX_PAGE_ORDER) + /* Finished. */ + return NULL; + + snprintf(desc, KUNIT_PARAM_DESC_SIZE, "order %d %pGg\n", + tc.order, &alloc_fresh_gfps[tc.gfp_idx].gfp_flags); + return &tc; +} + +/* Smoke test: allocate from a node where everything is in a pristine state. */ +static void test_alloc_fresh(struct kunit *test) +{ + const struct alloc_fresh_test_case *tc = test->param_value; + gfp_t gfp_flags = alloc_fresh_gfps[tc->gfp_idx].gfp_flags; + enum zone_type want_zone_type = alloc_fresh_gfps[tc->gfp_idx].want_zone; + struct zone *want_zone = &NODE_DATA(isolated_node)->node_zones[want_zone_type]; + struct list_head *buddy_list; + struct per_cpu_pages *pcp; + struct page *page, *merged_page; + int cpu; + + page = alloc_pages_force_nid(test, gfp_flags, tc->order, isolated_node); + + EXPECT_WITHIN_ZONE(test, page, want_zone); + + cpu = get_cpu(); + __free_pages(page, 0); + pcp = per_cpu_ptr(want_zone->per_cpu_pageset, cpu); + put_cpu(); + + /* + * Should end up back in the free area when drained. Because everything + * is free, it should get buddy-merged up to the maximum order. + */ + drain_zone_pages(want_zone, pcp); + KUNIT_EXPECT_TRUE(test, PageBuddy(page)); + KUNIT_EXPECT_EQ(test, buddy_order(page), MAX_PAGE_ORDER); + KUNIT_EXPECT_TRUE(test, list_empty(&pcp->lists[MIGRATE_UNMOVABLE])); + merged_page = pfn_to_page(round_down(page_to_pfn(page), 1 << MAX_PAGE_ORDER)); + buddy_list = &want_zone->free_area[MAX_PAGE_ORDER].free_list[MIGRATE_UNMOVABLE]; + KUNIT_EXPECT_TRUE(test, page_on_buddy_list(merged_page, buddy_list)); +} + static void action_drain_pages_all(void *unused) { int cpu; @@ -144,7 +277,11 @@ static void depopulate_isolated_node(struct kunit_suite *suite) WARN_ON(add_memory(0, start, size, MMOP_ONLINE)); WARN_ON(walk_memory_blocks(start, size, NULL, memory_block_online_cb)); } -static struct kunit_case test_cases[] = { {} }; + +static struct kunit_case test_cases[] = { + KUNIT_CASE_PARAM(test_alloc_fresh, alloc_fresh_gen_params), + {} +}; struct kunit_suite page_alloc_test_suite = { .name = "page_alloc",