From patchwork Sun Aug 14 05:54:00 2022
X-Patchwork-Submitter: Rebecca Mckeever
X-Patchwork-Id: 12942850
From: Rebecca Mckeever
To: Mike Rapoport, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Rebecca Mckeever
Subject: [PATCH 8/8] memblock tests: add tests for memblock_trim_memory
Date: Sun, 14 Aug 2022 00:54:00 -0500
X-Mailer: git-send-email 2.25.1

Add tests for memblock_trim_memory() for the following scenarios:
- all regions aligned
- one unaligned region that is smaller than the alignment
- one region unaligned at the base
- one region unaligned at the end

Signed-off-by: Rebecca Mckeever
---
 tools/testing/memblock/tests/basic_api.c | 223 +++++++++++++++++++++++
 1 file changed, 223 insertions(+)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index d7f008e7a12a..c8bb44f20846 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -8,6 +8,7 @@
 #define FUNC_RESERVE					"memblock_reserve"
 #define FUNC_REMOVE					"memblock_remove"
 #define FUNC_FREE					"memblock_free"
+#define FUNC_TRIM					"memblock_trim_memory"
 
 static int memblock_initialization_check(void)
 {
@@ -1723,6 +1724,227 @@ static int memblock_bottom_up_checks(void)
 	return 0;
 }
 
+/*
+ * A test that tries to trim memory when both ends of the memory region are
+ * aligned. Expect that the memory will not be trimmed. Expect the counter to
+ * not be updated.
+ */
+static int memblock_trim_memory_aligned_check(void)
+{
+	struct memblock_region *rgn;
+	phys_addr_t alignment = SMP_CACHE_BYTES;
+
+	rgn = &memblock.memory.regions[0];
+
+	struct region r = {
+		.base = alignment,
+		.size = alignment * 4
+	};
+
+	PREFIX_PUSH();
+
+	reset_memblock_regions();
+	memblock_add(r.base, r.size);
+	memblock_trim_memory(alignment);
+
+	ASSERT_EQ(rgn->base, r.base);
+	ASSERT_EQ(rgn->size, r.size);
+
+	ASSERT_EQ(memblock.memory.cnt, 1);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to trim memory when there are two available regions, r1 and
+ * r2. Region r1 is aligned on both ends and region r2 is unaligned on one end
+ * and smaller than the alignment:
+ *
+ *                                     alignment
+ *                                     |--------|
+ * |        +-----------------+        +------+   |
+ * |        |        r1       |        |  r2  |   |
+ * +--------+-----------------+--------+------+---+
+ *          ^        ^        ^        ^      ^
+ *          |________|________|________|      |
+ *                      |                 Unaligned address
+ *              Aligned addresses
+ *
+ * Expect that r1 will not be trimmed and r2 will be removed. Expect the
+ * counter to be updated.
+ */
+static int memblock_trim_memory_too_small_check(void)
+{
+	struct memblock_region *rgn;
+	phys_addr_t alignment = SMP_CACHE_BYTES;
+
+	rgn = &memblock.memory.regions[0];
+
+	struct region r1 = {
+		.base = alignment,
+		.size = alignment * 2
+	};
+	struct region r2 = {
+		.base = alignment * 4,
+		.size = alignment - SZ_2
+	};
+
+	PREFIX_PUSH();
+
+	reset_memblock_regions();
+	memblock_add(r1.base, r1.size);
+	memblock_add(r2.base, r2.size);
+	memblock_trim_memory(alignment);
+
+	ASSERT_EQ(rgn->base, r1.base);
+	ASSERT_EQ(rgn->size, r1.size);
+
+	ASSERT_EQ(memblock.memory.cnt, 1);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to trim memory when there are two available regions, r1 and
+ * r2. Region r1 is aligned on both ends and region r2 is unaligned at the base
+ * and aligned at the end:
+ *
+ *                              Unaligned address
+ *                                      |
+ *                                      v
+ * |        +-----------------+          +---------------+   |
+ * |        |        r1       |          |      r2       |   |
+ * +--------+-----------------+----------+---------------+---+
+ *          ^        ^        ^        ^        ^        ^
+ *          |________|________|________|________|________|
+ *                                |
+ *                        Aligned addresses
+ *
+ * Expect that r1 will not be trimmed and r2 will be trimmed at the base.
+ * Expect the counter to not be updated.
+ */
+static int memblock_trim_memory_unaligned_base_check(void)
+{
+	struct memblock_region *rgn1, *rgn2;
+	phys_addr_t alignment = SMP_CACHE_BYTES;
+	phys_addr_t offset = SZ_2;
+	phys_addr_t r2_base, r2_size;
+
+	rgn1 = &memblock.memory.regions[0];
+	rgn2 = &memblock.memory.regions[1];
+
+	struct region r1 = {
+		.base = alignment,
+		.size = alignment * 2
+	};
+	struct region r2 = {
+		.base = alignment * 4 + offset,
+		.size = alignment * 2 - offset
+	};
+
+	PREFIX_PUSH();
+
+	r2_base = r2.base + (alignment - offset);
+	r2_size = r2.size - (alignment - offset);
+
+	reset_memblock_regions();
+	memblock_add(r1.base, r1.size);
+	memblock_add(r2.base, r2.size);
+	memblock_trim_memory(alignment);
+
+	ASSERT_EQ(rgn1->base, r1.base);
+	ASSERT_EQ(rgn1->size, r1.size);
+
+	ASSERT_EQ(rgn2->base, r2_base);
+	ASSERT_EQ(rgn2->size, r2_size);
+
+	ASSERT_EQ(memblock.memory.cnt, 2);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to trim memory when there are two available regions, r1 and
+ * r2.
+ * Region r1 is aligned on both ends and region r2 is aligned at the base
+ * and unaligned at the end:
+ *
+ *                                             Unaligned address
+ *                                                     |
+ *                                                     v
+ * |        +-----------------+        +---------------+   |
+ * |        |        r1       |        |      r2       |   |
+ * +--------+-----------------+--------+---------------+---+
+ *          ^        ^        ^        ^        ^        ^
+ *          |________|________|________|________|________|
+ *                                |
+ *                        Aligned addresses
+ *
+ * Expect that r1 will not be trimmed and r2 will be trimmed at the end.
+ * Expect the counter to not be updated.
+ */
+static int memblock_trim_memory_unaligned_end_check(void)
+{
+	struct memblock_region *rgn1, *rgn2;
+	phys_addr_t alignment = SMP_CACHE_BYTES;
+	phys_addr_t offset = SZ_2;
+	phys_addr_t r2_size;
+
+	rgn1 = &memblock.memory.regions[0];
+	rgn2 = &memblock.memory.regions[1];
+
+	struct region r1 = {
+		.base = alignment,
+		.size = alignment * 2
+	};
+	struct region r2 = {
+		.base = alignment * 4,
+		.size = alignment * 2 - offset
+	};
+
+	PREFIX_PUSH();
+
+	r2_size = r2.size - (alignment - offset);
+
+	reset_memblock_regions();
+	memblock_add(r1.base, r1.size);
+	memblock_add(r2.base, r2.size);
+	memblock_trim_memory(alignment);
+
+	ASSERT_EQ(rgn1->base, r1.base);
+	ASSERT_EQ(rgn1->size, r1.size);
+
+	ASSERT_EQ(rgn2->base, r2.base);
+	ASSERT_EQ(rgn2->size, r2_size);
+
+	ASSERT_EQ(memblock.memory.cnt, 2);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+static int memblock_trim_memory_checks(void)
+{
+	prefix_reset();
+	prefix_push(FUNC_TRIM);
+	test_print("Running %s tests...\n", FUNC_TRIM);
+
+	memblock_trim_memory_aligned_check();
+	memblock_trim_memory_too_small_check();
+	memblock_trim_memory_unaligned_base_check();
+	memblock_trim_memory_unaligned_end_check();
+
+	prefix_pop();
+
+	return 0;
+}
+
 int memblock_basic_checks(void)
 {
 	memblock_initialization_check();
@@ -1731,6 +1953,7 @@ int memblock_basic_checks(void)
 	memblock_remove_checks();
 	memblock_free_checks();
 	memblock_bottom_up_checks();
+	memblock_trim_memory_checks();
 
 	return 0;
 }