From patchwork Wed May 8 10:22:22 2024
X-Patchwork-Submitter: Usama Arif <usamaarif642@gmail.com>
X-Patchwork-Id: 13658456
From: Usama Arif <usamaarif642@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com,
    chengming.zhou@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, Usama Arif <usamaarif642@gmail.com>
Subject: [PATCH v4] selftests: cgroup: add tests to verify the zswap writeback path
Date: Wed, 8 May 2024 11:22:22 +0100
Message-ID: <20240508102222.4001882-1-usamaarif642@gmail.com>

Attempt writeback with the below steps and check using memory.stat.zswpwb
if zswap writeback occurred:
1. Allocate memory.
2. Reclaim memory equal to the amount that was allocated in step 1.
   This will move it into zswap.
3. Save current zswap usage.
4. Move the memory allocated in step 1 back in from zswap.
5. Set zswap.max to half the amount that was recorded in step 3.
6. Attempt to reclaim memory equal to the amount that was allocated,
   this will either trigger writeback if it's enabled, or reclamation
   will fail if writeback is disabled as there isn't enough zswap space.

Suggested-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Acked-by: Yosry Ahmed <yosryahmed@google.com>
---
v3 -> v4 (Yosry Ahmed):
- Use a fixed page-sized buffer for filling and checking memory when
  attempting writeback
- Use cg_write_numeric instead of cg_write for memory.reclaim
- Improved error checking for zswpwb_before and zswpwb_after

v2 -> v3:
- Remove memory.max (Yosry Ahmed)
- change from random allocation of memory to increasing and 0 allocation
  (Yosry Ahmed)
- stricter error checking when writeback is disabled (Yosry Ahmed)
- Ensure zswpwb_before == 0 (Yosry Ahmed)
- Variable definition reorder, function name change (Yosry Ahmed)

v1 -> v2:
- Change method of causing writeback from limit zswap to memory reclaim.
  (Further described in commit message) (Yosry Ahmed)
- Document why using random memory (Nhat Pham)
---
 tools/testing/selftests/cgroup/test_zswap.c | 130 +++++++++++++++++++-
 1 file changed, 129 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index f0e488ed90d8..beab9b979957 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,7 +50,7 @@ static int get_zswap_stored_pages(size_t *value)
         return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_cg_wb_count(const char *cg)
+static long get_cg_wb_count(const char *cg)
 {
         return cg_read_key_long(cg, "memory.stat", "zswpwb");
 }
@@ -248,6 +248,132 @@ static int test_zswapin(const char *root)
         return ret;
 }
 
+/*
+ * Attempt writeback with the following steps:
+ * 1. Allocate memory.
+ * 2. Reclaim memory equal to the amount that was allocated in step 1.
+      This will move it into zswap.
+ * 3. Save current zswap usage.
+ * 4. Move the memory allocated in step 1 back in from zswap.
+ * 5. Set zswap.max to half the amount that was recorded in step 3.
+ * 6. Attempt to reclaim memory equal to the amount that was allocated,
+      this will either trigger writeback if it's enabled, or reclamation
+      will fail if writeback is disabled as there isn't enough zswap space.
+ */
+static int attempt_writeback(const char *cgroup, void *arg)
+{
+        long pagesize = sysconf(_SC_PAGESIZE);
+        char *test_group = arg;
+        size_t memsize = MB(4);
+        char buf[pagesize];
+        long zswap_usage;
+        bool wb_enabled;
+        int ret = -1;
+        char *mem;
+
+        wb_enabled = cg_read_long(test_group, "memory.zswap.writeback");
+        mem = (char *)malloc(memsize);
+        if (!mem)
+                return ret;
+
+        /*
+         * Fill half of each page with increasing data, and keep other
+         * half empty, this will result in data that is still compressible
+         * and ends up in zswap, with material zswap usage.
+         */
+        for (int i = 0; i < pagesize; i++)
+                buf[i] = i < pagesize/2 ? (char) i : 0;
+
+        for (int i = 0; i < memsize; i += pagesize)
+                memcpy(&mem[i], buf, pagesize);
+
+        /* Try and reclaim allocated memory */
+        if (cg_write_numeric(test_group, "memory.reclaim", memsize)) {
+                ksft_print_msg("Failed to reclaim all of the requested memory\n");
+                goto out;
+        }
+
+        zswap_usage = cg_read_long(test_group, "memory.zswap.current");
+
+        /* zswpin */
+        for (int i = 0; i < memsize; i += pagesize) {
+                if (memcmp(&mem[i], buf, pagesize)) {
+                        ksft_print_msg("invalid memory\n");
+                        goto out;
+                }
+        }
+
+        if (cg_write_numeric(test_group, "memory.zswap.max", zswap_usage/2))
+                goto out;
+
+        /*
+         * If writeback is enabled, trying to reclaim memory now will trigger a
+         * writeback as zswap.max is half of what was needed when reclaim ran the first time.
+         * If writeback is disabled, memory reclaim will fail as zswap is limited and
+         * it can't writeback to swap.
+         */
+        ret = cg_write(test_group, "memory.reclaim", "4M");
+        if (!wb_enabled)
+                ret = (ret == -EAGAIN) ? 0 : -1;
+
+out:
+        free(mem);
+        return ret;
+}
+
+/* Test to verify the zswap writeback path */
+static int test_zswap_writeback(const char *root, bool wb)
+{
+        long zswpwb_before, zswpwb_after;
+        int ret = KSFT_FAIL;
+        char *test_group;
+
+        test_group = cg_name(root, "zswap_writeback_test");
+        if (!test_group)
+                goto out;
+        if (cg_create(test_group))
+                goto out;
+        if (cg_write(test_group, "memory.zswap.writeback", wb ? "1" : "0"))
+                goto out;
+
+        zswpwb_before = get_cg_wb_count(test_group);
+        if (zswpwb_before != 0) {
+                ksft_print_msg("zswpwb_before = %ld instead of 0\n", zswpwb_before);
+                goto out;
+        }
+
+        if (cg_run(test_group, attempt_writeback, (void *) test_group))
+                goto out;
+
+        /* Verify that zswap writeback occurred only if writeback was enabled */
+        zswpwb_after = get_cg_wb_count(test_group);
+        if (zswpwb_after < 0)
+                goto out;
+
+        if (wb != !!zswpwb_after) {
+                ksft_print_msg("zswpwb_after is %ld while wb is %s",
+                                zswpwb_after, wb ? "enabled" : "disabled");
+                goto out;
+        }
+
+        ret = KSFT_PASS;
+
+out:
+        cg_destroy(test_group);
+        free(test_group);
+        return ret;
+}
+
+static int test_zswap_writeback_enabled(const char *root)
+{
+        return test_zswap_writeback(root, true);
+}
+
+static int test_zswap_writeback_disabled(const char *root)
+{
+        return test_zswap_writeback(root, false);
+}
+
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
  * limit in zswap, writeback should affect only the zswapped pages of that
@@ -425,6 +551,8 @@ struct zswap_test {
         T(test_zswap_usage),
         T(test_swapin_nozswap),
         T(test_zswapin),
+        T(test_zswap_writeback_enabled),
+        T(test_zswap_writeback_disabled),
         T(test_no_kmem_bypass),
         T(test_no_invasive_cgroup_shrink),
 };