From patchwork Wed Jun 7 19:51:43 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13271198
Date: Wed, 7 Jun 2023 19:51:43 +0000
Message-ID: <20230607195143.1473802-1-yosryahmed@google.com>
Subject: [PATCH v2 1/2] mm: zswap: support exclusive loads
From: Yosry Ahmed
To: Andrew Morton, Konrad Rzeszutek Wilk, Seth Jennings, Dan Streetman,
	Vitaly Wool
Cc: Johannes Weiner, Nhat Pham, Domenico Cerasuolo, Yu Zhao,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yosry Ahmed

Commit 71024cb4a0bf ("frontswap: remove frontswap_tmem_exclusive_gets")
removed support for exclusive loads from frontswap as it was not used.
Bring back exclusive loads support to frontswap by adding an "exclusive"
output parameter to frontswap_ops->load. On the zswap side, add a module
parameter to enable/disable exclusive loads, and a config option to
control the boot default value. Refactor zswap entry invalidation in
zswap_frontswap_invalidate_page() into zswap_invalidate_entry() to reuse
it in zswap_frontswap_load() if exclusive loads are enabled.

With exclusive loads, we avoid having two copies of the same page in
memory (compressed & uncompressed) after faulting it in from zswap. On
the other hand, if the page is to be reclaimed again without being
dirtied, it will be re-compressed. Compression is not usually slow, and
a page that was just faulted in is less likely to be reclaimed again
soon.

Suggested-by: Yu Zhao
Signed-off-by: Yosry Ahmed
Acked-by: Johannes Weiner
Reported-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
v1 -> v2:
- Add a module parameter to control whether exclusive loads are enabled
  or not; the config option now controls the boot default value instead.
  Replaced frontswap_ops->exclusive_loads by an output parameter to
  frontswap_ops->load() (Johannes Weiner).
---
 include/linux/frontswap.h |  2 +-
 mm/Kconfig                | 16 ++++++++++++++++
 mm/frontswap.c            | 10 ++++++++--
 mm/zswap.c                | 28 ++++++++++++++++++++--------
 4 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/include/linux/frontswap.h b/include/linux/frontswap.h
index a631bac12220..eaa0ac5f9003 100644
--- a/include/linux/frontswap.h
+++ b/include/linux/frontswap.h
@@ -10,7 +10,7 @@ struct frontswap_ops {
 	void (*init)(unsigned); /* this swap type was just swapon'ed */
 	int (*store)(unsigned, pgoff_t, struct page *); /* store a page */
-	int (*load)(unsigned, pgoff_t, struct page *); /* load a page */
+	int (*load)(unsigned, pgoff_t, struct page *, bool *); /* load a page */
 	void (*invalidate_page)(unsigned, pgoff_t); /* page no longer needed */
 	void (*invalidate_area)(unsigned); /* swap type just swapoff'ed */
 };
diff --git a/mm/Kconfig b/mm/Kconfig
index 7672a22647b4..12f32f8d26bf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -46,6 +46,22 @@ config ZSWAP_DEFAULT_ON
 	  The selection made here can be overridden by using the kernel
 	  command line 'zswap.enabled=' option.
 
+config ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON
+	bool "Invalidate zswap entries when pages are loaded"
+	depends on ZSWAP
+	help
+	  If selected, exclusive loads for zswap will be enabled at boot,
+	  otherwise it will be disabled.
+
+	  If exclusive loads are enabled, when a page is loaded from zswap,
+	  the zswap entry is invalidated at once, as opposed to leaving it
+	  in zswap until the swap entry is freed.
+
+	  This avoids having two copies of the same page in memory
+	  (compressed and uncompressed) after faulting in a page from zswap.
+	  The cost is that if the page was never dirtied and needs to be
+	  swapped out again, it will be re-compressed.
+
 choice
 	prompt "Default compressor"
 	depends on ZSWAP
diff --git a/mm/frontswap.c b/mm/frontswap.c
index 279e55b4ed87..2fb5df3384b8 100644
--- a/mm/frontswap.c
+++ b/mm/frontswap.c
@@ -206,6 +206,7 @@ int __frontswap_load(struct page *page)
 	int type = swp_type(entry);
 	struct swap_info_struct *sis = swap_info[type];
 	pgoff_t offset = swp_offset(entry);
+	bool exclusive = false;
 
 	VM_BUG_ON(!frontswap_ops);
 	VM_BUG_ON(!PageLocked(page));
@@ -215,9 +216,14 @@ int __frontswap_load(struct page *page)
 		return -1;
 
 	/* Try loading from each implementation, until one succeeds. */
-	ret = frontswap_ops->load(type, offset, page);
-	if (ret == 0)
+	ret = frontswap_ops->load(type, offset, page, &exclusive);
+	if (ret == 0) {
 		inc_frontswap_loads();
+		if (exclusive) {
+			SetPageDirty(page);
+			__frontswap_clear(sis, offset);
+		}
+	}
 	return ret;
 }
diff --git a/mm/zswap.c b/mm/zswap.c
index 59da2a415fbb..bfbcedce9c89 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -137,6 +137,10 @@ static bool zswap_non_same_filled_pages_enabled = true;
 module_param_named(non_same_filled_pages_enabled,
 		   zswap_non_same_filled_pages_enabled, bool, 0644);
 
+static bool zswap_exclusive_loads_enabled = IS_ENABLED(
+		CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON);
+module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
+
 /*********************************
 * data structures
 **********************************/
@@ -1329,12 +1333,22 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	goto reject;
 }
 
+static void zswap_invalidate_entry(struct zswap_tree *tree,
+				   struct zswap_entry *entry)
+{
+	/* remove from rbtree */
+	zswap_rb_erase(&tree->rbroot, entry);
+
+	/* drop the initial reference from entry creation */
+	zswap_entry_put(tree, entry);
+}
+
 /*
  * returns 0 if the page was successfully decompressed
  * return -1 on entry not found or error
  */
 static int zswap_frontswap_load(unsigned type, pgoff_t offset,
-				struct page *page)
+				struct page *page, bool *exclusive)
 {
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry;
@@ -1404,6 +1418,10 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 freeentry:
 	spin_lock(&tree->lock);
 	zswap_entry_put(tree, entry);
+	if (!ret && zswap_exclusive_loads_enabled) {
+		zswap_invalidate_entry(tree, entry);
+		*exclusive = true;
+	}
 	spin_unlock(&tree->lock);
 
 	return ret;
@@ -1423,13 +1441,7 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
 		spin_unlock(&tree->lock);
 		return;
 	}
-
-	/* remove from rbtree */
-	zswap_rb_erase(&tree->rbroot, entry);
-
-	/* drop the initial reference from entry creation */
-	zswap_entry_put(tree, entry);
-
+	zswap_invalidate_entry(tree, entry);
 	spin_unlock(&tree->lock);
 }
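[Not part of the patch: a usage sketch. Since the patch registers the knob
with module_param_named(..., 0644), it should be reachable through the
standard module-parameter sysfs path and kernel command line; the exact
paths below follow that convention and are assumptions, not taken from the
patch itself.]

```shell
# Flip exclusive loads at runtime (parameter mode is 0644, root-writable):
echo Y > /sys/module/zswap/parameters/exclusive_loads

# Read back the current value:
cat /sys/module/zswap/parameters/exclusive_loads

# Or set the boot-time value on the kernel command line, overriding the
# CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON default:
#   zswap.exclusive_loads=Y
```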