From patchwork Mon Apr 7 23:42:12 2025
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 14042002
From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	viro@zeniv.linux.org.uk, baohua@kernel.org, osalvador@suse.de,
	lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu,
	pavel@kernel.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org,
	linux-pm@vger.kernel.org
Subject: [RFC PATCH 11/14] memcg: swap: only charge physical swap slots
Date: Mon, 7 Apr 2025 16:42:12 -0700
Message-ID: <20250407234223.1059191-12-nphamcs@gmail.com>
In-Reply-To: <20250407234223.1059191-1-nphamcs@gmail.com>
References: <20250407234223.1059191-1-nphamcs@gmail.com>
MIME-Version: 1.0
Now that zswap and the zero-filled swap page optimization no longer take up
any physical swap space, we should not charge towards the swap usage and
limits of the memcg in these cases. We only record the memcg id on virtual
swap slot allocation, and defer physical swap charging (i.e. towards
memory.swap.current) until the virtual swap slot is backed by an actual
physical swap slot (on zswap store failure fallback or zswap writeback).
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/swap.h |  17 ++++++++
 mm/memcontrol.c      | 102 ++++++++++++++++++++++++++++++++++---------
 mm/vswap.c           |  43 ++++++++----------
 3 files changed, 118 insertions(+), 44 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 073835335667..98cdfe0c1da7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -679,6 +679,23 @@ static inline void folio_throttle_swaprate(struct folio *folio, gfp_t gfp)
 #if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP)
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry);
+
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry);
+static inline void mem_cgroup_record_swap(struct folio *folio,
+		swp_entry_t entry)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_record_swap(folio, entry);
+}
+
+void __mem_cgroup_unrecord_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_unrecord_swap(swp_entry_t entry,
+		unsigned int nr_pages)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_unrecord_swap(entry, nr_pages);
+}
+
 int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry);
 static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 		swp_entry_t entry)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 126b2d0e6aaa..c6bee12f2016 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5020,6 +5020,46 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	css_put(&memcg->css);
 }
 
+/**
+ * __mem_cgroup_record_swap - record the folio's cgroup for the swap entries.
+ * @folio: folio being swapped out.
+ * @entry: the first swap entry in the range.
+ *
+ * In the virtual swap implementation, we only record the folio's cgroup
+ * for the virtual swap slots on their allocation. We will only charge
+ * physical swap slots towards the cgroup's swap usage, i.e when physical swap
+ * slots are allocated for zswap writeback or fallback from zswap store
+ * failure.
+ */
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry)
+{
+	unsigned int nr_pages = folio_nr_pages(folio);
+	struct mem_cgroup *memcg;
+
+	memcg = folio_memcg(folio);
+
+	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
+	if (!memcg)
+		return;
+
+	memcg = mem_cgroup_id_get_online(memcg);
+	if (nr_pages > 1)
+		mem_cgroup_id_get_many(memcg, nr_pages - 1);
+	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
+}
+
+void __mem_cgroup_unrecord_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	unsigned short id = swap_cgroup_clear(entry, nr_pages);
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_id(id);
+	if (memcg)
+		mem_cgroup_id_put_many(memcg, nr_pages);
+	rcu_read_unlock();
+}
+
 /**
  * __mem_cgroup_try_charge_swap - try charging swap space for a folio
  * @folio: folio being added to swap
@@ -5038,34 +5078,47 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	if (do_memsw_account())
 		return 0;
 
-	memcg = folio_memcg(folio);
+	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP)) {
+		/*
+		 * In the virtual swap implementation, we already record the cgroup
+		 * on virtual swap allocation. Note that the virtual swap slot holds
+		 * a reference to memcg, so this lookup should be safe.
+		 */
+		rcu_read_lock();
+		memcg = mem_cgroup_from_id(lookup_swap_cgroup_id(entry));
+		rcu_read_unlock();
+	} else {
+		memcg = folio_memcg(folio);
 
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return 0;
+		VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
+		if (!memcg)
+			return 0;
 
-	if (!entry.val) {
-		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
-	}
+		if (!entry.val) {
+			memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
+			return 0;
+		}
 
-	memcg = mem_cgroup_id_get_online(memcg);
+		memcg = mem_cgroup_id_get_online(memcg);
+	}
 
 	if (!mem_cgroup_is_root(memcg) &&
 	    !page_counter_try_charge(&memcg->swap, nr_pages, &counter)) {
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		mem_cgroup_id_put(memcg);
+		if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+			mem_cgroup_id_put(memcg);
 		return -ENOMEM;
 	}
 
-	/* Get references for the tail pages, too */
-	if (nr_pages > 1)
-		mem_cgroup_id_get_many(memcg, nr_pages - 1);
+	if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP)) {
+		/* Get references for the tail pages, too */
+		if (nr_pages > 1)
+			mem_cgroup_id_get_many(memcg, nr_pages - 1);
+		swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
+	}
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
-	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
-
 	return 0;
 }
 
@@ -5079,7 +5132,11 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	id = swap_cgroup_clear(entry, nr_pages);
+	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+		id = lookup_swap_cgroup_id(entry);
+	else
+		id = swap_cgroup_clear(entry, nr_pages);
+
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
 	if (memcg) {
@@ -5090,7 +5147,8 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 			page_counter_uncharge(&memcg->swap, nr_pages);
 		}
 		mod_memcg_state(memcg, MEMCG_SWAP, -nr_pages);
-		mem_cgroup_id_put_many(memcg, nr_pages);
+		if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+			mem_cgroup_id_put_many(memcg, nr_pages);
 	}
 	rcu_read_unlock();
 }
@@ -5099,7 +5157,7 @@ static bool mem_cgroup_may_zswap(struct mem_cgroup *original_memcg);
 
 long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 {
-	long nr_swap_pages, nr_zswap_pages = 0;
+	long nr_swap_pages;
 
 	/*
 	 * If swap is virtualized and zswap is enabled, we can still use zswap even
@@ -5108,10 +5166,14 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP) && zswap_is_enabled() &&
 	    (mem_cgroup_disabled() || do_memsw_account() ||
 	     mem_cgroup_may_zswap(memcg))) {
-		nr_zswap_pages = PAGE_COUNTER_MAX;
+		/*
+		 * No need to check swap cgroup limits, since zswap is not charged
+		 * towards swap consumption.
+		 */
+		return PAGE_COUNTER_MAX;
 	}
 
-	nr_swap_pages = max_t(long, nr_zswap_pages, get_nr_swap_pages());
+	nr_swap_pages = get_nr_swap_pages();
 	if (mem_cgroup_disabled() || do_memsw_account())
 		return nr_swap_pages;
 	for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg))
diff --git a/mm/vswap.c b/mm/vswap.c
index 3146c231ca69..fcc7807ba89b 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -349,6 +349,7 @@ static inline void release_backing(swp_entry_t entry, int nr)
 			swap_slot_free_nr(slot, nr);
 			swap_slot_put_swap_info(si);
 		}
+		mem_cgroup_uncharge_swap(entry, nr);
 	}
 }
 
@@ -367,7 +368,7 @@ static void vswap_free(swp_entry_t entry)
 	virt_clear_shadow_from_swap_cache(entry);
 	release_backing(entry, 1);
-	mem_cgroup_uncharge_swap(entry, 1);
+	mem_cgroup_unrecord_swap(entry, 1);
 
 	/* erase forward mapping and release the virtual slot for reallocation */
 	release_vswap_slot(entry.val);
 	kfree_rcu(desc, rcu);
@@ -392,27 +393,13 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	struct swp_desc *desc;
-	int i, nr = folio_nr_pages(folio);
+	int nr = folio_nr_pages(folio);
 
 	entry = vswap_alloc(nr);
 	if (!entry.val)
 		return entry;
 
-	/*
-	 * XXX: for now, we charge towards the memory cgroup's swap limit on virtual
-	 * swap slots allocation. This will be changed soon - we will only charge on
-	 * physical swap slots allocation.
-	 */
-	if (mem_cgroup_try_charge_swap(folio, entry)) {
-		for (i = 0; i < nr; i++) {
-			vswap_free(entry);
-			entry.val++;
-		}
-		atomic_add(nr, &vswap_alloc_reject);
-		entry.val = 0;
-		return entry;
-	}
-
+	mem_cgroup_record_swap(folio, entry);
 	XA_STATE(xas, &vswap_map, entry.val);
 
 	rcu_read_lock();
@@ -454,6 +441,9 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 	if (!slot.val)
 		return slot;
 
+	if (mem_cgroup_try_charge_swap(folio, entry))
+		goto free_phys_swap;
+
 	/* establish the vrtual <-> physical swap slots linkages. */
 	for (i = 0; i < nr; i++) {
 		err = xa_insert(&vswap_rmap, slot.val + i,
@@ -462,13 +452,7 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 		if (err) {
 			while (--i >= 0)
 				xa_erase(&vswap_rmap, slot.val + i);
-			/*
-			 * We have not updated the backing type of the virtual swap slot.
-			 * Simply free up the physical swap slots here!
-			 */
-			swap_slot_free_nr(slot, nr);
-			slot.val = 0;
-			return slot;
+			goto uncharge;
 		}
 	}
 
@@ -505,6 +489,17 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 	}
 	rcu_read_unlock();
 	return slot;
+
+uncharge:
+	mem_cgroup_uncharge_swap(entry, nr);
+free_phys_swap:
+	/*
+	 * We have not updated the backing type of the virtual swap slot.
+	 * Simply free up the physical swap slots here!
+	 */
+	swap_slot_free_nr(slot, nr);
+	slot.val = 0;
+	return slot;
 }
 
 /**