From patchwork Fri Mar 28 15:31:31 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 14032177
Date: Fri, 28 Mar 2025 15:31:31 +0000
In-Reply-To: <20250328153133.3504118-1-tabba@google.com>
References: <20250328153133.3504118-1-tabba@google.com>
X-Mailer: git-send-email 2.49.0.472.ge94155a9ec-goog
Message-ID: <20250328153133.3504118-6-tabba@google.com>
Subject: [PATCH v7 5/7] KVM: guest_memfd: Restore folio state after final
 folio_put()
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 tabba@google.com
Before transitioning a guest_memfd folio to unshared, thereby disallowing
access by the host and allowing the hypervisor to transition its view of
the guest page to private, we need to ensure that the host doesn't have
any references to the folio.

This patch uses the guest_memfd folio type to register a callback that
informs the guest_memfd subsystem when the last reference to a folio is
dropped, and therefore that the host has no remaining references.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
The function kvm_gmem_slot_register_callback() isn't used in this
series. It will be used later in code that performs unsharing of memory.
I have tested it with pKVM, based on downstream code [*]. It's included
in this series since it demonstrates the plan for handling unsharing of
private folios.

[*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v7-pkvm
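Since kvm_gmem_slot_register_callback() has no caller in this series, here
is a minimal sketch of how a later unsharing path might drive it. Note that
example_unshare_gfn() is hypothetical and the exact call order is an
assumption for illustration, not part of this patch:

/*
 * Hypothetical unsharing path (not in this series): mark the gfn as no
 * longer host-shared, then ask gmem to notify us once the host has
 * dropped all of its references to the backing folio.
 */
static int example_unshare_gfn(struct kvm_memory_slot *slot, gfn_t gfn)
{
	int r;

	/* Stop treating the gfn as shared with the host. */
	r = kvm_gmem_slot_clear_shared(slot, gfn, gfn + 1);
	if (r)
		return r;

	/*
	 * -EAGAIN means the host still holds references; the folio stays
	 * in the transient NONE_SHARED state until the final folio_put(),
	 * when kvm_gmem_handle_folio_put() completes the transition.
	 */
	return kvm_gmem_slot_register_callback(slot, gfn);
}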
---
 include/linux/kvm_host.h |   6 ++
 virt/kvm/guest_memfd.c   | 143 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 148 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bf82faf16c53..d9d9d72d8beb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2607,6 +2607,7 @@ int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start,
 int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start,
 			       gfn_t end);
 bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn);
+int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_gmem_handle_folio_put(struct folio *folio);
 #else
 static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
@@ -2638,6 +2639,11 @@ static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot,
 	WARN_ON_ONCE(1);
 	return false;
 }
+static inline int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
 #endif /* CONFIG_KVM_GMEM_SHARED_MEM */
 
 #endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 3b4d724084a8..ce19bd6c2031 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -392,6 +392,27 @@ enum folio_shareability {
 	KVM_GMEM_NONE_SHARED = 0b11, /* Not shared, transient state. */
 };
 
+/*
+ * Unregisters the __folio_put() callback from the folio.
+ *
+ * Restores a folio's refcount after all pending references have been released,
+ * and removes the folio type, thereby removing the callback. Now the folio can
+ * be freed normally once all actual references have been dropped.
+ *
+ * Must be called with the folio locked and the offsets_lock write lock held.
+ */
+static void kvm_gmem_restore_pending_folio(struct folio *folio, struct inode *inode)
+{
+	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
+
+	lockdep_assert_held_write(offsets_lock);
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(!folio_test_guestmem(folio));
+
+	__folio_clear_guestmem(folio);
+	folio_ref_add(folio, folio_nr_pages(folio));
+}
+
 static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
 {
 	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
@@ -400,6 +421,24 @@ static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
 
 	lockdep_assert_held_write(offsets_lock);
 
+	/*
+	 * If the folio is NONE_SHARED, it indicates that it is transitioning to
+	 * private (GUEST_SHARED). Transition it to shared (ALL_SHARED)
+	 * immediately, and remove the callback.
+	 */
+	if (xa_to_value(xa_load(shared_offsets, index)) == KVM_GMEM_NONE_SHARED) {
+		struct folio *folio = filemap_lock_folio(inode->i_mapping, index);
+
+		if (WARN_ON_ONCE(IS_ERR(folio)))
+			return PTR_ERR(folio);
+
+		if (folio_test_guestmem(folio))
+			kvm_gmem_restore_pending_folio(folio, inode);
+
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
 	return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL));
 }
 
@@ -503,9 +542,111 @@ static int kvm_gmem_offset_range_clear_shared(struct inode *inode,
 	return r;
 }
 
+/*
+ * Registers a callback to __folio_put(), so that gmem knows that the host does
+ * not have any references to the folio. The callback itself is registered by
+ * setting the folio type to guestmem.
+ *
+ * Returns 0 if a callback was registered or already has been registered, or
+ * -EAGAIN if the host has references, indicating a callback wasn't registered.
+ *
+ * Must be called with the folio locked and the offsets_lock write lock held.
+ */
+static int kvm_gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t index)
+{
+	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
+	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED);
+	int refcount;
+	int r = 0;
+
+	lockdep_assert_held_write(offsets_lock);
+	WARN_ON_ONCE(!folio_test_locked(folio));
+
+	if (folio_test_guestmem(folio))
+		return 0;
+
+	if (folio_mapped(folio))
+		return -EAGAIN;
+
+	refcount = folio_ref_count(folio);
+	if (!folio_ref_freeze(folio, refcount))
+		return -EAGAIN;
+
+	/*
+	 * Register callback by setting the folio type and subtracting gmem's
+	 * references for it to trigger once outstanding references are dropped.
+	 */
+	if (refcount > 1) {
+		__folio_set_guestmem(folio);
+		refcount -= folio_nr_pages(folio);
+	} else {
+		/* No outstanding references, transition it to guest shared. */
+		r = WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL)));
+	}
+
+	folio_ref_unfreeze(folio, refcount);
+	return r;
+}
+
+int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+	struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
+	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
+	struct folio *folio;
+	int r;
+
+	write_lock(offsets_lock);
+
+	folio = filemap_lock_folio(inode->i_mapping, pgoff);
+	if (WARN_ON_ONCE(IS_ERR(folio))) {
+		write_unlock(offsets_lock);
+		return PTR_ERR(folio);
+	}
+
+	r = kvm_gmem_register_callback(folio, inode, pgoff);
+
+	folio_unlock(folio);
+	folio_put(folio);
+	write_unlock(offsets_lock);
+
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_slot_register_callback);
+
+/*
+ * Callback function for __folio_put(), i.e., called once all references by the
+ * host to the folio have been dropped. This allows gmem to transition the
+ * state of the folio to shared with the guest, and allows the hypervisor to
+ * continue transitioning its state to private, since the host cannot attempt
+ * to access it anymore.
+ */
 void kvm_gmem_handle_folio_put(struct folio *folio)
 {
-	WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
+	struct address_space *mapping;
+	struct xarray *shared_offsets;
+	rwlock_t *offsets_lock;
+	struct inode *inode;
+	pgoff_t index;
+	void *xval;
+
+	mapping = folio->mapping;
+	if (WARN_ON_ONCE(!mapping))
+		return;
+
+	inode = mapping->host;
+	index = folio->index;
+	shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+	offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
+	xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
+
+	write_lock(offsets_lock);
+	folio_lock(folio);
+	kvm_gmem_restore_pending_folio(folio, inode);
+	folio_unlock(folio);
+	WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
+	write_unlock(offsets_lock);
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
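
For context, a rough sketch of the put-path dispatch that makes the callback
fire. The actual hook-up of the guestmem folio type into the mm core is done
elsewhere in this series, so the shape below is an assumption for
illustration only, built from the existing folio_put_testzero() and
__folio_put() primitives:

/*
 * Illustrative only: once the folio type is set to guestmem and gmem's
 * own references have been subtracted by kvm_gmem_register_callback(),
 * the host's final put drops the refcount to zero, and the put path can
 * route the folio to kvm_gmem_handle_folio_put() instead of freeing it.
 * The handler then restores gmem's references via
 * kvm_gmem_restore_pending_folio(), so the folio is later freed normally
 * once gmem itself drops them.
 */
static inline void example_folio_put(struct folio *folio)
{
	if (folio_put_testzero(folio)) {
		if (folio_test_guestmem(folio))
			kvm_gmem_handle_folio_put(folio);
		else
			__folio_put(folio);
	}
}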