From patchwork Fri Sep 30 14:19:26 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12995514
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 David Hildenbrand, Andrew Morton, Shuah Khan, Hugh Dickins,
 Vlastimil Babka, Peter Xu, Andrea Arcangeli,
 "Matthew Wilcox (Oracle)", Jason Gunthorpe, John Hubbard
Subject: [PATCH v1 2/7] mm/ksm: simplify break_ksm() to not rely on
 VM_FAULT_WRITE
Date: Fri, 30 Sep 2022 16:19:26 +0200
Message-Id: <20220930141931.174362-3-david@redhat.com>
In-Reply-To: <20220930141931.174362-1-david@redhat.com>
References: <20220930141931.174362-1-david@redhat.com>
Now that GUP no longer requires VM_FAULT_WRITE, break_ksm() is the
sole remaining user of VM_FAULT_WRITE. We also want to stop triggering
a fake write fault and instead use FAULT_FLAG_UNSHARE -- similar to
GUP-triggered unsharing when taking a R/O pin on a shared anonymous
page (including KSM pages) -- so let's stop relying on VM_FAULT_WRITE.

Let's rework break_ksm() so it no longer relies on the return value of
handle_mm_fault() to figure out whether COW-breaking was successful.
Simply perform another follow_page() lookup to verify the result. While
this makes break_ksm() slightly less efficient, we can simplify
handle_mm_fault() a little and easily switch to FAULT_FLAG_UNSHARE
without introducing similar KSM-specific behavior for
FAULT_FLAG_UNSHARE.

In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test
unmerge performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048),
this results in a performance degradation of ~4% -- 5% (old: ~5250
MiB/s, new: ~5010 MiB/s). I don't think we particularly care about
that performance drop when unmerging. If it ever turns out to be an
actual performance issue, we can think about a better alternative for
FAULT_FLAG_UNSHARE -- let's just keep it simple for now.
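
To illustrate the follow-up this prepares for (a sketch only, not part
of this patch): once break_ksm() no longer inspects the fault's return
value, the fault could later be switched to FAULT_FLAG_UNSHARE roughly
as follows, assuming the unsharing semantics already used for
GUP-triggered unsharing:

	/*
	 * Illustrative sketch of a possible follow-up conversion: an
	 * unshare fault never reports "COW broken"; correctness relies
	 * on the extra follow_page() re-check in the loop instead.
	 */
	ret = handle_mm_fault(vma, addr,
			      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
			      NULL);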
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Peter Xu
---
 mm/ksm.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 0cd2f4b62334..e8d987fb379e 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -473,26 +473,27 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 	vm_fault_t ret = 0;
 
 	do {
+		bool ksm_page = false;
+
 		cond_resched();
 		page = follow_page(vma, addr,
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
 		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
-			ret = handle_mm_fault(vma, addr,
-					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
-					      NULL);
-		else
-			ret = VM_FAULT_WRITE;
+			ksm_page = true;
 		put_page(page);
-	} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
+
+		if (!ksm_page)
+			return 0;
+		ret = handle_mm_fault(vma, addr,
+				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+				      NULL);
+	} while (!(ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
 	/*
-	 * We must loop because handle_mm_fault() may back out if there's
-	 * any difficulty e.g. if pte accessed bit gets updated concurrently.
-	 *
-	 * VM_FAULT_WRITE is what we have been hoping for: it indicates that
-	 * COW has been broken, even if the vma does not permit VM_WRITE;
-	 * but note that a concurrent fault might break PageKsm for us.
+	 * We must loop until we no longer find a KSM page because
+	 * handle_mm_fault() may back out if there's any difficulty e.g. if
+	 * pte accessed bit gets updated concurrently.
 	 *
 	 * VM_FAULT_SIGBUS could occur if we race with truncation of the
 	 * backing file, which also invalidates anonymous pages: that's
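
For context (not part of this diff), break_ksm() is driven one page at
a time by unmerge_ksm_pages(), which is where the unmerge throughput
measured above is spent; roughly, from mm/ksm.c around this version:

static int unmerge_ksm_pages(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
{
	unsigned long addr;
	int err = 0;

	/* Break COW on each page in the range; stop on fatal signals. */
	for (addr = start; addr < end && !err; addr += PAGE_SIZE) {
		if (fatal_signal_pending(current))
			err = -ERESTARTSYS;
		else
			err = break_ksm(vma, addr);
	}
	return err;
}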