From patchwork Mon Nov 15 08:01:03 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 12618865
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Alistair Popple, Andrew Morton, Mike Kravetz,
 Mike Rapoport, Matthew Wilcox, Jerome Glisse, Axel Rasmussen,
 "Kirill A . Shutemov", David Hildenbrand, Andrea Arcangeli,
 Hugh Dickins
Subject: [PATCH v6 10/23] mm/shmem: Handle uffd-wp during fork()
Date: Mon, 15 Nov 2021 16:01:03 +0800
Message-Id: <20211115080103.74640-1-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20211115075522.73795-1-peterx@redhat.com>
References: <20211115075522.73795-1-peterx@redhat.com>

Normally we skip copying the pgtable during fork() for VM_SHARED shmem, but
we can't skip it anymore if uffd-wp is enabled on the dst vma.  This should
only happen when the src uffd has UFFD_FEATURE_EVENT_FORK enabled on an
uffd-wp shmem vma, so that VM_UFFD_WP will be propagated onto the dst vma
too; then we should copy the pgtables with the uffd-wp bit and pte markers,
because this information would be lost otherwise.

Since the condition checks for deciding "whether a vma needs to copy the
pgtable during fork()" will become even more complicated, introduce a
helper vma_needs_copy() for it, so everything will be clearer.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/memory.c | 49 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 41 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fef6a91c5dfb..cc625c616645 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -859,6 +859,14 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		if (try_restore_exclusive_pte(src_pte, src_vma, addr))
 			return -EBUSY;
 		return -ENOENT;
+	} else if (is_pte_marker_entry(entry)) {
+		/*
+		 * We should only be copying this pgtable because dst_vma
+		 * has uffd-wp enabled; do a sanity check on that.
+		 */
+		WARN_ON_ONCE(!userfaultfd_wp(dst_vma));
+		set_pte_at(dst_mm, addr, dst_pte, pte);
+		return 0;
 	}
 	if (!userfaultfd_wp(dst_vma))
 		pte = pte_swp_clear_uffd_wp(pte);
@@ -1227,6 +1235,38 @@ copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	return 0;
 }
 
+/*
+ * Return true if the vma needs to copy the pgtable during this fork().
+ * Return false when we can speed up fork() by allowing lazy page faults,
+ * until the child accesses the memory range.
+ */
+bool
+vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+{
+	/*
+	 * Always copy pgtables when dst_vma has uffd-wp enabled, even if
+	 * it's file-backed (e.g. shmem): with uffd-wp enabled the pgtable
+	 * carries uffd-wp protection information that can't be retrieved
+	 * from the page cache, so skipping the copy would lose it.
+	 */
+	if (userfaultfd_wp(dst_vma))
+		return true;
+
+	if (src_vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP))
+		return true;
+
+	if (src_vma->anon_vma)
+		return true;
+
+	/*
+	 * Don't copy ptes where a page fault will fill them correctly.  Fork
+	 * becomes much lighter when there are big shared or private readonly
+	 * mappings.  The tradeoff is that copy_page_range is more efficient
+	 * than faulting.
+	 */
+	return false;
+}
+
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 {
@@ -1240,14 +1280,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 	bool is_cow;
 	int ret;
 
-	/*
-	 * Don't copy ptes where a page fault will fill them correctly.
-	 * Fork becomes much lighter when there are big shared or private
-	 * readonly mappings. The tradeoff is that copy_page_range is more
-	 * efficient than faulting.
-	 */
-	if (!(src_vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
-	    !src_vma->anon_vma)
+	if (!vma_needs_copy(dst_vma, src_vma))
 		return 0;
 
 	if (is_vm_hugetlb_page(src_vma))
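
To see the path this patch exercises from userspace, here is a minimal
sketch: it arms uffd-wp on a MAP_SHARED shmem page with
UFFD_FEATURE_EVENT_FORK requested, then fork()s, at which point
vma_needs_copy() returns true for the child vma and the pgtable (uffd-wp
bits and pte markers included) is copied instead of skipped.  The program
is illustrative only, not part of the patch: it assumes a kernel with
shmem uffd-wp support (i.e. this series applied) and permission to use
userfaultfd (see /proc/sys/vm/unprivileged_userfaultfd); names such as
"buf" are arbitrary.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t len = sysconf(_SC_PAGESIZE);

	/* VM_SHARED shmem mapping (MAP_ANONYMOUS | MAP_SHARED). */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) { perror("mmap"); return 1; }
	buf[0] = 1;	/* fault the page in so a pte exists to protect */

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	if (uffd < 0) { perror("userfaultfd"); return 1; }

	/* Ask for fork events so VM_UFFD_WP is propagated to the child. */
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_EVENT_FORK,
	};
	if (ioctl(uffd, UFFDIO_API, &api)) { perror("UFFDIO_API"); return 1; }

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)buf, .len = len },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) { perror("UFFDIO_REGISTER"); return 1; }

	/* Write-protect the page: the uffd-wp bit now lives in the pgtable. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)buf, .len = len },
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) { perror("UFFDIO_WRITEPROTECT"); return 1; }

	/* fork() now copies the pgtable rather than leaving it lazy. */
	pid_t pid = fork();
	if (pid == 0)
		_exit(0);

	/* Parent: the fork event carries a new uffd for the child. */
	struct uffd_msg msg;
	if (read(uffd, &msg, sizeof(msg)) == sizeof(msg) &&
	    msg.event == UFFD_EVENT_FORK)
		printf("UFFD_EVENT_FORK: child uffd = %d\n",
		       (int)msg.arg.fork.ufd);
	waitpid(pid, NULL, 0);
	return 0;
}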