From patchwork Tue Aug 31 13:21:10 2021
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, hannes@cmpxchg.org,
    mhocko@kernel.org, vdavydov.dev@gmail.com,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, vbabka@suse.cz
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com, Qi Zheng
Subject: [PATCH v2 1/2] mm: introduce pmd_install() helper
Date: Tue, 31 Aug 2021 21:21:10 +0800
Message-Id: <20210831132111.85437-2-zhengqi.arch@bytedance.com>
In-Reply-To: <20210831132111.85437-1-zhengqi.arch@bytedance.com>
References: <20210831132111.85437-1-zhengqi.arch@bytedance.com>

Currently we have the same few lines repeated three times in the code.
Deduplicate them by introducing a new pmd_install() helper.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
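For illustration only (not part of the patch): a minimal userspace analog
of the lock-and-recheck pattern that pmd_install() factors out. The names
slot_install() and slot_alloc() are hypothetical, and a pthread mutex
stands in for the pmd lock; the pointer-to-pointer contract mirrors the
helper's — consume the preallocated object on success, leave it for the
caller to free if the race is lost.

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static void *slot;	/* analog of the pmd entry */

/* Analog of pmd_install(): install *prealloc into slot if still empty. */
static void slot_install(void **prealloc)
{
	pthread_mutex_lock(&slot_lock);
	if (slot == NULL) {	/* Has another populated it? */
		slot = *prealloc;
		*prealloc = NULL;	/* ownership transferred */
	}
	pthread_mutex_unlock(&slot_lock);
}

/* Analog of __pte_alloc(): allocate, try to install, free if we lost. */
static int slot_alloc(void)
{
	void *new = malloc(64);
	if (!new)
		return -1;
	slot_install(&new);
	if (new)	/* somebody else won the race */
		free(new);
	return 0;
}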
 include/linux/mm.h | 1 +
 mm/filemap.c       | 11 ++---------
 mm/memory.c        | 34 ++++++++++++++++------------------
 3 files changed, 19 insertions(+), 27 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a3cc83d64564..0af420a7e382 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2463,6 +2463,7 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
 	return ptl;
 }
 
+extern void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
 extern void __init pagecache_init(void);
 extern void __init free_area_init_memoryless_node(int nid);
 extern void free_initmem(void);
diff --git a/mm/filemap.c b/mm/filemap.c
index c90b6e4984c9..923cbba1bf37 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3209,15 +3209,8 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 		}
 	}
 
-	if (pmd_none(*vmf->pmd)) {
-		vmf->ptl = pmd_lock(mm, vmf->pmd);
-		if (likely(pmd_none(*vmf->pmd))) {
-			mm_inc_nr_ptes(mm);
-			pmd_populate(mm, vmf->pmd, vmf->prealloc_pte);
-			vmf->prealloc_pte = NULL;
-		}
-		spin_unlock(vmf->ptl);
-	}
+	if (pmd_none(*vmf->pmd))
+		pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);
 
 	/* See comment in handle_pte_fault() */
 	if (pmd_devmap_trans_unstable(vmf->pmd)) {
diff --git a/mm/memory.c b/mm/memory.c
index 39e7a1495c3c..ef7b1762e996 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -433,9 +433,20 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 }
 
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
+{
+	spinlock_t *ptl = pmd_lock(mm, pmd);
+
+	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+		mm_inc_nr_ptes(mm);
+		pmd_populate(mm, pmd, *pte);
+		*pte = NULL;
+	}
+	spin_unlock(ptl);
+}
+
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 {
-	spinlock_t *ptl;
 	pgtable_t new = pte_alloc_one(mm);
 	if (!new)
 		return -ENOMEM;
@@ -455,13 +466,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	 */
 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
 
-	ptl = pmd_lock(mm, pmd);
-	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
-		mm_inc_nr_ptes(mm);
-		pmd_populate(mm, pmd, new);
-		new = NULL;
-	}
-	spin_unlock(ptl);
+	pmd_install(mm, pmd, &new);
 	if (new)
 		pte_free(mm, new);
 	return 0;
@@ -4027,17 +4032,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 				return ret;
 		}
 
-		if (vmf->prealloc_pte) {
-			vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
-			if (likely(pmd_none(*vmf->pmd))) {
-				mm_inc_nr_ptes(vma->vm_mm);
-				pmd_populate(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
-				vmf->prealloc_pte = NULL;
-			}
-			spin_unlock(vmf->ptl);
-		} else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd))) {
+		if (vmf->prealloc_pte)
+			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
+		else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
 			return VM_FAULT_OOM;
-		}
 	}
 
 	/* See comment in handle_pte_fault() */

From patchwork Tue Aug 31 13:21:11 2021
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, hannes@cmpxchg.org,
    mhocko@kernel.org, vdavydov.dev@gmail.com,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, vbabka@suse.cz
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com, Qi Zheng
Subject: [PATCH v2 2/2] mm: remove redundant smp_wmb()
Date: Tue, 31 Aug 2021 21:21:11 +0800
Message-Id: <20210831132111.85437-3-zhengqi.arch@bytedance.com>
In-Reply-To: <20210831132111.85437-1-zhengqi.arch@bytedance.com>
References: <20210831132111.85437-1-zhengqi.arch@bytedance.com>

The smp_wmb() in __pte_alloc() ensures that all pte setup is visible
before the pte is made visible to other CPUs by being put into page
tables. We only need this barrier when the pte is actually populated,
so move it into pmd_install(). __pte_alloc_kernel(), __p4d_alloc(),
__pud_alloc() and __pmd_alloc() are similar cases. We can likewise
defer the smp_wmb() done for a preallocated pte to the place where the
pmd entry is actually populated with it.

There are two kinds of users of a preallocated pte: one is
filemap & finish_fault(), the other is THP. The former needs no extra
smp_wmb() because pmd_install() already issues one. Fortunately, the
latter needs no extra smp_wmb() either, because there is already an
smp_wmb() before the new pte is populated when THP uses a preallocated
pte to split a huge pmd.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
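For illustration only (not part of the patch): a standalone C11 sketch of
the publish ordering the commit message describes. The writer fully
initializes the object and only then publishes the pointer — release
ordering here plays the role of the smp_wmb() before pmd_populate() —
while the lockless reader relies on the data-dependent load
(memory_order_consume), which most CPUs order for free, Alpha being the
noted exception. All names (struct table, publish_table, read_entry) are
hypothetical.

#include <stdatomic.h>
#include <stddef.h>

struct table { int entries[512]; };

static _Atomic(struct table *) published;	/* analog of the pmd entry */

/* Writer: "pte setup" first, then publish; release ordering stands in
 * for smp_wmb() followed by the pointer store (pmd_populate()). */
void publish_table(struct table *t)
{
	for (int i = 0; i < 512; i++)
		t->entries[i] = 0;
	atomic_store_explicit(&published, t, memory_order_release);
}

/* Lockless reader: the load of entries[] is data-dependent on the
 * pointer load, so consume ordering suffices to see initialized data. */
int read_entry(int idx)
{
	struct table *t = atomic_load_explicit(&published,
					       memory_order_consume);
	return t ? t->entries[idx] : -1;
}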
 mm/memory.c         | 52 +++++++++++++++++++++++-----------------------
 mm/sparse-vmemmap.c |  2 +-
 2 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ef7b1762e996..658d8df9c70f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -439,6 +439,20 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
 		mm_inc_nr_ptes(mm);
+		/*
+		 * Ensure all pte setup (eg. pte page lock and page clearing) are
+		 * visible before the pte is made visible to other CPUs by being
+		 * put into page tables.
+		 *
+		 * The other side of the story is the pointer chasing in the page
+		 * table walking code (when walking the page table without locking;
+		 * ie. most of the time). Fortunately, these data accesses consist
+		 * of a chain of data-dependent loads, meaning most CPUs (alpha
+		 * being the notable exception) will already guarantee loads are
+		 * seen in-order. See the alpha page table accessors for the
+		 * smp_rmb() barriers in page table walking code.
+		 */
+		smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
 		pmd_populate(mm, pmd, *pte);
 		*pte = NULL;
 	}
@@ -451,21 +465,6 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	if (!new)
 		return -ENOMEM;
 
-	/*
-	 * Ensure all pte setup (eg. pte page lock and page clearing) are
-	 * visible before the pte is made visible to other CPUs by being
-	 * put into page tables.
-	 *
-	 * The other side of the story is the pointer chasing in the page
-	 * table walking code (when walking the page table without locking;
-	 * ie. most of the time). Fortunately, these data accesses consist
-	 * of a chain of data-dependent loads, meaning most CPUs (alpha
-	 * being the notable exception) will already guarantee loads are
-	 * seen in-order. See the alpha page table accessors for the
-	 * smp_rmb() barriers in page table walking code.
-	 */
-	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
-
 	pmd_install(mm, pmd, &new);
 	if (new)
 		pte_free(mm, new);
@@ -478,10 +477,9 @@ int __pte_alloc_kernel(pmd_t *pmd)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&init_mm.page_table_lock);
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+		smp_wmb(); /* See comment in pmd_install() */
 		pmd_populate_kernel(&init_mm, pmd, new);
 		new = NULL;
 	}
@@ -3857,7 +3855,6 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	ret = vma->vm_ops->fault(vmf);
@@ -3919,7 +3916,6 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
@@ -4144,7 +4140,6 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
-		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
 	return vmf->vma->vm_ops->map_pages(vmf, start_pgoff, end_pgoff);
@@ -4819,13 +4814,13 @@ int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&mm->page_table_lock);
-	if (pgd_present(*pgd))		/* Another has populated it */
+	if (pgd_present(*pgd)) {	/* Another has populated it */
 		p4d_free(mm, new);
-	else
+	} else {
+		smp_wmb(); /* See comment in pmd_install() */
 		pgd_populate(mm, pgd, new);
+	}
 	spin_unlock(&mm->page_table_lock);
 	return 0;
 }
@@ -4842,11 +4837,10 @@ int __pud_alloc(struct mm_struct *mm, p4d_t *p4d, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	spin_lock(&mm->page_table_lock);
 	if (!p4d_present(*p4d)) {
 		mm_inc_nr_puds(mm);
+		smp_wmb(); /* See comment in pmd_install() */
 		p4d_populate(mm, p4d, new);
 	} else	/* Another has populated it */
 		pud_free(mm, new);
@@ -4867,14 +4861,14 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 	if (!new)
 		return -ENOMEM;
 
-	smp_wmb(); /* See comment in __pte_alloc */
-
 	ptl = pud_lock(mm, pud);
 	if (!pud_present(*pud)) {
 		mm_inc_nr_pmds(mm);
+		smp_wmb(); /* See comment in pmd_install() */
 		pud_populate(mm, pud, new);
-	} else	/* Another has populated it */
+	} else {	/* Another has populated it */
 		pmd_free(mm, new);
+	}
 	spin_unlock(ptl);
 	return 0;
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index bdce883f9286..db6df27c852a 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -76,7 +76,7 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 
-	/* Make pte visible before pmd. See comment in __pte_alloc(). */
+	/* Make pte visible before pmd. See comment in pmd_install(). */
 	smp_wmb();
 	pmd_populate_kernel(&init_mm, pmd, pgtable);