From patchwork Tue Apr 20 15:00:48 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12214457
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
 Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
 David Hildenbrand, Elena Reshetova, "H. Peter Anvin", Ingo Molnar,
 James Bottomley, "Kirill A. Shutemov", Matthew Wilcox, Matthew Garrett,
 Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
 Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, "Rafael J. Wysocki",
 Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
 Thomas Gleixner, Tycho Andersen, Will Deacon, Yury Norov,
 linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v3 1/2] secretmem/gup: don't check if page is secretmem
 without reference
Date: Tue, 20 Apr 2021 18:00:48 +0300
Message-Id: <20210420150049.14031-2-rppt@kernel.org>
In-Reply-To: <20210420150049.14031-1-rppt@kernel.org>
References: <20210420150049.14031-1-rppt@kernel.org>

From: Mike Rapoport

gup_pte_range() checks whether a page belongs to a secretmem mapping
before grabbing the page reference. To avoid a potential race, move
the check after try_grab_compound_head().
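In outline, the race is one of ordering: between the old
page_is_secretmem() check and try_grab_compound_head() the page could
be freed and reused, so the result of the check may no longer describe
the page that actually gets pinned. A condensed sketch of the corrected
flow (gup_one_pte() is a hypothetical helper, simplified from
gup_pte_range(); the real change is in the diff below):

	/* Sketch only: error paths and the surrounding loop are omitted. */
	static int gup_one_pte(pte_t pte, pte_t *ptep, unsigned int flags)
	{
		struct page *page = pte_page(pte);
		struct page *head;

		/* Pin first: from here the page cannot be freed under us. */
		head = try_grab_compound_head(page, 1, flags);
		if (!head)
			return 0;

		/* Only now is it safe to inspect the page's state. */
		if (unlikely(page_is_secretmem(page))) {
			put_compound_head(head, 1, flags);
			return 0;
		}

		/* Re-check the PTE so a concurrent change is caught. */
		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
			put_compound_head(head, 1, flags);
			return 0;
		}

		return 1;
	}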
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
---
 mm/gup.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c3a17b189064..6515f82b0f32 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2080,13 +2080,15 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
-		if (page_is_secretmem(page))
-			goto pte_unmap;
-
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
 
+		if (unlikely(page_is_secretmem(page))) {
+			put_compound_head(head, 1, flags);
+			goto pte_unmap;
+		}
+
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
 			put_compound_head(head, 1, flags);
 			goto pte_unmap;
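For context, this is the kind of mapping whose pages the fast GUP walk
rejects. A minimal user-space sketch of creating one (illustrative
only; it assumes the memfd_secret() syscall number later assigned on
x86-64 and a kernel built with CONFIG_SECRETMEM):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 447	/* x86-64; assumption, check your headers */
	#endif

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0 || ftruncate(fd, page) < 0)
			return 1;

		/* Pages backing this mapping are dropped from the direct map;
		 * the fast GUP walk must fail on them, which is why
		 * gup_pte_range() carries the page_is_secretmem() check. */
		char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;

		strcpy(p, "secret");
		printf("stored \"%s\" in a secretmem page\n", p);
		munmap(p, page);
		close(fd);
		return 0;
	}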
Wysocki" , Rick Edgecombe , Roman Gushchin , Shakeel Butt , Shuah Khan , Thomas Gleixner , Tycho Andersen , Will Deacon , Yury Norov , linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org, x86@kernel.org, kernel test robot Subject: [PATCH v3 2/2] secretmem: optimize page_is_secretmem() Date: Tue, 20 Apr 2021 18:00:49 +0300 Message-Id: <20210420150049.14031-3-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210420150049.14031-1-rppt@kernel.org> References: <20210420150049.14031-1-rppt@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org From: Mike Rapoport Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops due to commit "mm: introduce memfd_secret system call to create "secret" memory areas". The perf profile of the test indicated that the regression is caused by page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range): 27.76 +2.5 30.23 perf-profile.children.cycles-pp.gup_pgd_range 0.00 +3.2 3.19 ± 2% perf-profile.children.cycles-pp.page_mapping 0.00 +3.7 3.66 ± 2% perf-profile.children.cycles-pp.page_is_secretmem Further analysis showed that the slow down happens because neither page_is_secretmem() nor page_mapping() are not inline and moreover, multiple page flags checks in page_mapping() involve calling compound_head() several times for the same page. Make page_is_secretmem() inline and replace page_mapping() with page flag checks that do not imply page-to-head conversion. Reported-by: kernel test robot Signed-off-by: Mike Rapoport --- include/linux/secretmem.h | 26 +++++++++++++++++++++++++- mm/secretmem.c | 12 +----------- 2 files changed, 26 insertions(+), 12 deletions(-) diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h index 907a6734059c..b842b38cbeb1 100644 --- a/include/linux/secretmem.h +++ b/include/linux/secretmem.h @@ -4,8 +4,32 @@ #ifdef CONFIG_SECRETMEM +extern const struct address_space_operations secretmem_aops; + +static inline bool page_is_secretmem(struct page *page) +{ + struct address_space *mapping; + + /* + * Using page_mapping() is quite slow because of the actual call + * instruction and repeated compound_head(page) inside the + * page_mapping() function. + * We know that secretmem pages are not compound and LRU so we can + * save a couple of cycles here. 
+	 */
+	if (PageCompound(page) || !PageLRU(page))
+		return false;
+
+	mapping = (struct address_space *)
+		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
+
+	if (mapping != page->mapping)
+		return false;
+
+	return page->mapping->a_ops == &secretmem_aops;
+}
+
 bool vma_is_secretmem(struct vm_area_struct *vma);
-bool page_is_secretmem(struct page *page);
 bool secretmem_active(void);
 
 #else

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 3b1ba3991964..0bcd15e1b549 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -151,22 +151,12 @@ static void secretmem_freepage(struct page *page)
 	clear_highpage(page);
 }
 
-static const struct address_space_operations secretmem_aops = {
+const struct address_space_operations secretmem_aops = {
 	.freepage	= secretmem_freepage,
 	.migratepage	= secretmem_migratepage,
 	.isolate_page	= secretmem_isolate_page,
 };
 
-bool page_is_secretmem(struct page *page)
-{
-	struct address_space *mapping = page_mapping(page);
-
-	if (!mapping)
-		return false;
-
-	return mapping->a_ops == &secretmem_aops;
-}
-
 static struct vfsmount *secretmem_mnt;
 
 static struct file *secretmem_file_create(unsigned long flags)
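One detail of the new inline check is worth spelling out: page->mapping
overloads its low bits as type tags (PAGE_MAPPING_ANON and
PAGE_MAPPING_MOVABLE), so comparing the masked value with the raw
pointer cheaply rejects anon and movable pages without ever calling
page_mapping(). A standalone illustration of that masked compare
(user-space types; the constants mirror the kernel's):

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	/* Tag bits kept in the low bits of page->mapping, as in the kernel. */
	#define PAGE_MAPPING_ANON	0x1UL
	#define PAGE_MAPPING_MOVABLE	0x2UL
	#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

	/* True iff the mapping word is a plain address_space pointer, i.e.
	 * no tag bit is set -- the same test page_is_secretmem() performs. */
	static bool mapping_is_plain(uintptr_t mapping_word)
	{
		return (mapping_word & ~PAGE_MAPPING_FLAGS) == mapping_word;
	}

	int main(void)
	{
		assert(mapping_is_plain(0x1000));		       /* file-backed */
		assert(!mapping_is_plain(0x1000 | PAGE_MAPPING_ANON)); /* anon */
		return 0;
	}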