From patchwork Sat Apr 25 05:58:33 2020
X-Patchwork-Submitter: Li Xinhai
X-Patchwork-Id: 11509635
From: Li Xinhai <lixinhai.lxh@gmail.com>
To: linux-mm@kvack.org
Cc: jgg@mellanox.com, punit.agrawal@arm.com, longpeng2@huawei.com, Mike Kravetz, Andrew Morton
Subject: [PATCH v2] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset
Date: Sat, 25 Apr 2020 05:58:33 +0000
Message-Id: <1587794313-16849-1-git-send-email-lixinhai.lxh@gmail.com>
X-Mailer: git-send-email 1.8.3.1

When huge_pte_offset() is called, the parameter sz can only be PUD_SIZE or
PMD_SIZE. If sz is PUD_SIZE and the code reaches the pud, then *pud must be
none, a normal hugetlb entry, or a non-present (migration or hwpoisoned)
hugetlb entry, so we can return pud directly. When sz is PMD_SIZE, the pud
must be none or present, and if the code reaches the pmd, we can return pmd
directly.

So, after this patch, the code is simplified by checking the parameter sz
first, which avoids the unnecessary checks in the current code. The
semantics of the existing code are maintained.

More details about the relevant commits:

Commit 9b19df292c66 ("mm/hugetlb.c: make huge_pte_offset() consistent and
document behaviour") changed the code path for pud and pmd handling; see
the comments below on why this patch intends to change it.

	...
	pud = pud_offset(p4d, addr);
	if (sz != PUD_SIZE && pud_none(*pud))		// [1]
		return NULL;
	/* hugepage or swap? */
	if (pud_huge(*pud) || !pud_present(*pud))	// [2]
		return (pte_t *)pud;

	pmd = pmd_offset(pud, addr);
	if (sz != PMD_SIZE && pmd_none(*pmd))		// [3]
		return NULL;
	/* hugepage or swap? */
	if (pmd_huge(*pmd) || !pmd_present(*pmd))	// [4]
		return (pte_t *)pmd;

	return NULL;					// [5]
	...

[1]: necessary; returns NULL when sz == PMD_SIZE and the pud entry is none;
[2]: if sz == PUD_SIZE, every valid value of the pud entry causes a return;
[3]: dead code, sz != PMD_SIZE is never true here;
[4]: every valid value of the pmd entry causes a return;
[5]: dead code, because of the check in [4].

Now, this patch combines [1] and [2] for pud, and combines [3], [4] and [5]
for pmd, avoiding the unnecessary checks. I don't try to catch any invalid
values in the page table entries, as those are checked by the callers;
catching them here would only add extra branches to this function.
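For readability, here is a sketch of what the generic
(CONFIG_ARCH_WANT_GENERAL_HUGETLB) huge_pte_offset() looks like with this
patch applied. It is reconstructed from the diff below rather than quoted
from the tree; the pgd/p4d checks, which the hunks only show as context,
are filled in from the existing function:

	pte_t *huge_pte_offset(struct mm_struct *mm,
			       unsigned long addr, unsigned long sz)
	{
		pgd_t *pgd;
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;

		pgd = pgd_offset(mm, addr);
		if (!pgd_present(*pgd))
			return NULL;
		p4d = p4d_offset(pgd, addr);
		if (!p4d_present(*p4d))
			return NULL;

		pud = pud_offset(p4d, addr);
		if (sz == PUD_SIZE)
			/* must be pud huge, non-present or none */
			return (pte_t *)pud;
		if (!pud_present(*pud))
			return NULL;
		/* must have a valid entry and size to go further */

		pmd = pmd_offset(pud, addr);
		/* must be pmd huge, non-present or none */
		return (pte_t *)pmd;
	}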
Also, there is no assert that sz must equal PUD_SIZE or PMD_SIZE, since
this function is only called for hugetlb mappings.

Regarding commit 3c1d7e6ccb64 ("mm/hugetlb: fix a addressing exception
caused by huge_pte_offset"): since we no longer read an entry more than
once, the variables pud_entry and pmd_entry are not needed.

Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
Cc: Mike Kravetz
Cc: Andrew Morton
---
v2:
Minor changes to the comments in the code and above this function. Added
clarification to the commit message as discussed in v1.

v1:
https://lore.kernel.org/linux-mm/1587646154-26276-1-git-send-email-lixinhai.lxh@gmail.com/

 mm/hugetlb.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bcabbe0..bd8f4c5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5355,8 +5355,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
  * huge_pte_offset() - Walk the page table to resolve the hugepage
  * entry at address @addr
  *
- * Return: Pointer to page table or swap entry (PUD or PMD) for
- * address @addr, or NULL if a p*d_none() entry is encountered and the
+ * Return: Pointer to page table entry (PUD or PMD) for
+ * address @addr, or NULL if a !p*d_present() entry is encountered and the
  * size @sz doesn't match the hugepage size at this level of the page
  * table.
  */
@@ -5365,8 +5365,8 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
-	pud_t *pud, pud_entry;
-	pmd_t *pmd, pmd_entry;
+	pud_t *pud;
+	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
 	if (!pgd_present(*pgd))
@@ -5376,22 +5376,16 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 		return NULL;
 
 	pud = pud_offset(p4d, addr);
-	pud_entry = READ_ONCE(*pud);
-	if (sz != PUD_SIZE && pud_none(pud_entry))
-		return NULL;
-	/* hugepage or swap? */
-	if (pud_huge(pud_entry) || !pud_present(pud_entry))
+	if (sz == PUD_SIZE)
+		/* must be pud huge, non-present or none */
 		return (pte_t *)pud;
-
-	pmd = pmd_offset(pud, addr);
-	pmd_entry = READ_ONCE(*pmd);
-	if (sz != PMD_SIZE && pmd_none(pmd_entry))
+	if (!pud_present(*pud))
 		return NULL;
-	/* hugepage or swap? */
-	if (pmd_huge(pmd_entry) || !pmd_present(pmd_entry))
-		return (pte_t *)pmd;
+	/* must have a valid entry and size to go further */
 
-	return NULL;
+	pmd = pmd_offset(pud, addr);
+	/* must be pmd huge, non-present or none */
+	return (pte_t *)pmd;
 }
 
 #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
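As extra context for the claim that sz can only be PUD_SIZE or PMD_SIZE:
the hugetlb callers pass the VMA's hugepage size. The snippet below is a
hypothetical caller-side sketch, not part of this patch; hstate_vma(),
huge_page_size(), huge_page_mask() and huge_ptep_get() are the usual
hugetlb helpers and are used here purely for illustration:

	struct hstate *h = hstate_vma(vma);	/* hugepage size of this VMA */
	unsigned long sz = huge_page_size(h);	/* PMD_SIZE or PUD_SIZE for the generic walker */
	pte_t *ptep;
	pte_t entry;

	ptep = huge_pte_offset(mm, address & huge_page_mask(h), sz);
	if (ptep)
		/* ptep points at the PUD- or PMD-level entry covering address */
		entry = huge_ptep_get(ptep);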