From patchwork Tue Aug 23 07:50:04 2022
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 12951829
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] mm/hugetlb: use PMD page lock to protect CONT-PTE entries
Date: Tue, 23 Aug 2022 15:50:04 +0800
Message-Id: <88c8a8c68d87429f0fc48e81100f19b71f6e664f.1661240170.git.baolin.wang@linux.alibaba.com>

Since the PMD entries of a CONT-PMD size hugetlb cannot span multiple PMD page
table pages, we can change to use the PMD page lock, which is much finer
grained than the mm-wide mm->page_table_lock.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3a96f67..d4803a89 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -892,9 +892,17 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 					struct mm_struct *mm, pte_t *pte)
 {
-	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
+	unsigned long hp_size = huge_page_size(h);
 
-	if (huge_page_size(h) == PMD_SIZE) {
+	VM_BUG_ON(hp_size == PAGE_SIZE);
+
+	/*
+	 * For CONT-PMD size hugetlb, the contiguous PMD entries cannot
+	 * span multiple PMD page table pages, so the finer grained PMD
+	 * page lock can be used.
+	 */
+	if (hp_size == PMD_SIZE ||
+	    (hp_size > PMD_SIZE && hp_size < PUD_SIZE)) {
 		return pmd_lockptr(mm, (pmd_t *) pte);
 	} else if (huge_page_size(h) < PMD_SIZE) {
 		unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
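
For reference, the lock selection that the patched huge_pte_lockptr() ends up
making can be sketched in plain userspace C. The snippet below only mirrors
this patch's size checks for illustration: the PMD_SIZE/PUD_SIZE values are
assumptions matching arm64 with 4K base pages, and hugetlb_lock_for_size() is
a hypothetical helper, not a kernel API.

#include <stdio.h>

#define PMD_SIZE	(2UL * 1024 * 1024)		/* assumed: 2M PMD mappings */
#define PUD_SIZE	(1UL * 1024 * 1024 * 1024)	/* assumed: 1G PUD mappings */

/* Which page table lock would protect a hugetlb entry of this size? */
static const char *hugetlb_lock_for_size(unsigned long hp_size)
{
	/* PMD and CONT-PMD sizes: covered by the split PMD page lock */
	if (hp_size == PMD_SIZE ||
	    (hp_size > PMD_SIZE && hp_size < PUD_SIZE))
		return "PMD page lock (pmd_lockptr)";
	/* CONT-PTE sizes: covered by the split PTE page lock */
	if (hp_size < PMD_SIZE)
		return "PTE page lock";
	/* PUD size and larger: still fall back to mm->page_table_lock */
	return "mm->page_table_lock";
}

int main(void)
{
	unsigned long sizes[] = {
		64UL * 1024,		/* 64K  CONT-PTE hugetlb (arm64) */
		2UL * 1024 * 1024,	/* 2M   PMD hugetlb */
		32UL * 1024 * 1024,	/* 32M  CONT-PMD hugetlb (arm64) */
		1024UL * 1024 * 1024,	/* 1G   PUD hugetlb */
	};
	unsigned long i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%10lu bytes -> %s\n", sizes[i],
		       hugetlb_lock_for_size(sizes[i]));
	return 0;
}

Running the sketch prints which lock class each example size maps to; in
particular a 32M CONT-PMD hugetlb now resolves to the PMD page lock instead of
mm->page_table_lock, which is the point of this change.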