From patchwork Fri Nov 20 06:43:17 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11919587
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
 viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
 mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
 rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
 jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
 willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
 song.bao.hua@hisilicon.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v5 13/21] mm/hugetlb: Use PG_slab to indicate split pmd
Date: Fri, 20 Nov 2020 14:43:17 +0800
Message-Id: <20201120064325.34492-14-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201120064325.34492-1-songmuchun@bytedance.com>
References: <20201120064325.34492-1-songmuchun@bytedance.com>

When we allocate a hugetlb page from the buddy allocator, we may need to
split the huge pmd into ptes. When we free the hugetlb page, we can merge
the ptes back into a huge pmd. So we need a way to tell whether the pmd
has previously been split. The page table is never allocated from slab,
so we can reuse PG_slab to indicate that the pmd has been split.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 06e2b8a7b7c8..e2ddc73ce25f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -293,6 +293,25 @@ static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline bool pmd_split(pmd_t *pmd)
+{
+	return PageSlab(pmd_page(*pmd));
+}
+
+static inline void set_pmd_split(pmd_t *pmd)
+{
+	/*
+	 * We should not use slab for page table allocation. So we can set
+	 * PG_slab to indicate that the pmd has been split.
+	 */
+	__SetPageSlab(pmd_page(*pmd));
+}
+
+static inline void clear_pmd_split(pmd_t *pmd)
+{
+	__ClearPageSlab(pmd_page(*pmd));
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
 					  unsigned long end,
@@ -357,11 +376,12 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	ptl = vmemmap_pmd_lock(pmd);
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
 				    __remap_huge_page_pte_vmemmap);
-	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
 		 * Todo:
 		 * Merge pte to huge pmd if it has ever been split.
 		 */
+		clear_pmd_split(pmd);
 	}
 	spin_unlock(ptl);
 }
@@ -443,8 +463,10 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	BUG_ON(!pmd);
 
 	ptl = vmemmap_pmd_lock(pmd);
-	if (vmemmap_pmd_huge(pmd))
+	if (vmemmap_pmd_huge(pmd)) {
 		split_vmemmap_huge_page(head, pmd);
+		set_pmd_split(pmd);
+	}
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
 				    __free_huge_page_pte_vmemmap);
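
For readers who want to see the bookkeeping in isolation, below is a minimal
standalone C sketch (not kernel code) of the idea the patch relies on: an
otherwise-unused flag bit on the pmd's page table page records whether the
huge pmd has been split, so the later merge path knows whether there is
anything to undo. The names split_marker_page and PG_SPLIT_MARKER are
hypothetical stand-ins for struct page and PG_slab; the real helpers above
operate on the page table page returned by pmd_page().

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the pmd's page table page; PG_SPLIT_MARKER models PG_slab. */
#define PG_SPLIT_MARKER (1u << 0)

struct split_marker_page {
	unsigned int flags;
};

/* Mirrors pmd_split()/set_pmd_split()/clear_pmd_split() from the patch. */
static bool pmd_split(const struct split_marker_page *pt)
{
	return pt->flags & PG_SPLIT_MARKER;
}

static void set_pmd_split(struct split_marker_page *pt)
{
	pt->flags |= PG_SPLIT_MARKER;
}

static void clear_pmd_split(struct split_marker_page *pt)
{
	pt->flags &= ~PG_SPLIT_MARKER;
}

int main(void)
{
	struct split_marker_page pt = { .flags = 0 };

	/* free_huge_page_vmemmap() path: the huge pmd was split, remember it. */
	set_pmd_split(&pt);

	/* alloc_huge_page_vmemmap() path: merge back only if a split happened. */
	if (pmd_split(&pt)) {
		/* ...here the ptes would be merged back into a huge pmd... */
		clear_pmd_split(&pt);
	}

	printf("pmd still marked as split: %s\n", pmd_split(&pt) ? "yes" : "no");
	return 0;
}

The same reasoning is what makes borrowing PG_slab safe in the patch: a page
flag can only be repurposed on pages that can never legitimately carry it,
and page table pages are never slab pages.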