From patchwork Thu May 23 17:07:50 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2608431
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 03/11] mm: hugetlb: Copy general hugetlb code from x86 to mm.
Date: Thu, 23 May 2013 18:07:50 +0100
Message-Id: <1369328878-11706-4-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, patches@linaro.org, Catalin Marinas, Will Deacon,
 Michal Hocko, Ken Chen, Mel Gorman

The huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d functions in
x86/mm/hugetlbpage.c do not rely on any architecture-specific knowledge
other than the fact that pmds and puds can be treated as huge ptes.

To allow other architectures to use this code (and reduce the need for
code duplication), this patch copies these functions into mm, replaces
the use of pud_large with pud_huge and provides a config flag to
activate them:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

If CONFIG_ARCH_WANT_HUGE_PMD_SHARE is also active then the
huge_pmd_share code will be called by huge_pte_alloc (otherwise we call
pmd_alloc and skip the sharing code).
Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 mm/hugetlb.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 88 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b0bfb29..6321726 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2931,15 +2931,6 @@ out_mutex:
 	return ret;
 }
 
-/* Can be overriden by architectures */
-__attribute__((weak)) struct page *
-follow_huge_pud(struct mm_struct *mm, unsigned long address,
-	       pud_t *pud, int write)
-{
-	BUG();
-	return NULL;
-}
-
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
@@ -3289,8 +3280,96 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
 	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
 	return 1;
 }
+#define want_pmd_share()	(1)
+#else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+{
+	return NULL;
+}
+#define want_pmd_share()	(0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
+pte_t *huge_pte_alloc(struct mm_struct *mm,
+			unsigned long addr, unsigned long sz)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pte_t *pte = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	pud = pud_alloc(mm, pgd, addr);
+	if (pud) {
+		if (sz == PUD_SIZE) {
+			pte = (pte_t *)pud;
+		} else {
+			BUG_ON(sz != PMD_SIZE);
+			if (want_pmd_share() && pud_none(*pud))
+				pte = huge_pmd_share(mm, addr, pud);
+			else
+				pte = (pte_t *)pmd_alloc(mm, pud, addr);
+		}
+	}
+	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
+
+	return pte;
+}
+
+pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_present(*pgd)) {
+		pud = pud_offset(pgd, addr);
+		if (pud_present(*pud)) {
+			if (pud_huge(*pud))
+				return (pte_t *)pud;
+			pmd = pmd_offset(pud, addr);
+		}
+	}
+	return (pte_t *) pmd;
+}
+
+struct page *
+follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+		pmd_t *pmd, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pmd);
+	if (page)
+		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+		pud_t *pud, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pud);
+	if (page)
+		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+#else /* !CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
+/* Can be overriden by architectures */
+__attribute__((weak)) struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+	       pud_t *pud, int write)
+{
+	BUG();
+	return NULL;
+}
+
+#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
 #ifdef CONFIG_MEMORY_FAILURE
 
 /* Should be called in hugetlb_lock */
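
For illustration only, a minimal sketch (not part of this patch) of what an
architecture opting into the copied code might provide. The Kconfig wiring and
the pmd_sect()/pud_sect() helpers below are assumptions standing in for
whatever arch-specific mechanism marks a huge (block/section) entry; only
CONFIG_ARCH_WANT_GENERAL_HUGETLB, CONFIG_ARCH_WANT_HUGE_PMD_SHARE and
pud_huge() are named by the patch itself.

/*
 * Illustrative sketch, not part of this patch.
 *
 * Assumed Kconfig wiring in the architecture:
 *	select ARCH_WANT_GENERAL_HUGETLB
 *	select ARCH_WANT_HUGE_PMD_SHARE		(optional, enables pmd sharing)
 *
 * With CONFIG_ARCH_WANT_GENERAL_HUGETLB set, the arch no longer carries its
 * own huge_pte_alloc()/huge_pte_offset()/follow_huge_p[mu]d(); it mainly
 * supplies the predicates consulted by the generic code, e.g. pud_huge(),
 * which replaces x86's pud_large() in the copied huge_pte_offset().
 * pmd_huge() is the corresponding hook used elsewhere by generic hugetlb/GUP
 * paths.
 */
#include <linux/mm.h>
#include <linux/hugetlb.h>

int pmd_huge(pmd_t pmd)
{
	/* pmd_sect() is a stand-in for the arch's "is this pmd a huge mapping?" test */
	return pmd_sect(pmd);
}

int pud_huge(pud_t pud)
{
	/* pud_sect() likewise; queried by the generic huge_pte_offset() above */
	return pud_sect(pud);
}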