From patchwork Tue Apr 30 16:30:43 2013
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 4/9] x86: mm: Remove general hugetlb code from x86.
Date: Tue, 30 Apr 2013 17:30:43 +0100
Message-Id: <1367339448-21727-5-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1367339448-21727-1-git-send-email-steve.capper@linaro.org>
References: <1367339448-21727-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, Catalin Marinas, Will Deacon, Michal Hocko, Ken Chen,
	Mel Gorman

huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d have already been
copied over to mm. This patch removes the x86 copies of these functions
and activates the general ones by enabling:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

Signed-off-by: Steve Capper
---
 arch/x86/Kconfig          |  3 +++
 arch/x86/mm/hugetlbpage.c | 67 -----------------------------------------------
 2 files changed, 3 insertions(+), 67 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 60e3c402..db1ff87 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -215,6 +215,9 @@ config ARCH_SUSPEND_POSSIBLE
 config ARCH_WANT_HUGE_PMD_SHARE
 	def_bool y
 
+config ARCH_WANT_GENERAL_HUGETLB
+	def_bool y
+
 config ZONE_DMA32
 	bool
 	default X86_64
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 7e522a3..7e73e8c 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -16,49 +16,6 @@
 #include
 #include
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
-			unsigned long addr, unsigned long sz)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pte_t *pte = NULL;
-
-	pgd = pgd_offset(mm, addr);
-	pud = pud_alloc(mm, pgd, addr);
-	if (pud) {
-		if (sz == PUD_SIZE) {
-			pte = (pte_t *)pud;
-		} else {
-			BUG_ON(sz != PMD_SIZE);
-			if (pud_none(*pud))
-				pte = huge_pmd_share(mm, addr, pud);
-			else
-				pte = (pte_t *)pmd_alloc(mm, pud, addr);
-		}
-	}
-	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
-
-	return pte;
-}
-
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd = NULL;
-
-	pgd = pgd_offset(mm, addr);
-	if (pgd_present(*pgd)) {
-		pud = pud_offset(pgd, addr);
-		if (pud_present(*pud)) {
-			if (pud_large(*pud))
-				return (pte_t *)pud;
-			pmd = pmd_offset(pud, addr);
-		}
-	}
-	return (pte_t *) pmd;
-}
-
 #if 0	/* This is just for testing */
 struct page *
 follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
@@ -120,30 +77,6 @@ int pud_huge(pud_t pud)
 	return !!(pud_val(pud) & _PAGE_PSE);
 }
 
-struct page *
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-		pmd_t *pmd, int write)
-{
-	struct page *page;
-
-	page = pte_page(*(pte_t *)pmd);
-	if (page)
-		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
-	return page;
-}
-
-struct page *
-follow_huge_pud(struct mm_struct *mm, unsigned long address,
-		pud_t *pud, int write)
-{
-	struct page *page;
-
-	page = pte_page(*(pte_t *)pud);
-	if (page)
-		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
-	return page;
-}
-
 #endif
 
 /* x86_64 also uses this file */
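
For reference (illustrative sketch, not part of the diff): once
CONFIG_ARCH_WANT_GENERAL_HUGETLB is selected, the generic copies added to
mm/hugetlb.c earlier in this series stand in for the functions removed
above. A rough sketch of that generic path, assuming it mirrors the removed
x86 huge_pte_offset() with the x86-only pud_large() check expressed through
the arch's pud_huge() helper, would look like:

#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
/* Walk the page tables and return the (huge) pte mapping addr, or NULL. */
pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd = NULL;

	pgd = pgd_offset(mm, addr);
	if (pgd_present(*pgd)) {
		pud = pud_offset(pgd, addr);
		if (pud_present(*pud)) {
			/* pud-level huge page: the pud entry is the huge pte */
			if (pud_huge(*pud))
				return (pte_t *)pud;
			pmd = pmd_offset(pud, addr);
		}
	}
	/* pmd-level huge page entry, or NULL/none if nothing is mapped */
	return (pte_t *)pmd;
}
#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */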