From patchwork Thu May 23 17:07:51 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2608381
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 04/11] x86: mm: Remove general hugetlb code from x86.
Date: Thu, 23 May 2013 18:07:51 +0100
Message-Id: <1369328878-11706-5-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, patches@linaro.org, Catalin Marinas, Will Deacon,
	Michal Hocko, Ken Chen, Mel Gorman

huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d have already been
copied over to mm. This patch removes the x86 copies of these functions
and activates the general ones by enabling:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 arch/x86/Kconfig          |  3 +++
 arch/x86/mm/hugetlbpage.c | 67 -----------------------------------------------
 2 files changed, 3 insertions(+), 67 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 476f786..191c4e3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -210,6 +210,9 @@ config ARCH_SUSPEND_POSSIBLE
 config ARCH_WANT_HUGE_PMD_SHARE
 	def_bool y
 
+config ARCH_WANT_GENERAL_HUGETLB
+	def_bool y
+
 config ZONE_DMA32
 	bool
 	default X86_64
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 7e522a3..7e73e8c 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -16,49 +16,6 @@
 #include
 #include
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
-			unsigned long addr, unsigned long sz)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pte_t *pte = NULL;
-
-	pgd = pgd_offset(mm, addr);
-	pud = pud_alloc(mm, pgd, addr);
-	if (pud) {
-		if (sz == PUD_SIZE) {
-			pte = (pte_t *)pud;
-		} else {
-			BUG_ON(sz != PMD_SIZE);
-			if (pud_none(*pud))
-				pte = huge_pmd_share(mm, addr, pud);
-			else
-				pte = (pte_t *)pmd_alloc(mm, pud, addr);
-		}
-	}
-	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
-
-	return pte;
-}
-
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd = NULL;
-
-	pgd = pgd_offset(mm, addr);
-	if (pgd_present(*pgd)) {
-		pud = pud_offset(pgd, addr);
-		if (pud_present(*pud)) {
-			if (pud_large(*pud))
-				return (pte_t *)pud;
-			pmd = pmd_offset(pud, addr);
-		}
-	}
-	return (pte_t *) pmd;
-}
-
 #if 0	/* This is just for testing */
 struct page *
 follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
@@ -120,30 +77,6 @@ int pud_huge(pud_t pud)
 	return !!(pud_val(pud) & _PAGE_PSE);
 }
 
-struct page *
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-		pmd_t *pmd, int write)
-{
-	struct page *page;
-
-	page = pte_page(*(pte_t *)pmd);
-	if (page)
-		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
-	return page;
-}
-
-struct page *
-follow_huge_pud(struct mm_struct *mm, unsigned long address,
-		pud_t *pud, int write)
-{
-	struct page *page;
-
-	page = pte_page(*(pte_t *)pud);
-	if (page)
-		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
-	return page;
-}
-
 #endif
 
 /* x86_64 also uses this file */
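
For reference (this note and sketch are not part of the patch): the generic
helpers that CONFIG_ARCH_WANT_GENERAL_HUGETLB switches on live in mm/hugetlb.c
and were added earlier in this series; per the commit message they were copied
over from the x86 code removed above, so they should mirror it closely. A
minimal sketch of the generic huge_pte_offset(), assuming the same
pgd/pud/pmd walk with the arch-neutral pud_huge() test standing in for x86's
pud_large():

pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd = NULL;

	/* Walk down the page tables; stop at whichever level maps the huge page. */
	pgd = pgd_offset(mm, addr);
	if (pgd_present(*pgd)) {
		pud = pud_offset(pgd, addr);
		if (pud_present(*pud)) {
			if (pud_huge(*pud))		/* assumption: generic code tests pud_huge() */
				return (pte_t *)pud;	/* PUD-sized (1GB on x86_64) huge page */
			pmd = pmd_offset(pud, addr);
		}
	}
	/* A PMD-sized huge page entry, or NULL if nothing is mapped there. */
	return (pte_t *)pmd;
}

huge_pte_alloc() is expected to follow the same shape on the allocation side:
return the pud directly for PUD_SIZE requests, otherwise fall back to
huge_pmd_share()/pmd_alloc(), exactly as in the x86 copy deleted above.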