From patchwork Fri May 2 15:17:09 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 4102361
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, Steve Capper, jays.lee@samsung.com
Subject: [PATCH V2] arm64: mm: Create gigabyte kernel logical mappings where possible
Date: Fri, 2 May 2014 16:17:09 +0100
Message-Id: <1399043829-9036-1-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4

We have the capability to map 1GB level 1 blocks when using a 4K
granule.

This patch adjusts the create_mapping logic such that, when mapping
physical memory on boot, we attempt to use a 1GB block if both the VA
and PA start and end are 1GB aligned. This reduces the number of
lookup levels required to resolve a kernel logical address, and also
reduces TLB pressure on cores that support 1GB TLB entries.

Signed-off-by: Steve Capper
---
Changed in V2: free the original pmd table from swapper_pg_dir if we
replace it with a block pud entry.

Catalin, pud_pfn would give us the pfn pointed to by a huge pud (so it
will resolve to a gigabyte-aligned address when shifted left by
PAGE_SHIFT). What we want is the pointer to the pmd table. I've opted
to go for pmd_offset as it's easier to gauge intent. (I know we
convert from PA->VA->PA, but this will probably compile out, and is
done once on boot...)

I've tested this with 3 and 4 levels on the Model (and a load of debug
printing that I've removed from the patch).

Cheers,

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4d29332..2ced5f6 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -234,7 +234,30 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
 	pud = pud_offset(pgd, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		alloc_init_pmd(pud, addr, next, phys);
+
+		/*
+		 * For 4K granule only, attempt to put down a 1GB block
+		 */
+		if ((PAGE_SHIFT == 12) &&
+		    ((addr | next | phys) & ~PUD_MASK) == 0) {
+			pud_t old_pud = *pud;
+			set_pud(pud, __pud(phys | prot_sect_kernel));
+
+			/*
+			 * If we have an old value for a pud, it will
+			 * be pointing to a pmd table that we no longer
+			 * need (from swapper_pg_dir).
+			 *
+			 * Look up the old pmd table and free it.
+			 */
+			if (!pud_none(old_pud)) {
+				phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
+				memblock_free(table, PAGE_SIZE);
+				flush_tlb_all();
+			}
+		} else {
+			alloc_init_pmd(pud, addr, next, phys);
+		}
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
 }
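
P.S. For anyone who wants to convince themselves of the alignment test
above, here is a minimal user-space sketch (not part of the patch). It
assumes PUD_SHIFT == 30, i.e. the 1GB-per-PUD-entry case for a 4K
granule, and the sample addresses in main() are made up purely for
illustration:

#include <stdint.h>
#include <stdio.h>

/*
 * Assumed values for a 4K granule (PAGE_SHIFT == 12), where each PUD
 * entry covers 1GB. On a real kernel these come from the page-table
 * headers; they are stand-ins here so the check can be run in userspace.
 */
#define PUD_SHIFT	30
#define PUD_SIZE	(1ULL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE - 1))

/*
 * Mirrors the test in the patch: a 1GB block is only used when the
 * virtual range [addr, next) and the physical start are all 1GB aligned,
 * so OR-ing them together and keeping only the bits below the block
 * size must give zero.
 */
static int can_use_1gb_block(uint64_t addr, uint64_t next, uint64_t phys)
{
	return ((addr | next | phys) & ~PUD_MASK) == 0;
}

int main(void)
{
	/* 1GB-aligned VA range and PA: qualifies for a block mapping. */
	printf("%d\n", can_use_1gb_block(0xffffffc000000000ULL,
					 0xffffffc040000000ULL,
					 0x80000000ULL));	/* prints 1 */

	/* VA start only 2MB aligned: falls back to pmd mappings. */
	printf("%d\n", can_use_1gb_block(0xffffffc000200000ULL,
					 0xffffffc040000000ULL,
					 0x80000000ULL));	/* prints 0 */

	return 0;
}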