From patchwork Tue Feb 18 15:27:13 2014
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux@arm.linux.org.uk,
	linux-mm@kvack.org
Cc: arnd@arndb.de, catalin.marinas@arm.com, will.deacon@arm.com,
	dsaxena@linaro.org
Subject: [PATCH 3/5] arm: mm: Make mmu_gather aware of huge pages
Date: Tue, 18 Feb 2014 15:27:13 +0000
Message-Id: <1392737235-27286-4-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1392737235-27286-1-git-send-email-steve.capper@linaro.org>
References: <1392737235-27286-1-git-send-email-steve.capper@linaro.org>
Huge pages on short descriptors are arranged as pairs of 1MB sections.
We need to be careful and ensure that the TLB entries for both sections
are flushed when we call tlb_add_flush on a HugeTLB page.

This patch extends the tlb flush range to HPAGE_SIZE rather than
PAGE_SIZE when addresses belonging to huge page VMAs are added to the
flush range.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/include/asm/tlb.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0..b2498e6 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -81,10 +81,17 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
+		unsigned long size = PAGE_SIZE;
+
 		if (addr < tlb->range_start)
 			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
+
+		if (!config_enabled(CONFIG_ARM_LPAE) && tlb->vma
+		    && is_vm_hugetlb_page(tlb->vma))
+			size = HPAGE_SIZE;
+
+		if (addr + size > tlb->range_end)
+			tlb->range_end = addr + size;
 	}
 }
 