From patchwork Fri Dec 13 19:05:43 2013
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: deepak.saxena@linaro.org, linux@arm.linux.org.uk, patches@linaro.org,
    catalin.marinas@arm.com, will.deacon@arm.com
Subject: [RFC PATCH 3/6] arm: mm: Make mmu_gather aware of huge pages
Date: Fri, 13 Dec 2013 19:05:43 +0000
Message-Id: <1386961546-10061-4-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>
References: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>

Huge
pages on short descriptors are arranged as pairs of 1MB sections. We need
to be careful to ensure that the TLB entries for both sections are flushed
when we call tlb_add_flush() on a huge page.

This patch extends the TLB flush range to HPAGE_SIZE rather than PAGE_SIZE
when addresses belonging to huge page VMAs are added to the flush range.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/include/asm/tlb.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0..f5ef8b8 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -81,10 +81,16 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
+		unsigned long size = PAGE_SIZE;
+
 		if (addr < tlb->range_start)
 			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
+
+		if (tlb->vma && is_vm_hugetlb_page(tlb->vma))
+			size = HPAGE_SIZE;
+
+		if (addr + size > tlb->range_end)
+			tlb->range_end = addr + size;
 	}
 }