From patchwork Wed Apr 16 11:46:41 2014
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 4000281
From: Steve Capper <steve.capper@linaro.org>
To: linux@arm.linux.org.uk, akpm@linux-foundation.org
Cc: Steve Capper <steve.capper@linaro.org>, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	gerald.schaefer@de.ibm.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V2 3/5] arm: mm: Make mmu_gather aware of huge pages
Date: Wed, 16 Apr 2014 12:46:41 +0100
Message-Id: <1397648803-15961-4-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1397648803-15961-1-git-send-email-steve.capper@linaro.org>
References: <1397648803-15961-1-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4

Huge pages on short descriptors are arranged as pairs of 1MB sections.
We need to take care that the TLB entries for both sections are flushed
when tlb_add_flush() is called for an address within a HugeTLB page.

This patch extends the TLB flush range to HPAGE_SIZE rather than
PAGE_SIZE when addresses belonging to huge page VMAs are added to the
flush range.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/include/asm/tlb.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0..b2498e6 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -81,10 +81,17 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
+		unsigned long size = PAGE_SIZE;
+
 		if (addr < tlb->range_start)
 			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
+
+		if (!config_enabled(CONFIG_ARM_LPAE) && tlb->vma
+		    && is_vm_hugetlb_page(tlb->vma))
+			size = HPAGE_SIZE;
+
+		if (addr + size > tlb->range_end)
+			tlb->range_end = addr + size;
 	}
 }
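
[Editor's note: to see why the range has to grow by HPAGE_SIZE, the
following is a minimal user-space sketch of the accumulation logic
above. It is illustrative only: the struct, the vma_is_huge flag and
the constants are simplified stand-ins for the kernel's mmu_gather,
is_vm_hugetlb_page(tlb->vma) and the short-descriptor HPAGE_SIZE of
2MB (a pair of 1MB sections); they are not the kernel definitions.]

/* Sketch of the flush-range accumulation in tlb_add_flush() above. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	(4096UL)
#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* pair of 1MB sections */

struct mmu_gather {
	unsigned long range_start;
	unsigned long range_end;
	bool vma_is_huge;	/* stand-in for is_vm_hugetlb_page(tlb->vma) */
};

static void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
{
	/* Grow the step to HPAGE_SIZE for huge page VMAs, as the patch does. */
	unsigned long size = tlb->vma_is_huge ? HPAGE_SIZE : PAGE_SIZE;

	if (addr < tlb->range_start)
		tlb->range_start = addr;
	if (addr + size > tlb->range_end)
		tlb->range_end = addr + size;
}

int main(void)
{
	struct mmu_gather tlb = {
		.range_start = ~0UL,	/* empty range: start high, end low */
		.range_end   = 0,
		.vma_is_huge = true,
	};

	/* Unmap a 2MB huge page at 2MB: both 1MB sections must be covered. */
	tlb_add_flush(&tlb, 0x200000);
	printf("flush [%#lx, %#lx)\n", tlb.range_start, tlb.range_end);
	/* Prints: flush [0x200000, 0x400000). With a PAGE_SIZE-only step the
	 * range would end at 0x201000, leaving the second 1MB section's TLB
	 * entries (0x300000-0x400000) stale. */
	return 0;
}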