From patchwork Sat Jul 31 17:46:12 2010
X-Patchwork-Submitter: Santosh Shilimkar
X-Patchwork-Id: 116230
From: Santosh Shilimkar
To: linux-arm-kernel@lists.infradead.org
Cc: linux-omap@vger.kernel.org, Santosh Shilimkar, Catalin Marinas
Subject: [PATCH 4/4] ARM: l2x0: Optimise the range based operations
Date: Sat, 31 Jul 2010 23:16:12 +0530
Message-Id: <1280598372-4830-1-git-send-email-santosh.shilimkar@ti.com>
X-Mailer: git-send-email 1.5.6.6

diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index b2938d4..c0d6108 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -116,6 +116,18 @@ static void l2x0_flush_all(void)
 	spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
+static void l2x0_clean_all(void)
+{
+	unsigned long flags;
+
+	/* clean all ways */
+	spin_lock_irqsave(&l2x0_lock, flags);
+	writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_WAY);
+	cache_wait(l2x0_base + L2X0_CLEAN_WAY, l2x0_way_mask);
+	cache_sync();
+	spin_unlock_irqrestore(&l2x0_lock, flags);
+}
+
 static void l2x0_inv_all(void)
 {
 	unsigned long flags;
@@ -171,54 +183,63 @@ static void l2x0_inv_range(unsigned long start, unsigned long end)
 
 static void l2x0_clean_range(unsigned long start, unsigned long end)
 {
-	void __iomem *base = l2x0_base;
-	unsigned long flags;
-
-	spin_lock_irqsave(&l2x0_lock, flags);
-	start &= ~(CACHE_LINE_SIZE - 1);
-	while (start < end) {
-		unsigned long blk_end = start + min(end - start, 4096UL);
+	if ((end - start) >= l2x0_size) {
+		l2x0_clean_all();
+	} else {
+		void __iomem *base = l2x0_base;
+		unsigned long flags, blk_end;
 
-		while (start < blk_end) {
-			l2x0_clean_line(start);
-			start += CACHE_LINE_SIZE;
-		}
-
-		if (blk_end < end) {
-			spin_unlock_irqrestore(&l2x0_lock, flags);
-			spin_lock_irqsave(&l2x0_lock, flags);
+		spin_lock_irqsave(&l2x0_lock, flags);
+		start &= ~(CACHE_LINE_SIZE - 1);
+		while (start < end) {
+			blk_end = start + min(end - start, 4096UL);
+
+			while (start < blk_end) {
+				l2x0_clean_line(start);
+				start += CACHE_LINE_SIZE;
+			}
+
+			if (blk_end < end) {
+				spin_unlock_irqrestore(&l2x0_lock, flags);
+				spin_lock_irqsave(&l2x0_lock, flags);
+			}
 		}
+		cache_wait(base + L2X0_CLEAN_LINE_PA, 1);
+		cache_sync();
+		spin_unlock_irqrestore(&l2x0_lock, flags);
 	}
-	cache_wait(base + L2X0_CLEAN_LINE_PA, 1);
-	cache_sync();
-	spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
 static void l2x0_flush_range(unsigned long start, unsigned long end)
 {
-	void __iomem *base = l2x0_base;
-	unsigned long flags;
-
-	spin_lock_irqsave(&l2x0_lock, flags);
-	start &= ~(CACHE_LINE_SIZE - 1);
-	while (start < end) {
-		unsigned long blk_end = start + min(end - start, 4096UL);
-
-		debug_writel(0x03);
-		while (start < blk_end) {
-			l2x0_flush_line(start);
-			start += CACHE_LINE_SIZE;
-		}
-		debug_writel(0x00);
+	if ((end - start) >= l2x0_size) {
+		l2x0_flush_all();
+	} else {
+		void __iomem *base = l2x0_base;
+		unsigned long flags, blk_end;
 
-		if (blk_end < end) {
-			spin_unlock_irqrestore(&l2x0_lock, flags);
-			spin_lock_irqsave(&l2x0_lock, flags);
+		spin_lock_irqsave(&l2x0_lock, flags);
+		start &= ~(CACHE_LINE_SIZE - 1);
+		while (start < end) {
+			blk_end = start + min(end - start, 4096UL);
+
+			debug_writel(0x03);
+			while (start < blk_end) {
+				l2x0_flush_line(start);
+				start += CACHE_LINE_SIZE;
+			}
+			debug_writel(0x00);
+
+			if (blk_end < end) {
+				spin_unlock_irqrestore(&l2x0_lock, flags);
+				spin_lock_irqsave(&l2x0_lock, flags);
+			}
 		}
+		cache_wait(base + L2X0_CLEAN_INV_LINE_PA, 1);
+		cache_sync();
+		spin_unlock_irqrestore(&l2x0_lock, flags);
 	}
-	cache_wait(base + L2X0_CLEAN_INV_LINE_PA, 1);
-	cache_sync();
-	spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
 static void l2x0_disable(void)