From patchwork Tue Oct 6 21:22:23 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matt Fleming
X-Patchwork-Id: 52036
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n96LSd9B000820
	for ; Tue, 6 Oct 2009 21:28:40 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932717AbZJFVYK (ORCPT ); Tue, 6 Oct 2009 17:24:10 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1758275AbZJFVYK (ORCPT ); Tue, 6 Oct 2009 17:24:10 -0400
Received: from 124x34x33x190.ap124.ftth.ucom.ne.jp ([124.34.33.190]:58105
	"EHLO master.linux-sh.org" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1758224AbZJFVYJ (ORCPT );
	Tue, 6 Oct 2009 17:24:09 -0400
Received: from localhost (unknown [127.0.0.1])
	by master.linux-sh.org (Postfix) with ESMTP id F2D046377C;
	Tue, 6 Oct 2009 21:22:40 +0000 (UTC)
X-Quarantine-ID: <2daGnFmUHawV>
X-Virus-Scanned: amavisd-new at linux-sh.org
X-Amavis-Alert: BAD HEADER, Duplicate header field: "In-Reply-To"
Received: from master.linux-sh.org ([127.0.0.1])
	by localhost (master.linux-sh.org [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 2daGnFmUHawV; Wed, 7 Oct 2009 06:22:40 +0900 (JST)
Received: from localhost (82-38-64-26.cable.ubr06.brad.blueyonder.co.uk [82.38.64.26])
	by master.linux-sh.org (Postfix) with ESMTP id 4B50963777;
	Wed, 7 Oct 2009 06:22:40 +0900 (JST)
From: Matt Fleming
To: Paul Mundt
Cc: linux-sh@vger.kernel.org
Subject: [PATCH 03/14] sh: Allocate PMB entry slot earlier
Date: Tue, 6 Oct 2009 22:22:23 +0100
Message-Id:
X-Mailer: git-send-email 1.6.3.3
In-Reply-To:
References: <1db0a1123393575aec324e0d808b6369f9837fe4.1254861984.git.matt@console-pimps.org>
In-Reply-To:
References:
Sender: linux-sh-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-sh@vger.kernel.org

diff --git a/arch/sh/mm/pmb.c b/arch/sh/mm/pmb.c
index 58f9358..a510c8b 100644
--- a/arch/sh/mm/pmb.c
+++ b/arch/sh/mm/pmb.c
@@ -99,10 +99,31 @@ static inline void pmb_list_del(struct pmb_entry *pmbe)
 	}
 }
 
+static int pmb_alloc_entry(void)
+{
+	unsigned int pos;
+
+repeat:
+	pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);
+
+	if (unlikely(pos > NR_PMB_ENTRIES))
+		return -ENOSPC;
+
+	if (test_and_set_bit(pos, &pmb_map))
+		goto repeat;
+
+	return pos;
+}
+
 struct pmb_entry *pmb_alloc(unsigned long vpn, unsigned long ppn,
 			    unsigned long flags)
 {
 	struct pmb_entry *pmbe;
+	int pos;
+
+	pos = pmb_alloc_entry();
+	if (pos < 0)
+		return ERR_PTR(pos);
 
 	pmbe = kmem_cache_alloc(pmb_cache, GFP_KERNEL);
 	if (!pmbe)
@@ -111,6 +132,7 @@ struct pmb_entry *pmb_alloc(unsigned long vpn, unsigned long ppn,
 	pmbe->vpn	= vpn;
 	pmbe->ppn	= ppn;
 	pmbe->flags	= flags;
+	pmbe->entry	= pos;
 
 	spin_lock_irq(&pmb_list_lock);
 	pmb_list_add(pmbe);
@@ -131,23 +153,9 @@ void pmb_free(struct pmb_entry *pmbe)
 /*
  * Must be in P2 for __set_pmb_entry()
  */
-int __set_pmb_entry(unsigned long vpn, unsigned long ppn,
-		    unsigned long flags, int *entry)
+void __set_pmb_entry(unsigned long vpn, unsigned long ppn,
+		     unsigned long flags, int pos)
 {
-	unsigned int pos = *entry;
-
-	if (unlikely(pos == PMB_NO_ENTRY))
-		pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);
-
-repeat:
-	if (unlikely(pos > NR_PMB_ENTRIES))
-		return -ENOSPC;
-
-	if (test_and_set_bit(pos, &pmb_map)) {
-		pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);
-		goto repeat;
-	}
-
 	ctrl_outl(vpn | PMB_V, mk_pmb_addr(pos));
 
 #ifdef CONFIG_CACHE_WRITETHROUGH
@@ -161,21 +169,13 @@ repeat:
 #endif
 
 	ctrl_outl(ppn | flags | PMB_V, mk_pmb_data(pos));
-
-	*entry = pos;
-
-	return 0;
 }
 
-int __uses_jump_to_uncached set_pmb_entry(struct pmb_entry *pmbe)
+void __uses_jump_to_uncached set_pmb_entry(struct pmb_entry *pmbe)
 {
-	int ret;
-
 	jump_to_uncached();
-	ret = __set_pmb_entry(pmbe->vpn, pmbe->ppn, pmbe->flags, &pmbe->entry);
+	__set_pmb_entry(pmbe->vpn, pmbe->ppn, pmbe->flags, pmbe->entry);
 	back_to_cached();
-
-	return ret;
 }
 
 void __uses_jump_to_uncached clear_pmb_entry(struct pmb_entry *pmbe)
@@ -239,8 +239,6 @@ long pmb_remap(unsigned long vaddr, unsigned long phys,
 
 again:
 	for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++) {
-		int ret;
-
 		if (size < pmb_sizes[i].size)
 			continue;
 
@@ -250,12 +248,7 @@ again:
 			goto out;
 		}
 
-		ret = set_pmb_entry(pmbe);
-		if (ret != 0) {
-			pmb_free(pmbe);
-			err = -EBUSY;
-			goto out;
-		}
+		set_pmb_entry(pmbe);
 
 		phys	+= pmb_sizes[i].size;
 		vaddr	+= pmb_sizes[i].size;
@@ -304,8 +297,17 @@ static void __pmb_unmap(struct pmb_entry *pmbe)
 	do {
 		struct pmb_entry *pmblink = pmbe;
 
-		if (pmbe->entry != PMB_NO_ENTRY)
-			clear_pmb_entry(pmbe);
+		/*
+		 * We may be called before this pmb_entry has been
+		 * entered into the PMB table via set_pmb_entry(), but
+		 * that's OK because we've allocated a unique slot for
+		 * this entry in pmb_alloc() (even if we haven't filled
+		 * it yet).
+		 *
+		 * Therefore, calling clear_pmb_entry() is safe as no
+		 * other mapping can be using that slot.
+		 */
+		clear_pmb_entry(pmbe);
 
 		pmbe = pmblink->link;
 
@@ -315,11 +317,7 @@ static void __pmb_unmap(struct pmb_entry *pmbe)
 
 static void pmb_cache_ctor(void *pmb)
 {
-	struct pmb_entry *pmbe = pmb;
-
 	memset(pmb, 0, sizeof(struct pmb_entry));
-
-	pmbe->entry = PMB_NO_ENTRY;
 }
 
 static int __uses_jump_to_uncached pmb_init(void)
@@ -342,7 +340,7 @@ static int __uses_jump_to_uncached pmb_init(void)
 	for (entry = 0; entry < nr_entries; entry++) {
 		struct pmb_entry *pmbe = pmb_init_map + entry;
 
-		__set_pmb_entry(pmbe->vpn, pmbe->ppn, pmbe->flags, &entry);
+		__set_pmb_entry(pmbe->vpn, pmbe->ppn, pmbe->flags, entry);
 	}
 
 	ctrl_outl(0, PMB_IRMCR);
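
The pmb_alloc_entry() helper introduced above is a small lock-free slot
allocator: find the first clear bit in the pmb_map bitmap, then claim it
with an atomic test-and-set, retrying from the top if another CPU took
the slot between the search and the set. Below is a minimal userspace
sketch of that pattern, using C11 atomics in place of the kernel's
find_first_zero_bit() and test_and_set_bit(); all names and the
NR_ENTRIES bound are illustrative stand-ins, not the kernel API.

/*
 * Sketch of the pmb_alloc_entry() retry pattern using C11 atomics.
 * A single atomic unsigned long stands in for the kernel's pmb_map.
 */
#include <stdio.h>
#include <stdatomic.h>

#define NR_ENTRIES 16			/* illustrative, not NR_PMB_ENTRIES */

static atomic_ulong slot_map;		/* one bit per slot */

/* Return the first clear bit, or NR_ENTRIES if every slot is taken. */
static int find_first_zero(unsigned long map)
{
	for (int i = 0; i < NR_ENTRIES; i++)
		if (!(map & (1UL << i)))
			return i;
	return NR_ENTRIES;
}

/* Atomically set bit 'pos'; return nonzero if it was already set. */
static int test_and_set(atomic_ulong *map, int pos)
{
	return !!(atomic_fetch_or(map, 1UL << pos) & (1UL << pos));
}

static int alloc_slot(void)
{
	int pos;

repeat:
	pos = find_first_zero(atomic_load(&slot_map));
	if (pos >= NR_ENTRIES)
		return -1;	/* map full; the kernel returns -ENOSPC */

	/*
	 * A concurrent allocator may have claimed 'pos' between the
	 * search and the set; if so, go back and search again.
	 */
	if (test_and_set(&slot_map, pos))
		goto repeat;

	return pos;
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		printf("allocated slot %d\n", alloc_slot());
	return 0;
}

Note that the sketch treats pos == NR_ENTRIES as "map full", since
find_first_zero_bit() returns the bitmap size when no zero bit exists;
the patch itself keeps the old 'pos > NR_PMB_ENTRIES' test carried over
verbatim from __set_pmb_entry().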