From patchwork Sun May 23 20:02:01 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrea Gelmini
X-Patchwork-Id: 101764
From: Andrea Gelmini
To: andrea.gelmini@gelma.net
Cc: Kyle McMartin, Helge Deller, "James E.J. Bottomley",
	linux-parisc@vger.kernel.org
Subject: [PATCH 174/199] arch/parisc/lib/io.c: Checkpatch cleanup
Date: Sun, 23 May 2010 22:02:01 +0200
Message-Id: <1274644930-26600-17-git-send-email-andrea.gelmini@gelma.net>
X-Mailer: git-send-email 1.7.1.251.gf80a2
In-Reply-To: <1274644930-26600-1-git-send-email-andrea.gelmini@gelma.net>
References: <1274644930-26600-1-git-send-email-andrea.gelmini@gelma.net>
Sender: linux-parisc-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-parisc@vger.kernel.org

diff --git a/arch/parisc/lib/io.c b/arch/parisc/lib/io.c
index 7c1406f..c6c9ddb 100644
--- a/arch/parisc/lib/io.c
+++ b/arch/parisc/lib/io.c
@@ -52,7 +52,7 @@ void memcpy_toio(volatile void __iomem *dst, const void *src, int count)
  */
 void memcpy_fromio(void *dst, const volatile void __iomem *src, int count)
 {
-	/* first compare alignment of src/dst */ 
+	/* first compare alignment of src/dst */
 	if ( (((unsigned long)dst ^ (unsigned long)src) & 1) || (count < 2) )
 		goto bytecopy;
 
@@ -114,16 +114,15 @@ void memset_io(volatile void __iomem *addr, unsigned char val, int count)
 		addr += 4;
 		count -= 4;
 	}
-	while (count--) {
+	while (count--)
 		writeb(val, addr++);
-	}
 }
 
 /*
  * Read COUNT 8-bit bytes from port PORT into memory starting at
  * SRC.
  */
-void insb (unsigned long port, void *dst, unsigned long count)
+void insb(unsigned long port, void *dst, unsigned long count)
 {
 	unsigned char *p;
 
@@ -163,60 +162,56 @@ void insb (unsigned long port, void *dst, unsigned long count)
  * the interfaces seems to be slow: just using the inlined version
  * of the inw() breaks things.
  */
-void insw (unsigned long port, void *dst, unsigned long count)
+void insw(unsigned long port, void *dst, unsigned long count)
 {
 	unsigned int l = 0, l2;
 	unsigned char *p;
 
 	p = (unsigned char *)dst;
-	
+
 	if (!count)
 		return;
-	
-	switch (((unsigned long)p) & 0x3)
-	{
+
+	switch (((unsigned long)p) & 0x3) {
 	 case 0x00:			/* Buffer 32-bit aligned */
-		while (count>=2) {
-			
+		while (count >= 2) {
+
 			count -= 2;
 			l = cpu_to_le16(inw(port)) << 16;
 			l |= cpu_to_le16(inw(port));
 			*(unsigned int *)p = l;
 			p += 4;
 		}
-		if (count) {
+		if (count)
 			*(unsigned short *)p = cpu_to_le16(inw(port));
-		}
 		break;
-	
+
 	 case 0x02:			/* Buffer 16-bit aligned */
 		*(unsigned short *)p = cpu_to_le16(inw(port));
 		p += 2;
 		count--;
 		while (count>=2) {
-			
+
 			count -= 2;
 			l = cpu_to_le16(inw(port)) << 16;
 			l |= cpu_to_le16(inw(port));
 			*(unsigned int *)p = l;
 			p += 4;
 		}
-		if (count) {
+		if (count)
 			*(unsigned short *)p = cpu_to_le16(inw(port));
-		}
 		break;
-	
+
 	 case 0x01:			/* Buffer 8-bit aligned */
 	 case 0x03:
 		/* I don't bother with 32bit transfers
 		 * in this case, 16bit will have to do -- DE */
 		--count;
-		
+
 		l = cpu_to_le16(inw(port));
 		*p = l >> 8;
 		p++;
-		while (count--)
-		{
+		while (count--) {
 			l2 = cpu_to_le16(inw(port));
 			*(unsigned short *)p = (l & 0xff) << 8 | (l2 >> 8);
 			p += 2;
@@ -235,35 +230,32 @@ void insw (unsigned long port, void *dst, unsigned long count)
  * but the interfaces seems to be slow: just using the inlined version
  * of the inl() breaks things.
  */
-void insl (unsigned long port, void *dst, unsigned long count)
+void insl(unsigned long port, void *dst, unsigned long count)
 {
 	unsigned int l = 0, l2;
 	unsigned char *p;
 
 	p = (unsigned char *)dst;
-	
+
	if (!count)
 		return;
-	
-	switch (((unsigned long) dst) & 0x3)
-	{
+
+	switch (((unsigned long) dst) & 0x3) {
 	 case 0x00:			/* Buffer 32-bit aligned */
-		while (count--)
-		{
+		while (count--) {
 			*(unsigned int *)p = cpu_to_le32(inl(port));
 			p += 4;
 		}
 		break;
-	
+
 	 case 0x02:			/* Buffer 16-bit aligned */
 		--count;
-		
+
 		l = cpu_to_le32(inl(port));
 		*(unsigned short *)p = l >> 16;
 		p += 2;
-		
-		while (count--)
-		{
+
+		while (count--) {
 			l2 = cpu_to_le32(inl(port));
 			*(unsigned int *)p = (l & 0xffff) << 16 | (l2 >> 16);
 			p += 4;
@@ -273,14 +265,13 @@ void insl (unsigned long port, void *dst, unsigned long count)
 		break;
 	 case 0x01:			/* Buffer 8-bit aligned */
 		--count;
-		
+
 		l = cpu_to_le32(inl(port));
 		*(unsigned char *)p = l >> 24;
 		p++;
 		*(unsigned short *)p = (l >> 8) & 0xffff;
 		p += 2;
-		while (count--)
-		{
+		while (count--) {
 			l2 = cpu_to_le32(inl(port));
 			*(unsigned int *)p = (l & 0xff) << 24 | (l2 >> 8);
 			p += 4;
@@ -290,12 +281,11 @@ void insl (unsigned long port, void *dst, unsigned long count)
 		break;
 	 case 0x03:			/* Buffer 8-bit aligned */
 		--count;
-		
+
 		l = cpu_to_le32(inl(port));
 		*p = l >> 24;
 		p++;
-		while (count--)
-		{
+		while (count--) {
 			l2 = cpu_to_le32(inl(port));
 			*(unsigned int *)p = (l & 0xffffff) << 8 | l2 >> 24;
 			p += 4;
@@ -315,7 +305,7 @@ void insl (unsigned long port, void *dst, unsigned long count)
  * doing byte reads the "slow" way isn't nearly as slow as
  * doing byte writes the slow way (no r-m-w cycle).
  */
-void outsb(unsigned long port, const void * src, unsigned long count)
+void outsb(unsigned long port, const void *src, unsigned long count)
 {
 	const unsigned char *p;
 
@@ -333,68 +323,64 @@ void outsb(unsigned long port, const void *src, unsigned long count)
  * interfaces seems to be slow: just using the inlined version of the
  * outw() breaks things.
  */
-void outsw (unsigned long port, const void *src, unsigned long count)
+void outsw(unsigned long port, const void *src, unsigned long count)
 {
 	unsigned int l = 0, l2;
 	const unsigned char *p;
 
 	p = (const unsigned char *)src;
-	
+
 	if (!count)
 		return;
-	
-	switch (((unsigned long)p) & 0x3)
-	{
+
+	switch (((unsigned long)p) & 0x3) {
 	 case 0x00:			/* Buffer 32-bit aligned */
-		while (count>=2) {
+		while (count >= 2) {
 			count -= 2;
 			l = *(unsigned int *)p;
 			p += 4;
 			outw(le16_to_cpu(l >> 16), port);
 			outw(le16_to_cpu(l & 0xffff), port);
 		}
-		if (count) {
-			outw(le16_to_cpu(*(unsigned short*)p), port);
-		}
+		if (count)
+			outw(le16_to_cpu(*(unsigned short *)p), port);
 		break;
-	
+
 	 case 0x02:			/* Buffer 16-bit aligned */
-		
-		outw(le16_to_cpu(*(unsigned short*)p), port);
+
+		outw(le16_to_cpu(*(unsigned short *)p), port);
 		p += 2;
 		count--;
-		
-		while (count>=2) {
+
+		while (count >= 2) {
 			count -= 2;
 			l = *(unsigned int *)p;
 			p += 4;
 			outw(le16_to_cpu(l >> 16), port);
 			outw(le16_to_cpu(l & 0xffff), port);
 		}
-		if (count) {
+		if (count)
 			outw(le16_to_cpu(*(unsigned short *)p), port);
-		}
 		break;
-	
-	 case 0x01:			/* Buffer 8-bit aligned */	
+
+	 case 0x01:			/* Buffer 8-bit aligned */
 		/* I don't bother with 32bit transfers
 		 * in this case, 16bit will have to do -- DE */
-		
+
 		l = *p << 8;
 		p++;
 		count--;
-		while (count)
-		{
+		while (count) {
 			count--;
 			l2 = *(unsigned short *)p;
 			p += 2;
 			outw(le16_to_cpu(l | l2 >> 8), port);
-		        l = l2 << 8;
+			l = l2 << 8;
 		}
 		l2 = *(unsigned char *)p;
-		outw (le16_to_cpu(l | l2>>8), port);
+		outw(le16_to_cpu(l | l2>>8), port);
 		break;
-	
+
 	}
 }
 
@@ -405,41 +391,38 @@ void outsw (unsigned long port, const void *src, unsigned long count)
  * Performance is important, but the interfaces seems to be slow:
  * just using the inlined version of the outl() breaks things.
  */
-void outsl (unsigned long port, const void *src, unsigned long count)
+void outsl(unsigned long port, const void *src, unsigned long count)
 {
 	unsigned int l = 0, l2;
 	const unsigned char *p;
 
 	p = (const unsigned char *)src;
-	
+
 	if (!count)
 		return;
-	
-	switch (((unsigned long)p) & 0x3)
-	{
+
+	switch (((unsigned long)p) & 0x3) {
 	 case 0x00:			/* Buffer 32-bit aligned */
-		while (count--)
-		{
+		while (count--) {
 			outl(le32_to_cpu(*(unsigned int *)p), port);
 			p += 4;
 		}
 		break;
-	
+
 	 case 0x02:			/* Buffer 16-bit aligned */
 		--count;
-		
+
 		l = *(unsigned short *)p;
 		p += 2;
-		
-		while (count--)
-		{
+
+		while (count--) {
 			l2 = *(unsigned int *)p;
 			p += 4;
-			outl (le32_to_cpu(l << 16 | l2 >> 16), port);
+			outl(le32_to_cpu(l << 16 | l2 >> 16), port);
 			l = l2;
 		}
 		l2 = *(unsigned short *)p;
-		outl (le32_to_cpu(l << 16 | l2), port);
+		outl(le32_to_cpu(l << 16 | l2), port);
 		break;
 	 case 0x01:			/* Buffer 8-bit aligned */
 		--count;
@@ -449,33 +432,31 @@ void outsl (unsigned long port, const void *src, unsigned long count)
 		l |= *(unsigned short *)p << 8;
 		p += 2;
 
-		while (count--)
-		{
+		while (count--) {
 			l2 = *(unsigned int *)p;
 			p += 4;
-			outl (le32_to_cpu(l | l2 >> 24), port);
+			outl(le32_to_cpu(l | l2 >> 24), port);
 			l = l2 << 8;
 		}
 		l2 = *p;
-		outl (le32_to_cpu(l | l2), port);
+		outl(le32_to_cpu(l | l2), port);
 		break;
 	 case 0x03:			/* Buffer 8-bit aligned */
 		--count;
-		
+
 		l = *p << 24;
 		p++;
-		while (count--)
-		{
+		while (count--) {
 			l2 = *(unsigned int *)p;
 			p += 4;
-			outl (le32_to_cpu(l | l2 >> 8), port);
+			outl(le32_to_cpu(l | l2 >> 8), port);
 			l = l2 << 24;
 		}
 		l2 = *(unsigned short *)p << 16;
 		p += 2;
 		l2 |= *p;
-		outl (le32_to_cpu(l | l2), port);
+		outl(le32_to_cpu(l | l2), port);
 		break;
 	}
 }