Date: Wed, 18 Jun 2014 13:48:15 -0700
From: Andrew Morton
To: Joonsoo Kim
Cc: "Aneesh Kumar K.V", Marek Szyprowski, Michal Nazarewicz, Minchan Kim,
 Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini, Gleb Natapov,
 Alexander Graf, Benjamin Herrenschmidt, Paul Mackerras, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 Zhang Yanfei
Subject: Re: [PATCH v3 -next 4/9] DMA, CMA: support arbitrary bitmap granularity
Message-Id: <20140618134815.69c4d0a5f916846f9857e9ff@linux-foundation.org>
In-Reply-To: <1402897251-23639-5-git-send-email-iamjoonsoo.kim@lge.com>
References: <1402897251-23639-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1402897251-23639-5-git-send-email-iamjoonsoo.kim@lge.com>

On Mon, 16 Jun 2014 14:40:46 +0900 Joonsoo Kim wrote:

> PPC KVM's CMA area management requires arbitrary bitmap granularity,
> since it wants to reserve very large memory regions and manage them
> with a bitmap in which one bit covers several pages, to reduce
> management overhead. So support arbitrary bitmap granularity for the
> following generalization.
>
> ...
>
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -38,6 +38,7 @@ struct cma {
> 	unsigned long	base_pfn;
> 	unsigned long	count;
> 	unsigned long	*bitmap;
> +	unsigned int order_per_bit; /* Order of pages represented by one bit */
> 	struct mutex	lock;
> };
>
> @@ -157,9 +158,37 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
>
> static DEFINE_MUTEX(cma_mutex);
>
> +static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
> +{
> +	return (1 << (align_order >> cma->order_per_bit)) - 1;
> +}

Might want a "1UL << ..." here.
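
As a minimal standalone sketch of why (plain userspace C, not the kernel
code, assuming an LP64 system where unsigned long is 64 bits): the
literal 1 has type int, so "1 << n" is evaluated in 32-bit int
arithmetic even when the result is assigned to an unsigned long, and a
shift count of 31 or more is undefined behavior, so the upper bits of
the intended mask are lost in practice.

#include <stdio.h>

int main(void)
{
	int n = 33;	/* a shift count this large is plausible for a big CMA area */

	unsigned long bad  = (1 << n) - 1;	/* int arithmetic: undefined for n >= 31 */
	unsigned long good = (1UL << n) - 1;	/* evaluated at full unsigned long width */

	printf("bad  = %#lx\n", bad);	/* garbage; often 0x1 on x86-64, which masks the count */
	printf("good = %#lx\n", good);	/* 0x1ffffffff */
	return 0;
}
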
> +static unsigned long cma_bitmap_maxno(struct cma *cma)
> +{
> +	return cma->count >> cma->order_per_bit;
> +}
> +
> +static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
> +						unsigned long pages)
> +{
> +	return ALIGN(pages, 1 << cma->order_per_bit) >> cma->order_per_bit;
> +}

Ditto.  I'm not really sure what the compiler will do in these cases, but
would prefer not to rely on it anyway!

--- a/drivers/base/dma-contiguous.c~dma-cma-support-arbitrary-bitmap-granularity-fix
+++ a/drivers/base/dma-contiguous.c
@@ -160,7 +160,7 @@ static DEFINE_MUTEX(cma_mutex);
 
 static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 {
-	return (1 << (align_order >> cma->order_per_bit)) - 1;
+	return (1UL << (align_order >> cma->order_per_bit)) - 1;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
@@ -171,7 +171,7 @@ static unsigned long cma_bitmap_maxno(st
 static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
 						unsigned long pages)
 {
-	return ALIGN(pages, 1 << cma->order_per_bit) >> cma->order_per_bit;
+	return ALIGN(pages, 1UL << cma->order_per_bit) >> cma->order_per_bit;
 }
 
 static void cma_clear_bitmap(struct cma *cma, unsigned long pfn, int count)
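
For completeness, here is a userspace sketch of the pages-to-bits
conversion with the kernel's ALIGN() open-coded in its power-of-two
form; the helper name pages_to_bits and the sample values are just for
illustration.  With order_per_bit = 2, one bitmap bit covers 4 pages,
so a 5-page request rounds up to 2 bits.

#include <stdio.h>

/* Power-of-two round-up, as the kernel's ALIGN() macro does */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static unsigned long pages_to_bits(unsigned long pages, unsigned int order_per_bit)
{
	return ALIGN(pages, 1UL << order_per_bit) >> order_per_bit;
}

int main(void)
{
	/* order_per_bit = 2: one bitmap bit covers 1 << 2 = 4 pages */
	printf("%lu\n", pages_to_bits(5, 2));	/* 5 pages round up to 2 bits */
	printf("%lu\n", pages_to_bits(8, 2));	/* 8 pages fit exactly in 2 bits */
	return 0;
}
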