From patchwork Tue Dec 1 17:51:41 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11943437
From: Minchan Kim
To: Andrew Morton
Cc: LKML , linux-mm , hyesoo.yu@samsung.com, willy@infradead.org, david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com, linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org, christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v2 1/4] mm: introduce alloc_contig_mode
Date: Tue, 1 Dec 2020 09:51:41 -0800
Message-Id: <20201201175144.3996569-2-minchan@kernel.org>
In-Reply-To: <20201201175144.3996569-1-minchan@kernel.org>
References: <20201201175144.3996569-1-minchan@kernel.org>

There are demands to control how hard alloc_contig_range tries to increase the allocation success ratio. This patch abstracts that by adding a new enum mode parameter to alloc_contig_range. A new API in the next patch will add a further mode there to control it. This patch shouldn't change any existing behavior.

Suggested-by: David Hildenbrand
Signed-off-by: Minchan Kim
--- drivers/virtio/virtio_mem.c | 2 +- include/linux/gfp.h | 8 +++++++- mm/cma.c | 3 ++- mm/page_alloc.c | 6 ++++-- 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 9fc9ec4a25f5..5585fc67b65e 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -1148,7 +1148,7 @@ static int virtio_mem_fake_offline(unsigned long pfn, unsigned long nr_pages) */ for (retry_count = 0; retry_count < 5; retry_count++) { rc = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE, - GFP_KERNEL); + GFP_KERNEL, ALLOC_CONTIG_NORMAL); if (rc == -ENOMEM) /* whoops, out of memory */ return rc; diff --git a/include/linux/gfp.h b/include/linux/gfp.h index c603237e006c..ad5872699692 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -624,9 +624,15 @@ static inline bool pm_suspended_storage(void) #endif /* CONFIG_PM_SLEEP */ #ifdef CONFIG_CONTIG_ALLOC +enum alloc_contig_mode { + /* try several ways to increase success ratio of memory allocation */ + ALLOC_CONTIG_NORMAL, +}; + /* The below functions must be run on a range from a single zone.
*/ extern int alloc_contig_range(unsigned long start, unsigned long end, - unsigned migratetype, gfp_t gfp_mask); + unsigned migratetype, gfp_t gfp_mask, + enum alloc_contig_mode mode); extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, int nid, nodemask_t *nodemask); #endif diff --git a/mm/cma.c b/mm/cma.c index 3692a34e2353..8010c1ba04b0 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -454,7 +454,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit); ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, - GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)); + GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0), + ALLOC_CONTIG_NORMAL); if (ret == 0) { page = pfn_to_page(pfn); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f91df593bf71..adfbfd95fbc3 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8533,6 +8533,7 @@ static void __alloc_contig_clear_range(unsigned long start_pfn, * be either of the two. * @gfp_mask: GFP mask to use during compaction. __GFP_ZERO clears allocated * pages. + * @mode: how hard it will try to increase allocation success ratio * * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES * aligned. The PFN range must belong to a single zone. @@ -8546,7 +8547,8 @@ static void __alloc_contig_clear_range(unsigned long start_pfn, * need to be freed with free_contig_range(). */ int alloc_contig_range(unsigned long start, unsigned long end, - unsigned migratetype, gfp_t gfp_mask) + unsigned migratetype, gfp_t gfp_mask, + enum alloc_contig_mode mode) { unsigned long outer_start, outer_end; unsigned int order; @@ -8689,7 +8691,7 @@ static int __alloc_contig_pages(unsigned long start_pfn, unsigned long end_pfn = start_pfn + nr_pages; return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE, - gfp_mask); + gfp_mask, ALLOC_CONTIG_NORMAL); } static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
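For illustration, a minimal sketch of a hypothetical built-in caller of the extended interface using the default mode; the function name and the surrounding logic are made up for the example, and only the alloc_contig_range()/free_contig_range() calls come from the patch above:

#include <linux/gfp.h>

/*
 * Hypothetical example only: grab a physically contiguous PFN range with the
 * default mode. Requires CONFIG_CONTIG_ALLOC and built-in code, since
 * alloc_contig_range() is not exported to modules.
 */
static int example_grab_range(unsigned long start_pfn, unsigned int nr_pages)
{
	int ret;

	ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				 MIGRATE_MOVABLE, GFP_KERNEL,
				 ALLOC_CONTIG_NORMAL);
	if (ret)
		return ret;	/* e.g. -EBUSY if the range could not be isolated */

	/* ... use the physically contiguous pages ... */

	free_contig_range(start_pfn, nr_pages);
	return 0;
}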
From patchwork Tue Dec 1 17:51:42 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11943433
From: Minchan Kim
To: Andrew Morton
Cc: LKML , linux-mm , hyesoo.yu@samsung.com, willy@infradead.org, david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com, linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org, christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v2 2/4] mm: introduce cma_alloc_bulk API
Date: Tue, 1 Dec 2020 09:51:42 -0800
Message-Id: <20201201175144.3996569-3-minchan@kernel.org>
In-Reply-To: <20201201175144.3996569-1-minchan@kernel.org>
References: <20201201175144.3996569-1-minchan@kernel.org>

There is special hardware that requires bulk allocation of high-order pages: for example, 4800 order-4 pages as a minimum, and sometimes more. One option to meet the requirement is to reserve a 300M CMA area and request the whole 300M as contiguous memory at once. However, that doesn't work if even one page in the range is long-term pinned, directly or indirectly. The other option is to repeatedly ask for a higher-order size (e.g., 2M) than the requested order (64K) until the driver has gathered the necessary amount of memory. Basically, this approach makes the allocation very slow due to cma_alloc's slowness, and it can get stuck on one of the pageblocks if it encounters an unmigratable page.

To solve the issue, this patch introduces cma_alloc_bulk.

  int cma_alloc_bulk(struct cma *cma, unsigned int align, bool fast,
                     unsigned int order, size_t nr_requests,
                     struct page **page_array, size_t *nr_allocated);

Most parameters are the same as for cma_alloc, but it additionally takes an array in which to store the allocated pages. What's different from cma_alloc is that it skips pageblocks containing unmovable pages without waiting or stopping, so the API keeps scanning other pageblocks to find pages of the requested order. cma_alloc_bulk is a best-effort approach in that, unlike cma_alloc, it skips pageblocks that hold unmovable pages; it doesn't need to be perfect from the beginning, at the cost of allocation success ratio. Thus, the API takes a "bool fast" parameter, which is propagated into alloc_contig_range so that the costly operations used to increase the CMA allocation success ratio (e.g., migration retries, PCP and LRU draining per pageblock) are skipped, trading a lower success ratio for speed. If the caller couldn't allocate enough pages, it can call the API again with fast == false to increase the success ratio, if it is willing to pay the extra overhead.

Signed-off-by: Minchan Kim
--- include/linux/cma.h | 5 ++ include/linux/gfp.h | 2 + mm/cma.c | 126 ++++++++++++++++++++++++++++++++++++++++++-- mm/page_alloc.c | 19 ++++--- 4 files changed, 140 insertions(+), 12 deletions(-) diff --git a/include/linux/cma.h b/include/linux/cma.h index 217999c8a762..7375d3131804 100644 --- a/include/linux/cma.h +++ b/include/linux/cma.h @@ -46,6 +46,11 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, struct cma **res_cma); extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, bool no_warn); + +extern int cma_alloc_bulk(struct cma *cma, unsigned int align, bool fast, + unsigned int order, size_t nr_requests, + struct page **page_array, size_t *nr_allocated); + extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count); extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data); diff --git a/include/linux/gfp.h b/include/linux/gfp.h index ad5872699692..75bfb673d75b 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -627,6 +627,8 @@ static inline bool pm_suspended_storage(void) enum alloc_contig_mode { /* try several ways to increase success ratio of memory allocation */ ALLOC_CONTIG_NORMAL, + /* avoid costly functions to make the call fast */ + ALLOC_CONTIG_FAST, }; /* The below functions must be run on a range from a single zone.
*/ diff --git a/mm/cma.c b/mm/cma.c index 8010c1ba04b0..4459045fa717 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include "cma.h" @@ -397,6 +398,14 @@ static void cma_debug_show_areas(struct cma *cma) static inline void cma_debug_show_areas(struct cma *cma) { } #endif +static void reset_page_kasan_tag(struct page *page, int count) +{ + int i; + + for (i = 0; i < count; i++) + page_kasan_tag_reset(page + i); +} + /** * cma_alloc() - allocate pages from contiguous area * @cma: Contiguous memory region for which the allocation is performed. @@ -414,7 +423,6 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, unsigned long pfn = -1; unsigned long start = 0; unsigned long bitmap_maxno, bitmap_no, bitmap_count; - size_t i; struct page *page = NULL; int ret = -ENOMEM; @@ -479,10 +487,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, * blocks being marked with different tags. Reset the tags to ignore * those page blocks. */ - if (page) { - for (i = 0; i < count; i++) - page_kasan_tag_reset(page + i); - } + if (page) + reset_page_kasan_tag(page, count); if (ret && !no_warn) { pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n", @@ -494,6 +500,116 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, return page; } +/* + * cma_alloc_bulk() - allocate high order bulk pages from contiguous area with + * best effort. It will usually be used for private @cma + * + * @cma: contiguous memory region for which the allocation is performed. + * @align: requested alignment of pages (in PAGE_SIZE order). + * @fast: will skip costly opeartions if it's true. + * @order: requested page order + * @nr_requests: the number of 2^order pages requested to be allocated as input, + * @page_array: page_array pointer to store allocated pages (must have space + * for at least nr_requests) + * @nr_allocated: the number of 2^order pages allocated as output + * + * This function tries to allocate up to @nr_requests @order pages on specific + * contiguous memory area. If @fast has true, it will avoid costly functions + * to increase allocation success ratio so it will be faster but might return + * less than requested number of pages. User could retry it with true if it is + * needed. + * + * Return: it will return 0 only if all pages requested by @nr_requestsed are + * allocated. Otherwise, it returns negative error code. + * + * Note: Regardless of success/failure, user should check @nr_allocated to see + * how many @order pages are allocated and free those pages when they are not + * needed. + */ +int cma_alloc_bulk(struct cma *cma, unsigned int align, bool fast, + unsigned int order, size_t nr_requests, + struct page **page_array, size_t *nr_allocated) +{ + int ret = 0; + size_t i = 0; + unsigned long nr_pages_needed = nr_requests * (1 << order); + unsigned long nr_chunk_pages, nr_pages; + unsigned long mask, offset; + unsigned long pfn = -1; + unsigned long start = 0; + unsigned long bitmap_maxno, bitmap_no, bitmap_count; + struct page *page = NULL; + enum alloc_contig_mode mode = fast ? 
ALLOC_CONTIG_FAST : + ALLOC_CONTIG_NORMAL; + *nr_allocated = 0; + if (!cma || !cma->count || !cma->bitmap || !page_array) + return -EINVAL; + + if (!nr_pages_needed) + return 0; + + nr_chunk_pages = 1 << max_t(unsigned int, order, pageblock_order); + + mask = cma_bitmap_aligned_mask(cma, align); + offset = cma_bitmap_aligned_offset(cma, align); + bitmap_maxno = cma_bitmap_maxno(cma); + + lru_add_drain_all(); + drain_all_pages(NULL); + + while (nr_pages_needed) { + nr_pages = min(nr_chunk_pages, nr_pages_needed); + + bitmap_count = cma_bitmap_pages_to_bits(cma, nr_pages); + mutex_lock(&cma->lock); + bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap, + bitmap_maxno, start, bitmap_count, mask, + offset); + if (bitmap_no >= bitmap_maxno) { + mutex_unlock(&cma->lock); + break; + } + bitmap_set(cma->bitmap, bitmap_no, bitmap_count); + /* + * It's safe to drop the lock here. If the migration fails + * cma_clear_bitmap will take the lock again and unmark it. + */ + mutex_unlock(&cma->lock); + + pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit); + ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA, + GFP_KERNEL|__GFP_NOWARN, mode); + if (ret) { + cma_clear_bitmap(cma, pfn, nr_pages); + if (ret != -EBUSY) + break; + + /* continue to search next block */ + start = (pfn + nr_pages - cma->base_pfn) >> + cma->order_per_bit; + continue; + } + + page = pfn_to_page(pfn); + while (nr_pages) { + page_array[i++] = page; + reset_page_kasan_tag(page, 1 << order); + page += 1 << order; + nr_pages -= 1 << order; + nr_pages_needed -= 1 << order; + } + + start = bitmap_no + bitmap_count; + } + + *nr_allocated = i; + + if (!ret && nr_pages_needed) + ret = -EBUSY; + + return ret; +} + /** * cma_release() - release allocated pages * @cma: Contiguous memory region for which the allocation is performed. diff --git a/mm/page_alloc.c b/mm/page_alloc.c index adfbfd95fbc3..2a1799ff14fc 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8463,7 +8463,8 @@ static unsigned long pfn_max_align_up(unsigned long pfn) /* [start, end) must belong to a single zone. */ static int __alloc_contig_migrate_range(struct compact_control *cc, - unsigned long start, unsigned long end) + unsigned long start, unsigned long end, + unsigned int max_tries) { /* This function is based on compact_zone() from compaction.c. */ unsigned int nr_reclaimed; @@ -8491,7 +8492,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc, break; } tries = 0; - } else if (++tries == 5) { + } else if (++tries == max_tries) { ret = ret < 0 ? ret : -EBUSY; break; } @@ -8553,6 +8554,7 @@ int alloc_contig_range(unsigned long start, unsigned long end, unsigned long outer_start, outer_end; unsigned int order; int ret = 0; + bool fast_mode = mode == ALLOC_CONTIG_FAST; struct compact_control cc = { .nr_migratepages = 0, @@ -8595,7 +8597,8 @@ int alloc_contig_range(unsigned long start, unsigned long end, if (ret) return ret; - drain_all_pages(cc.zone); + if (!fast_mode) + drain_all_pages(cc.zone); /* * In case of -EBUSY, we'd like to know which page causes problem. @@ -8607,7 +8610,7 @@ int alloc_contig_range(unsigned long start, unsigned long end, * allocated. So, if we fall through be sure to clear ret so that * -EBUSY is not accidentally used or returned to caller. */ - ret = __alloc_contig_migrate_range(&cc, start, end); + ret = __alloc_contig_migrate_range(&cc, start, end, fast_mode ? 
1 : 5); if (ret && ret != -EBUSY) goto done; ret =0; @@ -8629,7 +8632,8 @@ int alloc_contig_range(unsigned long start, unsigned long end, * isolated thus they won't get removed from buddy. */ - lru_add_drain_all(); + if (!fast_mode) + lru_add_drain_all(); order = 0; outer_start = start; @@ -8656,8 +8660,9 @@ int alloc_contig_range(unsigned long start, unsigned long end, /* Make sure the range is really isolated. */ if (test_pages_isolated(outer_start, end, 0)) { - pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n", - __func__, outer_start, end); + if (!fast_mode) + pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n", + __func__, outer_start, end); ret = -EBUSY; goto done; }
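For illustration, a sketch of the calling pattern the new API is intended for, derived from the cma_alloc_bulk() prototype and the @nr_allocated semantics documented above; the driver-side names, the order-4 chunk size, and the retry policy are assumptions for the example:

#include <linux/cma.h>

/*
 * Hypothetical example only: gather 'nr' order-4 chunks from a private CMA
 * area, trying the fast mode first and retrying the remainder with the
 * slower, more thorough mode.
 */
static int example_fill_pool(struct cma *cma, struct page **chunks, size_t nr)
{
	size_t done, got;
	int ret;

	ret = cma_alloc_bulk(cma, 4, true, 4, nr, chunks, &got);
	done = got;
	if (ret && done < nr) {
		/* fast == false: pay the draining/retry cost for a better ratio */
		ret = cma_alloc_bulk(cma, 4, false, 4, nr - done,
				     chunks + done, &got);
		done += got;
	}

	if (done < nr) {
		size_t i;

		/* the caller must free whatever was handed out, even on failure */
		for (i = 0; i < done; i++)
			cma_release(cma, chunks[i], 1 << 4);
		return ret ? ret : -EBUSY;
	}

	return 0;
}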
From patchwork Tue Dec 1 17:51:43 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11943439
From: Minchan Kim
To: Andrew Morton
Cc: LKML , linux-mm , hyesoo.yu@samsung.com, willy@infradead.org, david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com, linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org, christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v2 3/4] dma-buf: add export symbol for dma-heap
Date: Tue, 1 Dec 2020 09:51:43 -0800
Message-Id: <20201201175144.3996569-4-minchan@kernel.org>
In-Reply-To: <20201201175144.3996569-1-minchan@kernel.org>
References: <20201201175144.3996569-1-minchan@kernel.org>

From: Hyesoo Yu

The heaps could be added as modules, so some functions should be exported to register dma-heaps. A dma-heap built as a module can also use a CMA area to allocate and free memory, but the CMA-related functions are not exported yet. Let's export them for the following patches.
Signed-off-by: Hyesoo Yu Signed-off-by: Minchan Kim --- drivers/dma-buf/dma-heap.c | 2 ++ kernel/dma/contiguous.c | 1 + mm/cma.c | 5 +++++ 3 files changed, 8 insertions(+) diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index afd22c9dbdcf..cc6339cbca09 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -189,6 +189,7 @@ void *dma_heap_get_drvdata(struct dma_heap *heap) { return heap->priv; } +EXPORT_SYMBOL_GPL(dma_heap_get_drvdata); struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) { @@ -272,6 +273,7 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) kfree(heap); return err_ret; } +EXPORT_SYMBOL_GPL(dma_heap_add); static char *dma_heap_devnode(struct device *dev, umode_t *mode) { diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c index 3d63d91cba5c..7e9777119b29 100644 --- a/kernel/dma/contiguous.c +++ b/kernel/dma/contiguous.c @@ -58,6 +58,7 @@ #endif struct cma *dma_contiguous_default_area; +EXPORT_SYMBOL_GPL(dma_contiguous_default_area); /* * Default global CMA area size can be defined in kernel's .config. diff --git a/mm/cma.c b/mm/cma.c index 4459045fa717..d39cb7066b9e 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include "cma.h" @@ -54,6 +55,7 @@ const char *cma_get_name(const struct cma *cma) { return cma->name; } +EXPORT_SYMBOL_GPL(cma_get_name); static unsigned long cma_bitmap_aligned_mask(const struct cma *cma, unsigned int align_order) @@ -499,6 +501,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align, pr_debug("%s(): returned %p\n", __func__, page); return page; } +EXPORT_SYMBOL_GPL(cma_alloc); /* * cma_alloc_bulk() - allocate high order bulk pages from contiguous area with @@ -609,6 +612,7 @@ int cma_alloc_bulk(struct cma *cma, unsigned int align, bool fast, return ret; } +EXPORT_SYMBOL_GPL(cma_alloc_bulk); /** * cma_release() - release allocated pages @@ -642,6 +646,7 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count) return true; } +EXPORT_SYMBOL_GPL(cma_release); int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data) {
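For illustration, a sketch of a modular heap that depends on the symbols exported above; the example names and the trivial ops table are hypothetical, the use of the default CMA area mirrors the chunk heap added in the next patch, and dev_get_cma_area() is assumed to live in linux/dma-map-ops.h on this kernel:

#include <linux/module.h>
#include <linux/err.h>
#include <linux/cma.h>
#include <linux/dma-heap.h>
#include <linux/dma-map-ops.h>	/* assumed location of dev_get_cma_area() */

/* Hypothetical allocate callback; a real heap would export a dma-buf here. */
static int example_allocate(struct dma_heap *heap, unsigned long len,
			    unsigned long fd_flags, unsigned long heap_flags)
{
	return -ENOMEM;
}

static const struct dma_heap_ops example_heap_ops = {
	.allocate = example_allocate,
};

static int __init example_heap_init(void)
{
	/*
	 * dev_get_cma_area(NULL) resolves to dma_contiguous_default_area,
	 * which is why that variable is exported for modules above.
	 */
	struct cma *cma = dev_get_cma_area(NULL);
	struct dma_heap_export_info exp_info = { };
	struct dma_heap *heap;

	if (!cma)
		return -ENODEV;

	exp_info.name = cma_get_name(cma);	/* needs EXPORT_SYMBOL_GPL(cma_get_name) */
	exp_info.ops = &example_heap_ops;
	exp_info.priv = cma;

	heap = dma_heap_add(&exp_info);		/* needs EXPORT_SYMBOL_GPL(dma_heap_add) */
	return PTR_ERR_OR_ZERO(heap);
}
module_init(example_heap_init);
MODULE_LICENSE("GPL v2");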
From patchwork Tue Dec 1 17:51:44 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11943435
From: Minchan Kim
To: Andrew Morton
Cc: LKML , linux-mm , hyesoo.yu@samsung.com, willy@infradead.org, david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com, linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org, christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v2 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Tue, 1 Dec 2020 09:51:44 -0800
Message-Id: <20201201175144.3996569-5-minchan@kernel.org>
In-Reply-To: <20201201175144.3996569-1-minchan@kernel.org>
References: <20201201175144.3996569-1-minchan@kernel.org>

From: Hyesoo Yu

This patch adds a chunk heap that allocates buffers arranged as a list of fixed-size chunks taken from CMA. The chunk heap doesn't use the heap-helpers, even though they could remove duplicated code, because the heap-helpers are in the process of being deprecated. [1]

NOTE: This patch only adds the default CMA heap for allocating chunk pages. We will add other CMA memory regions to the dmabuf heaps interface with a later patch (which requires a dt binding).

[1] https://lore.kernel.org/patchwork/patch/1336002

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
--- drivers/dma-buf/heaps/Kconfig | 15 + drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/chunk_heap.c | 429 +++++++++++++++++++++++++++++ 3 files changed, 445 insertions(+) create mode 100644 drivers/dma-buf/heaps/chunk_heap.c diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index a5eef06c4226..9153f83afed7 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -12,3 +12,18 @@ config DMABUF_HEAPS_CMA Choose this option to enable dma-buf CMA heap. This heap is backed by the Contiguous Memory Allocator (CMA). If your system has these regions, you should say Y here. + +config DMABUF_HEAPS_CHUNK + tristate "DMA-BUF CHUNK Heap" + depends on DMABUF_HEAPS && DMA_CMA + help + Choose this option to enable dma-buf CHUNK heap. This heap is backed + by the Contiguous Memory Allocator (CMA) and allocates the buffers that + arranged into a list of fixed size chunks taken from CMA. + +config DMABUF_HEAPS_CHUNK_ORDER + int "Chunk page order for dmabuf chunk heap" + default 4 + depends on DMABUF_HEAPS_CHUNK + help + Set page order of fixed chunk size to allocate from CMA. diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 974467791032..8faa6cfdc0c5 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o +obj-$(CONFIG_DMABUF_HEAPS_CHUNK) += chunk_heap.o diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c new file mode 100644 index 000000000000..0277707a93a9 --- /dev/null +++ b/drivers/dma-buf/heaps/chunk_heap.c @@ -0,0 +1,429 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ION Memory Allocator chunk heap exporter + * + * Copyright (c) 2020 Samsung Electronics Co., Ltd. + * Author: for Samsung Electronics.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct chunk_heap { + struct dma_heap *heap; + unsigned int order; + struct cma *cma; +}; + +struct chunk_heap_buffer { + struct chunk_heap *heap; + struct list_head attachments; + struct mutex lock; + struct sg_table sg_table; + unsigned long len; + int vmap_cnt; + void *vaddr; +}; + +struct chunk_heap_attachment { + struct device *dev; + struct sg_table *table; + struct list_head list; + bool mapped; +}; + +static struct sg_table *dup_sg_table(struct sg_table *table) +{ + struct sg_table *new_table; + int ret, i; + struct scatterlist *sg, *new_sg; + + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); + if (!new_table) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL); + if (ret) { + kfree(new_table); + return ERR_PTR(-ENOMEM); + } + + new_sg = new_table->sgl; + for_each_sgtable_sg(table, sg, i) { + sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset); + new_sg = sg_next(new_sg); + } + + return new_table; +} + +static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + struct sg_table *table; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + table = dup_sg_table(&buffer->sg_table); + if (IS_ERR(table)) { + kfree(a); + return -ENOMEM; + } + + a->table = table; + a->dev = attachment->dev; + INIT_LIST_HEAD(&a->list); + a->mapped = false; + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + + sg_free_table(a->table); + kfree(a->table); + kfree(a); +} + +static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct chunk_heap_attachment *a = attachment->priv; + struct sg_table *table = a->table; + int ret; + + ret = dma_map_sgtable(attachment->dev, table, direction, 0); + if (ret) + return ERR_PTR(ret); + + a->mapped = true; + return table; +} + +static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct chunk_heap_attachment *a = attachment->priv; + + a->mapped = false; + dma_unmap_sgtable(attachment->dev, table, direction, 0); +} + +static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + + mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + invalidate_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_cpu(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + + mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + 
flush_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_device(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct sg_table *table = &buffer->sg_table; + unsigned long addr = vma->vm_start; + struct sg_page_iter piter; + int ret; + + for_each_sgtable_page(table, &piter, vma->vm_pgoff) { + struct page *page = sg_page_iter_page(&piter); + + ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE, + vma->vm_page_prot); + if (ret) + return ret; + addr += PAGE_SIZE; + if (addr >= vma->vm_end) + return 0; + } + return 0; +} + +static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer) +{ + struct sg_table *table = &buffer->sg_table; + int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE; + struct page **pages = vmalloc(sizeof(struct page *) * npages); + struct page **tmp = pages; + struct sg_page_iter piter; + void *vaddr; + + if (!pages) + return ERR_PTR(-ENOMEM); + + for_each_sgtable_page(table, &piter, 0) { + WARN_ON(tmp - pages >= npages); + *tmp++ = sg_page_iter_page(&piter); + } + + vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + vfree(pages); + + if (!vaddr) + return ERR_PTR(-ENOMEM); + + return vaddr; +} + +static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + int ret = 0; + void *vaddr; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) { + vaddr = buffer->vaddr; + goto done; + } + + vaddr = chunk_heap_do_vmap(buffer); + if (IS_ERR(vaddr)) { + ret = PTR_ERR(vaddr); + goto err; + } + + buffer->vaddr = vaddr; +done: + buffer->vmap_cnt++; + dma_buf_map_set_vaddr(map, vaddr); +err: + mutex_unlock(&buffer->lock); + + return ret; +} + +static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + + mutex_lock(&buffer->lock); + if (!--buffer->vmap_cnt) { + vunmap(buffer->vaddr); + buffer->vaddr = NULL; + } + mutex_unlock(&buffer->lock); +} + +static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap *chunk_heap = buffer->heap; + struct sg_table *table; + struct scatterlist *sg; + int i; + + table = &buffer->sg_table; + for_each_sgtable_sg(table, sg, i) + cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order); + sg_free_table(table); + kfree(buffer); +} + +static const struct dma_buf_ops chunk_heap_buf_ops = { + .attach = chunk_heap_attach, + .detach = chunk_heap_detach, + .map_dma_buf = chunk_heap_map_dma_buf, + .unmap_dma_buf = chunk_heap_unmap_dma_buf, + .begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access, + .end_cpu_access = chunk_heap_dma_buf_end_cpu_access, + .mmap = chunk_heap_mmap, + .vmap = chunk_heap_vmap, + .vunmap = chunk_heap_vunmap, + .release = chunk_heap_dma_buf_release, +}; + +static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len, + unsigned long fd_flags, unsigned long heap_flags) +{ + struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap); + struct chunk_heap_buffer *buffer; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + struct sg_table *table; + struct scatterlist *sg; + struct page **pages; + unsigned int chunk_size = PAGE_SIZE << chunk_heap->order; + unsigned int count, alloced = 0; + unsigned int 
num_retry = 5; + int ret = -ENOMEM; + pgoff_t pg; + + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) + return ret; + + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->heap = chunk_heap; + buffer->len = ALIGN(len, chunk_size); + count = buffer->len / chunk_size; + + pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL); + if (!pages) + goto err_pages; + + while (num_retry--) { + unsigned long nr_pages; + + ret = cma_alloc_bulk(chunk_heap->cma, chunk_heap->order, + num_retry ? true : false, + chunk_heap->order, count - alloced, + pages + alloced, &nr_pages); + alloced += nr_pages; + if (alloced == count) + break; + if (ret != -EBUSY) + break; + + } + if (ret < 0) + goto err_alloc; + + table = &buffer->sg_table; + if (sg_alloc_table(table, count, GFP_KERNEL)) + goto err_alloc; + + sg = table->sgl; + for (pg = 0; pg < count; pg++) { + sg_set_page(sg, pages[pg], chunk_size, 0); + sg = sg_next(sg); + } + + exp_info.ops = &chunk_heap_buf_ops; + exp_info.size = buffer->len; + exp_info.flags = fd_flags; + exp_info.priv = buffer; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto err_export; + } + kvfree(pages); + + ret = dma_buf_fd(dmabuf, fd_flags); + if (ret < 0) { + dma_buf_put(dmabuf); + return ret; + } + + return 0; +err_export: + sg_free_table(table); +err_alloc: + for (pg = 0; pg < alloced; pg++) + cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order); + kvfree(pages); +err_pages: + kfree(buffer); + + return ret; +} + +static const struct dma_heap_ops chunk_heap_ops = { + .allocate = chunk_heap_allocate, +}; + +#ifdef CONFIG_DMABUF_HEAPS_CHUNK_ORDER +#define CHUNK_HEAP_ORDER (CONFIG_DMABUF_HEAPS_CHUNK_ORDER) +#else +#define CHUNK_HEAP_ORDER (0) +#endif + +static int __init chunk_heap_init(void) +{ + struct cma *default_cma = dev_get_cma_area(NULL); + struct dma_heap_export_info exp_info; + struct chunk_heap *chunk_heap; + + if (!default_cma) + return 0; + + chunk_heap = kzalloc(sizeof(*chunk_heap), GFP_KERNEL); + if (!chunk_heap) + return -ENOMEM; + + chunk_heap->order = CHUNK_HEAP_ORDER; + chunk_heap->cma = default_cma; + + exp_info.name = cma_get_name(default_cma); + exp_info.ops = &chunk_heap_ops; + exp_info.priv = chunk_heap; + + chunk_heap->heap = dma_heap_add(&exp_info); + if (IS_ERR(chunk_heap->heap)) { + int ret = PTR_ERR(chunk_heap->heap); + + kfree(chunk_heap); + return ret; + } + + return 0; +} +module_init(chunk_heap_init); +MODULE_DESCRIPTION("DMA-BUF Chunk Heap"); +MODULE_LICENSE("GPL v2");
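For illustration, once the chunk heap has registered itself it behaves like any other dma-buf heap toward userspace, so a buffer can be allocated through the standard dma-heap ioctl. The heap node name below is an assumption, since the heap is registered under the name of the default CMA area and the actual path depends on the system:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data data = {
		.len = 16 << 20,	/* 16MB; rounded up to the chunk size in-kernel */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	/* "reserved" is only an example name for the default CMA area */
	int heap_fd = open("/dev/dma_heap/reserved", O_RDONLY | O_CLOEXEC);

	if (heap_fd < 0)
		return 1;
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
		close(heap_fd);
		return 1;
	}
	printf("dma-buf fd: %u\n", data.fd);	/* map or share the dma-buf as usual */
	close(data.fd);
	close(heap_fd);
	return 0;
}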