From patchwork Wed Jan 13 01:21:40 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12015539
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com,
    surenb@google.com, pullip.cho@samsung.com, joaodias@google.com,
    hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org,
    linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org,
    robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn
Date: Tue, 12 Jan 2021 17:21:40 -0800
Message-Id: <20210113012143.1201105-2-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>

The upcoming patch will introduce a __GFP_NORETRY semantic in
alloc_contig_range, which is a failfast mode of the API. Instead of
adding an additional gfp parameter, replace no_warn with a gfp flag.
To keep the old behaviors, call sites are converted by the rule below:

  no_warn               gfp_flags

  false                 GFP_KERNEL
  true                  GFP_KERNEL | __GFP_NOWARN
  gfp & __GFP_NOWARN    GFP_KERNEL | (gfp & __GFP_NOWARN)

Signed-off-by: Minchan Kim
Reviewed-by: Suren Baghdasaryan
---
 drivers/dma-buf/heaps/cma_heap.c |  2 +-
 drivers/s390/char/vmcp.c         |  2 +-
 include/linux/cma.h              |  2 +-
 kernel/dma/contiguous.c          |  3 ++-
 mm/cma.c                         | 12 ++++++------
 mm/cma_debug.c                   |  2 +-
 mm/hugetlb.c                     |  6 ++++--
 mm/secretmem.c                   |  3 ++-
 8 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 364fc2f3e499..0afc1907887a 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -298,7 +298,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
 	if (!cma_pages)
 		goto free_buffer;
 
diff --git a/drivers/s390/char/vmcp.c b/drivers/s390/char/vmcp.c
index 9e066281e2d0..78f9adf56456 100644
--- a/drivers/s390/char/vmcp.c
+++ b/drivers/s390/char/vmcp.c
@@ -70,7 +70,7 @@ static void vmcp_response_alloc(struct vmcp_session *session)
 	 * anymore the system won't work anyway.
 	 */
 	if (order > 2)
-		page = cma_alloc(vmcp_cma, nr_pages, 0, false);
+		page = cma_alloc(vmcp_cma, nr_pages, 0, GFP_KERNEL);
 	if (page) {
 		session->response = (char *)page_to_phys(page);
 		session->cma_alloc = 1;
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..d6c02d08ddbc 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -45,7 +45,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					const char *name,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-			      bool no_warn);
+			      gfp_t gfp_mask);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..552ed531c018 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -260,7 +260,8 @@ struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	return cma_alloc(dev_get_cma_area(dev), count, align, GFP_KERNEL |
+			(no_warn ? __GFP_NOWARN : 0));
 }
 
 /**
diff --git a/mm/cma.c b/mm/cma.c
index 0ba69cd16aeb..35053b82aedc 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
  * @cma:   Contiguous memory region for which the allocation is performed.
  * @count: Requested number of pages.
  * @align: Requested alignment of pages (in PAGE_SIZE order).
- * @no_warn: Avoid printing message about failed allocation
+ * @gfp_mask: GFP mask to use during during the cma allocation.
  *
  * This function allocates part of contiguous memory on specific
  * contiguous memory area.
  */
 struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-		       bool no_warn)
+		       gfp_t gfp_mask)
 {
 	unsigned long mask, offset;
 	unsigned long pfn = -1;
@@ -438,8 +438,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
 
-	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
-		 count, align);
+	pr_debug("%s(cma %p, count %zu, align %d gfp_mask 0x%x)\n", __func__,
+		 (void *)cma, count, align, gfp_mask);
 
 	if (!count)
 		return NULL;
@@ -471,7 +471,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+				gfp_mask);
 
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
@@ -500,7 +500,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			page_kasan_tag_reset(page + i);
 	}
 
-	if (ret && !no_warn) {
+	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
 		cma_debug_show_areas(cma);
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index d5bf8aa34fdc..00170c41cf81 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -137,7 +137,7 @@ static int cma_alloc_mem(struct cma *cma, int count)
 	if (!mem)
 		return -ENOMEM;
 
-	p = cma_alloc(cma, count, 0, false);
+	p = cma_alloc(cma, count, 0, GFP_KERNEL);
 	if (!p) {
 		kfree(mem);
 		return -ENOMEM;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 737b2dce19e6..695af33aa66c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1266,7 +1266,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 
 		if (hugetlb_cma[nid]) {
 			page = cma_alloc(hugetlb_cma[nid], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
@@ -1277,7 +1278,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 				continue;
 
 			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
diff --git a/mm/secretmem.c b/mm/secretmem.c
index b8a32954ac68..585d55b9f9d8 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -86,7 +86,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE,
+			 GFP_KERNEL | (gfp & __GFP_NOWARN));
 	if (!page)
 		return -ENOMEM;
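The conversion rule above amounts to mapping the old boolean onto a gfp mask.
A minimal sketch of that mapping, for illustration only (the helper name
no_warn_to_gfp() is made up and is not part of the posted series):

	/* Map the old bool no_warn convention onto the new gfp_mask argument. */
	static inline gfp_t no_warn_to_gfp(bool no_warn)
	{
		return GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0);
	}

	/* old call site: cma_alloc(cma, count, align, true);
	 * new call site: cma_alloc(cma, count, align, no_warn_to_gfp(true));
	 */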
From patchwork Wed Jan 13 01:21:41 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12015541

From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com,
    surenb@google.com, pullip.cho@samsung.com, joaodias@google.com,
    hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org,
    linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org,
    robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
Date: Tue, 12 Jan 2021 17:21:41 -0800
Message-Id: <20210113012143.1201105-3-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>

Contiguous memory allocation can stall while waiting on page writeback
and/or page locks, which causes unpredictable delay. That is an
unavoidable cost when the requestor needs *big* contiguous memory, but
it is expensive for *small* contiguous memory (e.g., order-4), because
the caller could simply retry the request in a different range that has
easily migratable pages, without stalling.

This patch introduces __GFP_NORETRY as a compaction gfp_mask in
alloc_contig_range so that it fails fast, without blocking, when it
encounters pages that need such waiting.

Signed-off-by: Minchan Kim
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b3923db9158..ff41ceb4db51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),
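The failfast mode is meant for callers that can fall back, either to a
different range or to a blocking attempt, when the cheap attempt fails;
patch 4/4 in this series uses exactly that pattern in its allocation loop.
A caller-side sketch, assuming the cma_alloc() prototype from patch 1/4
(the wrapper name alloc_small_contig() is made up for the example):

	/* Try a non-blocking, failfast allocation first; only fall back to the
	 * slower MIGRATE_SYNC path if easily migratable pages were not found. */
	static struct page *alloc_small_contig(struct cma *cma, size_t count,
					       unsigned int align)
	{
		struct page *page;

		page = cma_alloc(cma, count, align,
				 GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
		if (!page)
			page = cma_alloc(cma, count, align, GFP_KERNEL);
		return page;
	}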
From patchwork Wed Jan 13 01:21:42 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12015543

From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com,
    surenb@google.com, pullip.cho@samsung.com, joaodias@google.com,
    hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org,
    linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org,
    robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
Date: Tue, 12 Jan 2021 17:21:42 -0800
Message-Id: <20210113012143.1201105-4-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>

From: Hyesoo Yu

Document the devicetree binding for the chunk CMA heap on the DMA heap
framework. The DMA chunk heap supports bulk allocation of higher-order
pages.

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
Signed-off-by: Hridya Valsaraju
Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
---
 .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
new file mode 100644
index 000000000000..3e7fed5fb006
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
@@ -0,0 +1,58 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
+
+description: |
+  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+  supports bulk allocation of fixed size pages.
+
+maintainers:
+  - Hyesoo Yu
+  - John Stultz
+  - Minchan Kim
+  - Hridya Valsaraju
+
+
+properties:
+  compatible:
+    enum:
+      - dma_heap,chunk
+
+  chunk-order:
+    description: |
+      order of pages that will get allocated from the chunk DMA heap.
+    maxItems: 1
+
+  size:
+    maxItems: 1
+
+  alignment:
+    maxItems: 1
+
+required:
+  - compatible
+  - size
+  - alignment
+  - chunk-order
+
+additionalProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <1>;
+
+        chunk_memory: chunk_memory {
+            compatible = "dma_heap,chunk";
+            size = <0x3000000>;
+            alignment = <0x0 0x00010000>;
+            chunk-order = <4>;
+        };
+    };
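As a worked example of how the binding's properties relate to the chunk size
the heap hands out (numbers taken from the example node above; 4 KiB pages are
assumed):

	unsigned int chunk_order = 4;                        /* chunk-order = <4>          */
	unsigned long chunk_size = 4096UL << chunk_order;    /* 64 KiB per chunk           */
	unsigned long region_size = 0x3000000;               /* size = <0x3000000>, 48 MiB */
	unsigned long nr_chunks = region_size / chunk_size;  /* 768 chunks in the region   */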
From patchwork Wed Jan 13 01:21:43 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12015545

From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com,
    surenb@google.com, pullip.cho@samsung.com, joaodias@google.com,
    hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org,
    linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org,
    robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Tue, 12 Jan 2021 17:21:43 -0800
Message-Id: <20210113012143.1201105-5-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>

From: Hyesoo Yu

This patch adds a chunk heap that allocates buffers arranged as a list of
fixed-size chunks taken from CMA.

The chunk heap driver is bound directly to a reserved_memory node by
following Rob Herring's suggestion in [1].

[1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d

Signed-off-by: Hyesoo Yu
Signed-off-by: Hridya Valsaraju
Signed-off-by: Minchan Kim
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 drivers/dma-buf/heaps/Kconfig      |   8 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 477 +++++++++++++++++++++++++++++
 3 files changed, 486 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..6527233f52a8 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,11 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+	bool "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
+	  arranged into a list of fixed size chunks taken from CMA.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..8faa6cfdc0c5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK)	+= chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..64f748c81e1f
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,477 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA-BUF chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: for Samsung Electronics.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct chunk_heap { + struct dma_heap *heap; + uint32_t order; + struct cma *cma; +}; + +struct chunk_heap_buffer { + struct chunk_heap *heap; + struct list_head attachments; + struct mutex lock; + struct sg_table sg_table; + unsigned long len; + int vmap_cnt; + void *vaddr; +}; + +struct chunk_heap_attachment { + struct device *dev; + struct sg_table *table; + struct list_head list; + bool mapped; +}; + +struct chunk_heap chunk_heaps[MAX_CMA_AREAS]; +unsigned int chunk_heap_count; + +static struct sg_table *dup_sg_table(struct sg_table *table) +{ + struct sg_table *new_table; + int ret, i; + struct scatterlist *sg, *new_sg; + + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); + if (!new_table) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL); + if (ret) { + kfree(new_table); + return ERR_PTR(-ENOMEM); + } + + new_sg = new_table->sgl; + for_each_sgtable_sg(table, sg, i) { + sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset); + new_sg = sg_next(new_sg); + } + + return new_table; +} + +static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + struct sg_table *table; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + table = dup_sg_table(&buffer->sg_table); + if (IS_ERR(table)) { + kfree(a); + return -ENOMEM; + } + + a->table = table; + a->dev = attachment->dev; + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + + sg_free_table(a->table); + kfree(a->table); + kfree(a); +} + +static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct chunk_heap_attachment *a = attachment->priv; + struct sg_table *table = a->table; + int ret; + + if (a->mapped) + return table; + + ret = dma_map_sgtable(attachment->dev, table, direction, 0); + if (ret) + return ERR_PTR(ret); + + a->mapped = true; + return table; +} + +static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct chunk_heap_attachment *a = attachment->priv; + + a->mapped = false; + dma_unmap_sgtable(attachment->dev, table, direction, 0); +} + +static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + + mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + invalidate_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_cpu(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap_attachment *a; + 
+ mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + flush_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_device(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct sg_table *table = &buffer->sg_table; + unsigned long addr = vma->vm_start; + struct sg_page_iter piter; + int ret; + + for_each_sgtable_page(table, &piter, vma->vm_pgoff) { + struct page *page = sg_page_iter_page(&piter); + + ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE, + vma->vm_page_prot); + if (ret) + return ret; + addr += PAGE_SIZE; + if (addr >= vma->vm_end) + return 0; + } + return 0; +} + +static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer) +{ + struct sg_table *table = &buffer->sg_table; + int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE; + struct page **pages = vmalloc(sizeof(struct page *) * npages); + struct page **tmp = pages; + struct sg_page_iter piter; + void *vaddr; + + if (!pages) + return ERR_PTR(-ENOMEM); + + for_each_sgtable_page(table, &piter, 0) { + WARN_ON(tmp - pages >= npages); + *tmp++ = sg_page_iter_page(&piter); + } + + vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + vfree(pages); + + if (!vaddr) + return ERR_PTR(-ENOMEM); + + return vaddr; +} + +static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + void *vaddr; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) { + vaddr = buffer->vaddr; + } else { + vaddr = chunk_heap_do_vmap(buffer); + if (IS_ERR(vaddr)) { + mutex_unlock(&buffer->lock); + + return PTR_ERR(vaddr); + } + buffer->vaddr = vaddr; + } + buffer->vmap_cnt++; + dma_buf_map_set_vaddr(map, vaddr); + + mutex_unlock(&buffer->lock); + + return 0; +} + +static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + + mutex_lock(&buffer->lock); + if (!--buffer->vmap_cnt) { + vunmap(buffer->vaddr); + buffer->vaddr = NULL; + } + mutex_unlock(&buffer->lock); +} + +static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf) +{ + struct chunk_heap_buffer *buffer = dmabuf->priv; + struct chunk_heap *chunk_heap = buffer->heap; + struct sg_table *table; + struct scatterlist *sg; + int i; + + table = &buffer->sg_table; + for_each_sgtable_sg(table, sg, i) + cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order); + sg_free_table(table); + kfree(buffer); +} + +static const struct dma_buf_ops chunk_heap_buf_ops = { + .attach = chunk_heap_attach, + .detach = chunk_heap_detach, + .map_dma_buf = chunk_heap_map_dma_buf, + .unmap_dma_buf = chunk_heap_unmap_dma_buf, + .begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access, + .end_cpu_access = chunk_heap_dma_buf_end_cpu_access, + .mmap = chunk_heap_mmap, + .vmap = chunk_heap_vmap, + .vunmap = chunk_heap_vunmap, + .release = chunk_heap_dma_buf_release, +}; + +static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len, + unsigned long fd_flags, unsigned long heap_flags) +{ + struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap); + struct chunk_heap_buffer *buffer; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + struct sg_table *table; + struct scatterlist *sg; + struct page **pages; + unsigned int chunk_size = PAGE_SIZE << chunk_heap->order; + 
unsigned int count, alloced = 0; + unsigned int alloc_order = max_t(unsigned int, pageblock_order, chunk_heap->order); + unsigned int nr_chunks_per_alloc = 1 << (alloc_order - chunk_heap->order); + gfp_t gfp_flags = GFP_KERNEL|__GFP_NORETRY; + int ret = -ENOMEM; + pgoff_t pg; + + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) + return ret; + + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->heap = chunk_heap; + buffer->len = ALIGN(len, chunk_size); + count = buffer->len / chunk_size; + + pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL); + if (!pages) + goto err_pages; + + while (alloced < count) { + struct page *page; + int i; + + while (count - alloced < nr_chunks_per_alloc) { + alloc_order--; + nr_chunks_per_alloc >>= 1; + } + + page = cma_alloc(chunk_heap->cma, 1 << alloc_order, + alloc_order, gfp_flags); + if (!page) { + if (gfp_flags & __GFP_NORETRY) { + gfp_flags &= ~__GFP_NORETRY; + continue; + } + break; + } + + for (i = 0; i < nr_chunks_per_alloc; i++, alloced++) { + pages[alloced] = page; + page += 1 << chunk_heap->order; + } + } + + if (alloced < count) + goto err_alloc; + + table = &buffer->sg_table; + if (sg_alloc_table(table, count, GFP_KERNEL)) + goto err_alloc; + + sg = table->sgl; + for (pg = 0; pg < count; pg++) { + sg_set_page(sg, pages[pg], chunk_size, 0); + sg = sg_next(sg); + } + + exp_info.ops = &chunk_heap_buf_ops; + exp_info.size = buffer->len; + exp_info.flags = fd_flags; + exp_info.priv = buffer; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto err_export; + } + kvfree(pages); + + ret = dma_buf_fd(dmabuf, fd_flags); + if (ret < 0) { + dma_buf_put(dmabuf); + return ret; + } + + return 0; +err_export: + sg_free_table(table); +err_alloc: + for (pg = 0; pg < alloced; pg++) + cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order); + kvfree(pages); +err_pages: + kfree(buffer); + + return ret; +} + +static const struct dma_heap_ops chunk_heap_ops = { + .allocate = chunk_heap_allocate, +}; + +static int register_chunk_heap(struct chunk_heap *chunk_heap_info) +{ + struct dma_heap_export_info exp_info; + + exp_info.name = cma_get_name(chunk_heap_info->cma); + exp_info.ops = &chunk_heap_ops; + exp_info.priv = chunk_heap_info; + + chunk_heap_info->heap = dma_heap_add(&exp_info); + if (IS_ERR(chunk_heap_info->heap)) + return PTR_ERR(chunk_heap_info->heap); + + return 0; +} + +static int __init chunk_heap_init(void) +{ + unsigned int i; + + for (i = 0; i < chunk_heap_count; i++) + register_chunk_heap(&chunk_heaps[i]); + + return 0; +} +module_init(chunk_heap_init); + +#ifdef CONFIG_OF_EARLY_FLATTREE + +static int __init dmabuf_chunk_heap_area_init(struct reserved_mem *rmem) +{ + int ret; + struct cma *cma; + struct chunk_heap *chunk_heap_info; + const __be32 *chunk_order; + + phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order); + phys_addr_t mask = align - 1; + + if ((rmem->base & mask) || (rmem->size & mask)) { + pr_err("Incorrect alignment for CMA region\n"); + return -EINVAL; + } + + ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma); + if (ret) { + pr_err("Reserved memory: unable to setup CMA region\n"); + return ret; + } + + /* Architecture specific contiguous memory fixup. 
+	 */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	chunk_heap_info = &chunk_heaps[chunk_heap_count];
+	chunk_heap_info->cma = cma;
+
+	chunk_order = of_get_flat_dt_prop(rmem->fdt_node, "chunk-order", NULL);
+
+	if (chunk_order)
+		chunk_heap_info->order = be32_to_cpu(*chunk_order);
+	else
+		chunk_heap_info->order = 4;
+
+	chunk_heap_count++;
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dmabuf_chunk_heap, "dma_heap,chunk",
+		       dmabuf_chunk_heap_area_init);
+#endif
+
+MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
+MODULE_LICENSE("GPL v2");
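Once registered, the chunk heap is visible to userspace like any other DMA-BUF
heap, as /dev/dma_heap/<name>, where <name> is the CMA region name passed to
dma_heap_add() via cma_get_name(). A minimal userspace sketch using the
standard dma-heap allocation ioctl; the node name "chunk_memory" is taken from
the binding example in patch 3/4 and is only illustrative:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/dma-heap.h>

	int main(void)
	{
		struct dma_heap_allocation_data data = {
			.len = 4 << 20,			/* 4 MiB; the heap rounds up to whole chunks */
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap_fd = open("/dev/dma_heap/chunk_memory", O_RDWR | O_CLOEXEC);

		if (heap_fd < 0)
			return 1;
		if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0)
			return 1;
		printf("dma-buf fd: %u\n", data.fd);
		close(data.fd);
		close(heap_fd);
		return 0;
	}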