From patchwork Thu Jan 21 17:54:59 2021
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org, linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v4 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn
Date: Thu, 21 Jan 2021 09:54:59 -0800
Message-Id: <20210121175502.274391-2-minchan@kernel.org>
In-Reply-To: <20210121175502.274391-1-minchan@kernel.org>
References: <20210121175502.274391-1-minchan@kernel.org>

The upcoming patch will introduce a __GFP_NORETRY semantic in
alloc_contig_range, which is a failfast mode of the API. Instead of
adding an additional parameter for gfp, replace no_warn with a gfp flag.

To keep the old behaviors, it follows the rule below.

  no_warn               gfp_flags

  false                 GFP_KERNEL
  true                  GFP_KERNEL | __GFP_NOWARN
  gfp & __GFP_NOWARN    GFP_KERNEL | (gfp & __GFP_NOWARN)

Acked-by: David Hildenbrand
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Minchan Kim
---
 drivers/dma-buf/heaps/cma_heap.c |  2 +-
 drivers/s390/char/vmcp.c         |  2 +-
 include/linux/cma.h              |  2 +-
 kernel/dma/contiguous.c          |  3 ++-
 mm/cma.c                         | 12 ++++++------
 mm/cma_debug.c                   |  2 +-
 mm/hugetlb.c                     |  6 ++++--
 mm/secretmem.c                   |  3 ++-
 8 files changed, 18 insertions(+), 14 deletions(-)
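For callers outside this series the conversion is mechanical: the old no_warn
boolean maps onto __GFP_NOWARN on top of GFP_KERNEL. A minimal before/after
sketch (hypothetical caller, not taken from this patch):

    /* before: the last argument was a bool no_warn */
    page = cma_alloc(cma, nr_pages, align, true);

    /* after: pass a gfp mask instead; "true" becomes GFP_KERNEL | __GFP_NOWARN */
    page = cma_alloc(cma, nr_pages, align, GFP_KERNEL | __GFP_NOWARN);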
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 364fc2f3e499..0afc1907887a 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -298,7 +298,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
 	if (!cma_pages)
 		goto free_buffer;
 
diff --git a/drivers/s390/char/vmcp.c b/drivers/s390/char/vmcp.c
index 9e066281e2d0..78f9adf56456 100644
--- a/drivers/s390/char/vmcp.c
+++ b/drivers/s390/char/vmcp.c
@@ -70,7 +70,7 @@ static void vmcp_response_alloc(struct vmcp_session *session)
 	 * anymore the system won't work anyway.
 	 */
 	if (order > 2)
-		page = cma_alloc(vmcp_cma, nr_pages, 0, false);
+		page = cma_alloc(vmcp_cma, nr_pages, 0, GFP_KERNEL);
 	if (page) {
 		session->response = (char *)page_to_phys(page);
 		session->cma_alloc = 1;
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..d6c02d08ddbc 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -45,7 +45,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					const char *name,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-			      bool no_warn);
+			      gfp_t gfp_mask);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..552ed531c018 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -260,7 +260,8 @@ struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	return cma_alloc(dev_get_cma_area(dev), count, align, GFP_KERNEL |
+			 (no_warn ? __GFP_NOWARN : 0));
 }
 
 /**
diff --git a/mm/cma.c b/mm/cma.c
index 0ba69cd16aeb..d50627686fec 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
  * @cma:   Contiguous memory region for which the allocation is performed.
  * @count: Requested number of pages.
  * @align: Requested alignment of pages (in PAGE_SIZE order).
- * @no_warn: Avoid printing message about failed allocation
+ * @gfp_mask: GFP mask to use during the cma allocation.
  *
  * This function allocates part of contiguous memory on specific
  * contiguous memory area.
  */
 struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-		       bool no_warn)
+		       gfp_t gfp_mask)
 {
 	unsigned long mask, offset;
 	unsigned long pfn = -1;
@@ -438,8 +438,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
 
-	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
-		 count, align);
+	pr_debug("%s(cma %p, count %zu, align %d gfp_mask 0x%x)\n", __func__,
+		 (void *)cma, count, align, gfp_mask);
 
 	if (!count)
 		return NULL;
@@ -471,7 +471,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+				gfp_mask);
 
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
@@ -500,7 +500,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			page_kasan_tag_reset(page + i);
 	}
 
-	if (ret && !no_warn) {
+	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
 		cma_debug_show_areas(cma);
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index d5bf8aa34fdc..00170c41cf81 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -137,7 +137,7 @@ static int cma_alloc_mem(struct cma *cma, int count)
 	if (!mem)
 		return -ENOMEM;
 
-	p = cma_alloc(cma, count, 0, false);
+	p = cma_alloc(cma, count, 0, GFP_KERNEL);
 	if (!p) {
 		kfree(mem);
 		return -ENOMEM;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a6bad1f686c5..4209a2ed1e1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1266,7 +1266,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		if (hugetlb_cma[nid]) {
 			page = cma_alloc(hugetlb_cma[nid], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
@@ -1277,7 +1278,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 				continue;
 
 			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
diff --git a/mm/secretmem.c b/mm/secretmem.c
index b8a32954ac68..585d55b9f9d8 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -86,7 +86,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE,
+			 GFP_KERNEL | (gfp & __GFP_NOWARN));
 	if (!page)
 		return -ENOMEM;
From patchwork Thu Jan 21 17:55:00 2021
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org, linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v4 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
Date: Thu, 21 Jan 2021 09:55:00 -0800
Message-Id: <20210121175502.274391-3-minchan@kernel.org>
In-Reply-To: <20210121175502.274391-1-minchan@kernel.org>
References: <20210121175502.274391-1-minchan@kernel.org>

Contiguous memory allocation can be stalled by waiting on page writeback
and/or page lock, which causes unpredictable delay. That is an unavoidable
cost for a requestor of *big* contiguous memory, but it is expensive for
*small* contiguous memory (e.g., order-4) because the caller could instead
retry the request in a different range that has easily migratable pages,
without stalling.

This patch introduces __GFP_NORETRY as a compaction gfp_mask in
alloc_contig_range so that it fails fast, without blocking, when it
encounters pages that would need waiting.

Signed-off-by: Minchan Kim
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
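Callers that can tolerate failure and retry in a different range opt in by
adding __GFP_NORETRY to the mask they pass down to cma_alloc() /
alloc_contig_range(); the chunk heap added later in this series uses exactly
this pattern. A minimal sketch (hypothetical caller, not part of the patch):

    /* fail-fast attempt: no blocking on page writeback/lock during migration */
    page = cma_alloc(cma, nr_pages, align, GFP_KERNEL | __GFP_NORETRY);
    if (!page)
            /* fall back to the default, potentially stalling sync mode */
            page = cma_alloc(cma, nr_pages, align, GFP_KERNEL);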
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..1cdc3ee0b22e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),

From patchwork Thu Jan 21 17:55:01 2021
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org, linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v4 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
Date: Thu, 21 Jan 2021 09:55:01 -0800
Message-Id: <20210121175502.274391-4-minchan@kernel.org>
In-Reply-To: <20210121175502.274391-1-minchan@kernel.org>
References: <20210121175502.274391-1-minchan@kernel.org>

From: Hyesoo Yu

Document the devicetree binding for the chunk CMA heap on the DMA heap framework.
The DMA chunk heap supports bulk allocation of higher-order pages. Its
allocator draws from a CMA area and is optimized to perform that bulk
allocation efficiently. For this purpose, the heap needs an exclusive CMA
area that is used only for allocations by the heap. This is why we need the
DT to create and configure a reserved memory region for use by the chunk
CMA heap driver. Since all allocations from DMA-BUF heaps happen from
user-space, there is no other appropriate device driver that we could use to
register the chunk CMA heap and configure the reserved memory region for
its use.

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
Signed-off-by: Hridya Valsaraju
---
 .../reserved-memory/dma_heap_chunk.yaml       | 56 +++++++++++++++++++
 1 file changed, 56 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
new file mode 100644
index 000000000000..00db0ae6af61
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
@@ -0,0 +1,56 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
+
+description: |
+  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+  supports bulk allocation of fixed size pages.
+
+maintainers:
+  - Hyesoo Yu
+  - John Stultz
+  - Minchan Kim
+  - Hridya Valsaraju
+
+properties:
+  compatible:
+    enum:
+      - dma_heap,chunk
+
+  chunk-order:
+    description: |
+      order of pages that will get allocated from the chunk DMA heap.
+    maxItems: 1
+
+  size:
+    maxItems: 1
+
+  alignment:
+    maxItems: 1
+
+required:
+  - compatible
+  - size
+  - alignment
+  - chunk-order
+
+additionalProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <1>;
+
+        chunk_memory: chunk_memory {
+            compatible = "dma_heap,chunk";
+            size = <0x3000000>;
+            alignment = <0x0 0x00010000>;
+            chunk-order = <4>;
+        };
+    };
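For orientation: chunk-order is an order of PAGE_SIZE, so with a 4 KiB base
page size the example node above (chunk-order = <4>) yields 64 KiB chunks,
and its 0x3000000-byte (48 MiB) region holds at most 768 such chunks. The
worked numbers are illustrative and assume 4 KiB pages.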
From patchwork Thu Jan 21 17:55:02 2021
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com, joaodias@google.com, hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org, linux-media@vger.kernel.org, devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v4 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Thu, 21 Jan 2021 09:55:02 -0800
Message-Id: <20210121175502.274391-5-minchan@kernel.org>
In-Reply-To: <20210121175502.274391-1-minchan@kernel.org>
References: <20210121175502.274391-1-minchan@kernel.org>

From: Hyesoo Yu

This patch adds the chunk heap, which allocates buffers that are arranged
into a list of fixed-size chunks taken from CMA. The chunk heap driver is
bound directly to a reserved_memory node, following Rob Herring's suggestion
in [1].

[1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d

Reviewed-by: Suren Baghdasaryan
Signed-off-by: Hyesoo Yu
Signed-off-by: Hridya Valsaraju
Signed-off-by: Minchan Kim
---
 drivers/dma-buf/heaps/Kconfig      |   8 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 492 +++++++++++++++++++++++++++++
 3 files changed, 501 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c
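Once the heap is registered, allocations happen from user-space through the
standard DMA-BUF heap ioctl; this patch adds no new UAPI. A rough sketch of a
user-space caller follows; the /dev/dma_heap/... node name depends on the CMA
region name from the DT example in patch 3, so the path used here is
illustrative only.

    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/dma-heap.h>

    /* Allocate a dma-buf from a chunk heap; returns the dma-buf fd or -1. */
    static int chunk_heap_alloc(const char *heap_path, size_t len)
    {
            struct dma_heap_allocation_data data = {
                    .len = len,                     /* rounded up to whole chunks by the heap */
                    .fd_flags = O_RDWR | O_CLOEXEC,
            };
            int heap_fd = open(heap_path, O_RDONLY | O_CLOEXEC);

            if (heap_fd < 0)
                    return -1;
            if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
                    close(heap_fd);
                    return -1;
            }
            close(heap_fd);
            return data.fd;     /* dma-buf backed by chunk-order pages from the CMA area */
    }

For example, chunk_heap_alloc("/dev/dma_heap/chunk-chunk_memory", 4 << 20)
would request a 4 MiB buffer from a heap backed by the chunk_memory region.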
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..e9595e26f831 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,11 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+	bool "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
+	  are arranged into a list of fixed size chunks taken from CMA.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..8faa6cfdc0c5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK)	+= chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..15df42acee4b
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,492 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA-BUF chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: Hyesoo Yu for Samsung Electronics.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct chunk_heap {
+	struct dma_heap *heap;
+	uint32_t order;
+	struct cma *cma;
+};
+
+struct chunk_heap_buffer {
+	struct chunk_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	struct sg_table sg_table;
+	unsigned long len;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct chunk_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	bool mapped;
+};
+
+struct chunk_heap chunk_heaps[MAX_CMA_AREAS];
+unsigned int chunk_heap_count;
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int ret;
+
+	if (a->mapped)
+		return table;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	a->mapped = true;
+	return table;
+}
+
+static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				     struct sg_table *table,
+				     enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		vaddr = buffer->vaddr;
+	} else {
+		vaddr = chunk_heap_do_vmap(buffer);
+		if (IS_ERR(vaddr)) {
+			mutex_unlock(&buffer->lock);
+
+			return PTR_ERR(vaddr);
+		}
+		buffer->vaddr = vaddr;
+	}
+	buffer->vmap_cnt++;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap *chunk_heap = buffer->heap;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	table = &buffer->sg_table;
+	for_each_sgtable_sg(table, sg, i)
+		cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
+	sg_free_table(table);
+	kfree(buffer);
+}
+
+static const struct dma_buf_ops chunk_heap_buf_ops = {
+	.attach = chunk_heap_attach,
+	.detach = chunk_heap_detach,
+	.map_dma_buf = chunk_heap_map_dma_buf,
+	.unmap_dma_buf = chunk_heap_unmap_dma_buf,
+	.begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
+	.mmap = chunk_heap_mmap,
+	.vmap = chunk_heap_vmap,
+	.vunmap = chunk_heap_vunmap,
+	.release = chunk_heap_dma_buf_release,
+};
+
+static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
+			       unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
+	struct chunk_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct page **pages;
+	unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
+	unsigned int count, alloced = 0;
+	unsigned int alloc_order = max_t(unsigned int, pageblock_order, chunk_heap->order);
+	unsigned int nr_chunks_per_alloc = 1 << (alloc_order - chunk_heap->order);
+	gfp_t gfp_flags = GFP_KERNEL|__GFP_NORETRY;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ret;
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = chunk_heap;
+	buffer->len = ALIGN(len, chunk_size);
+	count = buffer->len / chunk_size;
+
+	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		goto err_pages;
+
+	while (alloced < count) {
+		struct page *page;
+		int i;
+
+		while (count - alloced < nr_chunks_per_alloc) {
+			alloc_order--;
+			nr_chunks_per_alloc >>= 1;
+		}
+
+		page = cma_alloc(chunk_heap->cma, 1 << alloc_order,
+				 alloc_order, gfp_flags);
+		if (!page) {
+			if (gfp_flags & __GFP_NORETRY) {
+				gfp_flags &= ~__GFP_NORETRY;
+				continue;
+			}
+			break;
+		}
+
+		for (i = 0; i < nr_chunks_per_alloc; i++, alloced++) {
+			pages[alloced] = page;
+			page += 1 << chunk_heap->order;
+		}
+	}
+
+	if (alloced < count)
+		goto err_alloc;
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, count, GFP_KERNEL))
+		goto err_alloc;
+
+	sg = table->sgl;
+	for (pg = 0; pg < count; pg++) {
+		sg_set_page(sg, pages[pg], chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	exp_info.ops = &chunk_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_export;
+	}
+	kvfree(pages);
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		return ret;
+	}
+
+	return 0;
+err_export:
+	sg_free_table(table);
+err_alloc:
+	for (pg = 0; pg < alloced; pg++)
+		cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
+	kvfree(pages);
+err_pages:
+	kfree(buffer);
+
+	return ret;
+}
+
+static const struct dma_heap_ops chunk_heap_ops = {
+	.allocate = chunk_heap_allocate,
+};
+
+#define CHUNK_PREFIX "chunk-"
+
+static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
+{
+	struct dma_heap_export_info exp_info;
+	const char *name = cma_get_name(chunk_heap_info->cma);
+	size_t len = strlen(CHUNK_PREFIX) + strlen(name) + 1;
+	char *buf = kmalloc(len, GFP_KERNEL);
+
+	if (!buf)
+		return -ENOMEM;
+
+	sprintf(buf, CHUNK_PREFIX"%s", cma_get_name(chunk_heap_info->cma));
+	buf[len] = '\0';
+
+	exp_info.name = buf;
+	exp_info.name = cma_get_name(chunk_heap_info->cma);
+	exp_info.ops = &chunk_heap_ops;
+	exp_info.priv = chunk_heap_info;
+
+	chunk_heap_info->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(chunk_heap_info->heap)) {
+		kfree(buf);
+		return PTR_ERR(chunk_heap_info->heap);
+	}
+
+	return 0;
+}
+
+static int __init chunk_heap_init(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < chunk_heap_count; i++)
+		register_chunk_heap(&chunk_heaps[i]);
+
+	return 0;
+}
+module_init(chunk_heap_init);
+
+#ifdef CONFIG_OF_EARLY_FLATTREE
+
+static int __init dmabuf_chunk_heap_area_init(struct reserved_mem *rmem)
+{
+	int ret;
+	struct cma *cma;
+	struct chunk_heap *chunk_heap_info;
+	const __be32 *chunk_order;
+
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Incorrect alignment for CMA region\n");
+		return -EINVAL;
+	}
+
+	ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma);
+	if (ret) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return ret;
+	}
+
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	chunk_heap_info = &chunk_heaps[chunk_heap_count];
+	chunk_heap_info->cma = cma;
+
+	chunk_order = of_get_flat_dt_prop(rmem->fdt_node, "chunk-order", NULL);
+
+	if (chunk_order)
+		chunk_heap_info->order = be32_to_cpu(*chunk_order);
+	else
+		chunk_heap_info->order = 4;
+
+	chunk_heap_count++;
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dmabuf_chunk_heap, "dma_heap,chunk",
+		       dmabuf_chunk_heap_area_init);
+#endif
+
+MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
+MODULE_LICENSE("GPL v2");