From patchwork Wed Sep 30 17:51:25 2020
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Tvrtko Ursulin, Chris Wilson,
    Matthew Auld, Rodrigo Vivi, Minchan Kim, Matthew Wilcox, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 02/10] mm: add a VM_MAP_PUT_PAGES flag for vmap
Date: Wed, 30 Sep 2020 19:51:25 +0200
Message-Id: <20200930175133.1252382-3-hch@lst.de>
In-Reply-To: <20200930175133.1252382-1-hch@lst.de>
References: <20200930175133.1252382-1-hch@lst.de>

Add a flag so that vmap takes ownership of the passed in page array.
When vfree is called on such an allocation it will put one reference on
each page, and free the page array itself.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/vmalloc.h | 1 +
 mm/vmalloc.c            | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1a3..b899681e3ff9f0 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -24,6 +24,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_MAP_PUT_PAGES	0x00000100	/* put pages and free array in vfree */
 
 /*
  * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8770260419af06..ffad65f052c3f9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2377,8 +2377,11 @@ EXPORT_SYMBOL(vunmap);
  * @flags: vm_area->flags
  * @prot: page protection for the mapping
  *
- * Maps @count pages from @pages into contiguous kernel virtual
- * space.
+ * Maps @count pages from @pages into contiguous kernel virtual space.
+ * If @flags contains %VM_MAP_PUT_PAGES the ownership of the pages array itself
+ * (which must be kmalloc or vmalloc memory) and one reference per page in it
+ * are transferred from the caller to vmap(), and will be freed / dropped when
+ * vfree() is called on the return value.
  *
  * Return: the address of the area or %NULL on failure
  */
@@ -2404,6 +2407,8 @@ void *vmap(struct page **pages, unsigned int count,
 		return NULL;
 	}
 
+	if (flags & VM_MAP_PUT_PAGES)
+		area->pages = pages;
 	return area->addr;
 }
 EXPORT_SYMBOL(vmap);
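
For context, a minimal, illustrative sketch of how a caller could use the
new flag; this is not part of the patch, and the helper name
alloc_demo_buffer() is made up. The point it shows: the kvmalloc'ed page
array and one reference per page only change ownership when vmap()
succeeds, after which a single vfree() tears everything down.

/*
 * Illustrative only -- not from this patch series.  alloc_demo_buffer()
 * is a hypothetical helper showing the intended calling convention.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *alloc_demo_buffer(unsigned int nr_pages)
{
	struct page **pages;
	unsigned int i;
	void *addr;

	/* The array must be kmalloc/vmalloc memory; kvmalloc qualifies. */
	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_pages;
	}

	/*
	 * On success, the mapping now owns both the array and one
	 * reference per page; the caller must not free either by hand.
	 */
	addr = vmap(pages, nr_pages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL);
	if (!addr)
		goto err_pages;
	return addr;

err_pages:
	/* vmap() failed or never ran: ownership did not transfer. */
	while (i--)
		__free_page(pages[i]);
	kvfree(pages);
	return NULL;
}

Teardown is then just vfree(addr), which unmaps the range, puts one
reference on each page, and frees the page array.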