From patchwork Tue Dec 12 07:38:02 2023
X-Patchwork-Submitter: "Kasireddy, Vivek" <vivek.kasireddy@intel.com>
X-Patchwork-Id: 13488637
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
    Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
    Junxiao Chang
Subject: [PATCH v7 5/6] udmabuf: Pin the pages using memfd_pin_folios() API (v5)
Date: Mon, 11 Dec 2023 23:38:02 -0800
Message-Id: <20231212073803.3233055-6-vivek.kasireddy@intel.com>
In-Reply-To: <20231212073803.3233055-1-vivek.kasireddy@intel.com>
References: <20231212073803.3233055-1-vivek.kasireddy@intel.com>

Using memfd_pin_folios() ensures that the pages are pinned correctly
using FOLL_PIN. This also ensures that we don't accidentally break
features such as memory hotunplug, as it would not allow pinning pages
in the movable zone. Using this new API also simplifies the code, as we
no longer have to deal with extracting individual pages from their
mappings or handle the shmem and hugetlb cases separately. (A minimal
sketch of the resulting pin/unpin pattern follows the diff below.)

v2:
- Adjust to the change in signature of pin_user_pages_fd() by passing
  in file * instead of fd.
v3:
- Limit the changes in this patch only to those that are required
  for using pin_user_pages_fd()
- Slightly improve the commit message

v4:
- Adjust to the change in name of the API (memfd_pin_user_pages)

v5:
- Adjust to the changes in memfd_pin_folios which now populates
  a list of folios and offsets

Cc: David Hildenbrand
Cc: Daniel Vetter
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Signed-off-by: Vivek Kasireddy
---
 drivers/dma-buf/udmabuf.c | 85 ++++++---------------------------------
 1 file changed, 12 insertions(+), 73 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index e1b8da3c9b2a..a614e720837d 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -42,7 +42,7 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
         if (pgoff >= ubuf->pagecount)
                 return VM_FAULT_SIGBUS;
 
-        pfn = page_to_pfn(&ubuf->folios[pgoff]->page);
+        pfn = page_to_pfn(folio_page(ubuf->folios[pgoff], 0));
         pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
 
         return vmf_insert_pfn(vma, vmf->address, pfn);
@@ -79,7 +79,7 @@ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
                 return -ENOMEM;
 
         for (pg = 0; pg < ubuf->pagecount; pg++)
-                pages[pg] = &ubuf->folios[pg]->page;
+                pages[pg] = folio_page(ubuf->folios[pg], 0);
 
         vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
         kfree(pages);
@@ -163,7 +163,8 @@ static void release_udmabuf(struct dma_buf *buf)
                 put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
 
         for (pg = 0; pg < ubuf->pagecount; pg++)
-                folio_put(ubuf->folios[pg]);
+                unpin_user_page(folio_page(ubuf->folios[pg], 0));
+
         kfree(ubuf->offsets);
         kfree(ubuf->folios);
         kfree(ubuf);
@@ -218,65 +219,6 @@ static const struct dma_buf_ops udmabuf_ops = {
 #define SEALS_WANTED (F_SEAL_SHRINK)
 #define SEALS_DENIED (F_SEAL_WRITE)
 
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
-                                pgoff_t offset, pgoff_t pgcnt,
-                                pgoff_t *pgbuf)
-{
-        struct hstate *hpstate = hstate_file(memfd);
-        pgoff_t mapidx = offset >> huge_page_shift(hpstate);
-        pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
-        pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
-        struct folio *folio = NULL;
-        pgoff_t pgidx;
-
-        mapidx <<= huge_page_order(hpstate);
-        for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-                if (!folio) {
-                        folio = __filemap_get_folio(memfd->f_mapping,
-                                                    mapidx,
-                                                    FGP_ACCESSED, 0);
-                        if (IS_ERR(folio))
-                                return PTR_ERR(folio);
-                }
-
-                folio_get(folio);
-                ubuf->folios[*pgbuf] = folio;
-                ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
-                (*pgbuf)++;
-                if (++subpgoff == maxsubpgs) {
-                        folio_put(folio);
-                        folio = NULL;
-                        subpgoff = 0;
-                        mapidx += pages_per_huge_page(hpstate);
-                }
-        }
-
-        if (folio)
-                folio_put(folio);
-
-        return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
-                              pgoff_t offset, pgoff_t pgcnt,
-                              pgoff_t *pgbuf)
-{
-        pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
-        struct folio *folio = NULL;
-
-        for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-                folio = shmem_read_folio(memfd->f_mapping,
-                                         pgoff + pgidx);
-                if (IS_ERR(folio))
-                        return PTR_ERR(folio);
-
-                ubuf->folios[*pgbuf] = folio;
-                (*pgbuf)++;
-        }
-
-        return 0;
-}
-
 static int check_memfd_seals(struct file *memfd)
 {
         int seals;
@@ -325,7 +267,7 @@ static long udmabuf_create(struct miscdevice *device,
         pgoff_t pgcnt, pgbuf = 0, pglimit;
         struct file *memfd = NULL;
         struct udmabuf *ubuf;
-        int ret = -EINVAL;
+        long ret = -EINVAL;
         u32 i, flags;
 
         ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -366,17 +308,13 @@ static long udmabuf_create(struct miscdevice *device,
                         goto err;
 
                 pgcnt = list[i].size >> PAGE_SHIFT;
-                if (is_file_hugepages(memfd))
-                        ret = handle_hugetlb_pages(ubuf, memfd,
-                                                   list[i].offset,
-                                                   pgcnt, &pgbuf);
-                else
-                        ret = handle_shmem_pages(ubuf, memfd,
-                                                 list[i].offset,
-                                                 pgcnt, &pgbuf);
+                ret = memfd_pin_folios(memfd, list[i].offset, pgcnt,
+                                       ubuf->folios + pgbuf,
+                                       ubuf->offsets + pgbuf);
                 if (ret < 0)
                         goto err;
 
+                pgbuf += pgcnt;
                 fput(memfd);
                 memfd = NULL;
         }
@@ -389,8 +327,9 @@ static long udmabuf_create(struct miscdevice *device,
         return ret;
 
 err:
-        while (pgbuf > 0)
-                folio_put(ubuf->folios[--pgbuf]);
+        while (pgbuf-- > 0)
+                if (ubuf->folios[pgbuf])
+                        unpin_user_page(folio_page(ubuf->folios[pgbuf], 0));
         if (memfd)
                 fput(memfd);
         kfree(ubuf->offsets);
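For reference, here is a minimal sketch (not part of the patch) of the
pin/unpin pattern udmabuf ends up with after this change. The
memfd_pin_folios() arguments are taken from the call site in the diff
above (memfd file, offset, page count, folio array, per-folio offset
array); the exact parameter types, the header that declares it, and the
pin_memfd_range()/unpin_memfd_range() helper names are illustrative
assumptions, not part of this series.

/* Sketch only: the exact memfd_pin_folios() prototype may differ. */
#include <linux/memfd.h>
#include <linux/mm.h>

static long pin_memfd_range(struct file *memfd, loff_t offset, pgoff_t pgcnt,
                            struct folio **folios, pgoff_t *offsets)
{
        /*
         * Pin pgcnt pages starting at 'offset' in the memfd, filling
         * 'folios' and 'offsets'. Works for shmem and hugetlb backed
         * memfds alike; returns a negative errno on failure.
         */
        return memfd_pin_folios(memfd, offset, pgcnt, folios, offsets);
}

static void unpin_memfd_range(struct folio **folios, pgoff_t pgcnt)
{
        pgoff_t pg;

        /* Drop the FOLL_PIN reference taken on each pinned folio. */
        for (pg = 0; pg < pgcnt; pg++)
                if (folios[pg])
                        unpin_user_page(folio_page(folios[pg], 0));
}

Compared to the removed handle_hugetlb_pages()/handle_shmem_pages()
helpers, a single memfd_pin_folios() call covers both backing types and
takes a real pin (FOLL_PIN) rather than a plain folio reference, which
is why release_udmabuf() and the error path now call unpin_user_page()
instead of folio_put().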