From patchwork Sat Nov 18 06:32:30 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13459887
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
    Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
    Junxiao Chang
Subject: [PATCH v4 2/5] udmabuf: Add back support for mapping hugetlb pages (v3)
Date: Fri, 17 Nov 2023 22:32:30 -0800
Message-Id: <20231118063233.733523-3-vivek.kasireddy@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231118063233.733523-1-vivek.kasireddy@intel.com>
References: <20231118063233.733523-1-vivek.kasireddy@intel.com>
A user or admin can configure a VMM (Qemu) Guest's memory to be backed
by hugetlb pages for various reasons. However, a Guest OS would still
allocate (and pin) buffers that are backed by regular 4k sized pages.
In order to map these buffers and create dma-bufs for them on the Host,
we first need to find the hugetlb pages where the buffer allocations
are located, then determine the offsets of individual chunks within
those pages, and use this information to eventually populate a
scatterlist.

Testcase: default_hugepagesz=2M hugepagesz=2M hugepages=2500 options
were passed to the Host kernel and Qemu was launched with these
relevant options:

qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-display gtk,gl=on
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1

Replacing -display gtk,gl=on with -display gtk,gl=off above would
exercise the mmap handler.

v2: Updated get_sg_table() to manually populate the scatterlist for
    both huge page and non-huge-page cases.
v3: s/offsets/subpgoff/g
    s/hpoff/mapidx/g

Cc: David Hildenbrand
Cc: Daniel Vetter
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Acked-by: Mike Kravetz (v2)
Signed-off-by: Vivek Kasireddy
---
 drivers/dma-buf/udmabuf.c | 85 +++++++++++++++++++++++++++++++++------
 1 file changed, 72 insertions(+), 13 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 820c993c8659..1a41c4a069ea 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -28,6 +29,7 @@ struct udmabuf {
 	struct page **pages;
 	struct sg_table *sg;
 	struct miscdevice *device;
+	pgoff_t *subpgoff;
 };
 
 static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -41,6 +43,10 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
 		return VM_FAULT_SIGBUS;
 
 	pfn = page_to_pfn(ubuf->pages[pgoff]);
+	if (ubuf->subpgoff) {
+		pfn += ubuf->subpgoff[pgoff] >> PAGE_SHIFT;
+	}
+
 	return vmf_insert_pfn(vma, vmf->address, pfn);
 }
 
@@ -90,23 +96,31 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
 {
 	struct udmabuf *ubuf = buf->priv;
 	struct sg_table *sg;
+	struct scatterlist *sgl;
+	pgoff_t offset;
+	unsigned long i = 0;
 	int ret;
 
 	sg = kzalloc(sizeof(*sg), GFP_KERNEL);
 	if (!sg)
 		return ERR_PTR(-ENOMEM);
-	ret = sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount,
-					0, ubuf->pagecount << PAGE_SHIFT,
-					GFP_KERNEL);
+
+	ret = sg_alloc_table(sg, ubuf->pagecount, GFP_KERNEL);
 	if (ret < 0)
-		goto err;
+		goto err_alloc;
+
+	for_each_sg(sg->sgl, sgl, ubuf->pagecount, i) {
+		offset = ubuf->subpgoff ? ubuf->subpgoff[i] : 0;
+		sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, offset);
+	}
 	ret = dma_map_sgtable(dev, sg, direction, 0);
 	if (ret < 0)
-		goto err;
+		goto err_map;
 	return sg;
-err:
+err_map:
 	sg_free_table(sg);
+err_alloc:
 	kfree(sg);
 	return ERR_PTR(ret);
 }
@@ -143,6 +157,7 @@ static void release_udmabuf(struct dma_buf *buf)
 	for (pg = 0; pg < ubuf->pagecount; pg++)
 		put_page(ubuf->pages[pg]);
 
+	kfree(ubuf->subpgoff);
 	kfree(ubuf->pages);
 	kfree(ubuf);
 }
@@ -206,7 +221,9 @@ static long udmabuf_create(struct miscdevice *device,
 	struct udmabuf *ubuf;
 	struct dma_buf *buf;
 	pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
-	struct page *page;
+	struct page *page, *hpage = NULL;
+	pgoff_t mapidx, chunkoff, maxchunks;
+	struct hstate *hpstate;
 	int seals, ret = -EINVAL;
 	u32 i, flags;
 
@@ -242,7 +259,7 @@ static long udmabuf_create(struct miscdevice *device,
 		if (!memfd)
 			goto err;
 		mapping = memfd->f_mapping;
-		if (!shmem_mapping(mapping))
+		if (!shmem_mapping(mapping) && !is_file_hugepages(memfd))
 			goto err;
 		seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
 		if (seals == -EINVAL)
@@ -253,16 +270,57 @@ static long udmabuf_create(struct miscdevice *device,
 			goto err;
 		pgoff = list[i].offset >> PAGE_SHIFT;
 		pgcnt = list[i].size >> PAGE_SHIFT;
+		if (is_file_hugepages(memfd)) {
+			if (!ubuf->subpgoff) {
+				ubuf->subpgoff = kmalloc_array(ubuf->pagecount,
+							       sizeof(*ubuf->subpgoff),
+							       GFP_KERNEL);
+				if (!ubuf->subpgoff) {
+					ret = -ENOMEM;
+					goto err;
+				}
+			}
+			hpstate = hstate_file(memfd);
+			mapidx = list[i].offset >> huge_page_shift(hpstate);
+			chunkoff = (list[i].offset &
+				    ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+			maxchunks = huge_page_size(hpstate) >> PAGE_SHIFT;
+		}
 		for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-			page = shmem_read_mapping_page(mapping, pgoff + pgidx);
-			if (IS_ERR(page)) {
-				ret = PTR_ERR(page);
-				goto err;
+			if (is_file_hugepages(memfd)) {
+				if (!hpage) {
+					hpage = find_get_page_flags(mapping, mapidx,
+								    FGP_ACCESSED);
+					if (!hpage) {
+						ret = -EINVAL;
+						goto err;
+					}
+				}
+				get_page(hpage);
+				ubuf->pages[pgbuf] = hpage;
+				ubuf->subpgoff[pgbuf++] = chunkoff << PAGE_SHIFT;
+				if (++chunkoff == maxchunks) {
+					put_page(hpage);
+					hpage = NULL;
+					chunkoff = 0;
+					mapidx++;
+				}
+			} else {
+				mapidx = pgoff + pgidx;
+				page = shmem_read_mapping_page(mapping, mapidx);
+				if (IS_ERR(page)) {
+					ret = PTR_ERR(page);
+					goto err;
+				}
+				ubuf->pages[pgbuf++] = page;
+			}
-			ubuf->pages[pgbuf++] = page;
 		}
 		fput(memfd);
 		memfd = NULL;
+		if (hpage) {
+			put_page(hpage);
+			hpage = NULL;
+		}
 	}
 
 	exp_info.ops = &udmabuf_ops;
@@ -287,6 +345,7 @@ static long udmabuf_create(struct miscdevice *device,
 		put_page(ubuf->pages[--pgbuf]);
 	if (memfd)
 		fput(memfd);
+	kfree(ubuf->subpgoff);
 	kfree(ubuf->pages);
 	kfree(ubuf);
 	return ret;