From patchwork Thu Dec 5 13:21:10 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13895257
From: Leon Romanovsky <leon@kernel.org>
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel,
    Will Deacon, Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
    Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
    Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
    iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    Randy Dunlap
Subject: [PATCH v4 11/18] mm/hmm: let users tag specific PFN with DMA mapped bit
Date: Thu, 5 Dec 2024 15:21:10 +0200
Message-ID: 
In-Reply-To: 
References: 
X-Mailer: git-send-email 2.47.0

From: Leon Romanovsky

Introduce a new sticky flag, HMM_PFN_DMA_MAPPED, which is not
overwritten by hmm_range_fault(). This flag lets users tag specific
PFNs to record whether a given PFN has already been DMA mapped.

Signed-off-by: Leon Romanovsky
---
 include/linux/hmm.h | 17 +++++++++++++++++
 mm/hmm.c            | 39 ++++++++++++++++++++++++++-------------
 2 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 126a36571667..a1ddbedc19c0 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -23,6 +23,8 @@ struct mmu_interval_notifier;
  * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
  * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
  *		fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
+ *			to mark that page is already DMA mapped
  *
  * On input:
  * 0 - Return the current state of the page, do not fault it.
@@ -36,6 +38,13 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+
+	/*
+	 * Sticky flags, carried from input to output,
+	 * don't forget to update HMM_PFN_INOUT_FLAGS
+	 */
+	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
+
 	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
 
 	/* Input flags */
@@ -57,6 +66,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 	return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
 }
 
+/*
+ * hmm_pfn_to_phys() - return physical address pointed to by a device entry
+ */
+static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn)
+{
+	return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS);
+}
+
 /*
  * hmm_pfn_to_map_order() - return the CPU mapping size order
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 7e0229ae4a5a..c16cfa03430c 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -39,13 +39,20 @@ enum {
 	HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
 };
 
+enum {
+	/* These flags are carried from input-to-output */
+	HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED,
+};
+
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 			 struct hmm_range *range, unsigned long cpu_flags)
 {
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 
-	for (; addr < end; addr += PAGE_SIZE, i++)
-		range->hmm_pfns[i] = cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= cpu_flags;
+	}
 	return 0;
 }
 
@@ -202,8 +209,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, required_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	return 0;
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -236,7 +245,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (required_fault)
 			goto fault;
-		*hmm_pfn = 0;
+		*hmm_pfn = *hmm_pfn & HMM_PFN_INOUT_FLAGS;
 		return 0;
 	}
 
@@ -253,14 +262,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
+			*hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | swp_offset_pfn(entry) | cpu_flags;
 			return 0;
 		}
 
 		required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (!required_fault) {
-			*hmm_pfn = 0;
+			*hmm_pfn = *hmm_pfn & HMM_PFN_INOUT_FLAGS;
 			return 0;
 		}
 
@@ -304,11 +313,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		pte_unmap(ptep);
 		return -EFAULT;
 	}
-	*hmm_pfn = HMM_PFN_ERROR;
+	*hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | HMM_PFN_ERROR;
 	return 0;
 	}
 
-	*hmm_pfn = pte_pfn(pte) | cpu_flags;
+	*hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | pte_pfn(pte) | cpu_flags;
 	return 0;
 
 fault:
@@ -448,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	}
 
 	pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	for (i = 0; i < npages; ++i, ++pfn)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; i < npages; ++i, ++pfn) {
+		hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	goto out_unlock;
 }
 
@@ -507,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
-	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	spin_unlock(ptl);
 
 	return 0;