From patchwork Tue Sep 6 12:44:57 2022
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 12967463
From: Lu Baolu
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Bjorn Helgaas,
    Kevin Tian, Ashok Raj, Will Deacon, Robin Murphy,
    Jean-Philippe Brucker, Dave Jiang, Fenghua Yu, Vinod Koul
Cc: Eric Auger, Liu Yi L, Jacob jun Pan, Zhangfei Gao, Zhu Tony,
    iommu@lists.linux.dev, linux-pci@vger.kernel.org,
    linux-kernel@vger.kernel.org, Lu Baolu, Jean-Philippe Brucker
Subject: [PATCH v13 12/13] iommu: Per-domain I/O page fault handling
Date: Tue, 6 Sep 2022 20:44:57 +0800
Message-Id: <20220906124458.46461-13-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220906124458.46461-1-baolu.lu@linux.intel.com>
References: <20220906124458.46461-1-baolu.lu@linux.intel.com>

Tweak the I/O page fault handling framework to route page faults to the
domain and call the page fault handler retrieved from the domain. This
makes it possible for the I/O page fault handling framework to serve
more usage scenarios, as long as they have an IOMMU domain and install
a page fault handler in it. Some unused functions are also removed to
avoid dead code.

iommu_get_domain_for_dev_pasid() is used to retrieve the domain
attached to a {device, PASID} pair. The page fault handling framework
can use it because it knows the {device, PASID} reported by the iommu
driver.
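For illustration (not part of this patch), here is a minimal sketch of
the shape a consumer of this scheme takes. Everything prefixed with
my_ is a hypothetical stand-in; only the iopf_handler/fault_data domain
fields, the handler signature, and the response codes come from this
series:

  /* Hypothetical fault handler installed on an IOMMU domain. */
  static enum iommu_page_response_code
  my_iopf_handler(struct iommu_fault *fault, void *data)
  {
	struct my_ctx *ctx = data;	/* passed as domain->fault_data */

	/* Resolve the faulting address for the faulting PASID. */
	if (my_resolve(ctx, fault->prm.pasid, fault->prm.addr))
		return IOMMU_PAGE_RESP_SUCCESS;

	return IOMMU_PAGE_RESP_INVALID;
  }

  static void my_install(struct iommu_domain *domain, struct my_ctx *ctx)
  {
	/* iopf_handler() in the diff below finds and calls this. */
	domain->iopf_handler = my_iopf_handler;
	domain->fault_data = ctx;
  }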
We have a guarantee that the SVA domain doesn't go away during IOPF
handling, because unbind() won't free the domain until all the pending
page requests have been flushed from the pipeline. Drivers either call
iopf_queue_flush_dev() explicitly, or, in the stall case, the device
driver is required to flush all DMA including stalled transactions
before calling unbind(). A short sketch of this teardown ordering
follows the diff below.

This also renames iopf_handle_group() to iopf_handler() to avoid
confusion.

Signed-off-by: Lu Baolu
Reviewed-by: Jean-Philippe Brucker
Reviewed-by: Kevin Tian
Reviewed-by: Jason Gunthorpe
Tested-by: Zhangfei Gao
Tested-by: Tony Zhu
---
 drivers/iommu/io-pgfault.c | 68 +++++---------
 1 file changed, 9 insertions(+), 59 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..d046d89cec55 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 	return iommu_page_response(dev, &resp);
 }
 
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-	vm_fault_t ret;
-	struct mm_struct *mm;
-	struct vm_area_struct *vma;
-	unsigned int access_flags = 0;
-	unsigned int fault_flags = FAULT_FLAG_REMOTE;
-	struct iommu_fault_page_request *prm = &iopf->fault.prm;
-	enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-	if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-		return status;
-
-	mm = iommu_sva_find(prm->pasid);
-	if (IS_ERR_OR_NULL(mm))
-		return status;
-
-	mmap_read_lock(mm);
-
-	vma = find_extend_vma(mm, prm->addr);
-	if (!vma)
-		/* Unmapped area */
-		goto out_put_mm;
-
-	if (prm->perm & IOMMU_FAULT_PERM_READ)
-		access_flags |= VM_READ;
-
-	if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-		access_flags |= VM_WRITE;
-		fault_flags |= FAULT_FLAG_WRITE;
-	}
-
-	if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-		access_flags |= VM_EXEC;
-		fault_flags |= FAULT_FLAG_INSTRUCTION;
-	}
-
-	if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-		fault_flags |= FAULT_FLAG_USER;
-
-	if (access_flags & ~vma->vm_flags)
-		/* Access fault */
-		goto out_put_mm;
-
-	ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-	status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-		IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-	mmap_read_unlock(mm);
-	mmput(mm);
-
-	return status;
-}
-
-static void iopf_handle_group(struct work_struct *work)
+static void iopf_handler(struct work_struct *work)
 {
 	struct iopf_group *group;
+	struct iommu_domain *domain;
 	struct iopf_fault *iopf, *next;
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
 	group = container_of(work, struct iopf_group, work);
+	domain = iommu_get_domain_for_dev_pasid(group->dev,
+				group->last_fault.fault.prm.pasid, 0);
+	if (!domain || !domain->iopf_handler)
+		status = IOMMU_PAGE_RESP_INVALID;
 
 	list_for_each_entry_safe(iopf, next, &group->faults, list) {
 		/*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
 		 * faults in the group if there is an error.
 		 */
 		if (status == IOMMU_PAGE_RESP_SUCCESS)
-			status = iopf_handle_single(iopf);
+			status = domain->iopf_handler(&iopf->fault,
+						      domain->fault_data);
 
 		if (!(iopf->fault.prm.flags &
 		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
@@ -242,7 +192,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
 	group->last_fault.fault = *fault;
 	INIT_LIST_HEAD(&group->faults);
 	list_add(&group->last_fault.list, &group->faults);
-	INIT_WORK(&group->work, iopf_handle_group);
+	INIT_WORK(&group->work, iopf_handler);
 
 	/* See if we have partial faults for this group */
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
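And the teardown-ordering sketch promised above. This is illustrative
only, not code from this series: the my_* names are hypothetical
stand-ins for driver code, and only iopf_queue_flush_dev() is the real
flush entry point referred to in the commit message:

  static void my_sva_unbind(struct my_bond *bond)
  {
	/*
	 * Stop the device issuing new transactions for this PASID. In
	 * the stall model this must also flush stalled transactions.
	 */
	my_device_drain_pasid(bond->dev, bond->pasid);

	/* Push all pending page requests through the IOPF pipeline. */
	iopf_queue_flush_dev(bond->dev);

	/*
	 * Only now can the SVA domain be detached and freed: no queued
	 * fault work can still dereference domain->iopf_handler.
	 */
	my_detach_and_free_domain(bond);
  }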