From patchwork Wed Nov 6 15:19:45 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Juergen Gross, Andrew Cooper, Sander Eikelenboom
Date: Wed, 6 Nov 2019 16:19:45 +0100
Subject: [Xen-devel] [PATCH 3/3] AMD/IOMMU: use notify_dfn() hook to update paging mode

update_paging_mode() expects to be invoked with the PCI devices lock held.
Since the check occurs only when the mode actually needs updating, the
violation of this rule by the majority of callers went unnoticed until
per-domain IOMMU setup was changed to do away with on-demand creation of
IOMMU page tables.

Acquiring the necessary lock in amd_iommu_map_page(), or in intermediate
layers of the generic IOMMU code, is not possible - we'd risk all sorts
of lock order violations. Hence the call to update_paging_mode() gets
pulled out of the function, to be invoked instead from the new
notify_dfn() hook, where no potentially conflicting locks are held by the
callers. Similarly the call to amd_iommu_alloc_root() gets pulled out -
now that we receive notification of all DFN range increases, there is no
longer any need to do this check when actually mapping a page.
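To illustrate the intended calling discipline - a hedged sketch only:
iommu_map_batch() is a made-up name, and the actual generic-layer wiring
of the hook lives in the earlier patches of this series:

    /*
     * Hypothetical generic-layer helper: notify the driver of the highest
     * DFN about to be used while no conflicting locks are held yet, so the
     * AMD hook can safely take pcidevs_lock() and grow the page table
     * depth before any per-page map_page() work happens.
     */
    static int iommu_map_batch(struct domain *d, dfn_t dfn0, mfn_t mfn0,
                               unsigned long nr, unsigned int flags,
                               unsigned int *flush_flags)
    {
        const struct domain_iommu *hd = dom_iommu(d);
        unsigned long i;
        int rc;

        /* One notification covers the whole contiguous DFN range. */
        if ( hd->platform_ops->notify_dfn )
        {
            rc = hd->platform_ops->notify_dfn(d, dfn_add(dfn0, nr - 1));
            if ( rc )
                return rc;
        }

        for ( i = 0; i < nr; ++i )
        {
            rc = hd->platform_ops->map_page(d, dfn_add(dfn0, i),
                                            mfn_add(mfn0, i), flags,
                                            flush_flags);
            if ( rc )
                return rc;
        }

        return 0;
    }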
Note that this ought to result in a small performance improvement as
well: The hook often gets invoked just once for larger blocks of pages,
so rather than going through amd_iommu_alloc_root() and
update_paging_mode() once per page, we may now go through them just once
per batch.

Reported-by: Sander Eikelenboom
Signed-off-by: Jan Beulich

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -383,35 +383,16 @@ int amd_iommu_map_page(struct domain *d,
                        unsigned int flags, unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    int rc;
     unsigned long pt_mfn[7];
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
     spin_lock(&hd->arch.mapping_lock);
 
-    rc = amd_iommu_alloc_root(hd);
-    if ( rc )
+    if ( !hd->arch.root_table )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %"PRI_dfn"\n",
-                        dfn_x(dfn));
-        domain_crash(d);
-        return rc;
-    }
-
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for wider dfn now */
-    if ( is_hvm_domain(d) )
-    {
-        if ( update_paging_mode(d, dfn_x(dfn)) )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
-                            dfn_x(dfn));
-            domain_crash(d);
-            return -EFAULT;
-        }
+        return -ENODATA;
     }
 
     if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn, true) || (pt_mfn[1] == 0) )
@@ -468,6 +449,48 @@ int amd_iommu_unmap_page(struct domain *
 
     return 0;
 }
+
+int amd_iommu_notify_dfn(struct domain *d, dfn_t dfn)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    ASSERT(is_hvm_domain(d));
+
+    /*
+     * Since HVM domain is initialized with 2 level IO page table,
+     * we might need a deeper page table for wider dfn now.
+     */
+    pcidevs_lock();
+    spin_lock(&hd->arch.mapping_lock);
+
+    rc = amd_iommu_alloc_root(hd);
+    if ( rc )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        pcidevs_unlock();
+        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %"PRI_dfn" (rc %d)\n",
+                        dfn_x(dfn), rc);
+        domain_crash(d);
+        return rc;
+    }
+
+    rc = update_paging_mode(d, dfn_x(dfn));
+    if ( rc )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        pcidevs_unlock();
+        AMD_IOMMU_DEBUG("Update paging mode failed dfn %"PRI_dfn" (rc %d)\n",
+                        dfn_x(dfn), rc);
+        domain_crash(d);
+        return rc;
+    }
+
+    spin_unlock(&hd->arch.mapping_lock);
+    pcidevs_unlock();
+
+    return 0;
+}
 
 static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
                                  unsigned int order)
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -628,6 +628,7 @@ static const struct iommu_ops __initcons
     .teardown = amd_iommu_domain_destroy,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
+    .notify_dfn = amd_iommu_notify_dfn,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
     .iotlb_flush_all = amd_iommu_flush_iotlb_all,
     .free_page_table = deallocate_page_table,
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -61,6 +61,7 @@ int __must_check amd_iommu_map_page(stru
 int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
+int __must_check amd_iommu_notify_dfn(struct domain *d, dfn_t dfn);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr,
                                        unsigned long size, int iw, int ir);
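For context, a hedged usage illustration (not part of the patch, and
relying on the generic-layer notify_dfn wiring from the earlier patches
in this series; generic helper names as I recall them from the staging
tree at this time):

    /*
     * Map a 2Mb superpage worth of 4k pages; iommu_map() fans this out
     * into individual map_page() calls. With the notification wiring
     * from the earlier patches in place, the root table allocation and
     * paging mode update are meant to happen once, via notify_dfn(),
     * rather than once per page inside amd_iommu_map_page() as before.
     */
    static int map_example(struct domain *d, dfn_t dfn, mfn_t mfn)
    {
        unsigned int flush_flags = 0;
        int rc = iommu_map(d, dfn, mfn, PAGE_ORDER_2M,
                           IOMMUF_readable | IOMMUF_writable, &flush_flags);

        /* Honor the deferred-flush protocol for any mappings made. */
        if ( !rc )
            rc = iommu_iotlb_flush(d, dfn, 1u << PAGE_ORDER_2M, flush_flags);

        return rc;
    }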