From patchwork Tue Mar 18 15:30:21 2025
From: Andrii Sultanov
To: xen-devel@lists.xenproject.org
Cc: Andrii Sultanov, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v3 1/3] drivers: Change amd_iommu struct to contain pci_sbdf_t, simplify code
Date: Tue, 18 Mar 2025 15:30:21 +0000
Message-ID: <0d578d1bd063f5d58f3dbaf2ab0e21143333a50e.1742311401.git.sultanovandriy@gmail.com>

Following on from 250d87dc3ff9 ("x86/msi: Change __msi_set_enable() to take
pci_sbdf_t"), struct amd_iommu stores its seg and bdf fields in the opposite
order to pci_sbdf_t. Instead of regenerating a pci_sbdf_t from seg+bdf at the
call sites, make the struct contain a pci_sbdf_t directly, which simplifies
the code.

I've avoided aliasing pci_sbdf_t's fields here, so all usages of
amd_iommu->seg|bdf dropped down a namespace to amd_iommu->sbdf.seg|bdf and
required renaming.
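For reference, a simplified sketch of the shape of pci_sbdf_t that this
relies on (the real definition in xen/include/xen/pci.h is a larger union
with additional bus/dev/fn bitfields; only the members used by this series
are shown):

    typedef union {
        uint32_t sbdf;       /* seg in bits 31:16, bdf in bits 15:0 */
        struct {
            uint16_t bdf;    /* bus/device/function */
            uint16_t seg;    /* PCI segment group */
        };
    } pci_sbdf_t;

With the struct holding a pci_sbdf_t, call sites that previously had to
rebuild one, e.g. &PCI_SBDF(iommu->seg, iommu->bdf), can simply pass
&iommu->sbdf.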
Bloat-o-meter reports:

add/remove: 0/0 grow/shrink: 6/11 up/down: 135/-327 (-192)
Function                                     old     new   delta
_einittext                                 22028   22092     +64
amd_iommu_prepare                            853     897     +44
_hvm_dpci_msi_eoi                            157     168     +11
__mon_lengths                               2928    2936      +8
_invalidate_all_devices                      133     138      +5
amd_iommu_get_reserved_device_memory         521     524      +3
amd_iommu_domain_destroy                      46      43      -3
build_info                                   752     744      -8
amd_iommu_add_device                         856     844     -12
amd_iommu_msi_enable                          33      20     -13
update_intremap_entry_from_msi_msg           879     859     -20
amd_iommu_get_supported_ivhd_type             86      54     -32
amd_iommu_detect_one_acpi                    918     886     -32
iterate_ivrs_mappings                        169     129     -40
flush_command_buffer                         460     417     -43
set_iommu_interrupt_handler                  421     377     -44
enable_iommu                                1745    1665     -80

Resolves: https://gitlab.com/xen-project/xen/-/issues/198
Reported-by: Andrew Cooper
Signed-off-by: Andrii Sultanov
---
Changes in V3:
* Dropped the union with seg+bdf/pci_sbdf_t to avoid aliasing, renamed all
  users appropriately

Changes in V2:
* Split single commit into several patches
* Added the commit title of the referenced patch
* Dropped brackets around &(iommu->sbdf) and &(sbdf)
---
 xen/drivers/passthrough/amd/iommu.h         |  4 +--
 xen/drivers/passthrough/amd/iommu_acpi.c    | 16 +++++-----
 xen/drivers/passthrough/amd/iommu_cmd.c     |  8 ++---
 xen/drivers/passthrough/amd/iommu_detect.c  | 18 +++++------
 xen/drivers/passthrough/amd/iommu_init.c    | 35 ++++++++++-----------
 xen/drivers/passthrough/amd/iommu_intr.c    | 26 +++++++--------
 xen/drivers/passthrough/amd/iommu_map.c     |  6 ++--
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 22 ++++++-------
 8 files changed, 66 insertions(+), 69 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 00e81b4b2a..ba541f7943 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -77,8 +77,8 @@ struct amd_iommu {
     struct list_head list;
     spinlock_t lock; /* protect iommu */
 
-    u16 seg;
-    u16 bdf;
+    pci_sbdf_t sbdf;
+
     struct msi_desc msi;
 
     u16 cap_offset;
diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 5bdbfb5ba8..025d9be40f 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -58,7 +58,7 @@ static void __init add_ivrs_mapping_entry(
     uint16_t bdf, uint16_t alias_id, uint8_t flags, unsigned int ext_flags,
     bool alloc_irt, struct amd_iommu *iommu)
 {
-    struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(iommu->seg);
+    struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(iommu->sbdf.seg);
 
     ASSERT( ivrs_mappings != NULL );
 
@@ -70,7 +70,7 @@ static void __init add_ivrs_mapping_entry(
     ivrs_mappings[bdf].device_flags = flags;
 
     /* Don't map an IOMMU by itself. */
-    if ( iommu->bdf == bdf )
+    if ( iommu->sbdf.bdf == bdf )
         return;
 
     /* Allocate interrupt remapping table if needed.
*/ @@ -96,7 +96,7 @@ static void __init add_ivrs_mapping_entry( if ( !ivrs_mappings[alias_id].intremap_table ) panic("No memory for %pp's IRT\n", - &PCI_SBDF(iommu->seg, alias_id)); + &PCI_SBDF(iommu->sbdf.seg, alias_id)); } } @@ -112,7 +112,7 @@ static struct amd_iommu * __init find_iommu_from_bdf_cap( struct amd_iommu *iommu; for_each_amd_iommu ( iommu ) - if ( (iommu->seg == seg) && (iommu->bdf == bdf) && + if ( (iommu->sbdf.seg == seg) && (iommu->sbdf.bdf == bdf) && (iommu->cap_offset == cap_offset) ) return iommu; @@ -297,13 +297,13 @@ static int __init register_range_for_iommu_devices( /* reserve unity-mapped page entries for devices */ for ( bdf = rc = 0; !rc && bdf < ivrs_bdf_entries; bdf++ ) { - if ( iommu != find_iommu_for_device(iommu->seg, bdf) ) + if ( iommu != find_iommu_for_device(iommu->sbdf.seg, bdf) ) continue; - req = get_ivrs_mappings(iommu->seg)[bdf].dte_requestor_id; - rc = reserve_unity_map_for_device(iommu->seg, bdf, base, length, + req = get_ivrs_mappings(iommu->sbdf.seg)[bdf].dte_requestor_id; + rc = reserve_unity_map_for_device(iommu->sbdf.seg, bdf, base, length, iw, ir, false) ?: - reserve_unity_map_for_device(iommu->seg, req, base, length, + reserve_unity_map_for_device(iommu->sbdf.seg, req, base, length, iw, ir, false); } diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c index 83c525b84f..4defa0a44d 100644 --- a/xen/drivers/passthrough/amd/iommu_cmd.c +++ b/xen/drivers/passthrough/amd/iommu_cmd.c @@ -40,7 +40,7 @@ static void send_iommu_command(struct amd_iommu *iommu, IOMMU_RING_BUFFER_PTR_MASK) ) { printk_once(XENLOG_ERR "AMD IOMMU %pp: no cmd slot available\n", - &PCI_SBDF(iommu->seg, iommu->bdf)); + &PCI_SBDF(iommu->sbdf.seg, iommu->sbdf.bdf)); cpu_relax(); } @@ -85,7 +85,7 @@ static void flush_command_buffer(struct amd_iommu *iommu, threshold |= threshold << 1; printk(XENLOG_WARNING "AMD IOMMU %pp: %scompletion wait taking too long\n", - &PCI_SBDF(iommu->seg, iommu->bdf), + &iommu->sbdf, timeout_base ? "iotlb " : ""); timeout = 0; } @@ -95,7 +95,7 @@ static void flush_command_buffer(struct amd_iommu *iommu, if ( !timeout ) printk(XENLOG_WARNING "AMD IOMMU %pp: %scompletion wait took %lums\n", - &PCI_SBDF(iommu->seg, iommu->bdf), + &iommu->sbdf, timeout_base ? 
"iotlb " : "", (NOW() - start) / 10000000); } @@ -300,7 +300,7 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev, if ( !iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) ) return; - req_id = get_dma_requestor_id(iommu->seg, PCI_BDF(pdev->bus, devfn)); + req_id = get_dma_requestor_id(iommu->sbdf.seg, PCI_BDF(pdev->bus, devfn)); queueid = req_id; maxpend = pdev->ats.queue_depth & 0xff; diff --git a/xen/drivers/passthrough/amd/iommu_detect.c b/xen/drivers/passthrough/amd/iommu_detect.c index cede44e651..b9ab09b7c1 100644 --- a/xen/drivers/passthrough/amd/iommu_detect.c +++ b/xen/drivers/passthrough/amd/iommu_detect.c @@ -162,8 +162,8 @@ int __init amd_iommu_detect_one_acpi( spin_lock_init(&iommu->lock); INIT_LIST_HEAD(&iommu->ats_devices); - iommu->seg = ivhd_block->pci_segment_group; - iommu->bdf = ivhd_block->header.device_id; + iommu->sbdf.seg = ivhd_block->pci_segment_group; + iommu->sbdf.bdf = ivhd_block->header.device_id; iommu->cap_offset = ivhd_block->capability_offset; iommu->mmio_base_phys = ivhd_block->base_address; @@ -210,16 +210,16 @@ int __init amd_iommu_detect_one_acpi( /* override IOMMU HT flags */ iommu->ht_flags = ivhd_block->header.flags; - bus = PCI_BUS(iommu->bdf); - dev = PCI_SLOT(iommu->bdf); - func = PCI_FUNC(iommu->bdf); + bus = PCI_BUS(iommu->sbdf.bdf); + dev = PCI_SLOT(iommu->sbdf.bdf); + func = PCI_FUNC(iommu->sbdf.bdf); - rt = get_iommu_capabilities(iommu->seg, bus, dev, func, + rt = get_iommu_capabilities(iommu->sbdf.seg, bus, dev, func, iommu->cap_offset, iommu); if ( rt ) goto out; - rt = get_iommu_msi_capabilities(iommu->seg, bus, dev, func, iommu); + rt = get_iommu_msi_capabilities(iommu->sbdf.seg, bus, dev, func, iommu); if ( rt ) goto out; @@ -228,10 +228,10 @@ int __init amd_iommu_detect_one_acpi( if ( !iommu->domid_map ) goto out; - rt = pci_ro_device(iommu->seg, bus, PCI_DEVFN(dev, func)); + rt = pci_ro_device(iommu->sbdf.seg, bus, PCI_DEVFN(dev, func)); if ( rt ) printk(XENLOG_ERR "Could not mark config space of %pp read-only (%d)\n", - &PCI_SBDF(iommu->seg, iommu->bdf), rt); + &iommu->sbdf, rt); list_add_tail(&iommu->list, &amd_iommu_head); rt = 0; diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c index bb25b55c85..58d657060a 100644 --- a/xen/drivers/passthrough/amd/iommu_init.c +++ b/xen/drivers/passthrough/amd/iommu_init.c @@ -409,9 +409,7 @@ static void iommu_reset_log(struct amd_iommu *iommu, static void amd_iommu_msi_enable(struct amd_iommu *iommu, int flag) { - pci_sbdf_t sbdf = { .seg = iommu->seg, .bdf = iommu->bdf }; - - __msi_set_enable(sbdf, iommu->msi.msi_attrib.pos, flag); + __msi_set_enable(iommu->sbdf, iommu->msi.msi_attrib.pos, flag); } static void cf_check iommu_msi_unmask(struct irq_desc *desc) @@ -566,7 +564,7 @@ static void cf_check parse_event_log_entry(struct amd_iommu *iommu, u32 entry[]) printk(XENLOG_ERR "AMD-Vi: %s: %pp d%u addr %016"PRIx64 " flags %#x%s%s%s%s%s%s%s%s%s%s\n", - code_str, &PCI_SBDF(iommu->seg, device_id), + code_str, &PCI_SBDF(iommu->sbdf.seg, device_id), domain_id, addr, flags, (flags & 0xe00) ? " ??" : "", (flags & 0x100) ? 
" TR" : "", @@ -583,8 +581,8 @@ static void cf_check parse_event_log_entry(struct amd_iommu *iommu, u32 entry[]) amd_iommu_print_entries(iommu, device_id, daddr_to_dfn(addr)); for ( bdf = 0; bdf < ivrs_bdf_entries; bdf++ ) - if ( get_dma_requestor_id(iommu->seg, bdf) == device_id ) - pci_check_disable_device(iommu->seg, PCI_BUS(bdf), + if ( get_dma_requestor_id(iommu->sbdf.seg, bdf) == device_id ) + pci_check_disable_device(iommu->sbdf.seg, PCI_BUS(bdf), PCI_DEVFN(bdf)); } else @@ -643,7 +641,7 @@ static void cf_check parse_ppr_log_entry(struct amd_iommu *iommu, u32 entry[]) struct pci_dev *pdev; pcidevs_lock(); - pdev = pci_get_real_pdev(PCI_SBDF(iommu->seg, device_id)); + pdev = pci_get_real_pdev(PCI_SBDF(iommu->sbdf.seg, device_id)); pcidevs_unlock(); if ( pdev ) @@ -752,12 +750,11 @@ static bool __init set_iommu_interrupt_handler(struct amd_iommu *iommu) } pcidevs_lock(); - iommu->msi.dev = pci_get_pdev(NULL, PCI_SBDF(iommu->seg, iommu->bdf)); + iommu->msi.dev = pci_get_pdev(NULL, iommu->sbdf); pcidevs_unlock(); if ( !iommu->msi.dev ) { - AMD_IOMMU_WARN("no pdev for %pp\n", - &PCI_SBDF(iommu->seg, iommu->bdf)); + AMD_IOMMU_WARN("no pdev for %pp\n", &iommu->sbdf); return 0; } @@ -779,7 +776,7 @@ static bool __init set_iommu_interrupt_handler(struct amd_iommu *iommu) hw_irq_controller *handler; u16 control; - control = pci_conf_read16(PCI_SBDF(iommu->seg, iommu->bdf), + control = pci_conf_read16(iommu->sbdf, iommu->msi.msi_attrib.pos + PCI_MSI_FLAGS); iommu->msi.msi.nvec = 1; @@ -843,22 +840,22 @@ static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu) (boot_cpu_data.x86_model > 0x1f) ) return; - pci_conf_write32(PCI_SBDF(iommu->seg, iommu->bdf), 0xf0, 0x90); - value = pci_conf_read32(PCI_SBDF(iommu->seg, iommu->bdf), 0xf4); + pci_conf_write32(iommu->sbdf, 0xf0, 0x90); + value = pci_conf_read32(iommu->sbdf, 0xf4); if ( value & (1 << 2) ) return; /* Select NB indirect register 0x90 and enable writing */ - pci_conf_write32(PCI_SBDF(iommu->seg, iommu->bdf), 0xf0, 0x90 | (1 << 8)); + pci_conf_write32(iommu->sbdf, 0xf0, 0x90 | (1 << 8)); - pci_conf_write32(PCI_SBDF(iommu->seg, iommu->bdf), 0xf4, value | (1 << 2)); + pci_conf_write32(iommu->sbdf, 0xf4, value | (1 << 2)); printk(XENLOG_INFO "AMD-Vi: Applying erratum 746 workaround for IOMMU at %pp\n", - &PCI_SBDF(iommu->seg, iommu->bdf)); + &iommu->sbdf); /* Clear the enable writing bit */ - pci_conf_write32(PCI_SBDF(iommu->seg, iommu->bdf), 0xf0, 0x90); + pci_conf_write32(iommu->sbdf, 0xf0, 0x90); } static void enable_iommu(struct amd_iommu *iommu) @@ -1095,7 +1092,7 @@ static int __init amd_iommu_init_one(struct amd_iommu *iommu, bool intr) goto error_out; /* Make sure that the device table has been successfully allocated. 
*/ - ivrs_mappings = get_ivrs_mappings(iommu->seg); + ivrs_mappings = get_ivrs_mappings(iommu->sbdf.seg); if ( !IVRS_MAPPINGS_DEVTAB(ivrs_mappings) ) goto error_out; @@ -1363,7 +1360,7 @@ static bool __init amd_sp5100_erratum28(void) static int __init amd_iommu_prepare_one(struct amd_iommu *iommu) { - int rc = alloc_ivrs_mappings(iommu->seg); + int rc = alloc_ivrs_mappings(iommu->sbdf.seg); if ( !rc ) rc = map_iommu_mmio_region(iommu); diff --git a/xen/drivers/passthrough/amd/iommu_intr.c b/xen/drivers/passthrough/amd/iommu_intr.c index 9abdc38053..754e9051dc 100644 --- a/xen/drivers/passthrough/amd/iommu_intr.c +++ b/xen/drivers/passthrough/amd/iommu_intr.c @@ -132,7 +132,7 @@ static int get_intremap_requestor_id(int seg, int bdf) static unsigned int alloc_intremap_entry(const struct amd_iommu *iommu, unsigned int bdf, unsigned int nr) { - const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(iommu->seg); + const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(iommu->sbdf.seg); unsigned long *inuse = ivrs_mappings[bdf].intremap_inuse; unsigned int nr_ents = intremap_table_entries(ivrs_mappings[bdf].intremap_table, iommu); @@ -167,7 +167,7 @@ static union irte_ptr get_intremap_entry(const struct amd_iommu *iommu, unsigned int bdf, unsigned int index) { union irte_ptr table = { - .ptr = get_ivrs_mappings(iommu->seg)[bdf].intremap_table + .ptr = get_ivrs_mappings(iommu->sbdf.seg)[bdf].intremap_table }; ASSERT(table.ptr && (index < intremap_table_entries(table.ptr, iommu))); @@ -184,7 +184,7 @@ static void free_intremap_entry(const struct amd_iommu *iommu, unsigned int bdf, unsigned int index) { union irte_ptr entry = get_intremap_entry(iommu, bdf, index); - struct ivrs_mappings *ivrs = get_ivrs_mappings(iommu->seg); + struct ivrs_mappings *ivrs = get_ivrs_mappings(iommu->sbdf.seg); if ( iommu->ctrl.ga_en ) { @@ -281,8 +281,8 @@ static int update_intremap_entry_from_ioapic( unsigned int dest, offset; bool fresh = false; - req_id = get_intremap_requestor_id(iommu->seg, bdf); - lock = get_intremap_lock(iommu->seg, req_id); + req_id = get_intremap_requestor_id(iommu->sbdf.seg, bdf); + lock = get_intremap_lock(iommu->sbdf.seg, req_id); delivery_mode = rte->delivery_mode; vector = rte->vector; @@ -419,10 +419,10 @@ static int update_intremap_entry_from_msi_msg( unsigned int dest, offset, i; bool fresh = false; - req_id = get_dma_requestor_id(iommu->seg, bdf); - alias_id = get_intremap_requestor_id(iommu->seg, bdf); + req_id = get_dma_requestor_id(iommu->sbdf.seg, bdf); + alias_id = get_intremap_requestor_id(iommu->sbdf.seg, bdf); - lock = get_intremap_lock(iommu->seg, req_id); + lock = get_intremap_lock(iommu->sbdf.seg, req_id); spin_lock_irqsave(lock, flags); if ( msg == NULL ) @@ -486,10 +486,10 @@ static int update_intremap_entry_from_msi_msg( */ if ( ( req_id != alias_id ) && - get_ivrs_mappings(iommu->seg)[alias_id].intremap_table != NULL ) + get_ivrs_mappings(iommu->sbdf.seg)[alias_id].intremap_table != NULL ) { - BUG_ON(get_ivrs_mappings(iommu->seg)[req_id].intremap_table != - get_ivrs_mappings(iommu->seg)[alias_id].intremap_table); + BUG_ON(get_ivrs_mappings(iommu->sbdf.seg)[req_id].intremap_table != + get_ivrs_mappings(iommu->sbdf.seg)[alias_id].intremap_table); } return fresh; @@ -500,7 +500,7 @@ static struct amd_iommu *_find_iommu_for_device(int seg, int bdf) struct amd_iommu *iommu; for_each_amd_iommu ( iommu ) - if ( iommu->seg == seg && iommu->bdf == bdf ) + if ( iommu->sbdf.seg == seg && iommu->sbdf.bdf == bdf ) return NULL; iommu = find_iommu_for_device(seg, bdf); 
@@ -730,7 +730,7 @@ static void dump_intremap_table(const struct amd_iommu *iommu, if ( ivrs_mapping ) { printk(" %pp:\n", - &PCI_SBDF(iommu->seg, ivrs_mapping->dte_requestor_id)); + &PCI_SBDF(iommu->sbdf.seg, ivrs_mapping->dte_requestor_id)); ivrs_mapping = NULL; } diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c index dde393645a..78bde0aeb6 100644 --- a/xen/drivers/passthrough/amd/iommu_map.c +++ b/xen/drivers/passthrough/amd/iommu_map.c @@ -558,14 +558,14 @@ void amd_iommu_print_entries(const struct amd_iommu *iommu, unsigned int dev_id, if ( !dt[dev_id].tv ) { - printk("%pp: no root\n", &PCI_SBDF(iommu->seg, dev_id)); + printk("%pp: no root\n", &PCI_SBDF(iommu->sbdf.seg, dev_id)); return; } pt_mfn = _mfn(dt[dev_id].pt_root); level = dt[dev_id].paging_mode; printk("%pp root @ %"PRI_mfn" (%u levels) dfn=%"PRI_dfn"\n", - &PCI_SBDF(iommu->seg, dev_id), mfn_x(pt_mfn), level, dfn_x(dfn)); + &PCI_SBDF(iommu->sbdf.seg, dev_id), mfn_x(pt_mfn), level, dfn_x(dfn)); while ( level ) { @@ -730,7 +730,7 @@ int cf_check amd_iommu_get_reserved_device_memory( * the same alias ID. */ if ( bdf != req && ivrs_mappings[req].iommu && - func(0, 0, PCI_SBDF(seg, req).sbdf, ctxt) ) + func(0, 0, sbdf.sbdf, ctxt) ) continue; if ( global == pending ) diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c index d00697edb3..55218d9a1d 100644 --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c @@ -43,7 +43,7 @@ struct amd_iommu *find_iommu_for_device(int seg, int bdf) { unsigned int bd0 = bdf & ~PCI_FUNC(~0); - if ( ivrs_mappings[bd0].iommu && ivrs_mappings[bd0].iommu->bdf != bdf ) + if ( ivrs_mappings[bd0].iommu && ivrs_mappings[bd0].iommu->sbdf.bdf != bdf ) { struct ivrs_mappings tmp = ivrs_mappings[bd0]; @@ -121,7 +121,7 @@ static bool use_ats( { return !ivrs_dev->block_ats && iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) && - pci_ats_device(iommu->seg, pdev->bus, pdev->devfn); + pci_ats_device(iommu->sbdf.seg, pdev->bus, pdev->devfn); } static int __must_check amd_iommu_setup_domain_device( @@ -147,17 +147,17 @@ static int __must_check amd_iommu_setup_domain_device( if ( rc ) return rc; - req_id = get_dma_requestor_id(iommu->seg, pdev->sbdf.bdf); - ivrs_dev = &get_ivrs_mappings(iommu->seg)[req_id]; + req_id = get_dma_requestor_id(iommu->sbdf.seg, pdev->sbdf.bdf); + ivrs_dev = &get_ivrs_mappings(iommu->sbdf.seg)[req_id]; sr_flags = (iommu_hwdom_passthrough && is_hardware_domain(domain) ? 0 : SET_ROOT_VALID) | (ivrs_dev->unity_map ? 
SET_ROOT_WITH_UNITY_MAP : 0); /* get device-table entry */ - req_id = get_dma_requestor_id(iommu->seg, PCI_BDF(bus, devfn)); table = iommu->dev_table.buffer; dte = &table[req_id]; - ivrs_dev = &get_ivrs_mappings(iommu->seg)[req_id]; + req_id = get_dma_requestor_id(iommu->sbdf.seg, PCI_BDF(bus, devfn)); table = iommu->dev_table.buffer; dte = &table[req_id]; + ivrs_dev = &get_ivrs_mappings(iommu->sbdf.seg)[req_id]; if ( domain != dom_io ) { @@ -275,7 +275,7 @@ static int __must_check amd_iommu_setup_domain_device( ASSERT(pcidevs_locked()); if ( use_ats(pdev, iommu, ivrs_dev) && - !pci_ats_enabled(iommu->seg, bus, pdev->devfn) ) + !pci_ats_enabled(iommu->sbdf.seg, bus, pdev->devfn) ) { if ( devfn == pdev->devfn ) enable_ats_device(pdev, &iommu->ats_devices); @@ -418,12 +418,12 @@ static void amd_iommu_disable_domain_device(const struct domain *domain, ASSERT(pcidevs_locked()); - if ( pci_ats_device(iommu->seg, bus, pdev->devfn) && - pci_ats_enabled(iommu->seg, bus, pdev->devfn) ) + if ( pci_ats_device(iommu->sbdf.seg, bus, pdev->devfn) && + pci_ats_enabled(iommu->sbdf.seg, bus, pdev->devfn) ) disable_ats_device(pdev); BUG_ON ( iommu->dev_table.buffer == NULL ); - req_id = get_dma_requestor_id(iommu->seg, PCI_BDF(bus, devfn)); + req_id = get_dma_requestor_id(iommu->sbdf.seg, PCI_BDF(bus, devfn)); table = iommu->dev_table.buffer; dte = &table[req_id]; @@ -578,7 +578,7 @@ static int cf_check amd_iommu_add_device(u8 devfn, struct pci_dev *pdev) return -EINVAL; for_each_amd_iommu(iommu) - if ( pdev->seg == iommu->seg && pdev->sbdf.bdf == iommu->bdf ) + if ( pdev->seg == iommu->sbdf.seg && pdev->sbdf.bdf == iommu->sbdf.bdf ) return is_hardware_domain(pdev->domain) ? 0 : -ENODEV; iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf);

From patchwork Tue Mar 18 15:30:22 2025
From: Andrii Sultanov
To: xen-devel@lists.xenproject.org
Cc: Andrii Sultanov, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v3 2/3] drivers: Change find_iommu_for_device function to take pci_sbdf_t, simplify code
Date: Tue, 18 Mar 2025 15:30:22 +0000
Message-ID: <3a9690d545547a1b39b20e256613308a814e2c27.1742311401.git.sultanovandriy@gmail.com>

Following a similar change to the amd_iommu struct, change the
find_iommu_for_device function to take a pci_sbdf_t as a single parameter.
This removes conversions in the majority of cases.
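For illustration, a typical caller before and after this change (all three
forms are taken from the hunks below, not new code):

    /* Before: the device's sbdf had to be split back into seg + bdf. */
    iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf);

    /* After: the pci_sbdf_t is passed through as-is... */
    iommu = find_iommu_for_device(pdev->sbdf);

    /* ...and callers holding only a separate segment/BDF pair wrap them. */
    iommu = find_iommu_for_device(PCI_SBDF(seg, bdf));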
Bloat-o-meter reports (on top of the first patch in the series):

add/remove: 0/0 grow/shrink: 11/13 up/down: 90/-154 (-64)
Function                                     old     new   delta
amd_iommu_get_supported_ivhd_type             54      86     +32
parse_ivrs_table                            3955    3966     +11
amd_iommu_assign_device                      271     282     +11
__mon_lengths                               2928    2936      +8
update_intremap_entry_from_msi_msg           859     864      +5
iov_supports_xt                              270     275      +5
amd_setup_hpet_msi                           232     237      +5
amd_iommu_domain_destroy                      43      48      +5
find_iommu_for_device                        242     246      +4
amd_iommu_ioapic_update_ire                  572     575      +3
allocate_domain_resources                     82      83      +1
amd_iommu_read_ioapic_from_ire               347     344      -3
reassign_device                              843     838      -5
amd_iommu_remove_device                      352     347      -5
amd_iommu_get_reserved_device_memory         524     519      -5
amd_iommu_flush_iotlb                        359     354      -5
amd_iommu_add_device                         844     839      -5
amd_iommu_setup_domain_device               1478    1472      -6
build_info                                   752     744      -8
amd_iommu_detect_one_acpi                    886     876     -10
register_range_for_device                    297     281     -16
amd_iommu_msi_msg_update_ire                 472     448     -24
parse_ivmd_block                            1339    1312     -27
_hvm_dpci_msi_eoi                            168     133     -35

Signed-off-by: Andrii Sultanov
Acked-by: Jan Beulich
---
Changes in V3:
* Amended commit message
* As the previous patch dropped the aliasing of seg and bdf, renamed users of
  amd_iommu as appropriate

Changes in V2:
* Split single commit into several patches
* Dropped brackets around &(iommu->sbdf) and &(sbdf)
* Dropped most of the hunk in _invalidate_all_devices - it was
  bloat-equivalent to the existing code - just convert with PCI_SBDF instead
* Dropped the hunks in get_intremap_requestor_id (iommu_intr.c) and
  amd_iommu_get_reserved_device_memory (iommu_map.c) as they were only
  increasing the code size
* Kept "/* XXX */" where appropriate
* Fixed a slip-up in register_range_for_iommu_devices where iommu->sbdf
  replaced the usage of *different* seg and bdf (see the sketch below)
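For clarity, that slip-up amounts to the difference between the following two
calls in register_range_for_iommu_devices(); the first line is taken from the
hunk below, the second is an illustration of the mistaken v2 form (bdf here is
the loop variable iterating over IVRS entries, not the IOMMU's own BDF):

    /* v3 (correct): combine the IOMMU's segment with the iterated bdf. */
    if ( iommu != find_iommu_for_device(PCI_SBDF(iommu->sbdf.seg, bdf)) )
        continue;

    /* v2 slip-up (wrong): this looked up the IOMMU's own device instead. */
    if ( iommu != find_iommu_for_device(iommu->sbdf) )
        continue;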
--- xen/drivers/passthrough/amd/iommu.h | 2 +- xen/drivers/passthrough/amd/iommu_acpi.c | 14 ++++----- xen/drivers/passthrough/amd/iommu_cmd.c | 2 +- xen/drivers/passthrough/amd/iommu_init.c | 4 +-- xen/drivers/passthrough/amd/iommu_intr.c | 20 ++++++------- xen/drivers/passthrough/amd/iommu_map.c | 2 +- xen/drivers/passthrough/amd/pci_amd_iommu.c | 32 ++++++++++----------- 7 files changed, 38 insertions(+), 38 deletions(-) diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h index ba541f7943..2599800e6a 100644 --- a/xen/drivers/passthrough/amd/iommu.h +++ b/xen/drivers/passthrough/amd/iommu.h @@ -240,7 +240,7 @@ void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf); void amd_iommu_flush_all_caches(struct amd_iommu *iommu); /* find iommu for bdf */ -struct amd_iommu *find_iommu_for_device(int seg, int bdf); +struct amd_iommu *find_iommu_for_device(pci_sbdf_t sbdf); /* interrupt remapping */ bool cf_check iov_supports_xt(void); diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c index 025d9be40f..9e4fbee953 100644 --- a/xen/drivers/passthrough/amd/iommu_acpi.c +++ b/xen/drivers/passthrough/amd/iommu_acpi.c @@ -239,17 +239,17 @@ static int __init register_range_for_device( unsigned int bdf, paddr_t base, paddr_t limit, bool iw, bool ir, bool exclusion) { - int seg = 0; /* XXX */ - struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(seg); + pci_sbdf_t sbdf = { .seg = 0 /* XXX */, .bdf = bdf }; + struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(sbdf.seg); struct amd_iommu *iommu; u16 req; int rc = 0; - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(sbdf); if ( !iommu ) { AMD_IOMMU_WARN("IVMD: no IOMMU for device %pp - ignoring constrain\n", - &PCI_SBDF(seg, bdf)); + &sbdf); return 0; } req = ivrs_mappings[bdf].dte_requestor_id; @@ -263,9 +263,9 @@ static int __init register_range_for_device( paddr_t length = limit + PAGE_SIZE - base; /* reserve unity-mapped page entries for device */ - rc = reserve_unity_map_for_device(seg, bdf, base, length, iw, ir, + rc = reserve_unity_map_for_device(sbdf.seg, bdf, base, length, iw, ir, false) ?: - reserve_unity_map_for_device(seg, req, base, length, iw, ir, + reserve_unity_map_for_device(sbdf.seg, req, base, length, iw, ir, false); } else @@ -297,7 +297,7 @@ static int __init register_range_for_iommu_devices( /* reserve unity-mapped page entries for devices */ for ( bdf = rc = 0; !rc && bdf < ivrs_bdf_entries; bdf++ ) { - if ( iommu != find_iommu_for_device(iommu->sbdf.seg, bdf) ) + if ( iommu != find_iommu_for_device(PCI_SBDF(iommu->sbdf.seg, bdf)) ) continue; req = get_ivrs_mappings(iommu->sbdf.seg)[bdf].dte_requestor_id; diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c index 4defa0a44d..e007a4caca 100644 --- a/xen/drivers/passthrough/amd/iommu_cmd.c +++ b/xen/drivers/passthrough/amd/iommu_cmd.c @@ -288,7 +288,7 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev, if ( !pci_ats_enabled(pdev->seg, pdev->bus, pdev->devfn) ) return; - iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf); + iommu = find_iommu_for_device(pdev->sbdf); if ( !iommu ) { diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c index 58d657060a..3f6d2f5db5 100644 --- a/xen/drivers/passthrough/amd/iommu_init.c +++ b/xen/drivers/passthrough/amd/iommu_init.c @@ -1540,13 +1540,13 @@ static void invalidate_all_domain_pages(void) static int cf_check 
_invalidate_all_devices( u16 seg, struct ivrs_mappings *ivrs_mappings) { - unsigned int bdf; + unsigned int bdf; u16 req_id; struct amd_iommu *iommu; for ( bdf = 0; bdf < ivrs_bdf_entries; bdf++ ) { - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(PCI_SBDF(seg, bdf)); req_id = ivrs_mappings[bdf].dte_requestor_id; if ( iommu ) { diff --git a/xen/drivers/passthrough/amd/iommu_intr.c b/xen/drivers/passthrough/amd/iommu_intr.c index 754e9051dc..b894851ebf 100644 --- a/xen/drivers/passthrough/amd/iommu_intr.c +++ b/xen/drivers/passthrough/amd/iommu_intr.c @@ -337,7 +337,7 @@ void cf_check amd_iommu_ioapic_update_ire( /* get device id of ioapic devices */ bdf = ioapic_sbdf[idx].bdf; seg = ioapic_sbdf[idx].seg; - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(PCI_SBDF(seg, bdf)); if ( !iommu ) { AMD_IOMMU_WARN("failed to find IOMMU for IO-APIC @ %04x:%04x\n", @@ -383,7 +383,7 @@ unsigned int cf_check amd_iommu_read_ioapic_from_ire( seg = ioapic_sbdf[idx].seg; bdf = ioapic_sbdf[idx].bdf; - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(PCI_SBDF(seg, bdf)); if ( !iommu ) return val; req_id = get_intremap_requestor_id(seg, bdf); @@ -495,19 +495,19 @@ static int update_intremap_entry_from_msi_msg( return fresh; } -static struct amd_iommu *_find_iommu_for_device(int seg, int bdf) +static struct amd_iommu *_find_iommu_for_device(pci_sbdf_t sbdf) { struct amd_iommu *iommu; for_each_amd_iommu ( iommu ) - if ( iommu->sbdf.seg == seg && iommu->sbdf.bdf == bdf ) + if ( iommu->sbdf.sbdf == sbdf.sbdf ) return NULL; - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(sbdf); if ( iommu ) return iommu; - AMD_IOMMU_DEBUG("No IOMMU for MSI dev = %pp\n", &PCI_SBDF(seg, bdf)); + AMD_IOMMU_DEBUG("No IOMMU for MSI dev = %pp\n", &sbdf); return ERR_PTR(-EINVAL); } @@ -523,7 +523,7 @@ int cf_check amd_iommu_msi_msg_update_ire( bdf = pdev ? pdev->sbdf.bdf : hpet_sbdf.bdf; seg = pdev ? 
pdev->seg : hpet_sbdf.seg; - iommu = _find_iommu_for_device(seg, bdf); + iommu = _find_iommu_for_device(PCI_SBDF(seg, bdf)); if ( IS_ERR_OR_NULL(iommu) ) return PTR_ERR(iommu); @@ -660,8 +660,8 @@ bool __init cf_check iov_supports_xt(void) if ( idx == MAX_IO_APICS ) return false; - if ( !find_iommu_for_device(ioapic_sbdf[idx].seg, - ioapic_sbdf[idx].bdf) ) + if ( !find_iommu_for_device(PCI_SBDF(ioapic_sbdf[idx].seg, + ioapic_sbdf[idx].bdf)) ) { AMD_IOMMU_WARN("no IOMMU for IO-APIC %#x (ID %x)\n", apic, IO_APIC_ID(apic)); @@ -690,7 +690,7 @@ int __init cf_check amd_setup_hpet_msi(struct msi_desc *msi_desc) return -ENODEV; } - iommu = find_iommu_for_device(hpet_sbdf.seg, hpet_sbdf.bdf); + iommu = find_iommu_for_device(PCI_SBDF(hpet_sbdf.seg, hpet_sbdf.bdf)); if ( !iommu ) return -ENXIO; diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c index 78bde0aeb6..8463c98abf 100644 --- a/xen/drivers/passthrough/amd/iommu_map.c +++ b/xen/drivers/passthrough/amd/iommu_map.c @@ -717,7 +717,7 @@ int cf_check amd_iommu_get_reserved_device_memory( pcidevs_unlock(); if ( pdev ) - iommu = find_iommu_for_device(seg, bdf); + iommu = find_iommu_for_device(sbdf); if ( !iommu ) continue; } diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c index 55218d9a1d..3a14770855 100644 --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c @@ -32,35 +32,35 @@ static bool __read_mostly init_done; static const struct iommu_init_ops _iommu_init_ops; -struct amd_iommu *find_iommu_for_device(int seg, int bdf) +struct amd_iommu *find_iommu_for_device(pci_sbdf_t sbdf) { - struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(seg); + struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(sbdf.seg); - if ( !ivrs_mappings || bdf >= ivrs_bdf_entries ) + if ( !ivrs_mappings || sbdf.bdf >= ivrs_bdf_entries ) return NULL; - if ( unlikely(!ivrs_mappings[bdf].iommu) && likely(init_done) ) + if ( unlikely(!ivrs_mappings[sbdf.bdf].iommu) && likely(init_done) ) { - unsigned int bd0 = bdf & ~PCI_FUNC(~0); + unsigned int bd0 = sbdf.bdf & ~PCI_FUNC(~0); - if ( ivrs_mappings[bd0].iommu && ivrs_mappings[bd0].iommu->sbdf.bdf != bdf ) + if ( ivrs_mappings[bd0].iommu && ivrs_mappings[bd0].iommu->sbdf.bdf != sbdf.bdf ) { struct ivrs_mappings tmp = ivrs_mappings[bd0]; tmp.iommu = NULL; if ( tmp.dte_requestor_id == bd0 ) - tmp.dte_requestor_id = bdf; - ivrs_mappings[bdf] = tmp; + tmp.dte_requestor_id = sbdf.bdf; + ivrs_mappings[sbdf.bdf] = tmp; printk(XENLOG_WARNING "%pp not found in ACPI tables;" - " using same IOMMU as function 0\n", &PCI_SBDF(seg, bdf)); + " using same IOMMU as function 0\n", &sbdf); /* write iommu field last */ - ivrs_mappings[bdf].iommu = ivrs_mappings[bd0].iommu; + ivrs_mappings[sbdf.bdf].iommu = ivrs_mappings[bd0].iommu; } } - return ivrs_mappings[bdf].iommu; + return ivrs_mappings[sbdf.bdf].iommu; } /* @@ -107,7 +107,7 @@ static bool any_pdev_behind_iommu(const struct domain *d, if ( pdev == exclude ) continue; - if ( find_iommu_for_device(pdev->seg, pdev->sbdf.bdf) == iommu ) + if ( find_iommu_for_device(pdev->sbdf) == iommu ) return true; } @@ -468,7 +468,7 @@ static int cf_check reassign_device( struct amd_iommu *iommu; int rc; - iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf); + iommu = find_iommu_for_device(pdev->sbdf); if ( !iommu ) { AMD_IOMMU_WARN("failed to find IOMMU: %pp cannot be assigned to %pd\n", @@ -578,10 +578,10 @@ static int cf_check amd_iommu_add_device(u8 
devfn, struct pci_dev *pdev) return -EINVAL; for_each_amd_iommu(iommu) - if ( pdev->seg == iommu->sbdf.seg && pdev->sbdf.bdf == iommu->sbdf.bdf ) + if ( pdev->sbdf.sbdf == iommu->sbdf.sbdf ) return is_hardware_domain(pdev->domain) ? 0 : -ENODEV; - iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf); + iommu = find_iommu_for_device(pdev->sbdf); if ( unlikely(!iommu) ) { /* Filter bridge devices. */ @@ -666,7 +666,7 @@ static int cf_check amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev) if ( !pdev->domain ) return -EINVAL; - iommu = find_iommu_for_device(pdev->seg, pdev->sbdf.bdf); + iommu = find_iommu_for_device(pdev->sbdf); if ( !iommu ) { AMD_IOMMU_WARN("failed to find IOMMU: %pp cannot be removed from %pd\n",

From patchwork Tue Mar 18 15:30:23 2025
From: Andrii Sultanov
To: xen-devel@lists.xenproject.org
Cc: Andrii Sultanov, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v3 3/3] drivers: Make ioapic_sbdf and hpet_sbdf contain pci_sbdf_t
Date: Tue, 18 Mar 2025 15:30:23 +0000
Message-ID: <06e7126700f1bd8a0db5199209a275cf2b1e09f2.1742311401.git.sultanovandriy@gmail.com>

Following a similar change to the amd_iommu struct, make two more structs
contain a pci_sbdf_t directly instead of separate seg and bdf fields. This
lets us drop several conversions from the latter to the former and simplifies
several comparisons and assignments.
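The net effect on the two structures, condensed from the iommu.h hunks below
(remaining fields and the exact declarators are abridged; the MAX_IO_APICS
bound is assumed from its use elsewhere in the file):

    extern struct ioapic_sbdf {
        pci_sbdf_t sbdf;   /* was: u16 bdf, seg; */
        u8 id;
        bool cmdline;
        u16 *pin_2_idx;
    } ioapic_sbdf[MAX_IO_APICS];

    extern struct hpet_sbdf {
        pci_sbdf_t sbdf;   /* was: u16 bdf, seg, id; */
        uint16_t id;
        /* ... */
    } hpet_sbdf;

Matching an IO-APIC entry then collapses from comparing seg and bdf
separately to a single sbdf.sbdf comparison, and find_iommu_for_device() can
be handed ioapic_sbdf[idx].sbdf or hpet_sbdf.sbdf without reassembling a
pci_sbdf_t.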
Signed-off-by: Andrii Sultanov
---
This particular commit does not have any effect on code size:

add/remove: 0/0 grow/shrink: 2/6 up/down: 96/-96 (0)
Function                                     old     new   delta
_einittext                                 22092   22156     +64
_hvm_dpci_msi_eoi                            133     165     +32
amd_iommu_detect_one_acpi                    876     868      -8
iov_supports_xt                              275     264     -11
amd_iommu_read_ioapic_from_ire               344     332     -12
amd_setup_hpet_msi                           237     224     -13
amd_iommu_ioapic_update_ire                  575     555     -20
amd_iommu_get_supported_ivhd_type             86      54     -32

So we don't have to take it, but IMO it significantly simplifies the source
code and makes it more readable.

Changes in V3:
* Dropped aliasing of seg and bdf, renamed users

Changes in V2:
* Split single commit into several patches
* Change the format specifier to %pp in amd_iommu_ioapic_update_ire
---
 xen/drivers/passthrough/amd/iommu.h      |  5 +--
 xen/drivers/passthrough/amd/iommu_acpi.c | 28 +++++++--------
 xen/drivers/passthrough/amd/iommu_intr.c | 44 +++++++++++-------------
 3 files changed, 38 insertions(+), 39 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h index 2599800e6a..52f748310b 100644 --- a/xen/drivers/passthrough/amd/iommu.h +++ b/xen/drivers/passthrough/amd/iommu.h @@ -262,7 +262,7 @@ int cf_check amd_setup_hpet_msi(struct msi_desc *msi_desc); void cf_check amd_iommu_dump_intremap_tables(unsigned char key); extern struct ioapic_sbdf { - u16 bdf, seg; + pci_sbdf_t sbdf; u8 id; bool cmdline; u16 *pin_2_idx; @@ -273,7 +273,8 @@ unsigned int ioapic_id_to_index(unsigned int apic_id); unsigned int get_next_ioapic_sbdf_index(void); extern struct hpet_sbdf { - u16 bdf, seg, id; + pci_sbdf_t sbdf; + uint16_t id; enum { HPET_NONE, HPET_CMDL, diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c index 9e4fbee953..bb1eba2aa7 100644 --- a/xen/drivers/passthrough/amd/iommu_acpi.c +++ b/xen/drivers/passthrough/amd/iommu_acpi.c @@ -707,8 +707,8 @@ static int __init cf_check parse_ivrs_ioapic(const char *str) } } - ioapic_sbdf[idx].bdf = PCI_BDF(bus, dev, func); - ioapic_sbdf[idx].seg = seg; + ioapic_sbdf[idx].sbdf.bdf = PCI_BDF(bus, dev, func); + ioapic_sbdf[idx].sbdf.seg = seg; ioapic_sbdf[idx].id = id; ioapic_sbdf[idx].cmdline = true; @@ -734,8 +734,8 @@ static int __init cf_check parse_ivrs_hpet(const char *str) return -EINVAL; hpet_sbdf.id = id; - hpet_sbdf.bdf = PCI_BDF(bus, dev, func); - hpet_sbdf.seg = seg; + hpet_sbdf.sbdf.bdf = PCI_BDF(bus, dev, func); + hpet_sbdf.sbdf.seg = seg; hpet_sbdf.init = HPET_CMDL; return 0; @@ -780,8 +780,8 @@ static u16 __init parse_ivhd_device_special( */ for ( idx = 0; idx < nr_ioapic_sbdf; idx++ ) { - if ( ioapic_sbdf[idx].bdf == bdf && - ioapic_sbdf[idx].seg == seg && + if ( ioapic_sbdf[idx].sbdf.bdf == bdf && + ioapic_sbdf[idx].sbdf.seg == seg && ioapic_sbdf[idx].cmdline ) break; } @@ -805,8 +805,8 @@ static u16 __init parse_ivhd_device_special( special->handle); else if ( idx != MAX_IO_APICS && ioapic_sbdf[idx].pin_2_idx ) { - if ( ioapic_sbdf[idx].bdf == bdf && - ioapic_sbdf[idx].seg == seg ) + if ( ioapic_sbdf[idx].sbdf.bdf == bdf && + ioapic_sbdf[idx].sbdf.seg == seg ) AMD_IOMMU_WARN("IVHD: duplicate IO-APIC %#x entries\n", special->handle); else @@ -827,8 +827,8 @@ static u16 __init parse_ivhd_device_special( } /* set device id of ioapic */ - ioapic_sbdf[idx].bdf = bdf; - ioapic_sbdf[idx].seg = seg; + ioapic_sbdf[idx].sbdf.bdf = bdf; + ioapic_sbdf[idx].sbdf.seg = seg; ioapic_sbdf[idx].id = special->handle; ioapic_sbdf[idx].pin_2_idx = xmalloc_array( @@ -867,8 +867,8 @@ static u16 __init
parse_ivhd_device_special( case HPET_NONE: /* set device id of hpet */ hpet_sbdf.id = special->handle; - hpet_sbdf.bdf = bdf; - hpet_sbdf.seg = seg; + hpet_sbdf.sbdf.bdf = bdf; + hpet_sbdf.sbdf.seg = seg; hpet_sbdf.init = HPET_IVHD; break; default: @@ -1139,9 +1139,9 @@ static int __init cf_check parse_ivrs_table(struct acpi_table_header *table) return -ENXIO; } - if ( !ioapic_sbdf[idx].seg && + if ( !ioapic_sbdf[idx].sbdf.seg && /* SB IO-APIC is always on this device in AMD systems. */ - ioapic_sbdf[idx].bdf == PCI_BDF(0, 0x14, 0) ) + ioapic_sbdf[idx].sbdf.bdf == PCI_BDF(0, 0x14, 0) ) sb_ioapic = 1; if ( ioapic_sbdf[idx].pin_2_idx ) diff --git a/xen/drivers/passthrough/amd/iommu_intr.c b/xen/drivers/passthrough/amd/iommu_intr.c index b894851ebf..b788675be2 100644 --- a/xen/drivers/passthrough/amd/iommu_intr.c +++ b/xen/drivers/passthrough/amd/iommu_intr.c @@ -323,7 +323,8 @@ void cf_check amd_iommu_ioapic_update_ire( unsigned int apic, unsigned int pin, uint64_t rte) { struct IO_APIC_route_entry new_rte; - int seg, bdf, rc; + pci_sbdf_t sbdf; + int rc; struct amd_iommu *iommu; unsigned int idx; @@ -335,20 +336,18 @@ void cf_check amd_iommu_ioapic_update_ire( new_rte.raw = rte; /* get device id of ioapic devices */ - bdf = ioapic_sbdf[idx].bdf; - seg = ioapic_sbdf[idx].seg; - iommu = find_iommu_for_device(PCI_SBDF(seg, bdf)); + sbdf = ioapic_sbdf[idx].sbdf; + iommu = find_iommu_for_device(sbdf); if ( !iommu ) { - AMD_IOMMU_WARN("failed to find IOMMU for IO-APIC @ %04x:%04x\n", - seg, bdf); + AMD_IOMMU_WARN("failed to find IOMMU for IO-APIC @ %pp\n", &sbdf); __ioapic_write_entry(apic, pin, true, new_rte); return; } /* Update interrupt remapping entry */ rc = update_intremap_entry_from_ioapic( - bdf, iommu, &new_rte, + sbdf.bdf, iommu, &new_rte, &ioapic_sbdf[idx].pin_2_idx[pin]); if ( rc ) @@ -369,7 +368,8 @@ unsigned int cf_check amd_iommu_read_ioapic_from_ire( unsigned int offset; unsigned int val = __io_apic_read(apic, reg); unsigned int pin = (reg - 0x10) / 2; - uint16_t seg, bdf, req_id; + pci_sbdf_t sbdf; + uint16_t req_id; const struct amd_iommu *iommu; union irte_ptr entry; @@ -381,12 +381,11 @@ unsigned int cf_check amd_iommu_read_ioapic_from_ire( if ( offset >= INTREMAP_MAX_ENTRIES ) return val; - seg = ioapic_sbdf[idx].seg; - bdf = ioapic_sbdf[idx].bdf; - iommu = find_iommu_for_device(PCI_SBDF(seg, bdf)); + sbdf = ioapic_sbdf[idx].sbdf; + iommu = find_iommu_for_device(sbdf); if ( !iommu ) return val; - req_id = get_intremap_requestor_id(seg, bdf); + req_id = get_intremap_requestor_id(sbdf.seg, sbdf.bdf); entry = get_intremap_entry(iommu, req_id, offset); if ( !(reg & 1) ) @@ -515,15 +514,15 @@ int cf_check amd_iommu_msi_msg_update_ire( struct msi_desc *msi_desc, struct msi_msg *msg) { struct pci_dev *pdev = msi_desc->dev; - int bdf, seg, rc; + pci_sbdf_t sbdf; + int rc; struct amd_iommu *iommu; unsigned int i, nr = 1; u32 data; - bdf = pdev ? pdev->sbdf.bdf : hpet_sbdf.bdf; - seg = pdev ? pdev->seg : hpet_sbdf.seg; + sbdf = pdev ? 
pdev->sbdf : hpet_sbdf.sbdf; - iommu = _find_iommu_for_device(PCI_SBDF(seg, bdf)); + iommu = _find_iommu_for_device(sbdf); if ( IS_ERR_OR_NULL(iommu) ) return PTR_ERR(iommu); @@ -532,7 +531,7 @@ int cf_check amd_iommu_msi_msg_update_ire( if ( msi_desc->remap_index >= 0 && !msg ) { - update_intremap_entry_from_msi_msg(iommu, bdf, nr, + update_intremap_entry_from_msi_msg(iommu, sbdf.bdf, nr, &msi_desc->remap_index, NULL, NULL); @@ -543,7 +542,7 @@ int cf_check amd_iommu_msi_msg_update_ire( if ( !msg ) return 0; - rc = update_intremap_entry_from_msi_msg(iommu, bdf, nr, + rc = update_intremap_entry_from_msi_msg(iommu, sbdf.bdf, nr, &msi_desc->remap_index, msg, &data); if ( rc > 0 ) @@ -660,8 +659,7 @@ bool __init cf_check iov_supports_xt(void) if ( idx == MAX_IO_APICS ) return false; - if ( !find_iommu_for_device(PCI_SBDF(ioapic_sbdf[idx].seg, - ioapic_sbdf[idx].bdf)) ) + if ( !find_iommu_for_device(ioapic_sbdf[idx].sbdf) ) { AMD_IOMMU_WARN("no IOMMU for IO-APIC %#x (ID %x)\n", apic, IO_APIC_ID(apic)); @@ -690,14 +688,14 @@ int __init cf_check amd_setup_hpet_msi(struct msi_desc *msi_desc) return -ENODEV; } - iommu = find_iommu_for_device(PCI_SBDF(hpet_sbdf.seg, hpet_sbdf.bdf)); + iommu = find_iommu_for_device(hpet_sbdf.sbdf); if ( !iommu ) return -ENXIO; - lock = get_intremap_lock(hpet_sbdf.seg, hpet_sbdf.bdf); + lock = get_intremap_lock(hpet_sbdf.sbdf.seg, hpet_sbdf.sbdf.bdf); spin_lock_irqsave(lock, flags); - msi_desc->remap_index = alloc_intremap_entry(iommu, hpet_sbdf.bdf, 1); + msi_desc->remap_index = alloc_intremap_entry(iommu, hpet_sbdf.sbdf.bdf, 1); if ( msi_desc->remap_index >= INTREMAP_MAX_ENTRIES ) { msi_desc->remap_index = -1;