From patchwork Wed Aug 21 14:59:01 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107263
x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: 09VmLrcV46hdc/5rjHRJMarFV0s0f235L3e8OAMOi+HfFFClapkUMmVNT+MxgxezMLjrO/nIOT 0KBLm+jOlRrq6DJ2nn/i1xe8U1IhqpZ5YjeGesKZWdldB4b6TCCcayfhk0Om9siPcke4iwVF3q uQnEjHQVmhBpbNOU+dflA7K1YSN04g3HnvlPjFVX2OuzBBfoZbiJFlYzJVQZkX1EstDx1fc6tf El9SgmoUpv0He7/dMiIyWH0fRfBosFxix9p7gRya8EEw3ByeBK9iee9G/AG05K3KbdzPp0g4ZP y1g= X-SBRS: 2.7 X-MesageID: 4717081 X-Ironport-Server: esa5.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.64,412,1559534400"; d="scan'208";a="4717081" From: Roger Pau Monne To: Date: Wed, 21 Aug 2019 16:59:01 +0200 Message-ID: <20190821145903.45934-6-roger.pau@citrix.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com> References: <20190821145903.45934-1-roger.pau@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Paul Durrant , Wei Liu , Jan Beulich , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Pick up on the infrastructure already added for vPCI and allow ioreq to decode accesses to MMCFG regions registered for a domain. This infrastructure is still only accessible from internal callers, so MMCFG regions can only be registered from the internal domain builder used by PVH dom0. Note that the vPCI infrastructure to decode and handle accesses to MMCFG regions will be removed in following patches when vPCI is switched to become an internal ioreq server. Signed-off-by: Roger Pau Monné --- xen/arch/x86/hvm/hvm.c | 2 +- xen/arch/x86/hvm/io.c | 36 +++++--------- xen/arch/x86/hvm/ioreq.c | 88 +++++++++++++++++++++++++++++++-- xen/include/asm-x86/hvm/io.h | 12 ++++- xen/include/asm-x86/hvm/ioreq.h | 6 +++ 5 files changed, 113 insertions(+), 31 deletions(-) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 029eea3b85..b7a53377a5 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -741,7 +741,7 @@ void hvm_domain_destroy(struct domain *d) xfree(ioport); } - destroy_vpci_mmcfg(d); + hvm_ioreq_free_mmcfg(d); } static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h) diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index a5b0a23f06..6585767c03 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr, return CF8_ADDR_LO(cf8) | (addr & 3); } +unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg, + paddr_t addr, pci_sbdf_t *sbdf) +{ + addr -= mmcfg->addr; + sbdf->bdf = MMCFG_BDF(addr); + sbdf->bus += mmcfg->start_bus; + sbdf->seg = mmcfg->segment; + + return addr & (PCI_CFG_SPACE_EXP_SIZE - 1); +} + + /* Do some sanity checks. */ static bool vpci_access_allowed(unsigned int reg, unsigned int len) { @@ -383,14 +395,6 @@ void register_vpci_portio_handler(struct domain *d) handler->ops = &vpci_portio_ops; } -struct hvm_mmcfg { - struct list_head next; - paddr_t addr; - unsigned int size; - uint16_t segment; - uint8_t start_bus; -}; - /* Handlers to trap PCI MMCFG config accesses. 
 static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
                                                paddr_t addr)
@@ -558,22 +562,6 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }
 
-void destroy_vpci_mmcfg(struct domain *d)
-{
-    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    while ( !list_empty(mmcfg_regions) )
-    {
-        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
-                                                   struct hvm_mmcfg, next);
-
-        list_del(&mmcfg->next);
-        xfree(mmcfg);
-    }
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d8fea191aa..10c0f7a574 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -690,6 +690,22 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
+void hvm_ioreq_free_mmcfg(struct domain *d)
+{
+    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    while ( !list_empty(mmcfg_regions) )
+    {
+        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
+                                                   struct hvm_mmcfg, next);
+
+        list_del(&mmcfg->next);
+        xfree(mmcfg);
+    }
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
                                             ioservid_t id)
 {
@@ -1329,6 +1345,19 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
+static const struct hvm_mmcfg *mmcfg_find(const struct domain *d,
+                                          paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
+
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
@@ -1338,27 +1367,34 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint64_t addr;
     unsigned int id;
     bool internal = true;
+    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
 
     cf8 = d->arch.hvm.pci_cf8;
 
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;
 
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
 
         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
              (x86_fam = get_cpu_family(
                  d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1377,6 +1413,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
  retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
@@ -1629,6 +1666,47 @@ void hvm_ioreq_init(struct domain *d)
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg)
+{
+    struct hvm_mmcfg *mmcfg, *new;
+
+    if ( start_bus > end_bus )
+        return -EINVAL;
+
+    new = xmalloc(struct hvm_mmcfg);
+    if ( !new )
+        return -ENOMEM;
+
+    new->addr = addr + (start_bus << 20);
+    new->start_bus = start_bus;
+    new->segment = seg;
+    new->size = (end_bus - start_bus + 1) << 20;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( new->addr < mmcfg->addr + mmcfg->size &&
+             mmcfg->addr < new->addr + new->size )
+        {
+            int ret = -EEXIST;
+
+            if ( new->addr == mmcfg->addr &&
+                 new->start_bus == mmcfg->start_bus &&
+                 new->segment == mmcfg->segment &&
+                 new->size == mmcfg->size )
+                ret = 0;
+            write_unlock(&d->arch.hvm.mmcfg_lock);
+            xfree(new);
+            return ret;
+        }
+
+    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..26f0489171 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);
 
 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 2131c944d4..10b9586885 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -58,6 +58,12 @@ void hvm_ioreq_init(struct domain *d);
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg);
+
+void hvm_ioreq_free_mmcfg(struct domain *d);
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
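
For reference, here is a minimal sketch of how an internal caller such as
the PVH dom0 builder might use the new interface, assuming it is compiled
in-tree with this patch applied. Only hvm_ioreq_register_mmcfg() and the
ECAM layout implied by MMCFG_BDF() come from the patch; the mcfg_entry type
and the register_dom0_mmcfg() helper below are hypothetical stand-ins for
however the ACPI MCFG table ends up being parsed.

    /* Hypothetical, pre-parsed MCFG entry; field names are illustrative. */
    struct mcfg_entry {
        paddr_t base;               /* ECAM window base for start_bus */
        uint16_t segment;           /* PCI segment group */
        uint8_t start_bus, end_bus; /* inclusive bus range */
    };

    /* Register the window so ioreq can decode IOREQ_TYPE_COPY accesses. */
    static int __init register_dom0_mmcfg(struct domain *d,
                                          const struct mcfg_entry *e)
    {
        /*
         * Per the patch: -EINVAL for an inverted bus range, -ENOMEM on
         * allocation failure, -EEXIST for a conflicting overlap, and 0
         * both on success and when an identical region is already
         * registered.
         */
        return hvm_ioreq_register_mmcfg(d, e->base, e->start_bus,
                                        e->end_bus, e->segment);
    }

As a worked decode example: with a region registered at 0xe0000000 covering
buses 0-255 of segment 0, a guest read of physical address 0xe02080c4
arrives as an IOREQ_TYPE_COPY, matches in mmcfg_find(), and
hvm_mmcfg_decode_addr() yields bus 2, devfn 0x08 (slot 1, function 0),
register 0xc4 -- each bus owns a 1MiB slice (bus << 20) of the window and
each function a 4KiB configuration page (devfn << 12) within that slice.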