From patchwork Thu May 11 14:12:44 2017
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 9721609
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Date: Thu, 11 May 2017 16:12:44 +0200
Message-Id: <20170511141222.882133049@linuxfoundation.org>
In-Reply-To: <20170511141221.109842231@linuxfoundation.org>
References: <20170511141221.109842231@linuxfoundation.org>
User-Agent: quilt/0.65
Cc: Juergen Gross, Stefano Stabellini, Ross Lagerwall, linux-pci@vger.kernel.org,
 Greg Kroah-Hartman, KarimAllah Ahmed, x86@kernel.org, Paul Gortmaker,
 stable@vger.kernel.org, Vitaly Kuznetsov, Julien Grall, Ingo Molnar,
 Anthony Liguori, "H. Peter Anvin", Bjorn Helgaas,
 xen-devel@lists.xenproject.org, Boris Ostrovsky, Thomas Gleixner
Subject: [Xen-devel] [PATCH 4.11 27/28] xen: Revert commits da72ff5bfcb0 and 72a9b186292d
List-Id: Xen developer discussion
Sender: "Xen-devel" <xen-devel-bounces@lists.xen.org>

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Boris Ostrovsky

commit 84d582d236dc1f9085e741affc72e9ba061a67c2 upstream.

Recent discussion (http://marc.info/?l=xen-devel&m=149192184523741)
established that commit 72a9b186292d ("xen: Remove event channel
notification through Xen PCI platform device") (and thus commit
da72ff5bfcb0 ("partially revert "xen: Remove event channel notification
through Xen PCI platform device"")) are unnecessary and, in fact,
prevent HVM guests from booting on Xen releases prior to 4.0.

Therefore we revert both of those commits.

The summary of that discussion is below:

  Here is the brief summary of the current situation:

  Before the offending commit (72a9b186292):

  1) INTx does not work because of the reset_watches path.
  2) The reset_watches path is only taken if you have Xen > 4.0.
  3) The Linux kernel will by default use vector injection if the
     hypervisor supports it. So even though INTx does not work, nobody
     running the kernel on Xen > 4.0 would notice, unless they
     explicitly disabled this feature either in the kernel or in Xen
     (and it can only be disabled by modifying the code; there is no
     user-supported way to do it).

  After the offending commit (+ partial revert):

  1) INTx is no longer supported for HVM guests (it still works for PV
     guests).
  2) An HVM guest kernel will not boot on Xen < 4.0, which lacks vector
     injection support, since the only other delivery mode supported is
     INTx.
  So based on this summary, I think that before commit (72a9b186292) we
  were in a much better position from a user point of view.

Signed-off-by: Boris Ostrovsky
Reviewed-by: Juergen Gross
Cc: Boris Ostrovsky
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Cc: Konrad Rzeszutek Wilk
Cc: Bjorn Helgaas
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Vitaly Kuznetsov
Cc: Paul Gortmaker
Cc: Ross Lagerwall
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: Anthony Liguori
Cc: KarimAllah Ahmed
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/include/asm/xen/events.h |   11 +++++++++++
 arch/x86/pci/xen.c                |    2 +-
 arch/x86/xen/enlighten.c          |   16 +++++++++++-----
 arch/x86/xen/smp.c                |    2 ++
 arch/x86/xen/time.c               |    5 +++++
 drivers/xen/events/events_base.c  |   26 +++++++++++++++++---------
 drivers/xen/platform-pci.c        |   13 +++----------
 7 files changed, 50 insertions(+), 25 deletions(-)

--- a/arch/x86/include/asm/xen/events.h
+++ b/arch/x86/include/asm/xen/events.h
@@ -20,4 +20,15 @@ static inline int xen_irqs_disabled(stru
 /* No need for a barrier -- XCHG is a barrier on x86. */
 #define xchg_xen_ulong(ptr, val) xchg((ptr), (val))
 
+extern int xen_have_vector_callback;
+
+/*
+ * Events delivered via platform PCI interrupts are always
+ * routed to vcpu 0 and hence cannot be rebound.
+ */
+static inline bool xen_support_evtchn_rebind(void)
+{
+	return (!xen_hvm_domain() || xen_have_vector_callback);
+}
+
 #endif /* _ASM_X86_XEN_EVENTS_H */
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -447,7 +447,7 @@ void __init xen_msi_init(void)
 
 int __init pci_xen_hvm_init(void)
 {
-	if (!xen_feature(XENFEAT_hvm_pirqs))
+	if (!xen_have_vector_callback || !xen_feature(XENFEAT_hvm_pirqs))
 		return 0;
 
 #ifdef CONFIG_ACPI
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -138,6 +138,8 @@ struct shared_info xen_dummy_shared_info
 void *xen_initial_gdt;
 
 RESERVE_BRK(shared_info_page_brk, PAGE_SIZE);
+__read_mostly int xen_have_vector_callback;
+EXPORT_SYMBOL_GPL(xen_have_vector_callback);
 
 static int xen_cpu_up_prepare(unsigned int cpu);
 static int xen_cpu_up_online(unsigned int cpu);
@@ -1861,7 +1863,9 @@ static int xen_cpu_up_prepare(unsigned i
 		xen_vcpu_setup(cpu);
 	}
 
-	if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
+	if (xen_pv_domain() ||
+	    (xen_have_vector_callback &&
+	     xen_feature(XENFEAT_hvm_safe_pvclock)))
 		xen_setup_timer(cpu);
 
 	rc = xen_smp_intr_init(cpu);
@@ -1877,7 +1881,9 @@ static int xen_cpu_dead(unsigned int cpu
 {
 	xen_smp_intr_free(cpu);
 
-	if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
+	if (xen_pv_domain() ||
+	    (xen_have_vector_callback &&
+	     xen_feature(XENFEAT_hvm_safe_pvclock)))
 		xen_teardown_timer(cpu);
 
 	return 0;
@@ -1916,8 +1922,8 @@ static void __init xen_hvm_guest_init(vo
 
 	xen_panic_handler_init();
 
-	BUG_ON(!xen_feature(XENFEAT_hvm_callback_vector));
-
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
 	xen_hvm_smp_init();
 	WARN_ON(xen_cpuhp_setup());
 	xen_unplug_emulated_devices();
@@ -1958,7 +1964,7 @@ bool xen_hvm_need_lapic(void)
 		return false;
 	if (!xen_hvm_domain())
 		return false;
-	if (xen_feature(XENFEAT_hvm_pirqs))
+	if (xen_feature(XENFEAT_hvm_pirqs) && xen_have_vector_callback)
 		return false;
 	return true;
 }
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -742,6 +742,8 @@ static void __init xen_hvm_smp_prepare_c
 
 void __init xen_hvm_smp_init(void)
 {
+	if (!xen_have_vector_callback)
+		return;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
 	smp_ops.cpu_die = xen_cpu_die;
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -432,6 +432,11 @@ static void xen_hvm_setup_cpu_clockevent
 
 void __init xen_hvm_init_time_ops(void)
 {
+	/* vector callback is needed otherwise we cannot receive interrupts
+	 * on cpu > 0 and at this point we don't know how many cpus are
+	 * available */
+	if (!xen_have_vector_callback)
+		return;
 	if (!xen_feature(XENFEAT_hvm_safe_pvclock)) {
 		printk(KERN_INFO "Xen doesn't support pvclock on HVM,"
 				"disable pv timer\n");
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1312,6 +1312,9 @@ static int rebind_irq_to_cpu(unsigned ir
 	if (!VALID_EVTCHN(evtchn))
 		return -1;
 
+	if (!xen_support_evtchn_rebind())
+		return -1;
+
 	/* Send future instances of this interrupt to other vcpu. */
 	bind_vcpu.port = evtchn;
 	bind_vcpu.vcpu = xen_vcpu_nr(tcpu);
@@ -1645,15 +1648,20 @@ void xen_callback_vector(void)
 {
 	int rc;
 	uint64_t callback_via;
-
-	callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
-	rc = xen_set_callback_via(callback_via);
-	BUG_ON(rc);
-	pr_info("Xen HVM callback vector for event delivery is enabled\n");
-	/* in the restore case the vector has already been allocated */
-	if (!test_bit(HYPERVISOR_CALLBACK_VECTOR, used_vectors))
-		alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
-				xen_hvm_callback_vector);
+	if (xen_have_vector_callback) {
+		callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
+		rc = xen_set_callback_via(callback_via);
+		if (rc) {
+			pr_err("Request for Xen HVM callback vector failed\n");
+			xen_have_vector_callback = 0;
+			return;
+		}
+		pr_info("Xen HVM callback vector for event delivery is enabled\n");
+		/* in the restore case the vector has already been allocated */
+		if (!test_bit(HYPERVISOR_CALLBACK_VECTOR, used_vectors))
+			alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
+					xen_hvm_callback_vector);
+	}
 }
 #else
 void xen_callback_vector(void) {}
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -67,7 +67,7 @@ static uint64_t get_callback_via(struct
 	pin = pdev->pin;
 
 	/* We don't know the GSI. Specify the PCI INTx line instead. */
-	return ((uint64_t)0x01 << HVM_CALLBACK_VIA_TYPE_SHIFT) | /* PCI INTx identifier */
+	return ((uint64_t)0x01 << 56) | /* PCI INTx identifier */
 		((uint64_t)pci_domain_nr(pdev->bus) << 32) |
 		((uint64_t)pdev->bus->number << 16) |
 		((uint64_t)(pdev->devfn & 0xff) << 8) |
@@ -90,7 +90,7 @@ static int xen_allocate_irq(struct pci_d
 static int platform_pci_resume(struct pci_dev *pdev)
 {
 	int err;
-	if (!xen_pv_domain())
+	if (xen_have_vector_callback)
 		return 0;
 	err = xen_set_callback_via(callback_via);
 	if (err) {
@@ -138,14 +138,7 @@ static int platform_pci_probe(struct pci
 	platform_mmio = mmio_addr;
 	platform_mmiolen = mmio_len;
 
-	/*
-	 * Xen HVM guests always use the vector callback mechanism.
-	 * L1 Dom0 in a nested Xen environment is a PV guest inside in an
-	 * HVM environment. It needs the platform-pci driver to get
-	 * notifications from L0 Xen, but it cannot use the vector callback
-	 * as it is not exported by L1 Xen.
-	 */
-	if (xen_pv_domain()) {
+	if (!xen_have_vector_callback) {
 		ret = xen_allocate_irq(pdev);
 		if (ret) {
 			dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);