From patchwork Thu Apr 14 17:07:40 2016
X-Patchwork-Submitter: Ian Jackson
X-Patchwork-Id: 8839741
From: Ian Jackson
Date: Thu, 14 Apr 2016 18:07:40 +0100
Message-ID: <1460653660-6654-4-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1460653660-6654-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1460653660-6654-1-git-send-email-ian.jackson@eu.citrix.com>
Cc: Juergen Gross, Wei Liu, George Dunlap, Ian Jackson, Dario Faggioli, Tim Deegan, Jan Beulich
Subject: [Xen-devel] [PATCH 3/3] xen: Document XEN_SYSCTL_CPUPOOL_OP_RMCPU anomalous EBUSY result
This is my attempt at understanding the situation, from reading
descriptions provided on list in the context of toolstack patches
which were attempting to work around the anomaly.

The multiple `xxx' entries reflect
 1. my lack of complete understanding
 2. API defects which I think I have identified.

Signed-off-by: Ian Jackson
Cc: Wei Liu
CC: Dario Faggioli
CC: Juergen Gross
CC: George Dunlap
CC: Jan Beulich
CC: Konrad Rzeszutek Wilk
---
 xen/include/public/sysctl.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 0849908..cfccf38 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -560,6 +560,34 @@ struct xen_sysctl_cpupool_op {
 typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
 
+/*
+ * cpupool operations may return EBUSY if the operation cannot be
+ * executed right now because of another cpupool operation which is
+ * still in progress.  In this case, EBUSY means that the failed
+ * operation had no effect.
+ *
+ * Some operations, including at least RMCPU (xxx which others?), may
+ * also return EBUSY because a guest has temporarily pinned one of its
+ * vcpus to the pcpu in question.  It is the pious hope (xxx) of the
+ * author of this comment that this can only occur for domains which
+ * have been granted some kind of hardware privilege (eg passthrough).
+ *
+ * In this case the operation may have been partially carried out and
+ * the pcpu is left in an anomalous state.  In this state the pcpu may
+ * be used by some not readily predictable subset of the vcpus
+ * (domains) whose vcpus are in the old cpupool.  (xxx is this true?)
+ *
+ * This can be detected by seeing whether the pcpu can be added to a
+ * different cpupool.  (xxx this is a silly interface; the situation
+ * should be reported by a different errno value, at least.)  If the
+ * pcpu can't be added to a different cpupool for this reason,
+ * attempts to do so will return (xxx what errno value?).
+ *
+ * The anomalous situation can be recovered by adding the pcpu back to
+ * the cpupool it came from (xxx this permits a buggy or malicious
+ * guest to prevent the cpu ever being removed from its cpupool).
+ */
+
 #define ARINC653_MAX_DOMAINS_PER_SCHEDULE 64
 /*
  * This structure is used to pass a new ARINC653 schedule from a