From patchwork Tue Jul 19 13:52:30 2022
X-Patchwork-Submitter: Anthony PERARD
X-Patchwork-Id: 12922541
List-Id: Xen developer discussion
From: Anthony PERARD
To:
CC: Anthony PERARD, Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann, Julien Grall
Subject: [PATCH 1/1] OvmfPkg/XenPvBlkDxe: Fix memory barrier macro
Date: Tue, 19 Jul 2022 14:52:30 +0100
Message-ID: <20220719135230.32838-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
From: Anthony PERARD

The macro "xen_mb()" needs to be a full memory barrier; that is, it must
also prevent stores from being reordered after loads, which an x86 CPU
can do (as I understand from reading [1]). So this patch makes use of the
"mfence" instruction.

Currently, there is a good chance that OvmfXen hangs in XenPvBlockSync(),
in an infinite loop, waiting for the last request to be consumed by the
backend. The backend, on the other hand, doesn't know there is a new
request and does nothing. This happens because there are two ways the
backend looks for requests: either it is already working on one and uses
RING_FINAL_CHECK_FOR_REQUESTS(), or it is waiting for an
event/notification. So the frontend (OvmfXen) doesn't need to send a
notification if the backend is already working; checking whether a
notification is needed is done by RING_PUSH_REQUESTS_AND_CHECK_NOTIFY().

That last macro is where the order of store vs load matters: the macro
first stores the fact that there is a new request, then loads the value
of the last event the backend has processed, to check whether an
asynchronous notification is needed. If that store and load are
reordered, OvmfXen could wrongly decide not to send a notification, and
both sides just wait. To fix this, we need to tell the CPU not to reorder
stores after loads.

The AArch64 implementation of MemoryFence() uses "dmb sy", which seems to
prevent any reordering.

[1] https://en.wikipedia.org/wiki/Memory_ordering#Runtime_memory_ordering

Signed-off-by: Anthony PERARD
---
I'm not sure what the correct implementation on MSFT would be:
_ReadWriteBarrier() seems to be only a compiler barrier, and I don't know
whether _mm_mfence() is just "mfence" or whether it acts as a compiler
barrier as well.
Cc: Ard Biesheuvel
Cc: Jiewen Yao
Cc: Jordan Justen
Cc: Gerd Hoffmann
Cc: Julien Grall
---
 OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf          |  8 ++++++
 OvmfPkg/XenPvBlkDxe/FullMemoryFence.h        | 27 ++++++++++++++++++++
 OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h            |  3 ++-
 OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c  | 20 +++++++++++++++
 OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c | 22 ++++++++++++++++
 5 files changed, 79 insertions(+), 1 deletion(-)
 create mode 100644 OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
 create mode 100644 OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
 create mode 100644 OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c

diff --git a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
index 5dd8e8be1183..dc91865265c1 100644
--- a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
+++ b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
@@ -30,9 +30,17 @@ [Sources]
   ComponentName.c
   ComponentName.h
   DriverBinding.h
+  FullMemoryFence.h
   XenPvBlkDxe.c
   XenPvBlkDxe.h

+[Sources.IA32]
+  X86GccFullMemoryFence.c | GCC
+  X86MsftFullMemoryFence.c | MSFT
+
+[Sources.X64]
+  X86GccFullMemoryFence.c | GCC
+  X86MsftFullMemoryFence.c | MSFT

 [LibraryClasses]
   UefiDriverEntryPoint
diff --git a/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h b/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
new file mode 100644
index 000000000000..e3d1df3d0e9d
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
@@ -0,0 +1,27 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#if defined (MDE_CPU_IA32) || defined (MDE_CPU_X64)
+
+//
+// Like MemoryFence() but prevents stores from being reordered with loads
+// by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  );
+
+#else
+
+//
+// Only implement FullMemoryFence() on x86, as MemoryFence() is probably
+// fine on other platforms.
+//
+#define FullMemoryFence()  MemoryFence()
+
+#endif
diff --git a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
index 350b7bd309c0..67ee1899e9a8 100644
--- a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
+++ b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
@@ -11,8 +11,9 @@
 #define __EFI_XEN_PV_BLK_DXE_H__

 #include
+#include "FullMemoryFence.h"

-#define xen_mb()   MemoryFence()
+#define xen_mb()   FullMemoryFence()
 #define xen_rmb()  MemoryFence()
 #define xen_wmb()  MemoryFence()
diff --git a/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c b/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
new file mode 100644
index 000000000000..92d107def470
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
@@ -0,0 +1,20 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include "FullMemoryFence.h"
+
+//
+// Like MemoryFence() but prevents stores from being reordered with loads
+// by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  )
+{
+  __asm__ __volatile__ ("mfence":::"memory");
+}
diff --git a/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c b/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c
new file mode 100644
index 000000000000..fcb08f7601cd
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c
@@ -0,0 +1,22 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include "FullMemoryFence.h"
+#include
+
+//
+// Like MemoryFence() but prevents stores from being reordered with loads
+// by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  )
+{
+  _ReadWriteBarrier ();
+  _mm_mfence ();
+}