From patchwork Wed Dec 21 13:26:34 2022
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13078766
Message-ID: <584a986e-08ea-d064-9447-ed23c6e39721@suse.com>
Date: Wed, 21 Dec 2022 14:26:34 +0100
Subject: [PATCH 4/8] x86/paging: move and conditionalize flush_tlb() hook
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan
The hook isn't mode dependent, hence it's misplaced in struct
paging_mode. (Or alternatively I see no reason why the alloc_page() and
free_page() hooks don't also live there.) Move it to struct
paging_domain. The hook also is used for HVM guests only, so make
respective pieces conditional upon CONFIG_HVM. While there also add
__must_check to the hook declaration, as it's imperative that callers
deal with getting back "false".

While moving the shadow implementation, introduce a "curr" local
variable.

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper, with two

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -237,6 +237,11 @@ struct paging_domain {
     void (*free_page)(struct domain *d, struct page_info *pg);
 
     void (*update_paging_mode)(struct vcpu *v);
+
+#ifdef CONFIG_HVM
+    /* Flush selected vCPUs TLBs. NULL for all. */
+    bool __must_check (*flush_tlb)(const unsigned long *vcpu_bitmap);
+#endif
 };
 
 struct paging_vcpu {
--- a/xen/arch/x86/include/asm/paging.h
+++ b/xen/arch/x86/include/asm/paging.h
@@ -140,7 +140,6 @@ struct paging_mode {
 #endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
-    bool          (*flush_tlb             )(const unsigned long *vcpu_bitmap);
 
     unsigned int guest_levels;
 
@@ -300,6 +299,12 @@ static inline unsigned long paging_ga_to
                                                   page_order);
 }
 
+/* Flush selected vCPUs TLBs. NULL for all. */
+static inline bool paging_flush_tlb(const unsigned long *vcpu_bitmap)
+{
+    return current->domain->arch.paging.flush_tlb(vcpu_bitmap);
+}
+
 #endif /* CONFIG_HVM */
 
 /* Update all the things that are derived from the guest's CR3.
@@ -408,12 +413,6 @@ static always_inline unsigned int paging
     return bits;
 }
 
-/* Flush selected vCPUs TLBs. NULL for all. */
-static inline bool paging_flush_tlb(const unsigned long *vcpu_bitmap)
-{
-    return paging_get_hostmode(current)->flush_tlb(vcpu_bitmap);
-}
-
 #endif /* XEN_PAGING_H */
 
 /*
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -445,6 +445,7 @@ static void hap_destroy_monitor_table(st
 /************************************************/
 
 static void cf_check hap_update_paging_mode(struct vcpu *v);
+static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap);
 
 void hap_domain_init(struct domain *d)
 {
@@ -458,6 +459,7 @@ void hap_domain_init(struct domain *d)
     paging_log_dirty_init(d, &hap_ops);
 
     d->arch.paging.update_paging_mode = hap_update_paging_mode;
+    d->arch.paging.flush_tlb = flush_tlb;
 }
 
 /* return 0 for success, -errno for failure */
@@ -847,7 +849,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_real_mode,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
 
@@ -857,7 +858,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_2_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
 
@@ -867,7 +867,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_3_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
 
@@ -877,7 +876,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_4_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -68,6 +68,7 @@ int shadow_domain_init(struct domain *d)
     d->arch.paging.shadow.oos_active = 0;
 #endif
 #ifdef CONFIG_HVM
+    d->arch.paging.flush_tlb = shadow_flush_tlb;
     d->arch.paging.shadow.pagetable_dying_op = 0;
 #endif
 
@@ -3134,66 +3135,6 @@ static void cf_check sh_clean_dirty_bitm
 
     paging_unlock(d);
 }
-
-static bool flush_vcpu(const struct vcpu *v, const unsigned long *vcpu_bitmap)
-{
-    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
-}
-
-/* Flush TLB of selected vCPUs. NULL for all. */
-bool cf_check shadow_flush_tlb(const unsigned long *vcpu_bitmap)
-{
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(v, vcpu_bitmap) )
-            continue;
-
-        paging_update_cr3(v, false);
-
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    guest_flush_tlb_mask(d, mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            vcpu_unpause(v);
-
-    return true;
-}
-
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -688,6 +688,66 @@ static void sh_emulate_unmap_dest(struct
     atomic_inc(&v->domain->arch.paging.shadow.gtable_dirty_version);
 }
 
+static bool flush_vcpu(const struct vcpu *v, const unsigned long *vcpu_bitmap)
+{
+    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
+}
+
+/* Flush TLB of selected vCPUs. NULL for all. */
+bool cf_check shadow_flush_tlb(const unsigned long *vcpu_bitmap)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    const struct vcpu *curr = current;
+    struct domain *d = curr->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(v, vcpu_bitmap) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    guest_flush_tlb_mask(d, mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
 {
     struct domain *d = v->domain;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4198,7 +4198,6 @@ const struct paging_mode sh_paging_mode
     .gva_to_gfn                    = sh_gva_to_gfn,
 #endif
     .update_cr3                    = sh_update_cr3,
-    .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
 #ifdef CONFIG_PV
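
For readers not familiar with the call site side of this: the new
__must_check annotation means every caller of paging_flush_tlb() has to
act on a "false" result, which the shadow implementation returns when it
cannot take d->hypercall_deadlock_mutex. Below is a minimal sketch of
the expected calling pattern; the function name and the -ERESTART retry
convention are illustrative assumptions, not taken from this patch.

/* Illustrative only: a hypercall-style caller honouring the bool result. */
static int example_flush_tlb_all(void)
{
    /*
     * paging_flush_tlb(NULL) requests a flush for all vCPUs.  The shadow
     * implementation bails out with "false" when the hypercall deadlock
     * mutex is contended, so back off and let the caller retry, e.g. via
     * a hypercall continuation.
     */
    if ( !paging_flush_tlb(NULL) )
        return -ERESTART;

    return 0;
}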