From patchwork Wed Jan 11 09:37:57 2023
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 13096351
Message-ID: <83b174d505d47967f3c8762d231b69b6fc48d80a.camel@infradead.org>
Subject: [PATCH 3/3] KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule
From: David Woodhouse
To: Paolo Bonzini, paul, Sean Christopherson
Cc: kvm, Peter Zijlstra, Michal Luczaj
Date: Wed, 11 Jan 2023 09:37:57 +0000
In-Reply-To: <99b1da6ca8293b201fe0a89fd973a9b2f70dc450.camel@infradead.org>
References: <99b1da6ca8293b201fe0a89fd973a9b2f70dc450.camel@infradead.org>
User-Agent: Evolution 3.44.4-0ubuntu1
X-Mailing-List: kvm@vger.kernel.org

From: David Woodhouse

Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
vcpu->mutex. But that doesn't actually happen very often; it's only in some
esoteric cases like migration with AMD SEV. This means that lockdep usually
doesn't notice, and doesn't do its job of keeping us honest.

Ensure that lockdep *always* knows about the ordering of these two locks,
by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
is held.

Signed-off-by: David Woodhouse
---
 virt/kvm/kvm_main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 07bf29450521..5814037148bd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3924,6 +3924,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	}
 
 	mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+	/* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+	mutex_lock(&vcpu->mutex);
+	mutex_unlock(&vcpu->mutex);
+#endif
+
 	if (kvm_get_vcpu_by_id(kvm, id)) {
 		r = -EEXIST;
 		goto unlock_vcpu_destroy;