From patchwork Wed Nov 27 20:19:15 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887355
Date: Wed, 27 Nov 2024 20:19:15 +0000
Message-ID: <20241127201929.4005605-2-aaronlewis@google.com>
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Subject: [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

From: Sean Christopherson

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 8 ++++----
 arch/x86/kvm/vmx/vmx.c | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd15cc6356553..35bcf3a63b606 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -781,14 +781,14 @@ static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
 
 	/* Set the shadow bitmaps to the desired intercept states */
 	if (read)
-		set_bit(slot, svm->shadow_msr_intercept.read);
+		__set_bit(slot, svm->shadow_msr_intercept.read);
 	else
-		clear_bit(slot, svm->shadow_msr_intercept.read);
+		__clear_bit(slot, svm->shadow_msr_intercept.read);
 
 	if (write)
-		set_bit(slot, svm->shadow_msr_intercept.write);
+		__set_bit(slot, svm->shadow_msr_intercept.write);
 	else
-		clear_bit(slot, svm->shadow_msr_intercept.write);
+		__clear_bit(slot, svm->shadow_msr_intercept.write);
 }
 
 static bool valid_msr_intercept(u32 index)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3d4a8d5b0b808..0577a7961b9f0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4015,9 +4015,9 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	idx = vmx_get_passthrough_msr_slot(msr);
 	if (idx >= 0) {
 		if (type & MSR_TYPE_R)
-			clear_bit(idx, vmx->shadow_msr_intercept.read);
+			__clear_bit(idx, vmx->shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			clear_bit(idx, vmx->shadow_msr_intercept.write);
+			__clear_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if ((type & MSR_TYPE_R) &&
@@ -4057,9 +4057,9 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	idx = vmx_get_passthrough_msr_slot(msr);
 	if (idx >= 0) {
 		if (type & MSR_TYPE_R)
-			set_bit(idx, vmx->shadow_msr_intercept.read);
+			__set_bit(idx, vmx->shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			set_bit(idx, vmx->shadow_msr_intercept.write);
+			__set_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if (type & MSR_TYPE_R)
From patchwork Wed Nov 27 20:19:16 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887356
Date: Wed, 27 Nov 2024 20:19:16 +0000
Message-ID: <20241127201929.4005605-3-aaronlewis@google.com>
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Subject: [PATCH 02/15] KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

From: Sean Christopherson

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 35bcf3a63b606..7433dd2a32925 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -852,8 +852,8 @@ static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
 
 	BUG_ON(offset == MSR_INVALID);
 
-	read  ? clear_bit(bit_read,  &tmp) : set_bit(bit_read,  &tmp);
-	write ? clear_bit(bit_write, &tmp) : set_bit(bit_write, &tmp);
+	read  ? __clear_bit(bit_read,  &tmp) : __set_bit(bit_read,  &tmp);
+	write ? __clear_bit(bit_write, &tmp) : __set_bit(bit_write, &tmp);
 
 	msrpm[offset] = tmp;
From patchwork Wed Nov 27 20:19:17 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887357
Date: Wed, 27 Nov 2024 20:19:17 +0000
Message-ID: <20241127201929.4005605-4-aaronlewis@google.com>
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Subject: [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Note, a "FIXME" tag was added to svm_msr_filter_changed().  This will be
addressed later in the series after the VMX style MSR intercepts are
added to SVM.

Signed-off-by: Sean Christopherson
Co-developed-by: Aaron Lewis
---
 arch/x86/kvm/svm/svm.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7433dd2a32925..f534cdbba0585 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -781,14 +781,14 @@ static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
 
 	/* Set the shadow bitmaps to the desired intercept states */
 	if (read)
-		__set_bit(slot, svm->shadow_msr_intercept.read);
-	else
 		__clear_bit(slot, svm->shadow_msr_intercept.read);
+	else
+		__set_bit(slot, svm->shadow_msr_intercept.read);
 
 	if (write)
-		__set_bit(slot, svm->shadow_msr_intercept.write);
-	else
 		__clear_bit(slot, svm->shadow_msr_intercept.write);
+	else
+		__set_bit(slot, svm->shadow_msr_intercept.write);
 }
 
 static bool valid_msr_intercept(u32 index)
@@ -934,9 +934,10 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
 	 */
 	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
 		u32 msr = direct_access_msrs[i].index;
-		u32 read = test_bit(i, svm->shadow_msr_intercept.read);
-		u32 write = test_bit(i, svm->shadow_msr_intercept.write);
+		u32 read = !test_bit(i, svm->shadow_msr_intercept.read);
+		u32 write = !test_bit(i, svm->shadow_msr_intercept.write);
 
+		/* FIXME: Align the polarity of the bitmaps and params. */
 		set_msr_interception_bitmap(vcpu, svm->msrpm, msr, read, write);
 	}
 }
@@ -1453,6 +1454,10 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		goto error_free_vmsa_page;
 
+	/* All MSRs start out in the "intercepted" state. */
+	bitmap_fill(svm->shadow_msr_intercept.read, MAX_DIRECT_ACCESS_MSRS);
+	bitmap_fill(svm->shadow_msr_intercept.write, MAX_DIRECT_ACCESS_MSRS);
+
 	svm->msrpm = svm_vcpu_alloc_msrpm();
 	if (!svm->msrpm) {
 		err = -ENOMEM;
From patchwork Wed Nov 27 20:19:18 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887358
Date: Wed, 27 Nov 2024 20:19:18 +0000
Message-ID: <20241127201929.4005605-5-aaronlewis@google.com>
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Subject: [PATCH 04/15] KVM: SVM: Track MSRPM as "unsigned long", not "u32"
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Use "unsigned long" instead of "u32" to track MSRPM to match the
bitmap API.

Signed-off-by: Sean Christopherson
Co-developed-by: Aaron Lewis
---
 arch/x86/kvm/svm/svm.c | 18 +++++++++---------
 arch/x86/kvm/svm/svm.h | 12 ++++++------
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f534cdbba0585..5dd621f78e474 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -276,8 +276,8 @@ u32 svm_msrpm_offset(u32 msr)
 			offset  = (msr - msrpm_ranges[i]) / 4; /* 4 msrs per u8   */
 			offset += (i * MSRS_RANGE_SIZE);       /* add range offset */
 
-			/* Now we have the u8 offset - but need the u32 offset */
-			return offset / 4;
+			/* Now we have the u8 offset - but need the ulong offset */
+			return offset / sizeof(unsigned long);
 		}
 
 	/* MSR not in any range */
@@ -799,9 +799,9 @@ static bool valid_msr_intercept(u32 index)
 static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
 {
 	u8 bit_write;
+	unsigned long *msrpm;
 	unsigned long tmp;
 	u32 offset;
-	u32 *msrpm;
 
 	/*
 	 * For non-nested case:
@@ -824,7 +824,7 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
 	return test_bit(bit_write, &tmp);
 }
 
-static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
+static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, unsigned long *msrpm,
 					u32 msr, int read, int write)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -861,18 +861,18 @@ static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
 	svm->nested.force_msr_bitmap_recalc = true;
 }
 
-void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
+void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
 			  int read, int write)
 {
 	set_shadow_msr_intercept(vcpu, msr, read, write);
 	set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
 }
 
-u32 *svm_vcpu_alloc_msrpm(void)
+unsigned long *svm_vcpu_alloc_msrpm(void)
 {
 	unsigned int order = get_order(MSRPM_SIZE);
 	struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, order);
-	u32 *msrpm;
+	unsigned long *msrpm;
 
 	if (!pages)
 		return NULL;
@@ -883,7 +883,7 @@ u32 *svm_vcpu_alloc_msrpm(void)
 	return msrpm;
 }
 
-void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm)
+void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
 {
 	int i;
 
@@ -917,7 +917,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
 	svm->x2avic_msrs_intercepted = intercept;
 }
 
-void svm_vcpu_free_msrpm(u32 *msrpm)
+void svm_vcpu_free_msrpm(unsigned long *msrpm)
 {
 	__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
 }

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16eb191..d73b184675641 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -185,7 +185,7 @@ struct svm_nested_state {
 	u64 last_vmcb12_gpa;
 
 	/* These are the merged vectors */
-	u32 *msrpm;
+	unsigned long *msrpm;
 
 	/* A VMRUN has started but has not yet been performed, so
 	 * we cannot inject a nested vmexit yet. */
@@ -266,7 +266,7 @@ struct vcpu_svm {
 	 */
 	u64 virt_spec_ctrl;
 
-	u32 *msrpm;
+	unsigned long *msrpm;
 
 	ulong nmi_iret_rip;
 
@@ -596,9 +596,9 @@ static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
 extern bool dump_invalid_vmcb;
 
 u32 svm_msrpm_offset(u32 msr);
-u32 *svm_vcpu_alloc_msrpm(void);
-void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm);
-void svm_vcpu_free_msrpm(u32 *msrpm);
+unsigned long *svm_vcpu_alloc_msrpm(void);
+void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm);
+void svm_vcpu_free_msrpm(unsigned long *msrpm);
 void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
 void svm_enable_lbrv(struct kvm_vcpu *vcpu);
 void svm_update_lbrv(struct kvm_vcpu *vcpu);
@@ -612,7 +612,7 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 void svm_set_gif(struct vcpu_svm *svm, bool value);
 int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
-void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
+void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
 			  int read, int write);
 void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
 void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="FfINYrk8" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-2ea5a0f7547so152528a91.1 for ; Wed, 27 Nov 2024 12:19:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1732738793; x=1733343593; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=85dLC92Z/qdG+ARAKtwvrBosK7GSKlLYsWTsdpGzUqk=; b=FfINYrk8sLg97pVn9wFzJc6yrQTJkkjohS8gy7WD3ztz5Hn+PClH5XVgbzAe/31ucg nhS0AzRcCdKN8Sk7VGE4hwCN1GIbtT9LlLIXROlV1mJhTStlzgz7ZpyeSS40rF+J/A5U F/IFuPhG98VWsktekiMzVIS2Xo2wXDgpoJsWSigDdjf1AMmWO+CWWalNZ5c7+Uwu/+EG Ukz3M85D5gGqikN0/8rqmvcok6qHAJiUtu4Wg+go0Ck4K+pZ7h116enovKkmj1XwT9ph P+2+3fIBQAOSsytdKOjeK3oQ5lKD6FDvgco8SWd5RerZJOcqDE5ZmCQtBq2AR+Ru6uOc yL7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732738793; x=1733343593; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=85dLC92Z/qdG+ARAKtwvrBosK7GSKlLYsWTsdpGzUqk=; b=bmFnb25csGy+LXVkoKIIFXDHto0EDxxsQfnKR9kZmQJRTIX9xfH78VE4VCyIBTFvqN zAdUG/HB8QBtC7FO9TJb8ifwVjP1Fa7va2ZVwh/rE2zYk2ue7TElZyS3aoGa6YyK9gHy i01fwIb6nCetu5tI2NxyMIQSIttOCrEHMvHyns6U0ptGKfwBB0bLTcpbwiLNmRmCJb0w c7WGduqX8+JoIaBvXo/XgEIGxFez5m0piny6IRk5qiOotE0fQ4FVnmaeYVgE8rB3uE3I KPDdSc/1UXD0xaAZhHfLtr7d0O9Ky7NlLqPpr79IQ7jK1sYr7N99IJ1OLRIvPRDOxz92 Zgug== X-Gm-Message-State: AOJu0YxlVNwlppBTLqsnz1/ADqR7gxKR4aGR0K+TkIrBNYMOa9QUeKXm GLi6luTGIxRLJkwEVSUZGq/KH9d/HgxQmmfu7+DVFzXyFDdqelEEK+faDShe06jBJclfU9JKbUE ll4Is55Bzl72UqfaHrtz7iH6rrxH8y2WDmVNXY82oelaZq203IdGXALIOvaW3sj9GR9lEzD2PYS 3HwKoo7eDgkGb+bHMmCRPH03qzPlc3WQlMXB6VOTMmJyBoV8GI/Q== X-Google-Smtp-Source: AGHT+IEAim4YzyVoZxu2nPrB28q2NkQoK1/HtoTOpro2rXrqWvbxZllHGV9NTtK3OMT/ZpqOVEEs0LjGl8Ab83oV X-Received: from pjbnd10.prod.google.com 
([2002:a17:90b:4cca:b0:2ea:8715:5c92]) (user=aaronlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:1c09:b0:2ea:7755:a0ff with SMTP id 98e67ed59e1d1-2ee08e5e3d5mr5543391a91.6.1732738793342; Wed, 27 Nov 2024 12:19:53 -0800 (PST) Date: Wed, 27 Nov 2024 20:19:19 +0000 In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241127201929.4005605-1-aaronlewis@google.com> X-Mailer: git-send-email 2.47.0.338.g60cca15819-goog Message-ID: <20241127201929.4005605-6-aaronlewis@google.com> Subject: [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis , Anish Ghulati VMX MSR interception is done via three functions: vmx_disable_intercept_for_msr(vcpu, msr, type) vmx_enable_intercept_for_msr(vcpu, msr, type) vmx_set_intercept_for_msr(vcpu, msr, type, value) While SVM uses set_msr_interception(vcpu, msrpm, msr, read, write) The SVM code is not very intuitive (using 0 for enable and 1 for disable), and forces both read and write changes with each call which is not always required. Add helpers functions to SVM to match VMX: svm_disable_intercept_for_msr(vcpu, msr, type) svm_enable_intercept_for_msr(vcpu, msr, type) svm_set_intercept_for_msr(vcpu, msr, type, enable_intercept) Additionally, update calls to set_msr_interception() to use the new functions. This update is only made to calls that toggle interception for both read and write. Keep the old paths for now, they will be deleted once all code is converted to the new helpers. Opportunistically, the function svm_get_msr_bitmap_entries() is added to abstract the MSR bitmap from the intercept functions. This will be needed later in the series when this code is hoisted to common code. No functional change. 
Suggested-by: Sean Christopherson
Co-developed-by: Anish Ghulati
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/svm/sev.c |  11 ++--
 arch/x86/kvm/svm/svm.c | 144 ++++++++++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.h |  12 ++++
 3 files changed, 138 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c6c8524859001..cdd3799e71f24 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4448,7 +4448,8 @@ static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm)
 		bool v_tsc_aux = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) ||
 				 guest_cpuid_has(vcpu, X86_FEATURE_RDPID);

-		set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, v_tsc_aux, v_tsc_aux);
+		if (v_tsc_aux)
+			svm_disable_intercept_for_msr(vcpu, MSR_TSC_AUX, MSR_TYPE_RW);
 	}

 	/*
@@ -4466,9 +4467,9 @@ static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm)
 	 */
 	if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
 	    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_XSS, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_XSS, MSR_TYPE_RW);
 	else
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_XSS, 0, 0);
+		svm_enable_intercept_for_msr(vcpu, MSR_IA32_XSS, MSR_TYPE_RW);
 }

 void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm)
@@ -4540,8 +4541,8 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
 	svm_clr_intercept(svm, INTERCEPT_XSETBV);

 	/* Clear intercepts on selected MSRs */
-	set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
+	svm_disable_intercept_for_msr(vcpu, MSR_EFER, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_CR_PAT, MSR_TYPE_RW);
 }

 void sev_init_vmcb(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5dd621f78e474..b982729ef7638 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -868,6 +868,102 @@ void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
 	set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
 }

+static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+				       unsigned long **read_map, u8 *read_bit,
+				       unsigned long **write_map, u8 *write_bit)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	u32 offset;
+
+	offset = svm_msrpm_offset(msr);
+	*read_bit = 2 * (msr & 0x0f);
+	*write_bit = 2 * (msr & 0x0f) + 1;
+	BUG_ON(offset == MSR_INVALID);
+
+	*read_map = &svm->msrpm[offset];
+	*write_map = &svm->msrpm[offset];
+}
+
+#define BUILD_SVM_MSR_BITMAP_HELPER(fn, bitop, access)			\
+static inline void fn(struct kvm_vcpu *vcpu, u32 msr)			\
+{									\
+	unsigned long *read_map, *write_map;				\
+	u8 read_bit, write_bit;						\
+									\
+	svm_get_msr_bitmap_entries(vcpu, msr, &read_map, &read_bit,	\
+				   &write_map, &write_bit);		\
+	bitop(access##_bit, access##_map);				\
+}
+
+BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_read, __set_bit, read)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_write, __set_bit, write)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_read, __clear_bit, read)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_write, __clear_bit, write)
+
+void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int slot;
+
+	slot = direct_access_msr_slot(msr);
+	WARN_ON(slot == -ENOENT);
+	if (slot >= 0) {
+		/* Set the shadow bitmaps to the desired intercept states */
+		if (type & MSR_TYPE_R)
+			__clear_bit(slot, svm->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			__clear_bit(slot, svm->shadow_msr_intercept.write);
+	}
+
+	/*
+	 * Don't disable interception for the MSR if userspace wants to
+	 * handle it.
+	 */
+	if ((type & MSR_TYPE_R) &&
+	    !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
+		svm_set_msr_bitmap_read(vcpu, msr);
+		type &= ~MSR_TYPE_R;
+	}
+
+	if ((type & MSR_TYPE_W) &&
+	    !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
+		svm_set_msr_bitmap_write(vcpu, msr);
+		type &= ~MSR_TYPE_W;
+	}
+
+	if (type & MSR_TYPE_R)
+		svm_clear_msr_bitmap_read(vcpu, msr);
+
+	if (type & MSR_TYPE_W)
+		svm_clear_msr_bitmap_write(vcpu, msr);
+
+	svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
+	svm->nested.force_msr_bitmap_recalc = true;
+}
+
+void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int slot;
+
+	slot = direct_access_msr_slot(msr);
+	WARN_ON(slot == -ENOENT);
+	if (slot >= 0) {
+		/* Set the shadow bitmaps to the desired intercept states */
+		if (type & MSR_TYPE_R)
+			__set_bit(slot, svm->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			__set_bit(slot, svm->shadow_msr_intercept.write);
+	}
+
+	if (type & MSR_TYPE_R)
+		svm_set_msr_bitmap_read(vcpu, msr);
+
+	if (type & MSR_TYPE_W)
+		svm_set_msr_bitmap_write(vcpu, msr);
+
+	svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
+	svm->nested.force_msr_bitmap_recalc = true;
+}
+
 unsigned long *svm_vcpu_alloc_msrpm(void)
 {
 	unsigned int order = get_order(MSRPM_SIZE);
@@ -890,7 +986,8 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
 	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
 		if (!direct_access_msrs[i].always)
 			continue;

-		set_msr_interception(vcpu, msrpm, direct_access_msrs[i].index, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
+					      MSR_TYPE_RW);
 	}
 }
@@ -910,8 +1007,8 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
 		if ((index < APIC_BASE_MSR) ||
 		    (index > APIC_BASE_MSR + 0xff))
 			continue;

-		set_msr_interception(&svm->vcpu, svm->msrpm, index,
-				     !intercept, !intercept);
+
+		svm_set_intercept_for_msr(&svm->vcpu, index, MSR_TYPE_RW, intercept);
 	}

 	svm->x2avic_msrs_intercepted = intercept;
@@ -1001,13 +1098,13 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);

 	svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);

 	if (sev_es_guest(vcpu->kvm))
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW);

 	/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
 	if (is_guest_mode(vcpu))
@@ -1021,10 +1118,10 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
 	KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);

 	svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 0, 0);
-	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
+	svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
+	svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
+	svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
+	svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);

 	/*
 	 * Move the LBR msrs back to the vmcb01 to avoid copying them
@@ -1216,8 +1313,8 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
 		svm_set_intercept(svm, INTERCEPT_VMSAVE);
 		svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;

-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
+		svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+		svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
 	} else {
 		/*
 		 * If hardware supports Virtual VMLOAD VMSAVE then enable it
@@ -1229,8 +1326,8 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
 			svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 		}

 		/* No need to intercept these MSRs */
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
 	}
 }
@@ -1359,7 +1456,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 	 * of MSR_IA32_SPEC_CTRL.
 	 */
 	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);

 	if (kvm_vcpu_apicv_active(vcpu))
 		avic_init_vmcb(svm, vmcb);
@@ -3092,7 +3189,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		 * We update the L1 MSR bit as well since it will end up
 		 * touching the MSR anyway now.
 		 */
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL,
+					      MSR_TYPE_RW);
 		break;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		if (!msr->host_initiated &&
@@ -4430,13 +4528,11 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	svm_recalc_instruction_intercepts(vcpu, svm);

-	if (boot_cpu_has(X86_FEATURE_IBPB))
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0,
-				     !!guest_has_pred_cmd_msr(vcpu));
+	if (boot_cpu_has(X86_FEATURE_IBPB) && guest_has_pred_cmd_msr(vcpu))
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);

-	if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
-		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
-				     !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+	if (boot_cpu_has(X86_FEATURE_FLUSH_L1D) && guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
+		svm_disable_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W);

 	if (sev_guest(vcpu->kvm))
 		sev_vcpu_after_set_cpuid(svm);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d73b184675641..b008c190188a2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -618,6 +618,18 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
 void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
 				     int trig_mode, int vec);

+void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+
+static inline void svm_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
+					     int type, bool enable_intercept)
+{
+	if (enable_intercept)
+		svm_enable_intercept_for_msr(vcpu, msr, type);
+	else
+		svm_disable_intercept_for_msr(vcpu, msr, type);
+}
+
 /* nested.c */

 #define NESTED_EXIT_HOST	0	/* Exit handled on host level */
From patchwork Wed Nov 27 20:19:20 2024
Date: Wed, 27 Nov 2024 20:19:20 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-7-aaronlewis@google.com>
Subject: [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Anish Ghulati, Aaron Lewis

From: Anish Ghulati

For all direct access MSRs, disable the MSR interception explicitly. svm_disable_intercept_for_msr() checks the new MSR filter and ensures that KVM enables interception if userspace wants to filter the MSR. This change is similar to the VMX change:

  d895f28ed6da ("KVM: VMX: Skip filter updates for MSRs that KVM is already intercepting")

Adopt it in SVM to align the implementations.

Suggested-by: Sean Christopherson
Co-developed-by: Aaron Lewis
Signed-off-by: Anish Ghulati
---
 arch/x86/kvm/svm/svm.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b982729ef7638..37b8683849ed2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1025,17 +1025,21 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
 	u32 i;

 	/*
-	 * Set intercept permissions for all direct access MSRs again. They
-	 * will automatically get filtered through the MSR filter, so we are
-	 * back in sync after this.
+	 * Redo intercept permissions for MSRs that KVM is passing through to
+	 * the guest. Disabling interception will check the new MSR filter and
+	 * ensure that KVM enables interception if userspace wants to filter
+	 * the MSR. MSRs that KVM is already intercepting don't need to be
+	 * refreshed since KVM is going to intercept them regardless of what
+	 * userspace wants.
 	 */
 	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
 		u32 msr = direct_access_msrs[i].index;
-		u32 read = !test_bit(i, svm->shadow_msr_intercept.read);
-		u32 write = !test_bit(i, svm->shadow_msr_intercept.write);

-		/* FIXME: Align the polarity of the bitmaps and params. */
-		set_msr_interception_bitmap(vcpu, svm->msrpm, msr, read, write);
+		if (!test_bit(i, svm->shadow_msr_intercept.read))
+			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
+
+		if (!test_bit(i, svm->shadow_msr_intercept.write))
+			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
 	}
 }
From patchwork Wed Nov 27 20:19:21 2024
Date: Wed, 27 Nov 2024 20:19:21 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-8-aaronlewis@google.com>
Subject: [PATCH 07/15] KVM: SVM: Delete old SVM MSR management code
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Anish Ghulati

From: Anish Ghulati

Delete the old SVM code to manage MSR interception.
There are no more calls to these functions:

  set_msr_interception_bitmap()
  set_msr_interception()
  set_shadow_msr_intercept()
  valid_msr_intercept()

Suggested-by: Sean Christopherson
Signed-off-by: Anish Ghulati
---
 arch/x86/kvm/svm/svm.c | 70 ------------------------------------------
 arch/x86/kvm/svm/svm.h |  2 --
 2 files changed, 72 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 37b8683849ed2..2380059727168 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -770,32 +770,6 @@ static int direct_access_msr_slot(u32 msr)
 	return -ENOENT;
 }

-static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
-				     int write)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-	int slot = direct_access_msr_slot(msr);
-
-	if (slot == -ENOENT)
-		return;
-
-	/* Set the shadow bitmaps to the desired intercept states */
-	if (read)
-		__clear_bit(slot, svm->shadow_msr_intercept.read);
-	else
-		__set_bit(slot, svm->shadow_msr_intercept.read);
-
-	if (write)
-		__clear_bit(slot, svm->shadow_msr_intercept.write);
-	else
-		__set_bit(slot, svm->shadow_msr_intercept.write);
-}
-
-static bool valid_msr_intercept(u32 index)
-{
-	return direct_access_msr_slot(index) != -ENOENT;
-}
-
 static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
 {
 	u8 bit_write;
@@ -824,50 +798,6 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
 	return test_bit(bit_write, &tmp);
 }

-static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, unsigned long *msrpm,
-					u32 msr, int read, int write)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-	u8 bit_read, bit_write;
-	unsigned long tmp;
-	u32 offset;
-
-	/*
-	 * If this warning triggers extend the direct_access_msrs list at the
-	 * beginning of the file
-	 */
-	WARN_ON(!valid_msr_intercept(msr));
-
-	/* Enforce non allowed MSRs to trap */
-	if (read && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ))
-		read = 0;
-
-	if (write && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE))
-		write = 0;
-
-	offset = svm_msrpm_offset(msr);
-	bit_read = 2 * (msr & 0x0f);
-	bit_write = 2 * (msr & 0x0f) + 1;
-	tmp = msrpm[offset];
-
-	BUG_ON(offset == MSR_INVALID);
-
-	read ? __clear_bit(bit_read, &tmp) : __set_bit(bit_read, &tmp);
-	write ? __clear_bit(bit_write, &tmp) : __set_bit(bit_write, &tmp);
-
-	msrpm[offset] = tmp;
-
-	svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
-	svm->nested.force_msr_bitmap_recalc = true;
-}
-
-void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
-			  int read, int write)
-{
-	set_shadow_msr_intercept(vcpu, msr, read, write);
-	set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
-}
-
 static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
 				       unsigned long **read_map, u8 *read_bit,
 				       unsigned long **write_map, u8 *write_bit)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b008c190188a2..2513990c5b6e6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -612,8 +612,6 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 void svm_set_gif(struct vcpu_svm *svm, bool value);
 int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
-void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
-			  int read, int write);
 void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
 void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
 				     int trig_mode, int vec);
From patchwork Wed Nov 27 20:19:22 2024
Date: Wed, 27 Nov 2024 20:19:22 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-9-aaronlewis@google.com>
Subject: [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

From: Sean Christopherson

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2380059727168..25d41709a0eaa 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -108,7 +108,7 @@ static const struct svm_direct_access_msrs {
 	{ .index = MSR_IA32_XSS,			.always = false },
 	{ .index = MSR_EFER,				.always = false },
 	{ .index = MSR_IA32_CR_PAT,			.always = false },
-	{ .index = MSR_AMD64_SEV_ES_GHCB,		.always = true  },
+	{ .index = MSR_AMD64_SEV_ES_GHCB,		.always = false },
 	{ .index = MSR_TSC_AUX,				.always = false },
 	{ .index = X2APIC_MSR(APIC_ID),			.always = false },
 	{ .index = X2APIC_MSR(APIC_LVR),		.always = false },
@@ -919,6 +919,9 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
 		svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
 					      MSR_TYPE_RW);
 	}
+
+	if (sev_es_guest(vcpu->kvm))
+		svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
 }

 void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
Date: Wed, 27 Nov 2024 20:19:23 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-10-aaronlewis@google.com>
Subject: [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com
From: Sean Christopherson

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 134 ++++++++++++++++++++---------------------
 1 file changed, 67 insertions(+), 67 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 25d41709a0eaa..3813258497e49 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -81,51 +81,48 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
 
 #define X2APIC_MSR(x)	(APIC_BASE_MSR + (x >> 4))
 
-static const struct svm_direct_access_msrs {
-	u32 index;	/* Index of the MSR */
-	bool always;	/* True if intercept is initially cleared */
-} direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
-	{ .index = MSR_STAR,			.always = true },
-	{ .index = MSR_IA32_SYSENTER_CS,	.always = true },
-	{ .index = MSR_IA32_SYSENTER_EIP,	.always = false },
-	{ .index = MSR_IA32_SYSENTER_ESP,	.always = false },
+static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
+	MSR_STAR,
+	MSR_IA32_SYSENTER_CS,
+	MSR_IA32_SYSENTER_EIP,
+	MSR_IA32_SYSENTER_ESP,
 #ifdef CONFIG_X86_64
-	{ .index = MSR_GS_BASE,			.always = true },
-	{ .index = MSR_FS_BASE,			.always = true },
-	{ .index = MSR_KERNEL_GS_BASE,		.always = true },
-	{ .index = MSR_LSTAR,			.always = true },
-	{ .index = MSR_CSTAR,			.always = true },
-	{ .index = MSR_SYSCALL_MASK,		.always = true },
+	MSR_GS_BASE,
+	MSR_FS_BASE,
+	MSR_KERNEL_GS_BASE,
+	MSR_LSTAR,
+	MSR_CSTAR,
+	MSR_SYSCALL_MASK,
 #endif
-	{ .index = MSR_IA32_SPEC_CTRL,		.always = false },
-	{ .index = MSR_IA32_PRED_CMD,		.always = false },
-	{ .index = MSR_IA32_FLUSH_CMD,		.always = false },
-	{ .index = MSR_IA32_DEBUGCTLMSR,	.always = false },
-	{ .index = MSR_IA32_LASTBRANCHFROMIP,	.always = false },
-	{ .index = MSR_IA32_LASTBRANCHTOIP,	.always = false },
-	{ .index = MSR_IA32_LASTINTFROMIP,	.always = false },
-	{ .index = MSR_IA32_LASTINTTOIP,	.always = false },
-	{ .index = MSR_IA32_XSS,		.always = false },
-	{ .index = MSR_EFER,			.always = false },
-	{ .index = MSR_IA32_CR_PAT,		.always = false },
-	{ .index = MSR_AMD64_SEV_ES_GHCB,	.always = false },
-	{ .index = MSR_TSC_AUX,			.always = false },
-	{ .index = X2APIC_MSR(APIC_ID),		.always = false },
-	{ .index = X2APIC_MSR(APIC_LVR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_TASKPRI),	.always = false },
-	{ .index = X2APIC_MSR(APIC_ARBPRI),	.always = false },
-	{ .index = X2APIC_MSR(APIC_PROCPRI),	.always = false },
-	{ .index = X2APIC_MSR(APIC_EOI),	.always = false },
-	{ .index = X2APIC_MSR(APIC_RRR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_LDR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_DFR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_SPIV),	.always = false },
-	{ .index = X2APIC_MSR(APIC_ISR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_TMR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_IRR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_ESR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_ICR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_ICR2),	.always = false },
+	MSR_IA32_SPEC_CTRL,
+	MSR_IA32_PRED_CMD,
+	MSR_IA32_FLUSH_CMD,
+	MSR_IA32_DEBUGCTLMSR,
+	MSR_IA32_LASTBRANCHFROMIP,
+	MSR_IA32_LASTBRANCHTOIP,
+	MSR_IA32_LASTINTFROMIP,
+	MSR_IA32_LASTINTTOIP,
+	MSR_IA32_XSS,
+	MSR_EFER,
+	MSR_IA32_CR_PAT,
+	MSR_AMD64_SEV_ES_GHCB,
+	MSR_TSC_AUX,
+	X2APIC_MSR(APIC_ID),
+	X2APIC_MSR(APIC_LVR),
+	X2APIC_MSR(APIC_TASKPRI),
+	X2APIC_MSR(APIC_ARBPRI),
+	X2APIC_MSR(APIC_PROCPRI),
+	X2APIC_MSR(APIC_EOI),
+	X2APIC_MSR(APIC_RRR),
+	X2APIC_MSR(APIC_LDR),
+	X2APIC_MSR(APIC_DFR),
+	X2APIC_MSR(APIC_SPIV),
+	X2APIC_MSR(APIC_ISR),
+	X2APIC_MSR(APIC_TMR),
+	X2APIC_MSR(APIC_IRR),
+	X2APIC_MSR(APIC_ESR),
+	X2APIC_MSR(APIC_ICR),
+	X2APIC_MSR(APIC_ICR2),
 
 	/*
 	 * Note:
@@ -134,15 +131,15 @@ static const struct svm_direct_access_msrs {
 	 * the AVIC hardware would generate GP fault. Therefore, always
 	 * intercept the MSR 0x832, and do not setup direct_access_msr.
 	 */
-	{ .index = X2APIC_MSR(APIC_LVTTHMR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_LVTPC),	.always = false },
-	{ .index = X2APIC_MSR(APIC_LVT0),	.always = false },
-	{ .index = X2APIC_MSR(APIC_LVT1),	.always = false },
-	{ .index = X2APIC_MSR(APIC_LVTERR),	.always = false },
-	{ .index = X2APIC_MSR(APIC_TMICT),	.always = false },
-	{ .index = X2APIC_MSR(APIC_TMCCT),	.always = false },
-	{ .index = X2APIC_MSR(APIC_TDCR),	.always = false },
-	{ .index = MSR_INVALID,			.always = false },
+	X2APIC_MSR(APIC_LVTTHMR),
+	X2APIC_MSR(APIC_LVTPC),
+	X2APIC_MSR(APIC_LVT0),
+	X2APIC_MSR(APIC_LVT1),
+	X2APIC_MSR(APIC_LVTERR),
+	X2APIC_MSR(APIC_TMICT),
+	X2APIC_MSR(APIC_TMCCT),
+	X2APIC_MSR(APIC_TDCR),
+	MSR_INVALID,
 };
 
 /*
@@ -763,9 +760,10 @@ static int direct_access_msr_slot(u32 msr)
 {
 	u32 i;
 
-	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++)
-		if (direct_access_msrs[i].index == msr)
+	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+		if (direct_access_msrs[i] == msr)
 			return i;
+	}
 
 	return -ENOENT;
 }
@@ -911,15 +909,17 @@ unsigned long *svm_vcpu_alloc_msrpm(void)
 
 void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
 {
-	int i;
-
-	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
-		if (!direct_access_msrs[i].always)
-			continue;
-		svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
-					      MSR_TYPE_RW);
-	}
+	svm_disable_intercept_for_msr(vcpu, MSR_STAR, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+#ifdef CONFIG_X86_64
+	svm_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_LSTAR, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_CSTAR, MSR_TYPE_RW);
+	svm_disable_intercept_for_msr(vcpu, MSR_SYSCALL_MASK, MSR_TYPE_RW);
+#endif
 
 	if (sev_es_guest(vcpu->kvm))
 		svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
 }
@@ -935,7 +935,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
 		return;
 
 	for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
-		int index = direct_access_msrs[i].index;
+		int index = direct_access_msrs[i];
 
 		if ((index < APIC_BASE_MSR) ||
 		    (index > APIC_BASE_MSR + 0xff))
@@ -965,8 +965,8 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
 	 * refreshed since KVM is going to intercept them regardless of what
 	 * userspace wants.
 	 */
-	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
-		u32 msr = direct_access_msrs[i].index;
+	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+		u32 msr = direct_access_msrs[i];
 
 		if (!test_bit(i, svm->shadow_msr_intercept.read))
 			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
@@ -1009,10 +1009,10 @@ static void init_msrpm_offsets(void)
 
 	memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
 
-	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
+	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
 		u32 offset;
 
-		offset = svm_msrpm_offset(direct_access_msrs[i].index);
+		offset = svm_msrpm_offset(direct_access_msrs[i]);
 		BUG_ON(offset == MSR_INVALID);
 
 		add_msr_offset(offset);

From patchwork Wed Nov 27 20:19:24 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887364
Date: Wed, 27 Nov 2024 20:19:24 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-11-aaronlewis@google.com>
Subject: [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the list of possible passthrough MSRs
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com,
	Aaron Lewis

Signed-off-by: Sean Christopherson
Co-developed-by: Aaron Lewis
---
 arch/x86/kvm/svm/svm.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3813258497e49..4e30efe90c541 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -81,7 +81,7 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
 
 #define X2APIC_MSR(x)	(APIC_BASE_MSR + (x >> 4))
 
-static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
+static const u32 direct_access_msrs[] = {
 	MSR_STAR,
 	MSR_IA32_SYSENTER_CS,
 	MSR_IA32_SYSENTER_EIP,
@@ -139,7 +139,6 @@ static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
 	X2APIC_MSR(APIC_TMICT),
 	X2APIC_MSR(APIC_TMCCT),
 	X2APIC_MSR(APIC_TDCR),
-	MSR_INVALID,
 };
 
 /*
@@ -760,7 +759,7 @@ static int direct_access_msr_slot(u32 msr)
 {
 	u32 i;
 
-	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+	for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
 		if (direct_access_msrs[i] == msr)
 			return i;
 	}
@@ -934,7 +933,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
 	if (!x2avic_enabled)
 		return;
 
-	for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
+	for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
 		int index = direct_access_msrs[i];
 
 		if ((index < APIC_BASE_MSR) ||
@@ -965,7 +964,7 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
 	 * refreshed since KVM is going to intercept them regardless of what
 	 * userspace wants.
 	 */
-	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+	for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
 		u32 msr = direct_access_msrs[i];
 
 		if (!test_bit(i, svm->shadow_msr_intercept.read))
@@ -1009,7 +1008,7 @@ static void init_msrpm_offsets(void)
 
 	memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
 
-	for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+	for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
 		u32 offset;
 
 		offset = svm_msrpm_offset(direct_access_msrs[i]);

From patchwork Wed Nov 27 20:19:25 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887365
Date: Wed, 27 Nov 2024 20:19:25 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-12-aaronlewis@google.com>
Subject: [PATCH 11/15] KVM: VMX: Make list of possible passthrough MSRs "const"
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

From: Sean Christopherson

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0577a7961b9f0..bc64e7cc02704 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -167,7 +167,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
  * List of MSRs that can be directly passed to the guest.
  * In addition to these x2apic, PT and LBR MSRs are handled specially.
  */
-static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
+static const u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
 	MSR_IA32_SPEC_CTRL,
 	MSR_IA32_PRED_CMD,
 	MSR_IA32_FLUSH_CMD,

From patchwork Wed Nov 27 20:19:26 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887366
Date: Wed, 27 Nov 2024 20:19:26 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-13-aaronlewis@google.com>
Subject: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Move the possible passthrough MSRs to kvm_x86_ops so they can be accessed
from common x86 code. To set the passthrough MSRs in kvm_x86_ops for VMX,
"vmx_possible_passthrough_msrs" had to be relocated to main.c, and
vmx_msr_filter_changed() had to move along with it because it uses
"vmx_possible_passthrough_msrs".
Signed-off-by: Sean Christopherson Co-developed-by: Aaron Lewis --- arch/x86/include/asm/kvm_host.h | 3 ++ arch/x86/kvm/svm/svm.c | 18 ++------- arch/x86/kvm/vmx/main.c | 58 ++++++++++++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 67 ++------------------------------- arch/x86/kvm/x86.c | 13 +++++++ arch/x86/kvm/x86.h | 1 + 6 files changed, 83 insertions(+), 77 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 3e8afc82ae2fb..7e9fee4d36cc2 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1817,6 +1817,9 @@ struct kvm_x86_ops { int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu); void (*migrate_timers)(struct kvm_vcpu *vcpu); + + const u32 * const possible_passthrough_msrs; + const u32 nr_possible_passthrough_msrs; void (*msr_filter_changed)(struct kvm_vcpu *vcpu); int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 4e30efe90c541..23e6515bb7904 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -755,18 +755,6 @@ static void clr_dr_intercepts(struct vcpu_svm *svm) recalc_intercepts(svm); } -static int direct_access_msr_slot(u32 msr) -{ - u32 i; - - for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) { - if (direct_access_msrs[i] == msr) - return i; - } - - return -ENOENT; -} - static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr) { u8 bit_write; @@ -832,7 +820,7 @@ void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type) struct vcpu_svm *svm = to_svm(vcpu); int slot; - slot = direct_access_msr_slot(msr); + slot = kvm_passthrough_msr_slot(msr); WARN_ON(slot == -ENOENT); if (slot >= 0) { /* Set the shadow bitmaps to the desired intercept states */ @@ -871,7 +859,7 @@ void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type) struct vcpu_svm *svm = to_svm(vcpu); int slot; - slot = direct_access_msr_slot(msr); + slot = kvm_passthrough_msr_slot(msr); 
 	WARN_ON(slot == -ENOENT);
 	if (slot >= 0) {
 		/* Set the shadow bitmaps to the desired intercept states */
@@ -5165,6 +5153,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
 
+	.possible_passthrough_msrs = direct_access_msrs,
+	.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
 	.msr_filter_changed = svm_msr_filter_changed,
 	.complete_emulated_msr = svm_complete_emulated_msr,
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 92d35cc6cd15d..6d52693b0fd6c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -7,6 +7,62 @@
 #include "pmu.h"
 #include "posted_intr.h"
 
+/*
+ * List of MSRs that can be directly passed to the guest.
+ * In addition to these, x2apic, PT and LBR MSRs are handled specially.
+ */
+static const u32 vmx_possible_passthrough_msrs[] = {
+	MSR_IA32_SPEC_CTRL,
+	MSR_IA32_PRED_CMD,
+	MSR_IA32_FLUSH_CMD,
+	MSR_IA32_TSC,
+#ifdef CONFIG_X86_64
+	MSR_FS_BASE,
+	MSR_GS_BASE,
+	MSR_KERNEL_GS_BASE,
+	MSR_IA32_XFD,
+	MSR_IA32_XFD_ERR,
+#endif
+	MSR_IA32_SYSENTER_CS,
+	MSR_IA32_SYSENTER_ESP,
+	MSR_IA32_SYSENTER_EIP,
+	MSR_CORE_C1_RES,
+	MSR_CORE_C3_RESIDENCY,
+	MSR_CORE_C6_RESIDENCY,
+	MSR_CORE_C7_RESIDENCY,
+};
+
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	u32 i;
+
+	if (!cpu_has_vmx_msr_bitmap())
+		return;
+
+	/*
+	 * Redo intercept permissions for MSRs that KVM is passing through to
+	 * the guest.  Disabling interception will check the new MSR filter and
+	 * ensure that KVM enables interception if userspace wants to filter
+	 * the MSR.  MSRs that KVM is already intercepting don't need to be
+	 * refreshed since KVM is going to intercept them regardless of what
+	 * userspace wants.
+	 */
+	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
+		u32 msr = vmx_possible_passthrough_msrs[i];
+
+		if (!test_bit(i, vmx->shadow_msr_intercept.read))
+			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
+
+		if (!test_bit(i, vmx->shadow_msr_intercept.write))
+			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
+	}
+
+	/* PT MSRs can be passed through iff PT is exposed to the guest. */
+	if (vmx_pt_mode_is_host_guest())
+		pt_update_intercept_for_msr(vcpu);
+}
+
 #define VMX_REQUIRED_APICV_INHIBITS	\
 	(BIT(APICV_INHIBIT_REASON_DISABLED) |	\
 	 BIT(APICV_INHIBIT_REASON_ABSENT) |	\
@@ -152,6 +208,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
 	.migrate_timers = vmx_migrate_timers,
 
+	.possible_passthrough_msrs = vmx_possible_passthrough_msrs,
+	.nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
 	.msr_filter_changed = vmx_msr_filter_changed,
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bc64e7cc02704..1c2c0c06f3d35 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -163,31 +163,6 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 	RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED | \
 	RTIT_STATUS_BYTECNT))
 
-/*
- * List of MSRs that can be directly passed to the guest.
- * In addition to these x2apic, PT and LBR MSRs are handled specially.
- */
-static const u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
-	MSR_IA32_SPEC_CTRL,
-	MSR_IA32_PRED_CMD,
-	MSR_IA32_FLUSH_CMD,
-	MSR_IA32_TSC,
-#ifdef CONFIG_X86_64
-	MSR_FS_BASE,
-	MSR_GS_BASE,
-	MSR_KERNEL_GS_BASE,
-	MSR_IA32_XFD,
-	MSR_IA32_XFD_ERR,
-#endif
-	MSR_IA32_SYSENTER_CS,
-	MSR_IA32_SYSENTER_ESP,
-	MSR_IA32_SYSENTER_EIP,
-	MSR_CORE_C1_RES,
-	MSR_CORE_C3_RESIDENCY,
-	MSR_CORE_C6_RESIDENCY,
-	MSR_CORE_C7_RESIDENCY,
-};
-
 /*
  * These 2 parameters are used to config the controls for Pause-Loop Exiting:
  * ple_gap:    upper bound on the amount of time between two successive
@@ -669,7 +644,7 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
 
 static int vmx_get_passthrough_msr_slot(u32 msr)
 {
-	int i;
+	int r;
 
 	switch (msr) {
 	case 0x800 ... 0x8ff:
@@ -692,13 +667,10 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
 		return -ENOENT;
 	}
 
-	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
-		if (vmx_possible_passthrough_msrs[i] == msr)
-			return i;
-	}
+	r = kvm_passthrough_msr_slot(msr);
 
-	WARN(1, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
-	return -ENOENT;
+	WARN(!r, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
+	return r;
 }
 
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -4145,37 +4117,6 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 	}
 }
 
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u32 i;
-
-	if (!cpu_has_vmx_msr_bitmap())
-		return;
-
-	/*
-	 * Redo intercept permissions for MSRs that KVM is passing through to
-	 * the guest.  Disabling interception will check the new MSR filter and
-	 * ensure that KVM enables interception if usersepace wants to filter
-	 * the MSR.  MSRs that KVM is already intercepting don't need to be
-	 * refreshed since KVM is going to intercept them regardless of what
-	 * userspace wants.
-	 */
-	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
-		u32 msr = vmx_possible_passthrough_msrs[i];
-
-		if (!test_bit(i, vmx->shadow_msr_intercept.read))
-			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
-		if (!test_bit(i, vmx->shadow_msr_intercept.write))
-			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
-	}
-
-	/* PT MSRs can be passed through iff PT is exposed to the guest. */
-	if (vmx_pt_mode_is_host_guest())
-		pt_update_intercept_for_msr(vcpu);
-}
-
 static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
 						     int pi_vec)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8637bc0010965..20b6cce793af5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1806,6 +1806,19 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
 }
 EXPORT_SYMBOL_GPL(kvm_msr_allowed);
 
+int kvm_passthrough_msr_slot(u32 msr)
+{
+	u32 i;
+
+	for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
+		if (kvm_x86_ops.possible_passthrough_msrs[i] == msr)
+			return i;
+	}
+
+	return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
+
 /*
  * Write @data into the MSR specified by @index.  Select MSR specific fault
  * checks are bypassed if @host_initiated is %true.
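The common helper introduced here is just a linear scan over the vendor-provided passthrough table, returning the MSR's slot or -ENOENT. A standalone model of that lookup (simplified types, illustrative MSR values, and a plain negative constant standing in for -ENOENT — none of this is the kernel's real code) behaves like this:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the vendor's possible_passthrough_msrs table. */
static const uint32_t possible_passthrough_msrs[] = {
	0x48,	/* MSR_IA32_SPEC_CTRL */
	0x10,	/* MSR_IA32_TSC */
};
#define NR_PASSTHROUGH_MSRS \
	(sizeof(possible_passthrough_msrs) / sizeof(possible_passthrough_msrs[0]))
#define ENOENT_ERR (-2)	/* models -ENOENT */

/* Linear scan, mirroring the shape of kvm_passthrough_msr_slot(). */
static int passthrough_msr_slot(uint32_t msr)
{
	for (uint32_t i = 0; i < NR_PASSTHROUGH_MSRS; i++) {
		if (possible_passthrough_msrs[i] == msr)
			return (int)i;
	}
	return ENOENT_ERR;
}
```

The returned slot is what indexes the per-vCPU shadow intercept bitmaps, which is why the table must stay small enough to fit the bitmap size.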
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ec623d23d13d2..208f0698c64e2 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -555,6 +555,7 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
 			      struct x86_exception *e);
 int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
 bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
+int kvm_passthrough_msr_slot(u32 msr);
 
 enum kvm_msr_access {
 	MSR_TYPE_R	= BIT(0),

From patchwork Wed Nov 27 20:19:27 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887367
Date: Wed, 27 Nov 2024 20:19:27 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-14-aaronlewis@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 13/15] KVM: x86: Move ownership of passthrough MSR "shadow" to common x86
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Signed-off-by: Sean Christopherson
Co-developed-by: Aaron Lewis
---
 arch/x86/include/asm/kvm-x86-ops.h |  3 ++-
 arch/x86/include/asm/kvm_host.h    | 11 +++++++++
 arch/x86/kvm/svm/svm.c             | 38 ++++--------------------
 arch/x86/kvm/svm/svm.h             |  6 -----
 arch/x86/kvm/vmx/main.c            | 32 +------------------------
 arch/x86/kvm/vmx/vmx.c             | 22 ++++++++++-------
 arch/x86/kvm/vmx/vmx.h             |  7 ------
 arch/x86/kvm/x86.c                 | 37 ++++++++++++++++++++++++++++-
 8 files changed, 69 insertions(+), 87 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5aff7222e40fa..124c2e1e42026 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -131,7 +131,8 @@ KVM_X86_OP(check_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
 KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
 KVM_X86_OP_OPTIONAL(migrate_timers)
-KVM_X86_OP(msr_filter_changed)
+KVM_X86_OP_OPTIONAL(msr_filter_changed)
+KVM_X86_OP(disable_intercept_for_msr)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7e9fee4d36cc2..808b5365e4bd2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -777,6 +777,16 @@ struct kvm_vcpu_arch {
 	u64 arch_capabilities;
 	u64 perf_capabilities;
 
+	/*
+	 * KVM's "shadow" of the MSR intercepts, i.e. bitmaps that track KVM's
+	 * desired behavior irrespective of userspace MSR filtering.
+	 */
+#define KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS	64
+	struct {
+		DECLARE_BITMAP(read, KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+		DECLARE_BITMAP(write, KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+	} shadow_msr_intercept;
+
 	/*
 	 * Paging state of the vcpu
 	 *
@@ -1820,6 +1830,7 @@ struct kvm_x86_ops {
 	const u32 * const possible_passthrough_msrs;
 	const u32 nr_possible_passthrough_msrs;
 
+	void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
 	void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
 	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 23e6515bb7904..31ed6c68e8194 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -825,9 +825,9 @@ void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	if (slot >= 0) {
 		/* Set the shadow bitmaps to the desired intercept states */
 		if (type & MSR_TYPE_R)
-			__clear_bit(slot, svm->shadow_msr_intercept.read);
+			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			__clear_bit(slot, svm->shadow_msr_intercept.write);
+			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
 	}
 
 	/*
@@ -864,9 +864,9 @@ void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	if (slot >= 0) {
 		/* Set the shadow bitmaps to the desired intercept states */
 		if (type & MSR_TYPE_R)
-			__set_bit(slot, svm->shadow_msr_intercept.read);
+			__set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			__set_bit(slot, svm->shadow_msr_intercept.write);
+			__set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
 	}
 
 	if (type & MSR_TYPE_R)
@@ -939,30 +939,6 @@ void svm_vcpu_free_msrpm(unsigned long *msrpm)
 	__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
 }
 
-static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-	u32 i;
-
-	/*
-	 * Redo intercept permissions for MSRs that KVM is passing through to
-	 * the guest.  Disabling interception will check the new MSR filter and
-	 * ensure that KVM enables interception if usersepace wants to filter
-	 * the MSR.  MSRs that KVM is already intercepting don't need to be
-	 * refreshed since KVM is going to intercept them regardless of what
-	 * userspace wants.
-	 */
-	for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
-		u32 msr = direct_access_msrs[i];
-
-		if (!test_bit(i, svm->shadow_msr_intercept.read))
-			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
-		if (!test_bit(i, svm->shadow_msr_intercept.write))
-			svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
-	}
-}
-
 static void add_msr_offset(u32 offset)
 {
 	int i;
@@ -1475,10 +1451,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		goto error_free_vmsa_page;
 
-	/* All MSRs start out in the "intercepted" state. */
-	bitmap_fill(svm->shadow_msr_intercept.read, MAX_DIRECT_ACCESS_MSRS);
-	bitmap_fill(svm->shadow_msr_intercept.write, MAX_DIRECT_ACCESS_MSRS);
-
 	svm->msrpm = svm_vcpu_alloc_msrpm();
 	if (!svm->msrpm) {
 		err = -ENOMEM;
@@ -5155,7 +5127,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.possible_passthrough_msrs = direct_access_msrs,
 	.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
 
-	.msr_filter_changed = svm_msr_filter_changed,
+	.disable_intercept_for_msr = svm_disable_intercept_for_msr,
 	.complete_emulated_msr = svm_complete_emulated_msr,
 
 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2513990c5b6e6..a73da8ca73b49 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -313,12 +313,6 @@ struct vcpu_svm {
 	struct list_head ir_list;
 	spinlock_t ir_list_lock;
 
-	/* Save desired MSR intercept (read: pass-through) state */
-	struct {
-		DECLARE_BITMAP(read, MAX_DIRECT_ACCESS_MSRS);
-		DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS);
-	} shadow_msr_intercept;
-
 	struct vcpu_sev_es_state sev_es;
 
 	bool guest_state_loaded;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 6d52693b0fd6c..5279c82648fe6 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -32,37 +32,6 @@ static const u32 vmx_possible_passthrough_msrs[] = {
 	MSR_CORE_C7_RESIDENCY,
 };
 
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u32 i;
-
-	if (!cpu_has_vmx_msr_bitmap())
-		return;
-
-	/*
-	 * Redo intercept permissions for MSRs that KVM is passing through to
-	 * the guest.  Disabling interception will check the new MSR filter and
-	 * ensure that KVM enables interception if usersepace wants to filter
-	 * the MSR.  MSRs that KVM is already intercepting don't need to be
-	 * refreshed since KVM is going to intercept them regardless of what
-	 * userspace wants.
-	 */
-	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
-		u32 msr = vmx_possible_passthrough_msrs[i];
-
-		if (!test_bit(i, vmx->shadow_msr_intercept.read))
-			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
-		if (!test_bit(i, vmx->shadow_msr_intercept.write))
-			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
-	}
-
-	/* PT MSRs can be passed through iff PT is exposed to the guest. */
-	if (vmx_pt_mode_is_host_guest())
-		pt_update_intercept_for_msr(vcpu);
-}
-
 #define VMX_REQUIRED_APICV_INHIBITS	\
 	(BIT(APICV_INHIBIT_REASON_DISABLED) |	\
 	 BIT(APICV_INHIBIT_REASON_ABSENT) |	\
@@ -210,6 +179,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.possible_passthrough_msrs = vmx_possible_passthrough_msrs,
 	.nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
 
+	.disable_intercept_for_msr = vmx_disable_intercept_for_msr,
 	.msr_filter_changed = vmx_msr_filter_changed,
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1c2c0c06f3d35..4cb3e9a8df2c0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3987,9 +3987,9 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	idx = vmx_get_passthrough_msr_slot(msr);
 	if (idx >= 0) {
 		if (type & MSR_TYPE_R)
-			__clear_bit(idx, vmx->shadow_msr_intercept.read);
+			__clear_bit(idx, vcpu->arch.shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			__clear_bit(idx, vmx->shadow_msr_intercept.write);
+			__clear_bit(idx, vcpu->arch.shadow_msr_intercept.write);
 	}
 
 	if ((type & MSR_TYPE_R) &&
@@ -4029,9 +4029,9 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	idx = vmx_get_passthrough_msr_slot(msr);
 	if (idx >= 0) {
 		if (type & MSR_TYPE_R)
-			__set_bit(idx, vmx->shadow_msr_intercept.read);
+			__set_bit(idx, vcpu->arch.shadow_msr_intercept.read);
 		if (type & MSR_TYPE_W)
-			__set_bit(idx, vmx->shadow_msr_intercept.write);
+			__set_bit(idx, vcpu->arch.shadow_msr_intercept.write);
 	}
 
 	if (type & MSR_TYPE_R)
@@ -4117,6 +4117,16 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 	}
 }
 
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+	if (!cpu_has_vmx_msr_bitmap())
+		return;
+
+	/* PT MSRs can be passed through iff PT is exposed to the guest. */
+	if (vmx_pt_mode_is_host_guest())
+		pt_update_intercept_for_msr(vcpu);
+}
+
 static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
 						     int pi_vec)
 {
@@ -7513,10 +7523,6 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 		evmcs->hv_enlightenments_control.msr_bitmap = 1;
 	}
 
-	/* The MSR bitmap starts with all ones */
-	bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-	bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-
 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
 #ifdef CONFIG_X86_64
 	vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 43f573f6ca46a..c40e7c880764f 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -353,13 +353,6 @@ struct vcpu_vmx {
 	struct pt_desc pt_desc;
 	struct lbr_desc lbr_desc;
 
-	/* Save desired MSR intercept (read: pass-through) state */
-#define MAX_POSSIBLE_PASSTHROUGH_MSRS	16
-	struct {
-		DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-		DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-	} shadow_msr_intercept;
-
 	/* ve_info must be page aligned. */
 	struct vmx_ve_information *ve_info;
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 20b6cce793af5..2082ae8dc5db1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1819,6 +1819,31 @@ int kvm_passthrough_msr_slot(u32 msr)
 }
 EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
 
+static void kvm_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+	u32 msr, i;
+
+	/*
+	 * Redo intercept permissions for MSRs that KVM is passing through to
+	 * the guest.  Disabling interception will check the new MSR filter and
+	 * ensure that KVM enables interception if userspace wants to filter
+	 * the MSR.  MSRs that KVM is already intercepting don't need to be
+	 * refreshed since KVM is going to intercept them regardless of what
+	 * userspace wants.
+	 */
+	for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
+		msr = kvm_x86_ops.possible_passthrough_msrs[i];
+
+		if (!test_bit(i, vcpu->arch.shadow_msr_intercept.read))
+			static_call(kvm_x86_disable_intercept_for_msr)(vcpu, msr, MSR_TYPE_R);
+
+		if (!test_bit(i, vcpu->arch.shadow_msr_intercept.write))
+			static_call(kvm_x86_disable_intercept_for_msr)(vcpu, msr, MSR_TYPE_W);
+	}
+
+	static_call_cond(kvm_x86_msr_filter_changed)(vcpu);
+}
+
 /*
  * Write @data into the MSR specified by @index.  Select MSR specific fault
  * checks are bypassed if @host_initiated is %true.
@@ -9747,6 +9772,10 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, kvm_host.arch_capabilities);
 
+	if (ops->runtime_ops->nr_possible_passthrough_msrs >
+	    KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS)
+		return -E2BIG;
+
 	r = ops->hardware_setup();
 	if (r != 0)
 		goto out_mmu_exit;
@@ -10851,7 +10880,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
 			kvm_check_async_pf_completion(vcpu);
 
 		if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
-			kvm_x86_call(msr_filter_changed)(vcpu);
+			kvm_msr_filter_changed(vcpu);
 
 		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
 			kvm_x86_call(update_cpu_dirty_logging)(vcpu);
@@ -12305,6 +12334,12 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.hv_root_tdp = INVALID_PAGE;
 #endif
 
+	/* All MSRs start out in the "intercepted" state. */
+	bitmap_fill(vcpu->arch.shadow_msr_intercept.read,
+		    KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+	bitmap_fill(vcpu->arch.shadow_msr_intercept.write,
+		    KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+
 	r = kvm_x86_call(vcpu_create)(vcpu);
 	if (r)
 		goto free_guest_fpu;

From patchwork Wed Nov 27 20:19:28 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887368
Date: Wed, 27 Nov 2024 20:19:28 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
References: <20241127201929.4005605-1-aaronlewis@google.com>
Message-ID: <20241127201929.4005605-15-aaronlewis@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 14/15] KVM: x86: Hoist SVM MSR intercepts to common x86 code
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Now that the SVM and VMX implementations for MSR intercepts are the same,
hoist the SVM implementation to common x86 code.
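The hoisted helpers below are stamped out with a token-pasting macro, where the `access` argument selects which (map, bit) pair the bit operation targets. The same pattern can be sketched in userspace with plain `uint64_t` bitmaps standing in for the kernel's MSR permission pages (all names and the toy one-bit-per-MSR mapping here are illustrative, not KVM's):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the per-vCPU MSR bitmap state. */
struct msr_bitmaps {
	uint64_t read;
	uint64_t write;
};

static void get_msr_bitmap_entries(struct msr_bitmaps *b, uint32_t msr,
				   uint64_t **read_map, uint8_t *read_bit,
				   uint64_t **write_map, uint8_t *write_bit)
{
	/* Toy mapping: one bit per MSR index, same bit for read and write. */
	*read_map = &b->read;
	*write_map = &b->write;
	*read_bit = *write_bit = (uint8_t)(msr & 63);
}

static void set_bit64(uint8_t bit, uint64_t *map)   { *map |= 1ull << bit; }
static void clear_bit64(uint8_t bit, uint64_t *map) { *map &= ~(1ull << bit); }

/*
 * Token-pasting builder, same shape as the BUILD_*_MSR_BITMAP_HELPER macros:
 * `access##_bit` / `access##_map` expand to the read or write pair.
 */
#define BUILD_MSR_BITMAP_HELPER(fn, bitop, access)			\
static void fn(struct msr_bitmaps *b, uint32_t msr)			\
{									\
	uint64_t *read_map, *write_map;					\
	uint8_t read_bit, write_bit;					\
									\
	get_msr_bitmap_entries(b, msr, &read_map, &read_bit,		\
			       &write_map, &write_bit);			\
	(void)read_map; (void)write_map;				\
	bitop(access##_bit, access##_map);				\
}

BUILD_MSR_BITMAP_HELPER(set_msr_bitmap_read,    set_bit64,   read)
BUILD_MSR_BITMAP_HELPER(set_msr_bitmap_write,   set_bit64,   write)
BUILD_MSR_BITMAP_HELPER(clear_msr_bitmap_read,  clear_bit64, read)
BUILD_MSR_BITMAP_HELPER(clear_msr_bitmap_write, clear_bit64, write)
```

The payoff of the builder is that the four read/write set/clear helpers share one definition, so a vendor only has to supply the `get_msr_bitmap_entries` lookup.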
Suggested-by: Sean Christopherson
Signed-off-by: Aaron Lewis
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  3 ++
 arch/x86/kvm/svm/svm.c             | 73 ++---------------------------
 arch/x86/kvm/x86.c                 | 75 ++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h                 |  2 +
 5 files changed, 86 insertions(+), 68 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 124c2e1e42026..3f10ce4957f74 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -132,6 +132,7 @@ KVM_X86_OP(apic_init_signal_blocked)
 KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
 KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP_OPTIONAL(msr_filter_changed)
+KVM_X86_OP_OPTIONAL(get_msr_bitmap_entries)
 KVM_X86_OP(disable_intercept_for_msr)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 808b5365e4bd2..763fc054a2c56 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1830,6 +1830,9 @@ struct kvm_x86_ops {
 	const u32 * const possible_passthrough_msrs;
 	const u32 nr_possible_passthrough_msrs;
 
+	void (*get_msr_bitmap_entries)(struct kvm_vcpu *vcpu, u32 msr,
+				       unsigned long **read_map, u8 *read_bit,
+				       unsigned long **write_map, u8 *write_bit);
 	void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
 	void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
 	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 31ed6c68e8194..aaf244e233b90 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -799,84 +799,20 @@ static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
 	*write_map = &svm->msrpm[offset];
 }
 
-#define BUILD_SVM_MSR_BITMAP_HELPER(fn, bitop, access)			\
-static inline void fn(struct kvm_vcpu *vcpu, u32 msr)			\
-{									\
-	unsigned long *read_map, *write_map;				\
-	u8 read_bit, write_bit;						\
-									\
-	svm_get_msr_bitmap_entries(vcpu, msr, &read_map, &read_bit,	\
-				   &write_map, &write_bit);		\
-	bitop(access##_bit, access##_map);				\
-}
-
-BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_read, __set_bit, read)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_write, __set_bit, write)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_read, __clear_bit, read)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_write, __clear_bit, write)
-
 void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
-	int slot;
-
-	slot = kvm_passthrough_msr_slot(msr);
-	WARN_ON(slot == -ENOENT);
-	if (slot >= 0) {
-		/* Set the shadow bitmaps to the desired intercept states */
-		if (type & MSR_TYPE_R)
-			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
-		if (type & MSR_TYPE_W)
-			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
-	}
-
-	/*
-	 * Don't disabled interception for the MSR if userspace wants to
-	 * handle it.
-	 */
-	if ((type & MSR_TYPE_R) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
-		svm_set_msr_bitmap_read(vcpu, msr);
-		type &= ~MSR_TYPE_R;
-	}
-
-	if ((type & MSR_TYPE_W) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
-		svm_set_msr_bitmap_write(vcpu, msr);
-		type &= ~MSR_TYPE_W;
-	}
-
-	if (type & MSR_TYPE_R)
-		svm_clear_msr_bitmap_read(vcpu, msr);
-
-	if (type & MSR_TYPE_W)
-		svm_clear_msr_bitmap_write(vcpu, msr);
+	kvm_disable_intercept_for_msr(vcpu, msr, type);
 
 	svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
-	svm->nested.force_msr_bitmap_recalc = true;
+	to_svm(vcpu)->nested.force_msr_bitmap_recalc = true;
 }
 
 void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
-	int slot;
-
-	slot = kvm_passthrough_msr_slot(msr);
-	WARN_ON(slot == -ENOENT);
-	if (slot >= 0) {
-		/* Set the shadow bitmaps to the desired intercept states */
-		if (type & MSR_TYPE_R)
-			__set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
-		if (type & MSR_TYPE_W)
-			__set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
-	}
-
-	if (type & MSR_TYPE_R)
-		svm_set_msr_bitmap_read(vcpu, msr);
-
-	if (type & MSR_TYPE_W)
-		svm_set_msr_bitmap_write(vcpu, msr);
+	kvm_enable_intercept_for_msr(vcpu, msr, type);
 
 	svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
-	svm->nested.force_msr_bitmap_recalc = true;
+	to_svm(vcpu)->nested.force_msr_bitmap_recalc = true;
 }
 
 unsigned long *svm_vcpu_alloc_msrpm(void)
@@ -5127,6 +5063,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.possible_passthrough_msrs = direct_access_msrs,
 	.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
 
+	.get_msr_bitmap_entries = svm_get_msr_bitmap_entries,
 	.disable_intercept_for_msr = svm_disable_intercept_for_msr,
 	.complete_emulated_msr = svm_complete_emulated_msr,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2082ae8dc5db1..1e607a0eb58a0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1819,6 +1819,81 @@ int kvm_passthrough_msr_slot(u32 msr)
 }
 EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
 
+#define BUILD_KVM_MSR_BITMAP_HELPER(fn, bitop, access)			\
+static inline void fn(struct kvm_vcpu *vcpu, u32 msr)			\
+{									\
+	unsigned long *read_map, *write_map;				\
+	u8 read_bit, write_bit;						\
+									\
+	static_call(kvm_x86_get_msr_bitmap_entries)(vcpu, msr,		\
+						    &read_map, &read_bit, \
+						    &write_map, &write_bit); \
+	bitop(access##_bit, access##_map);				\
+}
+
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_set_msr_bitmap_read, __set_bit, read)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_set_msr_bitmap_write, __set_bit, write)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_clear_msr_bitmap_read, __clear_bit, read)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_clear_msr_bitmap_write, __clear_bit, write)
+
+void kvm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+	int slot;
+
+	slot = kvm_passthrough_msr_slot(msr);
+	WARN_ON(slot == -ENOENT);
+	if (slot >= 0) {
+		/* Set the shadow bitmaps to the desired intercept states */
+		if (type & MSR_TYPE_R)
+			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			__clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
+	}
+
+	/*
+	 * Don't disable interception for the MSR if userspace wants to
+	 * handle it.
+	 */
+	if ((type & MSR_TYPE_R) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
+		kvm_set_msr_bitmap_read(vcpu, msr);
+		type &= ~MSR_TYPE_R;
+	}
+
+	if ((type & MSR_TYPE_W) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
+		kvm_set_msr_bitmap_write(vcpu, msr);
+		type &= ~MSR_TYPE_W;
+	}
+
+	if (type & MSR_TYPE_R)
+		kvm_clear_msr_bitmap_read(vcpu, msr);
+
+	if (type & MSR_TYPE_W)
+		kvm_clear_msr_bitmap_write(vcpu, msr);
+}
+EXPORT_SYMBOL_GPL(kvm_disable_intercept_for_msr);
+
+void kvm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+	int slot;
+
+	slot = kvm_passthrough_msr_slot(msr);
+	WARN_ON(slot == -ENOENT);
+	if (slot >= 0) {
+		/* Set the shadow bitmaps to the desired intercept states */
+		if (type & MSR_TYPE_R)
+			__set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			__set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
+	}
+
+	if (type & MSR_TYPE_R)
+		kvm_set_msr_bitmap_read(vcpu, msr);
+
+	if (type & MSR_TYPE_W)
+		kvm_set_msr_bitmap_write(vcpu, msr);
+}
+EXPORT_SYMBOL_GPL(kvm_enable_intercept_for_msr);
+
 static void kvm_msr_filter_changed(struct kvm_vcpu *vcpu)
 {
 	u32 msr, i;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 208f0698c64e2..239cc4de49c58 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -556,6 +556,8 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
 int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
 bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
 int kvm_passthrough_msr_slot(u32 msr);
+void kvm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+void kvm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
 
 enum kvm_msr_access {
 	MSR_TYPE_R	= BIT(0),

From patchwork Wed Nov 27 20:19:29 2024
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13887369
Received: from
Date: Wed, 27 Nov 2024 20:19:29 +0000
In-Reply-To: <20241127201929.4005605-1-aaronlewis@google.com>
X-Mailing-List: kvm@vger.kernel.org
Message-ID: <20241127201929.4005605-16-aaronlewis@google.com>
Subject: [PATCH 15/15] KVM: x86: Hoist VMX MSR intercepts to common x86 code
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Complete the transition to unified MSR intercepts for x86 by hoisting
the VMX implementation to common x86 code. The only addition the common
implementation makes over what SVM already contributed is the
is_valid_passthrough_msr() check, which VMX uses to reject MSRs that are
never allowed to be possible passthrough MSRs.

To distinguish MSRs that are not valid from MSRs that are merely missing
from the list, kvm_passthrough_msr_slot() returns -EINVAL for MSRs that
are not allowed to be in the list and -ENOENT for MSRs that are expected
to be in the list but aren't. KVM warns in the latter case.
Suggested-by: Sean Christopherson
Signed-off-by: Aaron Lewis
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  1 +
 arch/x86/kvm/svm/svm.c             |  6 ++
 arch/x86/kvm/vmx/main.c            |  2 +
 arch/x86/kvm/vmx/vmx.c             | 91 +++++++++---------------
 arch/x86/kvm/vmx/vmx.h             |  4 ++
 arch/x86/kvm/x86.c                 |  4 ++
 7 files changed, 45 insertions(+), 64 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 3f10ce4957f74..db1e0fc002805 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -134,6 +134,7 @@ KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP_OPTIONAL(msr_filter_changed)
 KVM_X86_OP_OPTIONAL(get_msr_bitmap_entries)
 KVM_X86_OP(disable_intercept_for_msr)
+KVM_X86_OP(is_valid_passthrough_msr)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 763fc054a2c56..22ae4dfa94f2c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1834,6 +1834,7 @@ struct kvm_x86_ops {
 				       unsigned long **read_map, u8 *read_bit,
 				       unsigned long **write_map, u8 *write_bit);
 	void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
+	bool (*is_valid_passthrough_msr)(u32 msr);
 	void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
 
 	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index aaf244e233b90..2e746abeda215 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -799,6 +799,11 @@ static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
 	*write_map = &svm->msrpm[offset];
 }
 
+static bool svm_is_valid_passthrough_msr(u32 msr)
+{
+	return true;
+}
+
 void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
 	kvm_disable_intercept_for_msr(vcpu, msr, type);
@@ -5065,6 +5070,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
 	.get_msr_bitmap_entries = svm_get_msr_bitmap_entries,
 	.disable_intercept_for_msr = svm_disable_intercept_for_msr,
+	.is_valid_passthrough_msr = svm_is_valid_passthrough_msr,
 
 	.complete_emulated_msr = svm_complete_emulated_msr,
 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 5279c82648fe6..e89c472179dd5 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -179,7 +179,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.possible_passthrough_msrs = vmx_possible_passthrough_msrs,
 	.nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
+	.get_msr_bitmap_entries = vmx_get_msr_bitmap_entries,
 	.disable_intercept_for_msr = vmx_disable_intercept_for_msr,
+	.is_valid_passthrough_msr = vmx_is_valid_passthrough_msr,
 	.msr_filter_changed = vmx_msr_filter_changed,
 
 	.complete_emulated_msr = kvm_complete_insn_gp,

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4cb3e9a8df2c0..5493a24febd50 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -642,14 +642,12 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
 	return flexpriority_enabled && lapic_in_kernel(vcpu);
 }
 
-static int vmx_get_passthrough_msr_slot(u32 msr)
+bool vmx_is_valid_passthrough_msr(u32 msr)
 {
-	int r;
-
 	switch (msr) {
 	case 0x800 ... 0x8ff:
 		/* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
-		return -ENOENT;
+		return false;
 	case MSR_IA32_RTIT_STATUS:
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 	case MSR_IA32_RTIT_OUTPUT_MASK:
@@ -664,13 +662,10 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
-		return -ENOENT;
+		return false;
 	}
 
-	r = kvm_passthrough_msr_slot(msr);
-
-	WARN(!r, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
-	return r;
+	return true;
 }
 
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -3969,76 +3964,44 @@ static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
 	vmx->nested.force_msr_bitmap_recalc = true;
 }
 
-void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+void vmx_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+				unsigned long **read_map, u8 *read_bit,
+				unsigned long **write_map, u8 *write_bit)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
-	int idx;
-
-	if (!cpu_has_vmx_msr_bitmap())
-		return;
+	unsigned long *bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
+	u32 offset;
 
-	vmx_msr_bitmap_l01_changed(vmx);
+	*read_bit = *write_bit = msr & 0x1fff;
 
-	/*
-	 * Mark the desired intercept state in shadow bitmap, this is needed
-	 * for resync when the MSR filters change.
-	 */
-	idx = vmx_get_passthrough_msr_slot(msr);
-	if (idx >= 0) {
-		if (type & MSR_TYPE_R)
-			__clear_bit(idx, vcpu->arch.shadow_msr_intercept.read);
-		if (type & MSR_TYPE_W)
-			__clear_bit(idx, vcpu->arch.shadow_msr_intercept.write);
-	}
+	if (msr <= 0x1fff)
+		offset = 0;
+	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+		offset = 0x400;
+	else
+		BUG();
 
-	if ((type & MSR_TYPE_R) &&
-	    !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
-		vmx_set_msr_bitmap_read(msr_bitmap, msr);
-		type &= ~MSR_TYPE_R;
-	}
+	*read_map = bitmap + (0 + offset) / sizeof(unsigned long);
+	*write_map = bitmap + (0x800 + offset) / sizeof(unsigned long);
+}
 
-	if ((type & MSR_TYPE_W) &&
-	    !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
-		vmx_set_msr_bitmap_write(msr_bitmap, msr);
-		type &= ~MSR_TYPE_W;
-	}
+void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+	if (!cpu_has_vmx_msr_bitmap())
+		return;
 
-	if (type & MSR_TYPE_R)
-		vmx_clear_msr_bitmap_read(msr_bitmap, msr);
+	kvm_disable_intercept_for_msr(vcpu, msr, type);
 
-	if (type & MSR_TYPE_W)
-		vmx_clear_msr_bitmap_write(msr_bitmap, msr);
+	vmx_msr_bitmap_l01_changed(to_vmx(vcpu));
 }
 
 void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
-	int idx;
-
 	if (!cpu_has_vmx_msr_bitmap())
 		return;
 
-	vmx_msr_bitmap_l01_changed(vmx);
-
-	/*
-	 * Mark the desired intercept state in shadow bitmap, this is needed
-	 * for resync when the MSR filter changes.
-	 */
-	idx = vmx_get_passthrough_msr_slot(msr);
-	if (idx >= 0) {
-		if (type & MSR_TYPE_R)
-			__set_bit(idx, vcpu->arch.shadow_msr_intercept.read);
-		if (type & MSR_TYPE_W)
-			__set_bit(idx, vcpu->arch.shadow_msr_intercept.write);
-	}
-
-	if (type & MSR_TYPE_R)
-		vmx_set_msr_bitmap_read(msr_bitmap, msr);
+	kvm_enable_intercept_for_msr(vcpu, msr, type);
 
-	if (type & MSR_TYPE_W)
-		vmx_set_msr_bitmap_write(msr_bitmap, msr);
+	vmx_msr_bitmap_l01_changed(to_vmx(vcpu));
 }
 
 static void vmx_update_msr_bitmap_x2apic(struct kvm_vcpu *vcpu)

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c40e7c880764f..6b87dcab46e48 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -409,8 +409,12 @@ bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
 
+void vmx_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+				unsigned long **read_map, u8 *read_bit,
+				unsigned long **write_map, u8 *write_bit);
 void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
 void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+bool vmx_is_valid_passthrough_msr(u32 msr);
 
 u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
 u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1e607a0eb58a0..3c4a580d51517 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1810,6 +1810,10 @@ int kvm_passthrough_msr_slot(u32 msr)
 {
 	u32 i;
 
+	if (!static_call(kvm_x86_is_valid_passthrough_msr)(msr)) {
+		return -EINVAL;
+	}
+
 	for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
 		if (kvm_x86_ops.possible_passthrough_msrs[i] == msr)
 			return i;