From patchwork Mon Jun 10 13:42:01 2024
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 13692045
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
    Oliver Upton, Suzuki K Poulose, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev, Ganapatrao Kulkarni
Subject: [PATCH v3 42/43] arm64: kvm: Expose support for private memory
Date: Mon, 10 Jun 2024 14:42:01 +0100
Message-Id: <20240610134202.54893-43-steven.price@arm.com>
In-Reply-To: <20240610134202.54893-1-steven.price@arm.com>
References: <20240610134202.54893-1-steven.price@arm.com>

Select KVM_GENERIC_PRIVATE_MEM and provide the necessary support
functions.

Signed-off-by: Steven Price
---
Changes since v2:
 * Switch kvm_arch_has_private_mem() to a macro to avoid the overhead
   of a function call.
 * Guard the definitions of kvm_arch_{pre,post}_set_memory_attributes()
   with #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES.
 * Return early from kvm_arch_post_set_memory_attributes() if the
   WARN_ON triggers.
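
For context, a minimal VMM-side sketch (not part of this patch) of how the
generic private memory uAPI selected here is typically exercised: a
guest_memfd backs the memslot and KVM_SET_MEMORY_ATTRIBUTES flips a range to
private, which is the path that reaches the kvm_arch_*_set_memory_attributes()
hooks added below. The slot number, GPA and size are illustrative values only.

/*
 * Illustrative sketch only: VMM-side use of the generic
 * guest_memfd/memory-attributes uAPI. Slot number, GPA and size
 * are made-up example values.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int map_private_region(int vm_fd, uint64_t gpa, uint64_t size,
                              void *host_va)
{
        /* Create a guest_memfd to hold the private pages. */
        struct kvm_create_guest_memfd gmem = { .size = size };
        int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        if (gmem_fd < 0)
                return -1;

        /* Back the slot with both a shared mapping and the guest_memfd. */
        struct kvm_userspace_memory_region2 region = {
                .slot = 0,
                .flags = KVM_MEM_GUEST_MEMFD,
                .guest_phys_addr = gpa,
                .memory_size = size,
                .userspace_addr = (uint64_t)host_va,
                .guest_memfd = gmem_fd,
                .guest_memfd_offset = 0,
        };
        if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region) < 0)
                return -1;

        /*
         * Mark the range private; this triggers the
         * kvm_arch_{pre,post}_set_memory_attributes() callbacks.
         */
        struct kvm_memory_attributes attrs = {
                .address = gpa,
                .size = size,
                .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
        };
        return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}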
---
 arch/arm64/include/asm/kvm_host.h |  6 ++++++
 arch/arm64/kvm/Kconfig            |  1 +
 arch/arm64/kvm/mmu.c              | 22 ++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 804da7337439..b7ad94149c61 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1302,6 +1302,12 @@ struct kvm *kvm_arch_alloc_vm(void);

 #define vcpu_is_protected(vcpu)        kvm_vm_is_protected((vcpu)->kvm)

+#ifdef CONFIG_KVM_PRIVATE_MEM
+#define kvm_arch_has_private_mem(kvm)  ((kvm)->arch.is_realm)
+#else
+#define kvm_arch_has_private_mem(kvm)  false
+#endif
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 58f09370d17e..8da57e74c86a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -37,6 +37,7 @@ menuconfig KVM
        select HAVE_KVM_VCPU_RUN_PID_CHANGE
        select SCHED_INFO
        select GUEST_PERF_EVENTS if PERF_EVENTS
+       select KVM_GENERIC_PRIVATE_MEM
        help
          Support hosting virtualized guest machines.

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b3793c2b8499..e693887af424 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2141,6 +2141,28 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
        return ret;
 }

+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+                                        struct kvm_gfn_range *range)
+{
+       WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm));
+       return false;
+}
+
+bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+                                         struct kvm_gfn_range *range)
+{
+       if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+               return false;
+
+       if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+               range->only_shared = true;
+       kvm_unmap_gfn_range(kvm, range);
+
+       return false;
+}
+#endif
+
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 }