From patchwork Wed Sep 25 05:06:48 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 11160029
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com,
    sukadev@linux.vnet.ibm.com, cclaudio@linux.ibm.com, hch@lst.de
Subject: [PATCH v9 7/8] KVM: PPC: Support reset of secure guest
Date: Wed, 25 Sep 2019 10:36:48 +0530
Message-Id: <20190925050649.14926-8-bharata@linux.ibm.com>
In-Reply-To: <20190925050649.14926-1-bharata@linux.ibm.com>
References: <20190925050649.14926-1-bharata@linux.ibm.com>

Add support for resetting a secure guest via a new ioctl, KVM_PPC_SVM_OFF.
This ioctl will be issued by QEMU during reset and includes the following
steps:

- Ask UV to terminate the guest via the UV_SVM_TERMINATE ucall.
- Unpin the VPA pages so that they can be migrated back to the secure side
  when the guest becomes secure again. This is required because pinned
  pages can't be migrated.
- Reinitialize the guest's partition-scoped page tables. These are freed
  when the guest becomes secure (H_SVM_INIT_DONE).
- Release all device pages of the secure guest.

After these steps, the guest is ready to issue the UV_ESM call once again
to switch to secure mode.
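As an illustration (a minimal sketch, not part of this patch), a VMM such as
QEMU could drive the new ioctl from its machine-reset path roughly as below.
The vm_fd plumbing and error reporting are assumptions made for the example;
only KVM_PPC_SVM_OFF itself and its documented EINVAL/ENOMEM error codes come
from this patch, and the definition requires kernel headers that carry it.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* assumes headers that already define KVM_PPC_SVM_OFF */

/* Illustrative helper, called with the KVM VM file descriptor during reset. */
static int reset_secure_guest(int vm_fd)
{
	/* No-op (returns 0) when the guest is not running in secure mode. */
	if (ioctl(vm_fd, KVM_PPC_SVM_OFF) < 0) {
		/*
		 * EINVAL: ultravisor failed to terminate the secure guest.
		 * ENOMEM: hypervisor could not allocate new radix page tables.
		 */
		fprintf(stderr, "KVM_PPC_SVM_OFF failed: %s\n", strerror(errno));
		return -errno;
	}
	/* The guest can now issue UV_ESM again when it reboots. */
	return 0;
}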
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Sukadev Bhattiprolu
	[Implementation of uv_svm_terminate() and its call from guest shutdown path]
Signed-off-by: Ram Pai
	[Unpinning of VPA pages]
---
 Documentation/virt/kvm/api.txt              | 19 ++++++
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |  7 ++
 arch/powerpc/include/asm/kvm_ppc.h          |  2 +
 arch/powerpc/include/asm/ultravisor-api.h   |  1 +
 arch/powerpc/include/asm/ultravisor.h       |  5 ++
 arch/powerpc/kvm/book3s_hv.c                | 74 +++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 60 +++++++++++++++++
 arch/powerpc/kvm/powerpc.c                  | 12 ++++
 include/uapi/linux/kvm.h                    |  1 +
 9 files changed, 181 insertions(+)

diff --git a/Documentation/virt/kvm/api.txt b/Documentation/virt/kvm/api.txt
index 2d067767b617..8e7a02e547e9 100644
--- a/Documentation/virt/kvm/api.txt
+++ b/Documentation/virt/kvm/api.txt
@@ -4111,6 +4111,25 @@ Valid values for 'action':
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1
 
+4.121 KVM_PPC_SVM_OFF
+
+Capability: basic
+Architectures: powerpc
+Type: vm ioctl
+Parameters: none
+Returns: 0 on successful completion,
+Errors:
+  EINVAL:  if ultravisor failed to terminate the secure guest
+  ENOMEM:  if hypervisor failed to allocate new radix page tables for guest
+
+This ioctl is used to turn off the secure mode of the guest or transition
+the guest from secure mode to normal mode. This is invoked when the guest
+is reset. This has no effect if called for a normal guest.
+
+This ioctl issues an ultravisor call to terminate the secure guest,
+unpins the VPA pages, reinitializes guest's partition scoped page
+tables and releases all the device pages that are used to track the
+secure pages by hypervisor.
+
 5. The kvm_run structure
 ------------------------
 
diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
index fc924ef00b91..6b8cc8edd0ab 100644
--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
@@ -13,6 +13,8 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 				    unsigned long page_shift);
 unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
 unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
+void kvmppc_uvmem_free_memslot_pfns(struct kvm *kvm,
+				    struct kvm_memslots *slots);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
@@ -37,5 +39,10 @@ static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
 	return H_UNSUPPORTED;
 }
+
+static inline void kvmppc_uvmem_free_memslot_pfns(struct kvm *kvm,
+						  struct kvm_memslots *slots)
+{
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 2484e6a8f5ca..e4093d067354 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -177,6 +177,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
 extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
 extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
 extern void kvmppc_setup_partition_table(struct kvm *kvm);
+extern int kvmppc_reinit_partition_table(struct kvm *kvm);
 
 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
 				struct kvm_create_spapr_tce_64 *args);
@@ -321,6 +322,7 @@ struct kvmppc_ops {
 			       int size);
 	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 			      int size);
+	int (*svm_off)(struct kvm *kvm);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index cf200d4ce703..3a27a0c0be05 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -30,5 +30,6 @@
 #define UV_PAGE_IN 0xF128
 #define UV_PAGE_OUT 0xF12C
 #define UV_PAGE_INVAL 0xF138
+#define UV_SVM_TERMINATE 0xF13C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index b333241bbe4c..754a37de646d 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -62,4 +62,9 @@ static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
 	return ucall_norets(UV_PAGE_INVAL, lpid, gpa, page_shift);
 }
 
+static inline int uv_svm_terminate(u64 lpid)
+{
+	return ucall_norets(UV_SVM_TERMINATE, lpid);
+}
+
 #endif /* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index c5320cc0a534..d0014ca8d66a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2433,6 +2433,15 @@ static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 					vpa->dirty);
 }
 
+static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
+{
+	unpin_vpa(kvm, vpa);
+	vpa->gpa = 0;
+	vpa->pinned_addr = NULL;
+	vpa->dirty = false;
+	vpa->update_pending = 0;
+}
+
 static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
 {
 	spin_lock(&vcpu->arch.vpa_update_lock);
@@ -4593,6 +4602,22 @@ void kvmppc_setup_partition_table(struct kvm *kvm)
 	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
 }
 
+/*
+ * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
+ * guest is converted back to normal guest.
+ */
+int kvmppc_reinit_partition_table(struct kvm *kvm)
+{
+	int ret;
+
+	ret = kvmppc_init_vm_radix(kvm);
+	if (ret)
+		return ret;
+
+	kvmppc_setup_partition_table(kvm);
+	return 0;
+}
+
 /*
  * Set up HPT (hashed page table) and RMA (real-mode area).
  * Must be called with kvm->arch.mmu_setup_lock held.
@@ -4981,6 +5006,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 		if (nesting_enabled(kvm))
 			kvmhv_release_all_nested(kvm);
 		kvm->arch.process_table = 0;
+		uv_svm_terminate(kvm->arch.lpid);
 		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
 	}
 	kvmppc_free_lpid(kvm->arch.lpid);
@@ -5422,6 +5448,53 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 	return rc;
 }
 
+/*
+ * IOCTL handler to turn off secure mode of guest
+ *
+ * - Issue ucall to terminate the guest on the UV side
+ * - Unpin the VPA pages (Enables these pages to be migrated back
+ *   when VM becomes secure again)
+ * - Recreate partition table as the guest is transitioning back to
+ *   normal mode
+ * - Release all device pages
+ */
+static int kvmhv_svm_off(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	int srcu_idx;
+	int ret = 0;
+	int i;
+
+	if (!kvmppc_is_guest_secure(kvm))
+		return ret;
+
+	ret = uv_svm_terminate(kvm->arch.lpid);
+	if (ret != U_SUCCESS) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		spin_lock(&vcpu->arch.vpa_update_lock);
+		unpin_vpa_reset(kvm, &vcpu->arch.dtl);
+		unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
+		unpin_vpa_reset(kvm, &vcpu->arch.vpa);
+		spin_unlock(&vcpu->arch.vpa_update_lock);
+	}
+
+	ret = kvmppc_reinit_partition_table(kvm);
+	if (ret)
+		goto out;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+		kvmppc_uvmem_free_memslot_pfns(kvm, __kvm_memslots(kvm, i));
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	kvm->arch.secure_guest = 0;
+out:
+	return ret;
+}
+
 static struct kvmppc_ops kvm_ops_hv = {
 	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
 	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
@@ -5464,6 +5537,7 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.enable_nested = kvmhv_enable_nested,
 	.load_from_eaddr = kvmhv_load_from_eaddr,
 	.store_to_eaddr = kvmhv_store_to_eaddr,
+	.svm_off = kvmhv_svm_off,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 9250d1917a45..14494f6af949 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include
 
 static struct dev_pagemap kvmppc_uvmem_pgmap;
 static unsigned long *kvmppc_uvmem_pfn_bitmap;
@@ -122,9 +123,68 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 		return H_UNSUPPORTED;
 
 	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
+	if (kvm_is_radix(kvm)) {
+		kvmppc_free_radix(kvm);
+		pr_info("LPID %d went secure, freed HV side radix pgtables\n",
+			kvm->arch.lpid);
+	}
+
 	return H_SUCCESS;
 }
 
+/*
+ * Drop device pages that we maintain for the secure guest
+ *
+ * We first mark the pages to be skipped from UV_PAGE_OUT when there
+ * is HV side fault on these pages. Next we *get* these pages, forcing
+ * fault on them, do fault time migration to replace the device PTEs in
+ * QEMU page table with normal PTEs from newly allocated pages.
+ */
+static void kvmppc_uvmem_drop_pages(struct kvm_memory_slot *free,
+				    struct kvm *kvm)
+{
+	int i;
+	struct kvmppc_uvmem_page_pvt *pvt;
+	unsigned long pfn;
+
+	for (i = 0; i < free->npages; i++) {
+		unsigned long *rmap = &free->arch.rmap[i];
+		struct page *uvmem_page;
+
+		mutex_lock(&kvm->arch.uvmem_lock);
+		if (kvmppc_rmap_type(rmap) != KVMPPC_RMAP_UVMEM_PFN) {
+			mutex_unlock(&kvm->arch.uvmem_lock);
+			continue;
+		}
+
+		uvmem_page = pfn_to_page(*rmap & ~KVMPPC_RMAP_UVMEM_PFN);
+		pvt = uvmem_page->zone_device_data;
+		pvt->skip_page_out = true;
+		mutex_unlock(&kvm->arch.uvmem_lock);
+
+		pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
+		if (is_error_noslot_pfn(pfn))
+			continue;
+		kvm_release_pfn_clean(pfn);
+	}
+}
+
+/*
+ * Called from KVM_PPC_SVM_OFF ioctl when secure guest is reset
+ *
+ * UV has already cleaned up the guest, we release any device pages
+ * that we maintain
+ */
+void kvmppc_uvmem_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *memslot;
+
+	if (!slots)
+		return;
+
+	kvm_for_each_memslot(memslot, slots)
+		kvmppc_uvmem_drop_pages(memslot, kvm);
+}
+
 /*
  * Get a free device PFN from the pool
  *
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 3e566c2e6066..3f7393177fba 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -31,6 +31,8 @@
 #include
 #include
 #endif
+#include
+#include
 
 #include "timing.h"
 #include "irq.h"
@@ -2410,6 +2412,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		r = -EFAULT;
 		break;
 	}
+	case KVM_PPC_SVM_OFF: {
+		struct kvm *kvm = filp->private_data;
+
+		r = 0;
+		if (!kvm->arch.kvm_ops->svm_off)
+			goto out;
+
+		r = kvm->arch.kvm_ops->svm_off(kvm);
+		break;
+	}
 	default: {
 		struct kvm *kvm = filp->private_data;
 		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5e3f12d5359e..c2393a347680 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1332,6 +1332,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_GET_CPU_CHAR _IOR(KVMIO, 0xb1, struct kvm_ppc_cpu_char)
 /* Available with KVM_CAP_PMU_EVENT_FILTER */
 #define KVM_SET_PMU_EVENT_FILTER _IOW(KVMIO, 0xb2, struct kvm_pmu_event_filter)
+#define KVM_PPC_SVM_OFF _IO(KVMIO, 0xb3)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE _IOWR(KVMIO, 0xe0, struct kvm_create_device)