From patchwork Wed Nov 21 05:28:08 2018
X-Patchwork-Id: 10691979
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com
Subject: [RFC PATCH v2 1/4] kvmppc: HMM backend driver to manage pages of secure guest
Date: Wed, 21 Nov 2018 10:58:08 +0530
Message-Id: <20181121052811.4819-2-bharata@linux.ibm.com>
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>

HMM backend driver for KVM PPC that manages page transitions of a
secure guest via the H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls:

H_SVM_PAGE_IN: Move the contents of a normal page to a secure page
H_SVM_PAGE_OUT: Move the contents of a secure page to a normal page
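Both hcalls follow the normal hcall calling convention (opcode in r3,
arguments in r4 and up), as handled by kvmppc_pseries_do_hcall() in the
diff below. As a minimal sketch, a page-in request from the caller's
(Ultravisor's) side would look roughly like this; do_hcall() is a
hypothetical stand-in for whatever mechanism UV uses to issue an hcall
to HV, since that plumbing is not part of this series:

	/*
	 * Hypothetical UV-side helper: ask HV to migrate the normal page
	 * backing guest physical address 'gpa' into secure memory. The
	 * arguments mirror kvmppc_h_svm_page_in(): gpa, flags, page_shift.
	 */
	static long uv_ask_hv_page_in(unsigned long gpa)
	{
		return do_hcall(H_SVM_PAGE_IN, gpa, 0 /* flags */, PAGE_SHIFT);
	}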
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h    |   4 +
 arch/powerpc/include/asm/kvm_host.h  |  14 +
 arch/powerpc/include/asm/kvm_ppc.h   |  24 ++
 arch/powerpc/include/asm/ucall-api.h |  22 ++
 arch/powerpc/kvm/Makefile            |   3 +
 arch/powerpc/kvm/book3s_hv.c         |  19 ++
 arch/powerpc/kvm/book3s_hv_hmm.c     | 457 +++++++++++++++++++++++++++
 7 files changed, 543 insertions(+)
 create mode 100644 arch/powerpc/include/asm/ucall-api.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 33a4fc891947..c900f47c0a9f 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -336,6 +336,10 @@
 #define H_ENTER_NESTED		0xF804
 #define H_TLB_INVALIDATE	0xF808
 
+/* Platform-specific hcalls used by the Ultravisor */
+#define H_SVM_PAGE_IN		0xFF00
+#define H_SVM_PAGE_OUT		0xFF04
+
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
 #define H_SET_MODE_RESOURCE_SET_DAWR	2
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index fac6f631ed29..729bdea22250 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -842,4 +842,18 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+#ifdef CONFIG_PPC_SVM
+extern int kvmppc_hmm_init(void);
+extern void kvmppc_hmm_free(void);
+extern void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free);
+#else
+static inline int kvmppc_hmm_init(void)
+{
+	return 0;
+}
+
+static inline void kvmppc_hmm_free(void) {}
+static inline void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free) {}
+#endif /* CONFIG_PPC_SVM */
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 9b89b1918dfc..659c80982497 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -908,4 +908,28 @@ static inline ulong kvmppc_get_ea_indexed(struct kvm_vcpu *vcpu, int ra, int rb)
 
 extern void xics_wake_cpu(int cpu);
 
+#ifdef CONFIG_PPC_SVM
+extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
+					  unsigned long gpa,
+					  unsigned long flags,
+					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
+					   unsigned long gpa,
+					   unsigned long flags,
+					   unsigned long page_shift);
+#else
+static inline unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+#endif
 #endif /* __POWERPC_KVM_PPC_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
new file mode 100644
index 000000000000..a84dc2abd172
--- /dev/null
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_UCALL_API_H
+#define _ASM_POWERPC_UCALL_API_H
+
+#define U_SUCCESS 0
+
+/*
+ * TODO: Dummy uvcalls, will be replaced by real calls
+ */
+static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 flags,
+			     u64 page_shift)
+{
+	return U_SUCCESS;
+}
+
+static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
+			      u64 page_shift)
+{
+	return U_SUCCESS;
+}
+
+#endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 64f1135e7732..a9547318662e 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -76,6 +76,9 @@ kvm-hv-y += \
 	book3s_64_mmu_radix.o \
 	book3s_hv_nested.o
 
+kvm-hv-$(CONFIG_PPC_SVM) += \
+	book3s_hv_hmm.o
+
 kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
 	book3s_hv_tm.o
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index d65b961661fb..7e413605e7c4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include
 
 #include "book3s.h"
 
@@ -991,6 +992,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 		if (nesting_enabled(vcpu->kvm))
 			ret = kvmhv_do_nested_tlbie(vcpu);
 		break;
+	case H_SVM_PAGE_IN:
+		ret = kvmppc_h_svm_page_in(vcpu->kvm,
+					   kvmppc_get_gpr(vcpu, 4),
+					   kvmppc_get_gpr(vcpu, 5),
+					   kvmppc_get_gpr(vcpu, 6));
+		break;
+	case H_SVM_PAGE_OUT:
+		ret = kvmppc_h_svm_page_out(vcpu->kvm,
+					    kvmppc_get_gpr(vcpu, 4),
+					    kvmppc_get_gpr(vcpu, 5),
+					    kvmppc_get_gpr(vcpu, 6));
+		break;
 	default:
 		return RESUME_HOST;
@@ -4345,6 +4358,7 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
 					struct kvm_memory_slot *dont)
 {
 	if (!dont || free->arch.rmap != dont->arch.rmap) {
+		kvmppc_hmm_release_pfns(free);
 		vfree(free->arch.rmap);
 		free->arch.rmap = NULL;
 	}
@@ -5357,11 +5371,16 @@ static int kvmppc_book3s_init_hv(void)
 		no_mixing_hpt_and_radix = true;
 	}
 
+	r = kvmppc_hmm_init();
+	if (r < 0)
+		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
+
 	return r;
 }
 
 static void kvmppc_book3s_exit_hv(void)
 {
+	kvmppc_hmm_free();
 	kvmppc_free_host_rm_ops();
 	if (kvmppc_radix_possible())
 		kvmppc_radix_exit();
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
new file mode 100644
index 000000000000..5f2a924a4f16
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -0,0 +1,457 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HMM driver to manage page migration between normal and secure
+ * memory.
+ *
+ * Based on Jérôme Glisse's HMM dummy driver.
+ *
+ * Copyright 2018 Bharata B Rao, IBM Corp.
+ */
+
+/*
+ * A pseries guest can be run as a secure guest on Ultravisor-enabled
+ * POWER platforms. On such platforms, this driver will be used to manage
+ * the movement of guest pages between the normal memory managed by the
+ * hypervisor (HV) and the secure memory managed by the Ultravisor (UV).
+ *
+ * Private ZONE_DEVICE memory equal to the amount of secure memory
+ * available in the platform for running secure guests is created
+ * via an HMM device. The movement of pages between normal and secure
+ * memory is done by the ->alloc_and_copy() callback routine of
+ * migrate_vma().
+ *
+ * The page-in or page-out requests from UV will come to HV as hcalls and
+ * HV will call back into UV via uvcalls to satisfy these page requests.
+ *
+ * For each page that gets moved into secure memory, an HMM PFN is used
+ * on the HV side and the HMM migration PTE corresponding to that PFN is
+ * populated in the QEMU page tables.
+ */
+
+#include
+#include
+#include
+#include
+
+struct kvmppc_hmm_device {
+	struct hmm_device *device;
+	struct hmm_devmem *devmem;
+	unsigned long *pfn_bitmap;
+};
+
+static struct kvmppc_hmm_device kvmppc_hmm;
+spinlock_t kvmppc_hmm_lock;
+
+struct kvmppc_hmm_page_pvt {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+};
+
+struct kvmppc_hmm_migrate_args {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+	unsigned long page_shift;
+};
+
+#define KVMPPC_PFN_HMM		(0x1ULL << 61)
+
+static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
+{
+	return !!(pfn & KVMPPC_PFN_HMM);
+}
+
+void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free)
+{
+	int i;
+
+	for (i = 0; i < free->npages; i++) {
+		unsigned long *rmap = &free->arch.rmap[i];
+
+		if (kvmppc_is_hmm_pfn(*rmap))
+			put_page(pfn_to_page(*rmap & ~KVMPPC_PFN_HMM));
+	}
+}
+
+/*
+ * Get a free HMM PFN from the pool.
+ *
+ * Called when a normal page is moved to secure memory (UV_PAGE_IN). The
+ * HMM PFN will be used to keep track of the secure page on the HV side.
+ */
+/*
+ * TODO: In this and subsequent functions, we pass around and access
+ * individual elements of kvm_memory_slot->arch.rmap[] without any
+ * protection. Figure out the safe way to access this.
+ */
+static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
+					unsigned long gpa, unsigned int lpid)
+{
+	struct page *dpage = NULL;
+	unsigned long bit, hmm_pfn;
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	if (kvmppc_is_hmm_pfn(*rmap))
+		return NULL;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
+	if (bit >= nr_pfns)
+		goto out;
+
+	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
+	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
+	dpage = pfn_to_page(hmm_pfn);
+
+	if (!trylock_page(dpage))
+		goto out_clear;
+
+	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
+	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
+	if (!pvt)
+		goto out_unlock;
+	pvt->rmap = rmap;
+	pvt->gpa = gpa;
+	pvt->lpid = lpid;
+	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	get_page(dpage);
+	return dpage;
+
+out_unlock:
+	unlock_page(dpage);
+out_clear:
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
+out:
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	return NULL;
+}
+
+/*
+ * Release the HMM PFN back to the pool.
+ *
+ * Called when a secure page becomes a normal page during UV_PAGE_OUT.
+ */
+static void kvmppc_hmm_put_page(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
+	hmm_devmem_page_set_drvdata(page, 0);
+
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
+	*(pvt->rmap) = 0;
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	kfree(pvt);
+}
+
+/*
+ * migrate_vma() callback to move a page from normal memory to secure
+ * memory.
+ *
+ * We don't capture the return value of uv_page_in() here because when
+ * UV asks for a page and then fails to copy it over, we don't care.
+ */
+static void
+kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
+				  const unsigned long *src_pfn,
+				  unsigned long *dst_pfn,
+				  unsigned long start,
+				  unsigned long end,
+				  void *private)
+{
+	struct kvmppc_hmm_migrate_args *args = private;
+	struct page *spage = migrate_pfn_to_page(*src_pfn);
+	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
+	struct page *dpage;
+
+	*dst_pfn = 0;
+	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+
+	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
+	if (!dpage)
+		return;
+
+	if (spage)
+		uv_page_in(args->lpid, pfn << args->page_shift,
+			   args->gpa, 0, args->page_shift);
+
+	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
+		   MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to update device
+ * page tables after successful migration. We have nothing to do here.
+ *
+ * Also, as we don't care whether UV successfully copied over the page in
+ * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
+ * dst_pfn for any errors here.
+ */
+static void
+kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
+				    const unsigned long *src_pfn,
+				    const unsigned long *dst_pfn,
+				    unsigned long start,
+				    unsigned long end,
+				    void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
+};
+
+/*
+ * Move a page from normal memory to secure memory.
+ */
+unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	unsigned long src_pfn, dst_pfn;
+	struct kvmppc_hmm_migrate_args args;
+	struct mm_struct *mm = get_task_mm(current);
+	struct vm_area_struct *vma;
+	int srcu_idx;
+	unsigned long gfn = gpa >> page_shift;
+	struct kvm_memory_slot *slot;
+	unsigned long *rmap;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (kvm_is_error_hva(addr))
+		return H_PARAMETER;
+
+	end = addr + (1UL << page_shift);
+
+	if (flags)
+		return H_P2;
+
+	args.rmap = rmap;
+	args.lpid = kvm->arch.lpid;
+	args.gpa = gpa;
+	args.page_shift = page_shift;
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma_intersection(mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, &args);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&mm->mmap_sem);
+	return ret;
+}
+
+static void
+kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
+					const unsigned long *src_pfn,
+					unsigned long *dst_pfn,
+					unsigned long start,
+					unsigned long end,
+					void *private)
+{
+	struct page *dpage, *spage;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	*dst_pfn = MIGRATE_PFN_ERROR;
+	spage = migrate_pfn_to_page(*src_pfn);
+	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+	if (!is_zone_device_page(spage))
+		return;
+	dpage = hmm_vma_alloc_locked_page(vma, start);
+	if (!dpage)
+		return;
+	pvt = (struct kvmppc_hmm_page_pvt *)
+	       hmm_devmem_page_get_drvdata(spage);
+
+	pfn = page_to_pfn(dpage);
+	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+			  pvt->gpa, 0, PAGE_SHIFT);
+	if (ret == U_SUCCESS)
+		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to update device
+ * page tables after successful migration. We have nothing to do here.
+ */
+static void
+kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
+					  const unsigned long *src_pfn,
+					  const unsigned long *dst_pfn,
+					  unsigned long start,
+					  unsigned long end,
+					  void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
+};
+
+/*
+ * Fault handler callback that is invoked when HV touches any page that
+ * has been moved to secure memory. We ask UV to give back the page by
+ * issuing a UV_PAGE_OUT uvcall.
+ */
+static int kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
+				   struct vm_area_struct *vma,
+				   unsigned long addr,
+				   const struct page *page,
+				   unsigned int flags,
+				   pmd_t *pmdp)
+{
+	unsigned long end = addr + PAGE_SIZE;
+	unsigned long src_pfn, dst_pfn = 0;
+
+	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			&src_pfn, &dst_pfn, NULL))
+		return VM_FAULT_SIGBUS;
+	if (dst_pfn == MIGRATE_PFN_ERROR)
+		return VM_FAULT_SIGBUS;
+	return 0;
+}
+
+static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
+				   struct page *page)
+{
+	kvmppc_hmm_put_page(page);
+}
+
+static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
+	.free = kvmppc_hmm_devmem_free,
+	.fault = kvmppc_hmm_devmem_fault,
+};
+
+/*
+ * Move a page from secure memory to normal memory.
+ */
+unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	struct mm_struct *mm = get_task_mm(current);
+	struct vm_area_struct *vma;
+	unsigned long src_pfn, dst_pfn = 0;
+	int srcu_idx;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (kvm_is_error_hva(addr))
+		return H_PARAMETER;
+
+	end = addr + (1UL << page_shift);
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma_intersection(mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, NULL);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&mm->mmap_sem);
+	return ret;
+}
+
+/*
+ * TODO: The number of secure pages and the page size order would probably
+ * come via DT or via some uvcall. Return 8G for now.
+ */
+static unsigned long kvmppc_get_secmem_size(void)
+{
+	return (1UL << 33);
+}
+
+static int kvmppc_hmm_pages_init(void)
+{
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+
+	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
+					sizeof(unsigned long), GFP_KERNEL);
+	if (!kvmppc_hmm.pfn_bitmap)
+		return -ENOMEM;
+
+	spin_lock_init(&kvmppc_hmm_lock);
+
+	return 0;
+}
+
+int kvmppc_hmm_init(void)
+{
+	int ret = 0;
+	unsigned long size = kvmppc_get_secmem_size();
+
+	kvmppc_hmm.device = hmm_device_new(NULL);
+	if (IS_ERR(kvmppc_hmm.device)) {
+		ret = PTR_ERR(kvmppc_hmm.device);
+		goto out;
+	}
+
+	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
+					   &kvmppc_hmm.device->device, size);
+	if (IS_ERR(kvmppc_hmm.devmem)) {
+		ret = PTR_ERR(kvmppc_hmm.devmem);
+		goto out_device;
+	}
+	ret = kvmppc_hmm_pages_init();
+	if (ret < 0)
+		goto out_devmem;
+
+	return ret;
+
+out_devmem:
+	hmm_devmem_remove(kvmppc_hmm.devmem);
+out_device:
+	hmm_device_put(kvmppc_hmm.device);
+out:
+	return ret;
+}
+
+void kvmppc_hmm_free(void)
+{
+	kfree(kvmppc_hmm.pfn_bitmap);
+	hmm_devmem_remove(kvmppc_hmm.devmem);
+	hmm_device_put(kvmppc_hmm.device);
+}

From patchwork Wed Nov 21 05:28:09 2018
X-Patchwork-Id: 10691981
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com
Subject: [RFC PATCH v2 2/4] kvmppc: Add support for shared pages in HMM driver
Date: Wed, 21 Nov 2018 10:58:09 +0530
Message-Id: <20181121052811.4819-3-bharata@linux.ibm.com>
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>

A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Add support for such shared pages in the HMM
driver.
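As a sketch of the resulting protocol, a request to share a page
differs from a normal page-in only in the flags argument; do_hcall()
below is again a hypothetical stand-in for the Ultravisor's hcall path:

	/*
	 * Hypothetical UV-side helper: request that the page at 'gpa' be
	 * shared between the guest and HV rather than moved to secure
	 * memory. Only the flags argument differs from a normal page-in.
	 */
	static long uv_ask_hv_share_page(unsigned long gpa)
	{
		return do_hcall(H_SVM_PAGE_IN, gpa, H_PAGE_IN_SHARED,
				PAGE_SHIFT);
	}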
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_hmm.c  | 57 ++++++++++++++++++++++++++++---
 2 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index c900f47c0a9f..34791c627f87 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -336,6 +336,9 @@
 #define H_ENTER_NESTED		0xF804
 #define H_TLB_INVALIDATE	0xF808
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED	0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xFF00
 #define H_SVM_PAGE_OUT		0xFF04
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 5f2a924a4f16..2730ab832330 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -45,6 +45,7 @@ struct kvmppc_hmm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -212,6 +213,44 @@ static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
 	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
 };
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the HMM fault handler to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long *rmap, unsigned long gpa,
+		  unsigned long addr, unsigned long page_shift)
+{
+	int ret;
+	unsigned int lpid = kvm->arch.lpid;
+	struct page *hmm_page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int srcu_idx;
+
+	if (kvmppc_is_hmm_pfn(*rmap)) {
+		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+		pvt = (struct kvmppc_hmm_page_pvt *)
+			hmm_devmem_page_get_drvdata(hmm_page);
+		pvt->skip_page_out = true;
+	}
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (is_error_noslot_pfn(pfn))
+		return H_PARAMETER;
+
+	ret = uv_page_in(lpid, pfn << page_shift, gpa, 0, page_shift);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move a page from normal memory to secure memory.
  */
@@ -243,9 +282,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 
 	end = addr + (1UL << page_shift);
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, rmap, gpa, addr, page_shift);
+
 	args.rmap = rmap;
 	args.lpid = kvm->arch.lpid;
 	args.gpa = gpa;
@@ -292,8 +334,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 		hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+
+	/*
+	 * This same alloc_and_copy() callback is used in two cases:
+	 * - When HV touches a secure page, for which we do a page-out
+	 * - When a secure page is converted to a shared page, we touch
+	 *   the page to essentially discard the HMM page. In this case we
+	 *   skip the page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }

From patchwork Wed Nov 21 05:28:10 2018
X-Patchwork-Id: 10691985
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com
Subject: [RFC PATCH v2 3/4] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
Date: Wed, 21 Nov 2018 10:58:10 +0530
Message-Id: <20181121052811.4819-4-bharata@linux.ibm.com>
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>

H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

During early guest init, these hcalls will be issued by UV. As part of
H_SVM_INIT_START, the guest's memslots are registered with UV.
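To make the intended flow concrete, the securing sequence driven by UV
might look roughly like the sketch below. This is purely illustrative:
do_hcall() remains a hypothetical stand-in for UV's hcall path, the
start_gpa/end_gpa range is an assumption, and which pages UV secures up
front is UV policy, not something this series defines:

	/* Hypothetical sketch of UV securing a guest; errors elided. */
	static int uv_secure_guest(unsigned long start_gpa,
				   unsigned long end_gpa)
	{
		unsigned long gpa;

		/* Registers the guest's memslots with UV. */
		if (do_hcall(H_SVM_INIT_START) != H_SUCCESS)
			return -1;

		/* Page in whatever UV decides to secure up front. */
		for (gpa = start_gpa; gpa < end_gpa; gpa += PAGE_SIZE)
			do_hcall(H_SVM_PAGE_IN, gpa, 0, PAGE_SHIFT);

		/* Concludes the securing of the guest. */
		return do_hcall(H_SVM_INIT_DONE) == H_SUCCESS ? 0 : -1;
	}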
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h    |  2 ++
 arch/powerpc/include/asm/kvm_host.h  |  1 +
 arch/powerpc/include/asm/kvm_ppc.h   | 12 ++++++++++
 arch/powerpc/include/asm/ucall-api.h |  6 +++++
 arch/powerpc/kvm/book3s_hv.c         |  6 +++++
 arch/powerpc/kvm/book3s_hv_hmm.c     | 33 ++++++++++++++++++++++++++++
 6 files changed, 60 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 34791c627f87..4872b044cca8 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -342,6 +342,8 @@
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xFF00
 #define H_SVM_PAGE_OUT		0xFF04
+#define H_SVM_INIT_START	0xFF08
+#define H_SVM_INIT_DONE		0xFF0C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 729bdea22250..174aa7e30ff7 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -296,6 +296,7 @@ struct kvm_arch {
 	struct dentry *htab_dentry;
 	struct dentry *radix_dentry;
 	struct kvm_resize_hpt *resize_hpt; /* protected by kvm->lock */
+	bool secure; /* Indicates H_SVM_INIT_START has been called */
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	struct mutex hpt_mutex;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 659c80982497..5f4b6a73789f 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -917,6 +917,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					   unsigned long gpa,
 					   unsigned long flags,
 					   unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
+extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
@@ -931,5 +933,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 {
 	return H_UNSUPPORTED;
 }
+
+static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
 #endif
 #endif /* __POWERPC_KVM_PPC_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index a84dc2abd172..347637995b1b 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -19,4 +19,10 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
 	return U_SUCCESS;
 }
 
+static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
+				       u64 flags, u64 slotid)
+{
+	return 0;
+}
+
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7e413605e7c4..d7aa85330016 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1004,6 +1004,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					    kvmppc_get_gpr(vcpu, 5),
 					    kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
 	default:
 		return RESUME_HOST;
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 2730ab832330..e138b0edee9f 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -55,6 +55,39 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret = H_SUCCESS;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0) {
+			ret = H_PARAMETER; /* TODO: proper retval */
+			goto out;
+		}
+	}
+	kvm->arch.secure = true;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
+unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (kvm->arch.secure)
+		return H_SUCCESS;
+	else
+		return H_UNSUPPORTED;
+}
+
 #define KVMPPC_PFN_HMM		(0x1ULL << 61)
 
 static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)

From patchwork Wed Nov 21 05:28:11 2018
X-Patchwork-Id: 10691983
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com
Subject: [RFC PATCH v2 4/4] kvmppc: Handle memory plug/unplug to secure VM
Date: Wed, 21 Nov 2018 10:58:11 +0530
Message-Id: <20181121052811.4819-5-bharata@linux.ibm.com>
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>

Register the new memslot with UV during plug and unregister the memslot
during unplug. This needs a new kvm_mr_change argument to
kvm_ops->commit_memory_region().
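For readability, the resulting call chain when QEMU plugs or unplugs
memory of a secure VM is summarized below; this is distilled from the
diff that follows rather than new code:

	/*
	 * Memory plug/unplug flow for a secure VM after this patch:
	 *
	 * kvm_arch_commit_memory_region(kvm, mem, old, new, change)
	 *   -> kvmppc_core_commit_memory_region(kvm, mem, old, new, change)
	 *     -> kvm_ops->commit_memory_region(kvm, mem, old, new, change)
	 *        - KVM_MR_CREATE: uv_register_mem_slot(lpid, start_gpa,
	 *                                              size, 0, slotid)
	 *        - KVM_MR_DELETE: uv_unregister_mem_slot(lpid, slotid)
	 */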
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_ppc.h   |  6 ++++--
 arch/powerpc/include/asm/ucall-api.h |  5 +++++
 arch/powerpc/kvm/book3s.c            |  5 +++--
 arch/powerpc/kvm/book3s_hv.c         | 22 +++++++++++++++++++++-
 arch/powerpc/kvm/book3s_pr.c         |  3 ++-
 arch/powerpc/kvm/powerpc.c           |  2 +-
 6 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 5f4b6a73789f..1ac920f2e18b 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -224,7 +224,8 @@ extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new);
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
 				      struct kvm_ppc_smmu_info *info);
 extern void kvmppc_core_flush_memslot(struct kvm *kvm,
@@ -294,7 +295,8 @@ struct kvmppc_ops {
 	void (*commit_memory_region)(struct kvm *kvm,
 				     const struct kvm_userspace_memory_region *mem,
 				     const struct kvm_memory_slot *old,
-				     const struct kvm_memory_slot *new);
+				     const struct kvm_memory_slot *new,
+				     enum kvm_mr_change change);
 	int (*unmap_hva_range)(struct kvm *kvm, unsigned long start,
 			       unsigned long end);
 	int (*age_hva)(struct kvm *kvm, unsigned long start, unsigned long end);
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index 347637995b1b..02c9be311a4f 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -25,4 +25,9 @@ static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
 	return 0;
 }
 
+static inline int uv_unregister_mem_slot(u64 lpid, u64 dw0)
+{
+	return 0;
+}
+
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index fd9893bc7aa1..a35fb4099094 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -830,9 +830,10 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
-	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new);
+	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
 }
 
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index d7aa85330016..351ce259d8bb 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -75,6 +75,7 @@
 #include
 #include
 #include
+#include
 
 #include "book3s.h"
 
@@ -4390,7 +4391,8 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
 
@@ -4402,6 +4404,24 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	 */
 	if (npages)
 		atomic64_inc(&kvm->arch.mmio_update);
+
+	/*
+	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
+	 */
+	if (!kvm->arch.secure)
+		return;
+
+	/*
+	 * TODO: Handle KVM_MR_MOVE
+	 */
+	if (change == KVM_MR_CREATE) {
+		uv_register_mem_slot(kvm->arch.lpid,
+				     new->base_gfn << PAGE_SHIFT,
+				     new->npages * PAGE_SIZE,
+				     0, new->id);
+	} else if (change == KVM_MR_DELETE) {
+		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+	}
 }
 
 /*
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 4efd65d9e828..3aeb17b88de7 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1913,7 +1913,8 @@ static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	return;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2869a299c4ed..6a7a6a101efd 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -696,7 +696,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	kvmppc_core_commit_memory_region(kvm, mem, old, new);
+	kvmppc_core_commit_memory_region(kvm, mem, old, new, change);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,