From patchwork Wed Jan 30 06:07:23 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10787719
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
 benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
 linuxram@us.ibm.com, sukadev@linux.vnet.ibm.com, Bharata B Rao
Subject: [RFC PATCH v3 1/4] kvmppc: HMM backend driver to manage pages of secure guest
Date: Wed, 30 Jan 2019 11:37:23 +0530
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190130060726.29958-1-bharata@linux.ibm.com>
References: <20190130060726.29958-1-bharata@linux.ibm.com>
Message-Id: <20190130060726.29958-2-bharata@linux.ibm.com>

HMM driver for KVM PPC to manage page transitions of secure guest
via H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.

H_SVM_PAGE_IN: Move the content of a normal page to secure page
H_SVM_PAGE_OUT: Move the content of a secure page to normal page

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h           |   4 +
 arch/powerpc/include/asm/kvm_book3s_hmm.h   |  33 ++
 arch/powerpc/include/asm/kvm_host.h         |  14 +
 arch/powerpc/include/asm/ucall-api.h        |  19 +
 arch/powerpc/include/uapi/asm/uapi_uvcall.h |   3 +
 arch/powerpc/kvm/Makefile                   |   3 +
 arch/powerpc/kvm/book3s_hv.c                |  22 +
 arch/powerpc/kvm/book3s_hv_hmm.c            | 474 ++++++++++++++++++++
 8 files changed, 572 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 463c63a9fcf1..2f6b952deb0f 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,10 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Platform-specific hcalls used by the Ultravisor */
+#define H_SVM_PAGE_IN		0xEF00
+#define H_SVM_PAGE_OUT		0xEF04
+
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
 #define H_SET_MODE_RESOURCE_SET_DAWR	2
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
new file mode 100644
index 000000000000..e61519c17485
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __POWERPC_KVM_PPC_HMM_H__
+#define __POWERPC_KVM_PPC_HMM_H__
+
+#ifdef CONFIG_PPC_KVM_UV
+extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
+					  unsigned int lpid,
+					  unsigned long gra,
+					  unsigned long flags,
+					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
+					   unsigned int lpid,
+					   unsigned long gra,
+					   unsigned long flags,
+					   unsigned long page_shift);
+#else
+static inline unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned int lpid,
+		     unsigned long gra, unsigned long flags,
+		     unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned int lpid,
+		      unsigned long gra, unsigned long flags,
+		      unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+#endif /* CONFIG_PPC_KVM_UV */
+#endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 162005ae50e2..15ea03852bf1 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -846,4 +846,18 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+#ifdef CONFIG_PPC_KVM_UV
+extern int kvmppc_hmm_init(void);
+extern void kvmppc_hmm_free(void);
+extern void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free);
+#else
+static inline int kvmppc_hmm_init(void)
+{
+	return 0;
+}
+
+static inline void kvmppc_hmm_free(void) {}
+static inline void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free) {}
+#endif /* CONFIG_PPC_KVM_UV */
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index 82d8edd1e409..6c3bddc97b55 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -9,6 +9,8 @@
 #include
 #include
 
+#define U_SUCCESS	0
+
 extern unsigned int smf_state;
 
 static inline bool smf_enabled(void)
 {
@@ -54,5 +56,22 @@ static inline int uv_restricted_spr_read(u64 reg, u64 *val)
 	return rc;
 }
 
+static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 flags,
+			     u64 page_shift)
+{
+	unsigned long retbuf[PLPAR_UCALL_BUFSIZE];
+
+	return plpar_ucall(UV_PAGE_IN, retbuf, lpid, src_ra, dst_gpa, flags,
+			   page_shift);
+}
+
+static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
+			      u64 page_shift)
+{
+	unsigned long retbuf[PLPAR_UCALL_BUFSIZE];
+
+	return plpar_ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
+			   page_shift);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/include/uapi/asm/uapi_uvcall.h b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
index b657af679ca7..3a30820663a2 100644
--- a/arch/powerpc/include/uapi/asm/uapi_uvcall.h
+++ b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
@@ -12,4 +12,7 @@
 #define UV_RESTRICTED_SPR_WRITE	0xf108
 #define UV_RESTRICTED_SPR_READ	0xf10C
 #define UV_RETURN	0xf11C
+#define UV_PAGE_IN	0xF128
+#define UV_PAGE_OUT	0xF12C
+
 #endif /* #ifndef UAPI_UC_H */
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 64f1135e7732..ed3a9d974059 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -76,6 +76,9 @@ kvm-hv-y += \
 	book3s_64_mmu_radix.o \
 	book3s_hv_nested.o
 
+kvm-hv-$(CONFIG_PPC_KVM_UV) += \
+	book3s_hv_hmm.o
+
 kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
 	book3s_hv_tm.o
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5a066fc299e1..e7edba1ec16a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -74,6 +74,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "book3s.h"
 
@@ -1001,6 +1003,20 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 		if (nesting_enabled(vcpu->kvm))
 			ret = kvmhv_copy_tofrom_guest_nested(vcpu);
 		break;
+	case H_SVM_PAGE_IN:
+		ret = kvmppc_h_svm_page_in(vcpu->kvm,
+					   kvmppc_get_gpr(vcpu, 4),
+					   kvmppc_get_gpr(vcpu, 5),
+					   kvmppc_get_gpr(vcpu, 6),
+					   kvmppc_get_gpr(vcpu, 7));
+		break;
+	case H_SVM_PAGE_OUT:
+		ret = kvmppc_h_svm_page_out(vcpu->kvm,
+					    kvmppc_get_gpr(vcpu, 4),
+					    kvmppc_get_gpr(vcpu, 5),
+					    kvmppc_get_gpr(vcpu, 6),
+					    kvmppc_get_gpr(vcpu, 7));
+		break;
 	default:
 		return RESUME_HOST;
 	}
@@ -4354,6 +4370,7 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
 					struct kvm_memory_slot *dont)
 {
 	if (!dont || free->arch.rmap != dont->arch.rmap) {
+		kvmppc_hmm_release_pfns(free);
 		vfree(free->arch.rmap);
 		free->arch.rmap = NULL;
 	}
@@ -5429,11 +5446,16 @@ static int kvmppc_book3s_init_hv(void)
 		no_mixing_hpt_and_radix = true;
 	}
 
+	r = kvmppc_hmm_init();
+	if (r < 0)
+		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
+
 	return r;
 }
 
 static void kvmppc_book3s_exit_hv(void)
 {
+	kvmppc_hmm_free();
 	kvmppc_free_host_rm_ops();
 	if (kvmppc_radix_possible())
 		kvmppc_radix_exit();
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
new file mode 100644
index 000000000000..edc512acebd3
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -0,0 +1,474 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HMM driver to manage page migration between normal and secure
+ * memory.
+ *
+ * Based on Jérôme Glisse's HMM dummy driver.
+ *
+ * Copyright 2018 Bharata B Rao, IBM Corp.
+ */
+
+/*
+ * A pseries guest can be run as a secure guest on Ultravisor-enabled
+ * POWER platforms. On such platforms, this driver will be used to manage
+ * the movement of guest pages between the normal memory managed by
+ * hypervisor (HV) and secure memory managed by Ultravisor (UV).
+ *
+ * Private ZONE_DEVICE memory equal to the amount of secure memory
+ * available in the platform for running secure guests is created
+ * via a HMM device. The movement of pages between normal and secure
+ * memory is done by ->alloc_and_copy() callback routine of migrate_vma().
+ *
+ * The page-in or page-out requests from UV will come to HV as hcalls and
+ * HV will call back into UV via uvcalls to satisfy these page requests.
+ *
+ * For each page that gets moved into secure memory, a HMM PFN is used
+ * on the HV side and HMM migration PTE corresponding to that PFN would be
+ * populated in the QEMU page tables.
+ */
+
+#include
+#include
+#include
+#include
+
+struct kvmppc_hmm_device {
+	struct hmm_device *device;
+	struct hmm_devmem *devmem;
+	unsigned long *pfn_bitmap;
+};
+
+static struct kvmppc_hmm_device kvmppc_hmm;
+spinlock_t kvmppc_hmm_lock;
+
+struct kvmppc_hmm_page_pvt {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+};
+
+struct kvmppc_hmm_migrate_args {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+	unsigned long page_shift;
+};
+
+#define KVMPPC_PFN_HMM		(0x1ULL << 61)
+
+static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
+{
+	return !!(pfn & KVMPPC_PFN_HMM);
+}
+
+void kvmppc_hmm_release_pfns(struct kvm_memory_slot *free)
+{
+	int i;
+
+	for (i = 0; i < free->npages; i++) {
+		unsigned long *rmap = &free->arch.rmap[i];
+
+		if (kvmppc_is_hmm_pfn(*rmap))
+			put_page(pfn_to_page(*rmap & ~KVMPPC_PFN_HMM));
+	}
+}
+
+/*
+ * Get a free HMM PFN from the pool
+ *
+ * Called when a normal page is moved to secure memory (UV_PAGE_IN). HMM
+ * PFN will be used to keep track of the secure page on HV side.
+ */
+/*
+ * TODO: In this and subsequent functions, we pass around and access
+ * individual elements of kvm_memory_slot->arch.rmap[] without any
+ * protection. Figure out the safe way to access this.
+ */
+static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
+					unsigned long gpa, unsigned int lpid)
+{
+	struct page *dpage = NULL;
+	unsigned long bit, hmm_pfn;
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	if (kvmppc_is_hmm_pfn(*rmap))
+		return NULL;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
+	if (bit >= nr_pfns)
+		goto out;
+
+	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
+	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
+	dpage = pfn_to_page(hmm_pfn);
+
+	if (!trylock_page(dpage))
+		goto out_clear;
+
+	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
+	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
+	if (!pvt)
+		goto out_unlock;
+	pvt->rmap = rmap;
+	pvt->gpa = gpa;
+	pvt->lpid = lpid;
+	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	get_page(dpage);
+	return dpage;
+
+out_unlock:
+	unlock_page(dpage);
+out_clear:
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
+out:
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	return NULL;
+}
+
+/*
+ * Release the HMM PFN back to the pool
+ *
+ * Called when secure page becomes a normal page during UV_PAGE_OUT.
+ */
+static void kvmppc_hmm_put_page(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
+	hmm_devmem_page_set_drvdata(page, 0);
+
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
+	*(pvt->rmap) = 0;
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	kfree(pvt);
+}
+
+/*
+ * migrate_vma() callback to move page from normal memory to secure memory.
+ *
+ * We don't capture the return value of uv_page_in() here because when
+ * UV asks for a page and then fails to copy it over, we don't care.
+ */
+static void
+kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
+				  const unsigned long *src_pfn,
+				  unsigned long *dst_pfn,
+				  unsigned long start,
+				  unsigned long end,
+				  void *private)
+{
+	struct kvmppc_hmm_migrate_args *args = private;
+	struct page *spage = migrate_pfn_to_page(*src_pfn);
+	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
+	struct page *dpage;
+
+	*dst_pfn = 0;
+	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+
+	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
+	if (!dpage)
+		return;
+
+	if (spage)
+		uv_page_in(args->lpid, pfn << args->page_shift,
+			   args->gpa, 0, args->page_shift);
+
+	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
+		   MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to updated device
+ * page tables after successful migration. We have nothing to do here.
+ *
+ * Also as we don't care if UV successfully copied over the page in
+ * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
+ * dst_pfn for any errors here.
+ */
+static void
+kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
+				    const unsigned long *src_pfn,
+				    const unsigned long *dst_pfn,
+				    unsigned long start,
+				    unsigned long end,
+				    void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
+};
+
+/*
+ * Move page from normal memory to secure memory.
+ */
+unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	unsigned long src_pfn, dst_pfn;
+	struct kvmppc_hmm_migrate_args args;
+	struct vm_area_struct *vma;
+	int srcu_idx;
+	unsigned long gfn = gpa >> page_shift;
+	struct kvm_memory_slot *slot;
+	unsigned long *rmap;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (kvm_is_error_hva(addr))
+		return H_PARAMETER;
+
+	end = addr + (1UL << page_shift);
+
+	if (flags)
+		return H_P2;
+
+	args.rmap = rmap;
+	args.lpid = kvm->arch.lpid;
+	args.gpa = gpa;
+	args.page_shift = page_shift;
+
+	down_read(&kvm->mm->mmap_sem);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, &args);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+static void
+kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
+					const unsigned long *src_pfn,
+					unsigned long *dst_pfn,
+					unsigned long start,
+					unsigned long end,
+					void *private)
+{
+	struct page *dpage, *spage;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	*dst_pfn = MIGRATE_PFN_ERROR;
+	spage = migrate_pfn_to_page(*src_pfn);
+	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+	if (!is_zone_device_page(spage))
+		return;
+	dpage = hmm_vma_alloc_locked_page(vma, start);
+	if (!dpage)
+		return;
+	pvt = (struct kvmppc_hmm_page_pvt *)
+	       hmm_devmem_page_get_drvdata(spage);
+
+	pfn = page_to_pfn(dpage);
+	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+			  pvt->gpa, 0, PAGE_SHIFT);
+	if (ret == U_SUCCESS)
+		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to updated device
+ * page tables after successful migration. We have nothing to do here.
+ */
+static void
+kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
+					  const unsigned long *src_pfn,
+					  const unsigned long *dst_pfn,
+					  unsigned long start,
+					  unsigned long end,
+					  void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
+};
+
+/*
+ * Fault handler callback when HV touches any page that has been
+ * moved to secure memory, we ask UV to give back the page by
+ * issuing a UV_PAGE_OUT uvcall.
+ */
+static int kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
+				   struct vm_area_struct *vma,
+				   unsigned long addr,
+				   const struct page *page,
+				   unsigned int flags,
+				   pmd_t *pmdp)
+{
+	unsigned long end = addr + PAGE_SIZE;
+	unsigned long src_pfn, dst_pfn = 0;
+
+	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			&src_pfn, &dst_pfn, NULL))
+		return VM_FAULT_SIGBUS;
+	if (dst_pfn == MIGRATE_PFN_ERROR)
+		return VM_FAULT_SIGBUS;
+	return 0;
+}
+
+static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
+				   struct page *page)
+{
+	kvmppc_hmm_put_page(page);
+}
+
+static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
+	.free = kvmppc_hmm_devmem_free,
+	.fault = kvmppc_hmm_devmem_fault,
+};
+
+/*
+ * Move page from secure memory to normal memory.
+ */
+unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	struct vm_area_struct *vma;
+	unsigned long src_pfn, dst_pfn = 0;
+	int srcu_idx;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (kvm_is_error_hva(addr))
+		return H_PARAMETER;
+
+	end = addr + (1UL << page_shift);
+
+	down_read(&kvm->mm->mmap_sem);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, NULL);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+static u64 kvmppc_get_secmem_size(void)
+{
+	struct device_node *np;
+	int i, len;
+	const __be32 *prop;
+	u64 size = 0;
+
+	np = of_find_node_by_path("/ibm,ultravisor/ibm,uv-firmware");
+	if (!np)
+		goto out;
+
+	prop = of_get_property(np, "secure-memory-ranges", &len);
+	if (!prop)
+		goto out_put;
+
+	for (i = 0; i < len / (sizeof(*prop) * 4); i++)
+		size += of_read_number(prop + (i * 4) + 2, 2);
+
+out_put:
+	of_node_put(np);
+out:
+	return size;
+}
+
+static int kvmppc_hmm_pages_init(void)
+{
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+
+	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
+					sizeof(unsigned long), GFP_KERNEL);
+	if (!kvmppc_hmm.pfn_bitmap)
+		return -ENOMEM;
+
+	spin_lock_init(&kvmppc_hmm_lock);
+
+	return 0;
+}
+
+int kvmppc_hmm_init(void)
+{
+	int ret = 0;
+	unsigned long size;
+
+	size = kvmppc_get_secmem_size();
+	if (!size) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	kvmppc_hmm.device = hmm_device_new(NULL);
+	if (IS_ERR(kvmppc_hmm.device)) {
+		ret = PTR_ERR(kvmppc_hmm.device);
+		goto out;
+	}
+
+	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
+					   &kvmppc_hmm.device->device, size);
+	if (IS_ERR(kvmppc_hmm.devmem)) {
+		ret = PTR_ERR(kvmppc_hmm.devmem);
+		goto out_device;
+	}
+	ret = kvmppc_hmm_pages_init();
+	if (ret < 0)
+		goto out_device;
+
+	pr_info("KVMPPC-HMM: Secure Memory size %lx\n", size);
+	return ret;
+
+out_device:
+	hmm_device_put(kvmppc_hmm.device);
+out:
+	return ret;
+}
+
+void kvmppc_hmm_free(void)
+{
+	kfree(kvmppc_hmm.pfn_bitmap);
+	hmm_device_put(kvmppc_hmm.device);
+}

From patchwork Wed Jan 30 06:07:24 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10787721
kanga.kvack.org (Postfix, from userid 63042) id 078828E0004; Wed, 30 Jan 2019 01:07:47 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com [209.85.222.197]) by kanga.kvack.org (Postfix) with ESMTP id CFC078E0001 for ; Wed, 30 Jan 2019 01:07:46 -0500 (EST) Received: by mail-qk1-f197.google.com with SMTP id k203so24573230qke.2 for ; Tue, 29 Jan 2019 22:07:46 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-original-authentication-results:x-gm-message-state:from:to:cc :subject:date:in-reply-to:references:message-id; bh=auIY83jBHJJiEwioLXssSuoSuXNHrDM6UNE4IFyh6Eo=; b=kIZ2X8vmbloyXIsp7XGvmt4W1LCW7XjL+HtvXygzYxA0btLWbrWRSFyNHny6HM4mcQ JmVcVFEMO5Eg8Y+rlZ3HpvOXu/6IzMnNyTbSMFLGYK8hNEboLv97JZ7/ceLz7fK/xQG6 4Ncx6g4ggXzUKesM6QfKgXirfc6ot0zy8ic9YKercrO3vH2DxLF6UEnfGl0WHULutaQ5 lNp3+Y9zzwDA4yEmtIo1GTvVOLsJWaW92RhSmlizSalk3O9w2hlm0m9d17bdHYPFxS1q 8RBLRO/IzJGMqEAtTtl8Xu7XH94hmxlE60jnczy24ljhUqQ8RsaYoJl9gDMIvORHDHnp kEow== X-Original-Authentication-Results: mx.google.com; spf=pass (google.com: domain of bharata@linux.ibm.com designates 148.163.158.5 as permitted sender) smtp.mailfrom=bharata@linux.ibm.com; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=ibm.com X-Gm-Message-State: AJcUukdPHGQu1ZD6CR4mvbCECy5TrMA2LMgU9j8LkJrKOJMXbZwFrxnY IkKxIRTgKS7znyzIju+ZdzjDByf/ymAYA65eJYdKKXTWKmSEIez2ZOH+EkcD49u8oXqr+o2BRfa MfQNYO187mxLl8TCV4lp0iBXv7zMSaQvM+FC3mozxfZjTlqdgDaKfSawGsZdohOs4DQ== X-Received: by 2002:a37:9a89:: with SMTP id c131mr26570889qke.173.1548828466603; Tue, 29 Jan 2019 22:07:46 -0800 (PST) X-Google-Smtp-Source: ALg8bN4JOet8Y/spEObYyeQkHkfc91llNzkKL4RHYY5nOsmybq7pnojyLFCQ2HLypUIr2CbtlnkG X-Received: by 2002:a37:9a89:: with SMTP id c131mr26570831qke.173.1548828465644; Tue, 29 Jan 2019 22:07:45 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1548828465; cv=none; d=google.com; s=arc-20160816; 
b=SFTjcic1ayCCrR9kxuVAI+BDgv951WE2AqG6cPoRUOT+zBiSd2sU8r8rDAX3/Lh+Zo gURFDdRSJoJhHDwploeBj1zvxTtR5leuEXYq0b7EjUqBk8vGlgpNeoWZX09YsVclVvzb MHQisVb41jzFUoMOieKKDym9FajQ+sNO64UsyVUtsLG/J9v/fOVmuAdYtmoJuYDEVHMV MhAOFx/ckQnYZIPsv8dNmLV48B0IXKnAu/PBDxkxmkLheZIcUIas/vsgzi1zVbPpOlBe YnkZB34siJiYQspALAoJmOIY/jCGPolqk/6tdy2nxIJnXfdJVwcOF3fGOXYbsa5LNb7N T/8w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=message-id:references:in-reply-to:date:subject:cc:to:from; bh=auIY83jBHJJiEwioLXssSuoSuXNHrDM6UNE4IFyh6Eo=; b=j5Hq/WYPj87ehY00f479Lu26ZX45RiW82NG1oi/S7HO0RAvRdE7bAz+GSejCb2/vjO YKkQNUP/P1BEaucLSoZ9hJPuwLRraXZKuqZYfYGM5bzhs/77kNLAVfq6Et/j1vBMFnHo u5YSOhK/BHKnMvxRwdk8pq3TT97nfa/AIuaAAtUr2bylGtcWFOBJclTGfy+dqvsNZLfv QlLJ1UFlOJub1Kk3ISwPu8MrZz6T/HvHR8eXxWkJ/J6gPCpXyFp2h4iVBqW88ie1C6R1 yXhbQCX4OO2/GCP9gYi6QJtb1HltK6nNCRxfhEuo5D3YZ/aQAcFVHpVXYndE4agROHU4 +bMw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of bharata@linux.ibm.com designates 148.163.158.5 as permitted sender) smtp.mailfrom=bharata@linux.ibm.com; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=ibm.com Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. 
[148.163.158.5]) by mx.google.com with ESMTPS id l66si448110qkc.26.2019.01.29.22.07.45 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 29 Jan 2019 22:07:45 -0800 (PST) Received-SPF: pass (google.com: domain of bharata@linux.ibm.com designates 148.163.158.5 as permitted sender) client-ip=148.163.158.5; Authentication-Results: mx.google.com; spf=pass (google.com: domain of bharata@linux.ibm.com designates 148.163.158.5 as permitted sender) smtp.mailfrom=bharata@linux.ibm.com; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=ibm.com Received: from pps.filterd (m0098414.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x0U64OrK106604 for ; Wed, 30 Jan 2019 01:07:45 -0500 Received: from e06smtp02.uk.ibm.com (e06smtp02.uk.ibm.com [195.75.94.98]) by mx0b-001b2d01.pphosted.com with ESMTP id 2qb591thh5-1 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=NOT) for ; Wed, 30 Jan 2019 01:07:45 -0500 Received: from localhost by e06smtp02.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 30 Jan 2019 06:07:43 -0000 Received: from b06cxnps4074.portsmouth.uk.ibm.com (9.149.109.196) by e06smtp02.uk.ibm.com (192.168.101.132) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com, sukadev@linux.vnet.ibm.com, Bharata B Rao
Subject: [RFC PATCH v3 2/4] kvmppc: Add support for shared pages in HMM driver
Date: Wed, 30 Jan 2019 11:37:24 +0530
Message-Id: <20190130060726.29958-3-bharata@linux.ibm.com>
In-Reply-To: <20190130060726.29958-1-bharata@linux.ibm.com>
References: <20190130060726.29958-1-bharata@linux.ibm.com>
A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Support shared pages in the HMM driver.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_hmm.c  | 58 +++++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 2f6b952deb0f..05b8536f6653 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,9 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED	0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index edc512acebd3..d8112092a242 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -45,6 +45,7 @@ struct kvmppc_hmm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -212,6 +213,45 @@ static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
 	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
 };
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the HMM fault handler to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long *rmap, unsigned long gpa,
+		  unsigned long addr, unsigned long page_shift)
+{
+	int ret;
+	unsigned int lpid = kvm->arch.lpid;
+	struct page *hmm_page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int srcu_idx;
+
+	if (kvmppc_is_hmm_pfn(*rmap)) {
+		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+		pvt = (struct kvmppc_hmm_page_pvt *)
+			hmm_devmem_page_get_drvdata(hmm_page);
+		pvt->skip_page_out = true;
+	}
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (is_error_noslot_pfn(pfn))
+		return H_PARAMETER;
+
+	ret = uv_page_in(lpid, pfn << page_shift, gpa, 0, page_shift);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -242,9 +282,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 
 	end = addr + (1UL << page_shift);
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, rmap, gpa, addr, page_shift);
+
 	args.rmap = rmap;
 	args.lpid = kvm->arch.lpid;
 	args.gpa = gpa;
@@ -291,8 +334,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 		hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+
+	/*
+	 * This same alloc_and_copy() callback is used in two cases:
+	 * - When HV touches a secure page, for which we do page-out
+	 * - When a secure page is converted to shared page, we touch
+	 *   the page to essentially discard the HMM page. In this case we
+	 *   skip page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }

From patchwork Wed Jan 30 06:07:25 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10787723
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com, sukadev@linux.vnet.ibm.com, Bharata B Rao
Subject: [RFC PATCH v3 3/4] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
Date: Wed, 30 Jan 2019 11:37:25 +0530
Message-Id: <20190130060726.29958-4-bharata@linux.ibm.com>
In-Reply-To: <20190130060726.29958-1-bharata@linux.ibm.com>
References: <20190130060726.29958-1-bharata@linux.ibm.com>
H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

During early guest init, these hcalls will be issued by UV. As part of
these hcalls, [un]register memslots with UV.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h           |  2 ++
 arch/powerpc/include/asm/kvm_book3s_hmm.h   | 12 ++++++++
 arch/powerpc/include/asm/ucall-api.h        |  9 ++++++
 arch/powerpc/include/uapi/asm/uapi_uvcall.h |  1 +
 arch/powerpc/kvm/book3s_hv.c                |  7 +++++
 arch/powerpc/kvm/book3s_hv_hmm.c            | 33 +++++++++++++++++++
 6 files changed, 64 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 05b8536f6653..fa7695928e30 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -343,6 +343,8 @@
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
+#define H_SVM_INIT_START	0xEF08
+#define H_SVM_INIT_DONE		0xEF0C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
index e61519c17485..af093f8b86cf 100644
--- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					   unsigned long gra,
 					   unsigned long flags,
 					   unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
+extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 #else
 static inline unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
 						 unsigned int lpid,
@@ -29,5 +31,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned int lpid,
 {
 	return H_UNSUPPORTED;
 }
+
+static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
 #endif /* CONFIG_PPC_KVM_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index 6c3bddc97b55..d266670229cb 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -73,5 +73,14 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
 	return plpar_ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
 			   page_shift);
 }
+
+static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
+				       u64 flags, u64 slotid)
+{
+	unsigned long retbuf[PLPAR_UCALL_BUFSIZE];
+
+	return plpar_ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
+			   size, flags, slotid);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/include/uapi/asm/uapi_uvcall.h b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
index 3a30820663a2..79a11a6ee436 100644
--- a/arch/powerpc/include/uapi/asm/uapi_uvcall.h
+++ b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
@@ -12,6 +12,7 @@
 #define UV_RESTRICTED_SPR_WRITE	0xf108
 #define UV_RESTRICTED_SPR_READ	0xf10C
 #define UV_RETURN		0xf11C
+#define UV_REGISTER_MEM_SLOT	0xF120
 #define UV_PAGE_IN		0xF128
 #define UV_PAGE_OUT		0xF12C
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e7edba1ec16a..1dfb42ac9626 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1017,6 +1017,13 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 				kvmppc_get_gpr(vcpu, 6),
 				kvmppc_get_gpr(vcpu, 7));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
+
 	default:
 		return RESUME_HOST;
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index d8112092a242..b8a980172833 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -55,6 +55,39 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret = H_SUCCESS;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0) {
+			ret = H_PARAMETER; /* TODO: proper retval */
+			goto out;
+		}
+	}
+	kvm->arch.secure_guest = true;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
+unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (!kvm->arch.secure_guest)
+		return H_UNSUPPORTED;
+
+	return H_SUCCESS;
+}
+
 #define KVMPPC_PFN_HMM		(0x1ULL << 61)
 
 static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)

From patchwork Wed Jan 30 06:07:26 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10787725
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com, sukadev@linux.vnet.ibm.com, Bharata B Rao
Subject: [RFC PATCH v3 4/4] kvmppc: Handle memory plug/unplug to secure VM
Date: Wed, 30 Jan 2019 11:37:26 +0530
Message-Id: <20190130060726.29958-5-bharata@linux.ibm.com>
In-Reply-To: <20190130060726.29958-1-bharata@linux.ibm.com>
References: <20190130060726.29958-1-bharata@linux.ibm.com>
Register the new memslot with UV during plug and unregister the memslot
during unplug.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/ucall-api.h        |  7 +++++++
 arch/powerpc/include/uapi/asm/uapi_uvcall.h |  1 +
 arch/powerpc/kvm/book3s_hv.c                | 19 +++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index d266670229cb..cbb8bb38eb8b 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -82,5 +82,12 @@ static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
 	return plpar_ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
 			   size, flags, slotid);
 }
+
+static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
+{
+	unsigned long retbuf[PLPAR_UCALL_BUFSIZE];
+
+	return plpar_ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/include/uapi/asm/uapi_uvcall.h b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
index 79a11a6ee436..60e44c7b58c4 100644
--- a/arch/powerpc/include/uapi/asm/uapi_uvcall.h
+++ b/arch/powerpc/include/uapi/asm/uapi_uvcall.h
@@ -13,6 +13,7 @@
 #define UV_RESTRICTED_SPR_READ	0xf10C
 #define UV_RETURN		0xf11C
 #define UV_REGISTER_MEM_SLOT	0xF120
+#define UV_UNREGISTER_MEM_SLOT	0xF124
 #define UV_PAGE_IN		0xF128
 #define UV_PAGE_OUT		0xF12C
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 1dfb42ac9626..61e36c4516d5 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -76,6 +76,7 @@
 #include
 #include
 #include
+#include
 
 #include "book3s.h"
@@ -4433,6 +4434,24 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	if (change == KVM_MR_FLAGS_ONLY && kvm_is_radix(kvm) &&
 	    ((new->flags ^ old->flags) & KVM_MEM_LOG_DIRTY_PAGES))
 		kvmppc_radix_flush_memslot(kvm, old);
+
+	/*
+	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
+	 */
+	if (!kvm->arch.secure_guest)
+		return;
+
+	/*
+	 * TODO: Handle KVM_MR_MOVE
+	 */
+	if (change == KVM_MR_CREATE) {
+		uv_register_mem_slot(kvm->arch.lpid,
+				     new->base_gfn << PAGE_SHIFT,
+				     new->npages * PAGE_SIZE,
+				     0,
+				     new->id);
+	} else if (change == KVM_MR_DELETE) {
+		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+	}
 }
 
 /*