From patchwork Sun Dec 15 02:11:04 2019
X-Patchwork-Submitter: Sukadev Bhattiprolu
X-Patchwork-Id: 11292555
Date: Sat, 14 Dec 2019 18:11:04 -0800
From: Sukadev Bhattiprolu
To: Michael Ellerman
Cc: Paul Mackerras, linuxram@us.ibm.com, Bharata B Rao, kvm-ppc@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH V3 1/2] KVM: PPC: Add skip_page_out parameter
Message-ID: <20191215021104.GA27378@us.ibm.com>

This patch is based on Bharata's v11 KVM patches for secure guests:
https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-November/200918.html

Reviewed-by: Paul Mackerras
---
From: Sukadev Bhattiprolu
Date: Fri, 13 Dec 2019 15:06:16 -0600
Subject: [PATCH V3 1/2] KVM: PPC: Add skip_page_out parameter

Add a 'skip_page_out' parameter to kvmppc_uvmem_drop_pages(), which will
be needed in a follow-on patch that implements the H_SVM_INIT_ABORT hcall.
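(Not part of the patch -- a minimal sketch for reviewers of how callers might
use the new flag. The two helper functions below are hypothetical, and the
assumption that the follow-on H_SVM_INIT_ABORT path would pass 'false' is
mine; only the kvmppc_uvmem_drop_pages() prototype and the 'true' callers
come from the diff.)

#include <linux/kvm_host.h>
#include <asm/kvm_book3s_uvmem.h>

/*
 * Hypothetical caller: a reset/teardown-style path (like the existing
 * kvmhv_svm_off() and flush-memslot callers, which now pass 'true'),
 * where the secure page contents are no longer needed, so they are not
 * paged out of secure memory before the device pages are dropped.
 */
static void example_drop_on_reset(struct kvm *kvm,
				  const struct kvm_memory_slot *memslot)
{
	kvmppc_uvmem_drop_pages(memslot, kvm, true /* skip_page_out */);
}

/*
 * Hypothetical caller: an abort-style path (e.g. the follow-on
 * H_SVM_INIT_ABORT patch), where the guest falls back to running
 * non-secure, so each page's contents would still be paged out
 * before its device page is dropped.
 */
static void example_drop_on_abort(struct kvm *kvm,
				  const struct kvm_memory_slot *memslot)
{
	kvmppc_uvmem_drop_pages(memslot, kvm, false /* skip_page_out */);
}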
Signed-off-by: Sukadev Bhattiprolu
---
 arch/powerpc/include/asm/kvm_book3s_uvmem.h | 4 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c      | 2 +-
 arch/powerpc/kvm/book3s_hv.c                | 2 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
index 50204e228f16..3cf8425b9838 100644
--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
@@ -20,7 +20,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
 unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn);
 void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			     struct kvm *kvm);
+			     struct kvm *kvm, bool skip_page_out);
 #else
 static inline int kvmppc_uvmem_init(void)
 {
@@ -69,6 +69,6 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 
 static inline void
 kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			struct kvm *kvm) { }
+			struct kvm *kvm, bool skip_page_out) { }
 #endif /* CONFIG_PPC_UV */
 #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index da857c8ba6e4..744dba98e5d1 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -1102,7 +1102,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 	unsigned int shift;
 
 	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)
-		kvmppc_uvmem_drop_pages(memslot, kvm);
+		kvmppc_uvmem_drop_pages(memslot, kvm, true);
 
 	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
 		return;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 597f4bfecf0e..66d5312be16b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5493,7 +5493,7 @@ static int kvmhv_svm_off(struct kvm *kvm)
 			continue;
 
 		kvm_for_each_memslot(memslot, slots) {
-			kvmppc_uvmem_drop_pages(memslot, kvm);
+			kvmppc_uvmem_drop_pages(memslot, kvm, true);
 			uv_unregister_mem_slot(kvm->arch.lpid, memslot->id);
 		}
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index f24ac3cfb34c..9a5bbad7d87e 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -259,7 +259,7 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
  * QEMU page table with normal PTEs from newly allocated pages.
  */
 void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			     struct kvm *kvm)
+			     struct kvm *kvm, bool skip_page_out)
 {
 	int i;
 	struct kvmppc_uvmem_page_pvt *pvt;
@@ -277,7 +277,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
 		uvmem_page = pfn_to_page(uvmem_pfn);
 		pvt = uvmem_page->zone_device_data;
-		pvt->skip_page_out = true;
+		pvt->skip_page_out = skip_page_out;
 		mutex_unlock(&kvm->arch.uvmem_lock);
 
 		pfn = gfn_to_pfn(kvm, gfn);