From patchwork Wed Jan  8 20:24:38 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11324679
From: Sean Christopherson
To: Paolo Bonzini
Subject: [PATCH 04/14] KVM: Play nice with read-only memslots when querying host page size
Date: Wed, 8 Jan 2020 12:24:38 -0800
Message-Id: <20200108202448.9669-5-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200108202448.9669-1-sean.j.christopherson@intel.com>
References: <20200108202448.9669-1-sean.j.christopherson@intel.com>
Cc: Wanpeng Li, kvm@vger.kernel.org, David Hildenbrand, Peter Zijlstra,
 Dave Hansen, Paul Mackerras, linux-mm@kvack.org, kvmarm@lists.cs.columbia.edu,
 Andrea Arcangeli, Dave Jiang, linux-nvdimm, Marc Zyngier, Joerg Roedel,
 syzbot+c9d1fb51ac9d0d10c39d@syzkaller.appspotmail.com, Julien Thierry,
 Barret Rhoden, Suzuki K Poulose, kvm-ppc@vger.kernel.org, Liran Alon,
 Andy Lutomirski, Dan Williams, Andrew Morton, linux-arm-kernel@lists.infradead.org,
 Jim Mattson, linux-kernel@vger.kernel.org, Sean Christopherson, James Morse,
 Jason Zeng, Vitaly Kuznetsov

Open code an equivalent of kvm_vcpu_gfn_to_hva() when querying the host
page size to avoid the "writable" check in __gfn_to_hva_many(), which
will always fail on read-only memslots due to gfn_to_hva() assuming
writes.  Functionally, this allows x86 to create large mappings for
read-only memslots that are backed by HugeTLB mappings.

Note, the changelog for commit 05da45583de9 ("KVM: MMU: large page
support") states "If the largepage contains write-protected pages, a
large pte is not used.", but "write-protected" refers to pages that are
temporarily read-only; read-only memslots didn't even exist at the time.

Fixes: 4d8b81abc47b ("KVM: introduce readonly memslot")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5f7f06824c2b..d9aced677ddd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1418,15 +1418,23 @@ EXPORT_SYMBOL_GPL(kvm_is_visible_gfn);
 
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
+        struct kvm_memory_slot *slot;
         struct vm_area_struct *vma;
         unsigned long addr, size;
 
         size = PAGE_SIZE;
 
-        addr = kvm_vcpu_gfn_to_hva(vcpu, gfn);
-        if (kvm_is_error_hva(addr))
+        /*
+         * Manually do the equivalent of kvm_vcpu_gfn_to_hva() to avoid the
+         * "writable" check in __gfn_to_hva_many(), which will always fail on
+         * read-only memslots due to gfn_to_hva() assuming writes.
+         */
+        slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+        if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
                 return PAGE_SIZE;
 
+        addr = __gfn_to_hva_memslot(slot, gfn);
+
         down_read(&current->mm->mmap_sem);
         vma = find_vma(current->mm, addr);
         if (!vma)
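
[Editor's note, not part of the patch] For context, the "writable" check being
avoided lives in __gfn_to_hva_many().  The sketch below is paraphrased from
virt/kvm/kvm_main.c of roughly this era and may differ slightly from the tree
the patch was based on; gfn_to_hva() and kvm_vcpu_gfn_to_hva() pass
write = true, so a read-only memslot always yields KVM_HVA_ERR_RO_BAD even for
a query that never writes, which is why the patch calls
kvm_vcpu_gfn_to_memslot() and __gfn_to_hva_memslot() directly instead.

        /* Paraphrased sketch of __gfn_to_hva_many(); shown for context only. */
        static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot,
                                               gfn_t gfn, gfn_t *nr_pages,
                                               bool write)
        {
                if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
                        return KVM_HVA_ERR_BAD;

                /* This is the check that trips up read-only memslots. */
                if (memslot_is_readonly(slot) && write)
                        return KVM_HVA_ERR_RO_BAD;

                if (nr_pages)
                        *nr_pages = slot->npages - (gfn - slot->base_gfn);

                return __gfn_to_hva_memslot(slot, gfn);
        }

Note that the patched kvm_host_page_size() duplicates only the
!slot / KVM_MEMSLOT_INVALID portion of this check and then resolves the hva
via __gfn_to_hva_memslot(), skipping the write-oriented rejection.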