From patchwork Sat May 7 07:31:36 2011
Date: Sat, 7 May 2011 16:31:36 +0900
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: andi@firstfloor.org, kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp
Subject: [PATCH 1/2] KVM: MMU: Clean up gpte reading with copy_from_user()
Message-Id: <20110507163136.69222696.takuya.yoshikawa@gmail.com>

From: Takuya Yoshikawa

When we optimized walk_addr_generic() by not using the generic guest
memory reader, we replaced copy_from_user() with get_user():

  commit e30d2a170506830d5eef5e9d7990c5aedf1b0a51
    KVM: MMU: Optimize guest page table walk

  commit 15e2ac9a43d4d7d08088e404fddf2533a8e7d52e
    KVM: MMU: Fix 64-bit paging breakage on x86_32

But as Andi pointed out later, copy_from_user() does the same as
get_user() as long as we give it a constant size.  So we use
copy_from_user() to clean up the code.

The only noticeable regression introduced by this is the 64-bit gpte
read on x86_32 hosts, which is needed for PAE guests.  But this can be
mitigated by implementing 8-byte get_user() for x86_32, if needed (a
sketch of that follows the diff below).
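
For illustration only (not part of this patch): a minimal userspace
sketch of the equivalence relied on above.  The sketch_* names are
hypothetical stand-ins for the kernel primitives; the point is that a
compile-time-constant size lets the copy be inlined into the same
single fixed-size access that get_user() emits.

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-ins for get_user()/copy_from_user(),
   * for illustration only. */
  static inline int sketch_get_user(uint64_t *dst, const uint64_t *src)
  {
          *dst = *src;    /* one fixed-size load */
          return 0;
  }

  static inline int sketch_copy_from_user(void *dst, const void *src,
                                          size_t n)
  {
          /* With a compile-time-constant n, the compiler inlines this
           * memcpy() into the same single load as above; only a
           * variable n falls back to a generic byte-copy loop. */
          memcpy(dst, src, n);
          return 0;
  }

  int main(void)
  {
          uint64_t gpte = 0, fake_pte = 0x00000000deadbeefULL;

          sketch_copy_from_user(&gpte, &fake_pte, sizeof(gpte));
          printf("gpte = %#llx\n", (unsigned long long)gpte);
          return 0;
  }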
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/paging_tmpl.h |   16 +---------------
 1 files changed, 1 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index f9d9af1..0803e36 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -113,20 +113,6 @@ static unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, pt_element_t gpte)
 	return access;
 }
 
-static int FNAME(read_gpte)(pt_element_t *pte, pt_element_t __user *ptep_user)
-{
-#if defined(CONFIG_X86_32) && (PTTYPE == 64)
-	u32 *p = (u32 *)pte;
-	u32 __user *p_user = (u32 __user *)ptep_user;
-
-	if (unlikely(get_user(*p, p_user)))
-		return -EFAULT;
-	return get_user(*(p + 1), p_user + 1);
-#else
-	return get_user(*pte, ptep_user);
-#endif
-}
-
 /*
  * Fetch a guest pte for a guest virtual address
  */
@@ -197,7 +183,7 @@ walk:
 		}
 
 		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
-		if (unlikely(FNAME(read_gpte)(&pte, ptep_user))) {
+		if (unlikely(copy_from_user(&pte, ptep_user, sizeof(pte)))) {
 			present = false;
 			break;
 		}
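
Follow-up note, for illustration only: the 8-byte get_user() mitigation
mentioned in the changelog would essentially do what the removed
FNAME(read_gpte)() helper did, i.e. read the 64-bit gpte as two 32-bit
halves.  A minimal userspace sketch of that shape (hypothetical
sketch_* names; the two halves are not read atomically, same as the
removed helper):

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical 4-byte read primitive standing in for get_user()
   * on a 32-bit host. */
  static inline int sketch_get_user32(uint32_t *dst, const uint32_t *src)
  {
          *dst = *src;
          return 0;
  }

  /* An 8-byte read composed of two 4-byte reads, mirroring the
   * shape of the removed FNAME(read_gpte)() helper. */
  static int sketch_get_user64(uint64_t *pte, const uint64_t *ptep)
  {
          uint32_t *p = (uint32_t *)pte;
          const uint32_t *p_src = (const uint32_t *)ptep;

          if (sketch_get_user32(p, p_src))
                  return -1;
          return sketch_get_user32(p + 1, p_src + 1);
  }

  int main(void)
  {
          uint64_t fake_pte = 0x1122334455667788ULL, pte = 0;

          if (!sketch_get_user64(&pte, &fake_pte))
                  printf("pte = %#llx\n", (unsigned long long)pte);
          return 0;
  }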