From patchwork Mon Aug 12 04:13:00 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 2842853
From: Christoffer Dall
To: Paolo Bonzini, Gleb Natapov
Subject: [PATCH 3/4] arm64: KVM: fix 2-level page tables unmapping
Date: Sun, 11 Aug 2013 21:13:00 -0700
Message-Id: <1376280781-6539-4-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>
References: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>
Cc: linaro-kernel@lists.linaro.org, kvm@vger.kernel.org, patches@linaro.org,
 Marc Zyngier, Christoffer Dall, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org

From: Marc Zyngier

When using 64kB pages, we only have two levels of page tables, meaning
that the PGD, PUD and PMD are fused. In this case, trying to refcount
PUDs and PMDs independently is a complete disaster, as they are the
same.

We manage to get it right for the allocation (stage2_set_pte uses
{pmd,pud}_none), but the unmapping path clears both the pud and pmd
refcounts, which fails spectacularly with 2-level page tables.

The fix is to avoid calling clear_pud_entry when both the pmd and pud
pages are empty. To do this, instead of introducing another pud_empty
function, consolidate both pte_empty and pmd_empty into page_empty (the
code is actually identical) and use that to also test whether the pud
page is empty.
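To make the failure mode concrete, here is a minimal user-space sketch of
the fused 2-level case. It is not kernel code: struct page, page_empty and
put_page below are toy stand-ins for their kernel counterparts, and the
refcount convention (one base reference for the allocation plus one per
mapped entry, so a count of 1 means "empty") is assumed from the scheme
the patch relies on.

    #include <assert.h>
    #include <stdio.h>

    /* Toy stand-in for the kernel's page-table page refcounting. */
    struct page { int count; };

    static int page_empty(struct page *p) { return p->count == 1; }
    static void put_page(struct page *p)  { p->count--; assert(p->count >= 0); }

    int main(void)
    {
            /* 2-level case: the "pud" and the "pmd" are the same page,
             * holding one base reference plus one reference for the
             * single pte table still mapped through it. */
            struct page table = { .count = 2 };
            struct page *pmd = &table, *pud = &table;

            put_page(pmd);          /* models clear_pmd_entry */

            /* Old code: pmd_empty(pmd) is now true, so clear_pud_entry
             * ran and dropped the shared count to 0, releasing a live
             * table. New code: page_empty(pud) is also true (same
             * page), so the !page_empty(pud) guard skips the second
             * put. */
            if (page_empty(pmd) && !page_empty(pud))
                    put_page(pud);  /* never reached when fused */

            printf("count = %d\n", table.count);  /* 1: base ref kept */
            return 0;
    }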
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/kvm/mmu.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 80a83ec..0988d9e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -85,6 +85,12 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
 	return p;
 }
 
+static bool page_empty(void *ptr)
+{
+	struct page *ptr_page = virt_to_page(ptr);
+	return page_count(ptr_page) == 1;
+}
+
 static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 {
 	pmd_t *pmd_table = pmd_offset(pud, 0);
@@ -103,12 +109,6 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
 	put_page(virt_to_page(pmd));
 }
 
-static bool pmd_empty(pmd_t *pmd)
-{
-	struct page *pmd_page = virt_to_page(pmd);
-	return page_count(pmd_page) == 1;
-}
-
 static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
 {
 	if (pte_present(*pte)) {
@@ -118,12 +118,6 @@ static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
 	}
 }
 
-static bool pte_empty(pte_t *pte)
-{
-	struct page *pte_page = virt_to_page(pte);
-	return page_count(pte_page) == 1;
-}
-
 static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 			unsigned long long start, u64 size)
 {
@@ -153,10 +147,10 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 			next = addr + PAGE_SIZE;
 
 			/* If we emptied the pte, walk back up the ladder */
-			if (pte_empty(pte)) {
+			if (page_empty(pte)) {
 				clear_pmd_entry(kvm, pmd, addr);
 				next = pmd_addr_end(addr, end);
-				if (pmd_empty(pmd)) {
+				if (page_empty(pmd) && !page_empty(pud)) {
 					clear_pud_entry(kvm, pud, addr);
 					next = pud_addr_end(addr, end);
 				}
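As an aside (not part of the patch), the same toy model suggests why the
new guard is also correct for the 3-level configuration, where the pud
and pmd are distinct pages: the pud page still holds a reference for the
entry pointing at the now-empty pmd table, so !page_empty(pud) is true
and clear_pud_entry still runs. Reusing struct page, page_empty and
put_page from the sketch above:

    /* 3-level case: distinct pud and pmd pages. */
    struct page pmd_page = { .count = 2 };  /* base ref + one pte table */
    struct page pud_page = { .count = 2 };  /* base ref + entry for pmd */

    put_page(&pmd_page);                    /* models clear_pmd_entry */
    if (page_empty(&pmd_page) && !page_empty(&pud_page))
            put_page(&pud_page);            /* reached: pud entry released */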