From patchwork Fri Nov  7 21:18:45 2014
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 5255451
From: Alexander Graf
To: qemu-ppc@nongnu.org
Cc: stuart.yoder@freescale.com, qemu-devel@nongnu.org, pbonzini@redhat.com,
    kvm@vger.kernel.org, qemu-stable@nongnu.org
Subject: [PATCH] kvm: Fix memory slot page alignment logic
Date: Fri,  7 Nov 2014 22:18:45 +0100
Message-Id: <1415395125-18926-1-git-send-email-agraf@suse.de>

Memory slots have to be page aligned to be entered into KVM. There is
existing logic that tries to ensure that memory slots which are not page
aligned get padded to the biggest region that still fits the alignment
requirements.

Unfortunately, that logic is broken: it tries to calculate the start
offset based on the region size alone.

Fix up the logic to do what it was intended to do and document it
properly in the comment above it.

With this patch applied, I can successfully run an e500 guest with more
than 3GB RAM (at which point RAM starts overlapping subpage memory
regions).

Cc: qemu-stable@nongnu.org
Signed-off-by: Alexander Graf
---
 kvm-all.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 44a5e72..596e7ce 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -634,8 +634,10 @@ static void kvm_set_phys_mem(MemoryRegionSection *section, bool add)
     unsigned delta;
 
     /* kvm works in page size chunks, but the function may be called
-       with sub-page size and unaligned start address. */
-    delta = TARGET_PAGE_ALIGN(size) - size;
+       with sub-page size and unaligned start address. Pad the start
+       address to next and truncate size to previous page boundary. */
+    delta = (TARGET_PAGE_SIZE - (start_addr & ~TARGET_PAGE_MASK));
+    delta &= ~TARGET_PAGE_MASK;
     if (delta > size) {
         return;
     }
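
[Editor's note, not part of the patch: a minimal standalone sketch of the
alignment math for anyone reading outside the QEMU tree. It assumes a 4 KiB
target page, redefines the TARGET_PAGE_* macros locally so it compiles on its
own, uses a made-up start address and size, and reproduces the surrounding
start/size adjustment in kvm_set_phys_mem from memory rather than verbatim.]

/* page_align_demo.c -- standalone illustration only, not QEMU code.
 * Assumes a 4 KiB target page; the macro names mirror QEMU's but are
 * (re)defined locally so this builds with a plain C compiler. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_SIZE     4096ULL
#define TARGET_PAGE_MASK     (~(TARGET_PAGE_SIZE - 1))
#define TARGET_PAGE_ALIGN(x) (((x) + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK)

int main(void)
{
    /* A made-up slot whose start address sits 2 KiB into a page. */
    uint64_t start_addr = 0xc0000800ULL;
    uint64_t size       = 0x40000000ULL;   /* 1 GiB */

    /* Old logic: padding derived from the size alone.  For a page-sized
     * multiple like 1 GiB this yields 0, so the unaligned start address
     * slips through to KVM. */
    uint64_t old_delta = TARGET_PAGE_ALIGN(size) - size;

    /* New logic: pad the start address up to the next page boundary;
     * the second step folds a full-page delta back to 0 when the start
     * address is already aligned. */
    uint64_t delta = TARGET_PAGE_SIZE - (start_addr & ~TARGET_PAGE_MASK);
    delta &= ~TARGET_PAGE_MASK;

    if (delta <= size) {
        start_addr += delta;
        size -= delta;
        size &= TARGET_PAGE_MASK;   /* truncate to previous page boundary */
    }

    printf("old delta 0x%" PRIx64 "\n", old_delta);
    printf("new delta 0x%" PRIx64 " -> start 0x%" PRIx64 ", size 0x%" PRIx64 "\n",
           delta, start_addr, size);
    return 0;
}

For this example the old computation should print a delta of 0, i.e. no
padding at all, while the new one should print delta 0x800 with start
0xc0001000 and size 0x3ffff000, a properly page-aligned slot.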