From patchwork Fri Apr 12 04:29:29 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10897229
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Thu, 11 Apr 2019 22:29:29 -0600
Message-Id: <20190412042930.2867-1-tamas@tklengyel.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH 1/2] x86/mem_sharing: reorder when pages are unlocked and released
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich, Roger Pau Monne

Patch 0502e0adae2 "x86: correct instances of PGC_allocated clearing"
introduced grabbing extra references for pages that drop references tied
to PGC_allocated. However, the way these extra references were grabbed
was incorrect, resulting in both share_pages and unshare_pages failing.
There is actually no need to grab extra references; only a reordering of
when the existing references are released was needed. This is in
accordance with the XSA-242 recommendation of not calling
_put_page_type while also holding the page_lock for that page.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/arch/x86/mm/mem_sharing.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)
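For clarity, the ordering rule being applied can be sketched as follows.
This is a hypothetical helper for illustration only, not code from this
patch; release_sketch() is a made-up name, while struct page_info,
put_page_and_type() and mem_sharing_page_unlock() are the identifiers
used in the hunks below. It assumes the page_lock for @page is held on
entry.

    /*
     * Minimal sketch of the XSA-242 ordering rule this patch enforces.
     * release_sketch() is a hypothetical helper, not actual Xen code.
     */
    static void release_sketch(struct page_info *page)
    {
        /*
         * Pre-patch (problematic) order was effectively:
         *     put_page_and_type(page);       <- drops a ref under page_lock
         *     mem_sharing_page_unlock(page);
         */

        /* Post-patch order: release the page_lock first... */
        mem_sharing_page_unlock(page);

        /* ...then drop the reference, now that no page_lock is held. */
        put_page_and_type(page);
    }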

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 5ac9d8f54c..345a1778f9 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -900,6 +900,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct two_gfns tg;
     struct rmap_iterator ri;
+    unsigned long put_count = 0;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
                  cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
@@ -964,15 +965,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
         goto err_out;
     }
 
-    /* Acquire an extra reference, for the freeing below to be safe. */
-    if ( !get_page(cpage, cd) )
-    {
-        ret = -EOVERFLOW;
-        mem_sharing_page_unlock(secondpg);
-        mem_sharing_page_unlock(firstpg);
-        goto err_out;
-    }
-
     /* Merge the lists together */
     rmap_seed_iterator(cpage, &ri);
     while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
@@ -984,7 +976,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
          * Don't change the type of rmap for the client page. */
         rmap_del(gfn, cpage, 0);
         rmap_add(gfn, spage);
-        put_page_and_type(cpage);
+        put_count++;
         d = get_domain_by_id(gfn->domain);
         BUG_ON(!d);
         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
@@ -1002,7 +994,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     /* Free the client page */
     if(test_and_clear_bit(_PGC_allocated, &cpage->count_info))
         put_page(cpage);
-    put_page(cpage);
+
+    while(put_count--)
+        put_page_and_type(cpage);
 
     /* We managed to free a domain page. */
     atomic_dec(&nr_shared_mfns);
@@ -1167,20 +1161,11 @@ int __mem_sharing_unshare_page(struct domain *d,
     {
         if ( !last_gfn )
             mem_sharing_gfn_destroy(page, d, gfn_info);
-        put_page_and_type(page);
         mem_sharing_page_unlock(page);
-        if ( last_gfn )
-        {
-            if ( !get_page(page, d) )
-            {
-                put_gfn(d, gfn);
-                domain_crash(d);
-                return -EOVERFLOW;
-            }
-            if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-                put_page(page);
+        if ( last_gfn &&
+             test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
-        }
+        put_page_and_type(page);
 
         put_gfn(d, gfn);
         return 0;
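
The share_pages() hunks follow the same rule by deferring the per-gfn
reference drops until after the page locks are gone: while the locks are
held, the merge loop only counts the references that become droppable,
and the put_page_and_type() calls happen once both locks are released.
A condensed, hypothetical sketch of that pattern (merge_free_sketch() is
a made-up name, nr_refs stands in for the rmap iteration, and the error
handling is elided; it assumes the page_locks of firstpg and secondpg
are held on entry):

    static void merge_free_sketch(struct page_info *cpage,
                                  struct page_info *firstpg,
                                  struct page_info *secondpg,
                                  unsigned long nr_refs)
    {
        unsigned long put_count = 0;

        /* Under the locks, only count the drops instead of doing them. */
        while ( nr_refs-- )
            put_count++;    /* was: put_page_and_type(cpage) under lock */

        mem_sharing_page_unlock(secondpg);
        mem_sharing_page_unlock(firstpg);

        /* Locks released; now it is safe to free the client page... */
        if ( test_and_clear_bit(_PGC_allocated, &cpage->count_info) )
            put_page(cpage);

        /* ...and to drop the references counted earlier. */
        while ( put_count-- )
            put_page_and_type(cpage);
    }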