From patchwork Thu May 2 22:13:42 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10927685
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
    Roger Pau Monne
Date: Thu, 2 May 2019 16:13:42 -0600
Message-Id: <20190502221345.18459-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v4 1/4] x86/mem_sharing: reorder when pages are unlocked and released

Calling _put_page_type while also holding the page_lock for that page
can cause a deadlock. The comment being dropped is incorrect since it's
now out-of-date.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: Roger Pau Monne
---
This series is based on Andrew Cooper's x86-next branch

v4: drop grabbing extra references
---
 xen/arch/x86/mm/mem_sharing.c | 41 ++++++++++-------------------------
 1 file changed, 11 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index dfc279d371..4b3a094481 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EBUSY;
     }
 
-    /* We can only change the type if count is one */
-    /* Because we are locking pages individually, we need to drop
-     * the lock here, while the page is typed. We cannot risk the
-     * race of page_unlock and then put_page_type. */
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
@@ -660,12 +656,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EEXIST;
     }
 
+    mem_sharing_page_unlock(page);
+
     /* Drop the final typecount */
     put_page_and_type(page);
 
-    /* Now that we've dropped the type, we can unlock */
-    mem_sharing_page_unlock(page);
-
     /* Change the owner */
     ASSERT(page_get_owner(page) == dom_cow);
     page_set_owner(page, d);
@@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct two_gfns tg;
     struct rmap_iterator ri;
+    unsigned long put_count = 0;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
                  cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
@@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
         goto err_out;
     }
 
-    /* Acquire an extra reference, for the freeing below to be safe. */
-    if ( !get_page(cpage, dom_cow) )
-    {
-        ret = -EOVERFLOW;
-        mem_sharing_page_unlock(secondpg);
-        mem_sharing_page_unlock(firstpg);
-        goto err_out;
-    }
-
     /* Merge the lists together */
     rmap_seed_iterator(cpage, &ri);
     while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
@@ -984,7 +971,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
          * Don't change the type of rmap for the client page. */
         rmap_del(gfn, cpage, 0);
         rmap_add(gfn, spage);
-        put_page_and_type(cpage);
+        put_count++;
         d = get_domain_by_id(gfn->domain);
         BUG_ON(!d);
         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
@@ -1002,7 +989,10 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     /* Free the client page */
     if(test_and_clear_bit(_PGC_allocated, &cpage->count_info))
         put_page(cpage);
-    put_page(cpage);
+
+    BUG_ON(!put_count);
+    while ( put_count-- )
+        put_page_and_type(cpage);
 
     /* We managed to free a domain page. */
     atomic_dec(&nr_shared_mfns);
@@ -1167,20 +1157,11 @@ int __mem_sharing_unshare_page(struct domain *d,
     {
         if ( !last_gfn )
             mem_sharing_gfn_destroy(page, d, gfn_info);
-        put_page_and_type(page);
         mem_sharing_page_unlock(page);
-        if ( last_gfn )
-        {
-            if ( !get_page(page, dom_cow) )
-            {
-                put_gfn(d, gfn);
-                domain_crash(d);
-                return -EOVERFLOW;
-            }
-            if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
-                put_page(page);
+        if ( last_gfn &&
+             test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
-        }
+        put_page_and_type(page);
         put_gfn(d, gfn);
         return 0;
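
In short, the rule this patch establishes for page_make_private() looks like
the following minimal sketch (illustrative only, not part of the patch;
make_private_tail() is an invented name and all error handling is omitted):

    static void make_private_tail(struct domain *d, struct page_info *page)
    {
        /* Release the PGT_locked bit first... */
        mem_sharing_page_unlock(page);

        /* ...and only then drop the final type reference; doing this the
         * other way around, with the page still locked, is what can
         * deadlock per the commit message. */
        put_page_and_type(page);

        /* Change the owner */
        page_set_owner(page, d);
    }

The share_pages() hunks apply the same rule: instead of calling
put_page_and_type(cpage) while walking the rmap under the page locks, the
calls are merely counted (put_count) and issued in one batch afterwards.
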
From patchwork Thu May 2 22:13:43 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10927687

From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
    Roger Pau Monne
Date: Thu, 2 May 2019 16:13:43 -0600
Message-Id: <20190502221345.18459-2-tamas@tklengyel.com>
In-Reply-To: <20190502221345.18459-1-tamas@tklengyel.com>
References: <20190502221345.18459-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v4 2/4] x86/mem_sharing: copy a page_lock version to be internal to memshr

Patch cf4b30dca0a "Add debug code to detect illegal page_lock and
put_page_type ordering" added extra sanity checking to page_lock/page_unlock
for debug builds with the assumption that no hypervisor path ever locks two
pages at once. This assumption doesn't hold during memory sharing so we copy
a version of page_lock/unlock to be used exclusively in the memory sharing
subsystem without the sanity checks.

Signed-off-by: Tamas K Lengyel
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/arch/x86/mm/mem_sharing.c | 43 +++++++++++++++++++++++++++++++----
 xen/include/asm-x86/mm.h      | 14 +-----------
 2 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4b3a094481..baae7ceeda 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -112,13 +112,48 @@ static inline void page_sharing_dispose(struct page_info *page)
 
 #endif /* MEM_SHARING_AUDIT */
 
-static inline int mem_sharing_page_lock(struct page_info *pg)
+/*
+ * Private implementations of page_lock/unlock to bypass PV-only
+ * sanity checks not applicable to mem-sharing.
+ */
+static inline bool _page_lock(struct page_info *page)
 {
-    int rc;
+    unsigned long x, nx;
+
+    do {
+        while ( (x = page->u.inuse.type_info) & PGT_locked )
+            cpu_relax();
+        nx = x + (1 | PGT_locked);
+        if ( !(x & PGT_validated) ||
+             !(x & PGT_count_mask) ||
+             !(nx & PGT_count_mask) )
+            return false;
+    } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
+
+    return true;
+}
+
+static inline void _page_unlock(struct page_info *page)
+{
+    unsigned long x, nx, y = page->u.inuse.type_info;
+
+    do {
+        x = y;
+        ASSERT((x & PGT_count_mask) && (x & PGT_locked));
+
+        nx = x - (1 | PGT_locked);
+        /* We must not drop the last reference here. */
+        ASSERT(nx & PGT_count_mask);
+    } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
+}
+
+static inline bool mem_sharing_page_lock(struct page_info *pg)
+{
+    bool rc;
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
     page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
+    rc = _page_lock(pg);
     if ( rc )
     {
         preempt_disable();
@@ -135,7 +170,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
         page_sharing_mm_unlock(pld->mm_unlock_level,
                                &pld->recurse_count);
     preempt_enable();
-    page_unlock(pg);
+    _page_unlock(pg);
 }
 
 static inline shr_handle_t get_next_handle(void)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6faa563167..7dc7e33f73 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -356,24 +356,12 @@ struct platform_bad_page {
 const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 
 /* Per page locks:
- * page_lock() is used for two purposes: pte serialization, and memory sharing.
+ * page_lock() is used for pte serialization.
  *
  * All users of page lock for pte serialization live in mm.c, use it
 * to lock a page table page during pte updates, do not take other locks within
 * the critical section delimited by page_lock/unlock, and perform no
 * nesting.
- *
- * All users of page lock for memory sharing live in mm/mem_sharing.c. Page_lock
- * is used in memory sharing to protect addition (share) and removal (unshare)
- * of (gfn,domain) tupples to a list of gfn's that the shared page is currently
- * backing. Nesting may happen when sharing (and locking) two pages -- deadlock
- * is avoided by locking pages in increasing order.
- * All memory sharing code paths take the p2m lock of the affected gfn before
- * taking the lock for the underlying page. We enforce ordering between page_lock
- * and p2m_lock using an mm-locks.h construct.
- *
- * These two users (pte serialization and memory sharing) do not collide, since
- * sharing is only supported for hvm guests, which do not perform pv pte updates.
  */
 int page_lock(struct page_info *page);
 void page_unlock(struct page_info *page);
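
The nesting that the generic page_lock()/page_unlock() sanity checks reject
is the one described by the mm.h comment removed above: share_pages() holds
two page locks at once, taking them in increasing MFN order to avoid
deadlock. A minimal sketch of that usage, assuming the variable names from
the share_pages() hunks in patch 1 (the real function also handles the case
where both GFNs resolve to the same page, omitted here):

    struct page_info *firstpg, *secondpg;

    /* Lock the page with the lower MFN first, so concurrent sharers agree
     * on the ordering and cannot deadlock against each other. */
    if ( mfn_x(smfn) < mfn_x(cmfn) )
    {
        firstpg = spage;
        secondpg = cpage;
    }
    else
    {
        firstpg = cpage;
        secondpg = spage;
    }

    if ( !mem_sharing_page_lock(firstpg) )
        goto err_out;
    if ( !mem_sharing_page_lock(secondpg) )
    {
        mem_sharing_page_unlock(firstpg);
        goto err_out;
    }

    /* ... both pages locked: merge rmap lists, update p2m entries ... */

    mem_sharing_page_unlock(secondpg);
    mem_sharing_page_unlock(firstpg);
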
From patchwork Thu May 2 22:13:44 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10927681

From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Jan Beulich, Roger Pau Monne
Date: Thu, 2 May 2019 16:13:44 -0600
Message-Id: <20190502221345.18459-3-tamas@tklengyel.com>
In-Reply-To: <20190502221345.18459-1-tamas@tklengyel.com>
References: <20190502221345.18459-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v4 3/4] x86/mem_sharing: enable mem_share audit mode only in debug builds

Improves performance for release builds.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/include/asm-x86/mem_sharing.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 0e77b7d935..bb19b7534f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -25,7 +25,11 @@
 #include 
 
 /* Auditing of memory sharing code? */
+#ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
+#else
+#define MEM_SHARING_AUDIT 0
+#endif
 
 typedef uint64_t shr_handle_t;
From patchwork Thu May 2 22:13:45 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10927683

From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Stefano Stabellini,
    Jan Beulich, Roger Pau Monne
Date: Thu, 2 May 2019 16:13:45 -0600
Message-Id: <20190502221345.18459-4-tamas@tklengyel.com>
In-Reply-To: <20190502221345.18459-1-tamas@tklengyel.com>
References: <20190502221345.18459-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v4 4/4] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled

Disable it by default as it is only an experimental subsystem.
Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: George Dunlap
---
v4: add ASSERT_UNREACHABLE to inlined functions where applicable & other fixups

Acked-by: Jan Beulich
---
 xen/arch/x86/Kconfig              |  6 +++++-
 xen/arch/x86/domain.c             |  2 ++
 xen/arch/x86/domctl.c             |  2 ++
 xen/arch/x86/mm.c                 |  2 ++
 xen/arch/x86/mm/Makefile          |  2 +-
 xen/arch/x86/x86_64/compat/mm.c   |  2 ++
 xen/arch/x86/x86_64/mm.c          |  2 ++
 xen/common/Kconfig                |  3 ---
 xen/common/domain.c               |  2 +-
 xen/common/grant_table.c          |  2 +-
 xen/common/memory.c               |  2 +-
 xen/common/vm_event.c             |  6 +++---
 xen/include/asm-x86/mem_sharing.h | 28 ++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h          |  3 +++
 xen/include/xen/sched.h           |  2 +-
 xen/include/xsm/dummy.h           |  2 +-
 xen/include/xsm/xsm.h             |  4 ++--
 xen/xsm/dummy.c                   |  2 +-
 xen/xsm/flask/hooks.c             |  4 ++--
 19 files changed, 60 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 4b8b07b549..600ca5c12e 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -17,7 +17,6 @@ config X86
 	select HAS_KEXEC
 	select MEM_ACCESS_ALWAYS_ON
 	select HAS_MEM_PAGING
-	select HAS_MEM_SHARING
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
@@ -198,6 +197,11 @@ config PV_SHIM_EXCLUSIVE
 	  firmware, and will not function correctly in other scenarios.
 
 	  If unsure, say N.
+
+config MEM_SHARING
+	bool "Xen memory sharing support" if EXPERT = "y"
+	depends on HVM
+
 endmenu
 
 source "common/Kconfig"
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index d2d9f2fc3c..474df8433b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2046,6 +2046,7 @@ int domain_relinquish_resources(struct domain *d)
             d->arch.auto_unmask = 0;
         }
 
+#ifdef CONFIG_MEM_SHARING
     PROGRESS(shared):
 
         if ( is_hvm_domain(d) )
@@ -2056,6 +2057,7 @@ int domain_relinquish_resources(struct domain *d)
             if ( ret )
                 return ret;
         }
+#endif
 
         spin_lock(&d->page_alloc_lock);
         page_list_splice(&d->arch.relmem_list, &d->page_list);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9bf2d0820f..bc9e024ccc 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1231,9 +1231,11 @@ long arch_do_domctl(
         break;
     }
 
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_mem_sharing_op:
         ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
         break;
+#endif
 
 #if P2M_AUDIT && defined(CONFIG_HVM)
     case XEN_DOMCTL_audit_p2m:
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 45fadbab61..f9f607fb4b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -368,7 +368,9 @@ void __init arch_init_memory(void)
 
     efi_init_memory();
 
+#ifdef CONFIG_MEM_SHARING
     mem_sharing_init();
+#endif
 
 #ifndef NDEBUG
     if ( highmem_start )
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 5a17646f98..5010a29d6c 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += mem_paging.o
-obj-y += mem_sharing.o
+obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-y += p2m.o p2m-pt.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
 obj-y += paging.o
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 32410ed273..d4c6be3032 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -152,8 +152,10 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d8f558bc3a..51d1d511f2 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -993,8 +993,10 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index c838506241..80575cac10 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -45,9 +45,6 @@ config MEM_ACCESS
 config HAS_MEM_PAGING
 	bool
 
-config HAS_MEM_SHARING
-	bool
-
 config HAS_PDX
 	bool
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 88bbe984bc..bb072cf93f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -926,7 +926,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     xfree(d->vm_event_paging);
 #endif
     xfree(d->vm_event_monitor);
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     xfree(d->vm_event_share);
 #endif
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 80728ea57d..6c40dccae9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3760,7 +3760,7 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
 {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 86567e6117..915f2cee1a 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1676,7 +1676,7 @@ int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
         return -EAGAIN;
     }
 #endif
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
     {
         if ( page )
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 6e68be47bc..163a671cea 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -544,7 +544,7 @@ static void monitor_notification(struct vcpu *v, unsigned int port)
     vm_event_resume(v->domain, v->domain->vm_event_monitor);
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
@@ -574,7 +574,7 @@ void vm_event_cleanup(struct domain *d)
         destroy_waitqueue_head(&d->vm_event_monitor->wq);
         (void)vm_event_disable(d, &d->vm_event_monitor);
     }
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( vm_event_check_ring(d->vm_event_share) )
     {
         destroy_waitqueue_head(&d->vm_event_share->wq);
@@ -720,7 +720,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
         }
         break;
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
     {
         rc = -EINVAL;
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index bb19b7534f..8edb8e4cc0 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -24,6 +24,8 @@
 #include 
 #include 
 
+#ifdef CONFIG_MEM_SHARING
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -100,4 +102,30 @@ void mem_sharing_init(void);
  */
 int relinquish_shared_pages(struct domain *d);
 
+#else
+
+static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
+{
+    return 0;
+}
+static inline unsigned int mem_sharing_get_nr_shared_mfns(void)
+{
+    return 0;
+}
+static inline int mem_sharing_unshare_page(struct domain *d,
+                                           unsigned long gfn,
+                                           uint16_t flags)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+static inline int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
+                                            bool allow_sleep)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+
+#endif
+
 #endif /* __MEM_SHARING_H__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7dc7e33f73..9c077af8ea 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -127,6 +127,8 @@ struct page_info
         /* For non-pinnable single-page shadows, a higher entry that points
          * at us. */
         paddr_t up;
+
+#ifdef CONFIG_MEM_SHARING
         /* For shared/sharable pages, we use a doubly-linked list
          * of all the {pfn,domain} pairs that map this page. We also include
          * an opaque handle, which is effectively a version, so that clients
          * This list is allocated and freed when a page is shared/unshared.
          */
         struct page_sharing_info *sharing;
+#endif
     };
 
     /* Reference count and various PGC_xxx flags and fields. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 748bb0f2f9..17cf8785fb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -462,7 +462,7 @@ struct domain
     /* Various vm_events */
 
     /* Memory sharing support */
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     struct vm_event_domain *vm_event_share;
 #endif
     /* Memory paging support */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index e628b1c6af..8afdec9fe8 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -603,7 +603,7 @@ static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 8a78d8abd3..8ec6b1a6e8 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -151,7 +151,7 @@ struct xsm_operations {
     int (*mem_paging) (struct domain *d);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     int (*mem_sharing) (struct domain *d);
 #endif
 
@@ -603,7 +603,7 @@ static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->mem_sharing(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 1fe0e746fa..6158dce814 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -129,7 +129,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, mem_paging);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     set_to_dummy_if_null(ops, mem_sharing);
 #endif
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 3d00c747f6..f5f3b42e6e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1270,7 +1270,7 @@ static int flask_mem_paging(struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static int flask_mem_sharing(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
@@ -1838,7 +1838,7 @@ static struct xsm_operations flask_ops = {
     .mem_paging = flask_mem_paging,
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     .mem_sharing = flask_mem_sharing,
 #endif
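
The compile-time pattern the series ends with, reduced to a self-contained
sketch (CONFIG_FEATURE and do_feature_op() are invented names, not Xen
symbols): the Kconfig option removes the implementation from the build,
while inline stubs keep shared callers compiling; in debug builds an
unexpected call trips ASSERT_UNREACHABLE(), in release builds it simply
fails with -EOPNOTSUPP.

    /* Sketch of the pattern only -- not actual Xen code. */
    #ifdef CONFIG_FEATURE
    int do_feature_op(struct domain *d, unsigned long gfn); /* real code built in */
    #else
    static inline int do_feature_op(struct domain *d, unsigned long gfn)
    {
        ASSERT_UNREACHABLE();  /* debug builds catch unexpected callers */
        return -EOPNOTSUPP;    /* release builds fail the request cleanly */
    }
    #endif
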