From patchwork Fri Apr 26 17:21:35 2019
X-Patchwork-Id: 10919479
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
    Roger Pau Monne
Date: Fri, 26 Apr 2019 11:21:35 -0600
Message-Id: <20190426172138.14669-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v3 1/4] x86/mem_sharing: reorder when pages are
    unlocked and released

Calling _put_page_type while also holding the page_lock for that page
can cause a deadlock.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: Roger Pau Monne
---
v3: simplified patch by keeping the additional references already in-place
---
 xen/arch/x86/mm/mem_sharing.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index dfc279d371..e2f74ac770 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EBUSY;
     }
 
-    /* We can only change the type if count is one */
-    /* Because we are locking pages individually, we need to drop
-     * the lock here, while the page is typed. We cannot risk the
-     * race of page_unlock and then put_page_type. */
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
@@ -660,12 +656,12 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EEXIST;
     }
 
-    /* Drop the final typecount */
-    put_page_and_type(page);
-
     /* Now that we've dropped the type, we can unlock */
     mem_sharing_page_unlock(page);
 
+    /* Drop the final typecount */
+    put_page_and_type(page);
+
     /* Change the owner */
     ASSERT(page_get_owner(page) == dom_cow);
     page_set_owner(page, d);
@@ -900,6 +896,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct two_gfns tg;
     struct rmap_iterator ri;
+    unsigned long put_count = 0;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
                  cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
@@ -984,7 +981,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
          * Don't change the type of rmap for the client page. */
         rmap_del(gfn, cpage, 0);
         rmap_add(gfn, spage);
-        put_page_and_type(cpage);
+        put_count++;
         d = get_domain_by_id(gfn->domain);
         BUG_ON(!d);
         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
@@ -999,6 +996,10 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     mem_sharing_page_unlock(secondpg);
     mem_sharing_page_unlock(firstpg);
 
+    BUG_ON(!put_count);
+    while ( put_count-- )
+        put_page_and_type(cpage);
+
     /* Free the client page */
     if(test_and_clear_bit(_PGC_allocated, &cpage->count_info))
         put_page(cpage);
@@ -1167,8 +1168,8 @@ int __mem_sharing_unshare_page(struct domain *d,
     {
         if ( !last_gfn )
            mem_sharing_gfn_destroy(page, d, gfn_info);
-        put_page_and_type(page);
        mem_sharing_page_unlock(page);
+        put_page_and_type(page);
        if ( last_gfn )
        {
            if ( !get_page(page, dom_cow) )
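
The fix is uniformly to unlock first and drop the type reference afterwards; in
share_pages() the puts are counted while the rmap is walked under the lock and
only issued once both pages are unlocked. A self-contained sketch of that
"defer the put" pattern, with hypothetical names (illustrative only, not code
from this patch):

    /* Hypothetical sketch: remember how many references to drop while the
     * per-page lock is held, and only drop them after the lock is released,
     * since the release path must never run under that lock. */
    #include <assert.h>
    #include <stddef.h>

    struct page { int locked; unsigned long refs; };

    static void lock_page(struct page *p)   { assert(!p->locked); p->locked = 1; }
    static void unlock_page(struct page *p) { assert(p->locked);  p->locked = 0; }

    /* Must never be called with p->locked set. */
    static void put_ref(struct page *p) { assert(!p->locked); p->refs--; }

    static void unshare(struct page *p, size_t mappings)
    {
        size_t put_count = 0;

        lock_page(p);
        while ( mappings-- )
            put_count++;          /* defer the puts instead of doing them here */
        unlock_page(p);

        while ( put_count-- )     /* safe: the page lock is no longer held */
            put_ref(p);
    }

    int main(void)
    {
        struct page pg = { 0, 3 };

        unshare(&pg, 3);
        assert(pg.refs == 0);
        return 0;
    }
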
From patchwork Fri Apr 26 17:21:36 2019
X-Patchwork-Id: 10919475
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
    Roger Pau Monne
Date: Fri, 26 Apr 2019 11:21:36 -0600
Message-Id: <20190426172138.14669-2-tamas@tklengyel.com>
In-Reply-To: <20190426172138.14669-1-tamas@tklengyel.com>
References: <20190426172138.14669-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v3 2/4] x86/mem_sharing: introduce and use
    page_lock_memshr instead of page_lock

Patch cf4b30dca0a "Add debug code to detect illegal page_lock and
put_page_type ordering" added extra sanity checking to page_lock/page_unlock
for debug builds, with the assumption that no hypervisor path ever locks two
pages at once.

This assumption doesn't hold during memory sharing, so we introduce separate
functions, page_lock_memshr and page_unlock_memshr, to be used exclusively in
the memory sharing subsystem. These functions are also placed behind their
appropriate Kconfig gates.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
---
v3: this patch was "x86/mm: conditionally check page_lock/page_unlock ownership"
---
 xen/arch/x86/mm.c             | 46 ++++++++++++++++++++++++++++-------
 xen/arch/x86/mm/mem_sharing.c |  4 +--
 xen/include/asm-x86/mm.h      |  6 ++++-
 3 files changed, 44 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 45fadbab61..c2c92a96ac 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2030,12 +2030,11 @@ static inline bool current_locked_page_ne_check(struct page_info *page) {
 #define current_locked_page_ne_check(x) true
 #endif
 
-int page_lock(struct page_info *page)
+#if defined(CONFIG_PV) || defined(CONFIG_HAS_MEM_SHARING)
+static int _page_lock(struct page_info *page)
 {
     unsigned long x, nx;
 
-    ASSERT(current_locked_page_check(NULL));
-
     do {
         while ( (x = page->u.inuse.type_info) & PGT_locked )
             cpu_relax();
@@ -2046,17 +2045,13 @@ int page_lock(struct page_info *page)
             return 0;
     } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
 
-    current_locked_page_set(page);
-
     return 1;
 }
 
-void page_unlock(struct page_info *page)
+static void _page_unlock(struct page_info *page)
 {
     unsigned long x, nx, y = page->u.inuse.type_info;
 
-    ASSERT(current_locked_page_check(page));
-
     do {
         x = y;
         ASSERT((x & PGT_count_mask) && (x & PGT_locked));
@@ -2065,11 +2060,44 @@ void page_unlock(struct page_info *page)
         /* We must not drop the last reference here. */
         ASSERT(nx & PGT_count_mask);
     } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
+}
+#endif
 
-    current_locked_page_set(NULL);
+#ifdef CONFIG_HAS_MEM_SHARING
+int page_lock_memshr(struct page_info *page)
+{
+    return _page_lock(page);
 }
 
+void page_unlock_memshr(struct page_info *page)
+{
+    _page_unlock(page);
+}
+#endif
+
 #ifdef CONFIG_PV
+int page_lock(struct page_info *page)
+{
+    int rc;
+
+    ASSERT(current_locked_page_check(NULL));
+
+    rc = _page_lock(page);
+
+    current_locked_page_set(page);
+
+    return rc;
+}
+
+void page_unlock(struct page_info *page)
+{
+    ASSERT(current_locked_page_check(page));
+
+    _page_unlock(page);
+
+    current_locked_page_set(NULL);
+}
+
 /*
  * PTE flags that a guest may change without re-validating the PTE.
  * All other bits affect translation, caching, or Xen's safety.
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e2f74ac770..4b60bab28b 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -118,7 +118,7 @@ static inline int mem_sharing_page_lock(struct page_info *pg)
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
     page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
+    rc = page_lock_memshr(pg);
     if ( rc )
     {
         preempt_disable();
@@ -135,7 +135,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
     page_sharing_mm_unlock(pld->mm_unlock_level,
                            &pld->recurse_count);
     preempt_enable();
-    page_unlock(pg);
+    page_unlock_memshr(pg);
 }
 
 static inline shr_handle_t get_next_handle(void)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6faa563167..ba49eee24d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -356,7 +356,8 @@ struct platform_bad_page {
 const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 
 /* Per page locks:
- * page_lock() is used for two purposes: pte serialization, and memory sharing.
+ * page_lock() is used for pte serialization.
+ * page_lock_memshr() is used for memory sharing.
  *
  * All users of page lock for pte serialization live in mm.c, use it
  * to lock a page table page during pte updates, do not take other locks within
@@ -378,6 +379,9 @@ const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 int page_lock(struct page_info *page);
 void page_unlock(struct page_info *page);
 
+int page_lock_memshr(struct page_info *page);
+void page_unlock_memshr(struct page_info *page);
+
 void put_page_type(struct page_info *page);
 int get_page_type(struct page_info *page, unsigned long type);
 int put_page_type_preemptible(struct page_info *page);
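
The debug check being worked around records the single page the current CPU has
locked, so taking a second page lock trips an assertion, and memory sharing
legitimately holds two page locks at once (source and client page, as seen in
share_pages() above). A minimal, self-contained model of that check,
with hypothetical names (not the actual Xen implementation):

    /* Hypothetical model of a "only one page locked per CPU" debug check. */
    #include <assert.h>
    #include <stddef.h>

    struct page { int locked; };

    static struct page *current_locked_page; /* per-CPU in the real code */

    static void checked_lock(struct page *p)
    {
        assert(current_locked_page == NULL); /* fires if a page is already locked */
        p->locked = 1;
        current_locked_page = p;
    }

    static void checked_unlock(struct page *p)
    {
        assert(current_locked_page == p);
        p->locked = 0;
        current_locked_page = NULL;
    }

    int main(void)
    {
        struct page a = { 0 }, b = { 0 };

        checked_lock(&a);
        checked_lock(&b); /* sharing locks two pages: the assert fires here */
        checked_unlock(&b);
        checked_unlock(&a);
        return 0;
    }

Separate page_lock_memshr/page_unlock_memshr entry points let the sharing code
keep using the underlying lock while only the PV page-table paths carry the
one-page-per-CPU ownership assertions.
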
From patchwork Fri Apr 26 17:21:37 2019
X-Patchwork-Id: 10919481
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Jan Beulich, Roger Pau Monne
Date: Fri, 26 Apr 2019 11:21:37 -0600
Message-Id: <20190426172138.14669-3-tamas@tklengyel.com>
In-Reply-To: <20190426172138.14669-1-tamas@tklengyel.com>
References: <20190426172138.14669-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v3 3/4] x86/mem_sharing: enable mem_share audit
    mode only in debug builds

Improves performance for release builds.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/include/asm-x86/mem_sharing.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 0e77b7d935..52ea91efa0 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -25,7 +25,9 @@
 #include 
 
 /* Auditing of memory sharing code? */
+#ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
+#endif
 
 typedef uint64_t shr_handle_t;
 
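
With MEM_SHARING_AUDIT left undefined when NDEBUG is set, audit-only code
compiles away in release builds. A rough, self-contained illustration of the
gating pattern, with hypothetical names (not the Xen sources):

    /* Hypothetical sketch: audit-only code that disappears in release builds,
     * where NDEBUG is defined and MEM_SHARING_AUDIT stays undefined (an
     * undefined identifier evaluates to 0 in #if). */
    #include <stdio.h>

    #ifndef NDEBUG
    #define MEM_SHARING_AUDIT 1
    #endif

    static void audit_shared_pages(void)
    {
    #if MEM_SHARING_AUDIT
        puts("auditing shared page lists"); /* debug builds only */
    #endif
    }

    int main(void)
    {
        audit_shared_pages();
        return 0;
    }
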
From patchwork Fri Apr 26 17:21:38 2019
X-Patchwork-Id: 10919477
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Stefano Stabellini,
    Jan Beulich, Roger Pau Monne
Date: Fri, 26 Apr 2019 11:21:38 -0600
Message-Id: <20190426172138.14669-4-tamas@tklengyel.com>
In-Reply-To: <20190426172138.14669-1-tamas@tklengyel.com>
References: <20190426172138.14669-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v3 4/4] x86/mem_sharing: compile mem_sharing
    subsystem only when kconfig is enabled

Disable it by default as it is only an experimental subsystem.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: George Dunlap
---
 xen/arch/x86/Kconfig              |  8 +++++++-
 xen/arch/x86/domctl.c             |  2 ++
 xen/arch/x86/mm.c                 |  4 ++--
 xen/arch/x86/mm/Makefile          |  2 +-
 xen/arch/x86/x86_64/compat/mm.c   |  2 ++
 xen/arch/x86/x86_64/mm.c          |  2 ++
 xen/common/Kconfig                |  3 ---
 xen/common/domain.c               |  2 +-
 xen/common/grant_table.c          |  2 +-
 xen/common/memory.c               |  2 +-
 xen/common/vm_event.c             |  6 +++---
 xen/include/asm-x86/mem_sharing.h | 31 +++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h          |  3 +++
 xen/include/xen/sched.h           |  2 +-
 xen/include/xsm/dummy.h           |  2 +-
 xen/include/xsm/xsm.h             |  4 ++--
 xen/xsm/dummy.c                   |  2 +-
 xen/xsm/flask/hooks.c             |  4 ++--
 18 files changed, 63 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 4b8b07b549..af7c25543f 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -17,7 +17,6 @@ config X86
 	select HAS_KEXEC
 	select MEM_ACCESS_ALWAYS_ON
 	select HAS_MEM_PAGING
-	select HAS_MEM_SHARING
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
@@ -198,6 +197,13 @@ config PV_SHIM_EXCLUSIVE
 	  firmware, and will not function correctly in other scenarios.
 
 	  If unsure, say N.
+
+config MEM_SHARING
+	bool
+	default n
+	depends on HVM
+	prompt "Xen memory sharing support" if EXPERT = "y"
+
 endmenu
 
 source "common/Kconfig"
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9bf2d0820f..bc9e024ccc 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1231,9 +1231,11 @@ long arch_do_domctl(
         break;
     }
 
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_mem_sharing_op:
         ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
         break;
+#endif
 
 #if P2M_AUDIT && defined(CONFIG_HVM)
     case XEN_DOMCTL_audit_p2m:
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c2c92a96ac..c3b0f3115c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2030,7 +2030,7 @@ static inline bool current_locked_page_ne_check(struct page_info *page) {
 #define current_locked_page_ne_check(x) true
 #endif
 
-#if defined(CONFIG_PV) || defined(CONFIG_HAS_MEM_SHARING)
+#if defined(CONFIG_PV) || defined(CONFIG_MEM_SHARING)
 static int _page_lock(struct page_info *page)
 {
     unsigned long x, nx;
@@ -2063,7 +2063,7 @@ static void _page_unlock(struct page_info *page)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 int page_lock_memshr(struct page_info *page)
 {
     return _page_lock(page);
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 5a17646f98..5010a29d6c 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += mem_paging.o
-obj-y += mem_sharing.o
+obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-y += p2m.o p2m-pt.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
 obj-y += paging.o
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 32410ed273..d4c6be3032 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -152,8 +152,10 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d8f558bc3a..51d1d511f2 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -993,8 +993,10 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index c838506241..80575cac10 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -45,9 +45,6 @@ config MEM_ACCESS
 config HAS_MEM_PAGING
 	bool
 
-config HAS_MEM_SHARING
-	bool
-
 config HAS_PDX
 	bool
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 88bbe984bc..bb072cf93f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -926,7 +926,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     xfree(d->vm_event_paging);
 #endif
     xfree(d->vm_event_monitor);
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     xfree(d->vm_event_share);
 #endif
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 80728ea57d..6c40dccae9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3760,7 +3760,7 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
 {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 86567e6117..915f2cee1a 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1676,7 +1676,7 @@ int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
         return -EAGAIN;
     }
 #endif
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
     {
         if ( page )
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 6e68be47bc..163a671cea 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -544,7 +544,7 @@ static void monitor_notification(struct vcpu *v, unsigned int port)
     vm_event_resume(v->domain, v->domain->vm_event_monitor);
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
@@ -574,7 +574,7 @@ void vm_event_cleanup(struct domain *d)
         destroy_waitqueue_head(&d->vm_event_monitor->wq);
         (void)vm_event_disable(d, &d->vm_event_monitor);
     }
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( vm_event_check_ring(d->vm_event_share) )
     {
         destroy_waitqueue_head(&d->vm_event_share->wq);
@@ -720,7 +720,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
         }
         break;
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
     {
         rc = -EINVAL;
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 52ea91efa0..e0f8792f52 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -24,6 +24,8 @@
 #include 
 #include 
 
+#ifdef CONFIG_MEM_SHARING
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -98,4 +100,33 @@ void mem_sharing_init(void);
  */
 int relinquish_shared_pages(struct domain *d);
 
+#else
+
+static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
+{
+    return 0;
+}
+static inline unsigned int mem_sharing_get_nr_shared_mfns(void)
+{
+    return 0;
+}
+static inline int mem_sharing_unshare_page(struct domain *d,
+                                           unsigned long gfn,
+                                           uint16_t flags)
+{
+    return 0;
+}
+static inline int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
+                                            bool allow_sleep)
+{
+    return 0;
+}
+static inline int relinquish_shared_pages(struct domain *d)
+{
+    return 0;
+}
+static inline void mem_sharing_init(void) {}
+
+#endif
+
 #endif /* __MEM_SHARING_H__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index ba49eee24d..a3c754cb2f 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -127,6 +127,8 @@ struct page_info
         /* For non-pinnable single-page shadows, a higher entry that points
          * at us. */
         paddr_t up;
+
+#ifdef CONFIG_MEM_SHARING
         /* For shared/sharable pages, we use a doubly-linked list
          * of all the {pfn,domain} pairs that map this page. We also include
          * an opaque handle, which is effectively a version, so that clients
@@ -134,6 +136,7 @@ struct page_info
          * This list is allocated and freed when a page is shared/unshared.
          */
         struct page_sharing_info *sharing;
+#endif
     };
 
     /* Reference count and various PGC_xxx flags and fields. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 748bb0f2f9..17cf8785fb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -462,7 +462,7 @@ struct domain
     /* Various vm_events */
 
     /* Memory sharing support */
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     struct vm_event_domain *vm_event_share;
 #endif
     /* Memory paging support */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index e628b1c6af..8afdec9fe8 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -603,7 +603,7 @@ static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 8a78d8abd3..8ec6b1a6e8 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -151,7 +151,7 @@ struct xsm_operations {
     int (*mem_paging) (struct domain *d);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     int (*mem_sharing) (struct domain *d);
 #endif
 
@@ -603,7 +603,7 @@ static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->mem_sharing(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 1fe0e746fa..6158dce814 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -129,7 +129,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, mem_paging);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     set_to_dummy_if_null(ops, mem_sharing);
 #endif
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 3d00c747f6..f5f3b42e6e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1270,7 +1270,7 @@ static int flask_mem_paging(struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static int flask_mem_sharing(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
@@ -1838,7 +1838,7 @@ static struct xsm_operations flask_ops = {
     .mem_paging = flask_mem_paging,
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     .mem_sharing = flask_mem_sharing,
 #endif
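
With the Kconfig entry above, MEM_SHARING defaults to off, depends on HVM, and
is only offered when EXPERT is enabled, while the mem_sharing.h stubs keep
callers building when the option is off. A self-contained sketch of that
compile-out pattern, with hypothetical names (not the actual Xen header):

    /* Hypothetical feature header: real declarations when the feature is
     * built, trivial static inline stubs when it is compiled out, so call
     * sites need no #ifdef guards of their own. */
    #ifdef CONFIG_MY_FEATURE
    int my_feature_unshare(unsigned long gfn);
    #else
    static inline int my_feature_unshare(unsigned long gfn)
    {
        (void)gfn;   /* feature not built: behave as a successful no-op */
        return 0;
    }
    #endif
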