From patchwork Thu May 2 22:13:43 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10927687
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Thu, 2 May 2019 16:13:43 -0600
Message-Id: <20190502221345.18459-2-tamas@tklengyel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190502221345.18459-1-tamas@tklengyel.com>
References: <20190502221345.18459-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v4 2/4] x86/mem_sharing: copy a page_lock version
 to be internal to memshr
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
 Roger Pau Monne

Patch cf4b30dca0a "Add debug code to detect illegal page_lock and
put_page_type ordering" added extra sanity checking to
page_lock/page_unlock for debug builds, on the assumption that no
hypervisor path ever locks two pages at once. This assumption doesn't
hold during memory sharing, so we copy a version of page_lock/unlock to
be used exclusively in the memory sharing subsystem, without the sanity
checks.

Signed-off-by: Tamas K Lengyel
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/arch/x86/mm/mem_sharing.c | 43 +++++++++++++++++++++++++++++++----
 xen/include/asm-x86/mm.h      | 14 +----------
 2 files changed, 40 insertions(+), 17 deletions(-)
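
A note for reviewers (illustrative, not part of the patch): memory
sharing is the one path that legitimately holds two page locks at once.
The mm.h comment removed below documents the discipline that makes this
safe: always lock pages in increasing order. A minimal sketch of that
ordering rule, built on the wrappers from this patch (the
lock_page_pair helper itself is hypothetical):

/*
 * Hypothetical helper, shown only to illustrate the ordering rule:
 * every caller acquires a pair of locks in the same (increasing)
 * order, so an A-B/B-A deadlock between two CPUs cannot occur.
 */
static bool lock_page_pair(struct page_info *pg1, struct page_info *pg2)
{
    struct page_info *lo = pg1 < pg2 ? pg1 : pg2;
    struct page_info *hi = pg1 < pg2 ? pg2 : pg1;

    if ( !mem_sharing_page_lock(lo) )
        return false;

    if ( hi != lo && !mem_sharing_page_lock(hi) )
    {
        mem_sharing_page_unlock(lo);
        return false;
    }

    return true;
}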
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4b3a094481..baae7ceeda 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -112,13 +112,48 @@ static inline void page_sharing_dispose(struct page_info *page)
 
 #endif /* MEM_SHARING_AUDIT */
 
-static inline int mem_sharing_page_lock(struct page_info *pg)
+/*
+ * Private implementations of page_lock/unlock to bypass PV-only
+ * sanity checks not applicable to mem-sharing.
+ */
+static inline bool _page_lock(struct page_info *page)
 {
-    int rc;
+    unsigned long x, nx;
+
+    do {
+        while ( (x = page->u.inuse.type_info) & PGT_locked )
+            cpu_relax();
+        nx = x + (1 | PGT_locked);
+        if ( !(x & PGT_validated) ||
+             !(x & PGT_count_mask) ||
+             !(nx & PGT_count_mask) )
+            return false;
+    } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
+
+    return true;
+}
+
+static inline void _page_unlock(struct page_info *page)
+{
+    unsigned long x, nx, y = page->u.inuse.type_info;
+
+    do {
+        x = y;
+        ASSERT((x & PGT_count_mask) && (x & PGT_locked));
+
+        nx = x - (1 | PGT_locked);
+        /* We must not drop the last reference here. */
+        ASSERT(nx & PGT_count_mask);
+    } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
+}
+
+static inline bool mem_sharing_page_lock(struct page_info *pg)
+{
+    bool rc;
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
     page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
+    rc = _page_lock(pg);
     if ( rc )
     {
         preempt_disable();
@@ -135,7 +170,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
     page_sharing_mm_unlock(pld->mm_unlock_level,
                            &pld->recurse_count);
     preempt_enable();
-    page_unlock(pg);
+    _page_unlock(pg);
 }
 
 static inline shr_handle_t get_next_handle(void)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6faa563167..7dc7e33f73 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -356,24 +356,12 @@ struct platform_bad_page {
 const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 
 /* Per page locks:
- * page_lock() is used for two purposes: pte serialization, and memory sharing.
+ * page_lock() is used for pte serialization.
  *
  * All users of page lock for pte serialization live in mm.c, use it
  * to lock a page table page during pte updates, do not take other locks within
  * the critical section delimited by page_lock/unlock, and perform no
  * nesting.
- *
- * All users of page lock for memory sharing live in mm/mem_sharing.c. Page_lock
- * is used in memory sharing to protect addition (share) and removal (unshare)
- * of (gfn,domain) tupples to a list of gfn's that the shared page is currently
- * backing. Nesting may happen when sharing (and locking) two pages -- deadlock
- * is avoided by locking pages in increasing order.
- * All memory sharing code paths take the p2m lock of the affected gfn before
- * taking the lock for the underlying page. We enforce ordering between page_lock
- * and p2m_lock using an mm-locks.h construct.
- *
- * These two users (pte serialization and memory sharing) do not collide, since
- * sharing is only supported for hvm guests, which do not perform pv pte updates.
  */
 int page_lock(struct page_info *page);
 void page_unlock(struct page_info *page);
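
(Reviewer aside, illustrative only.) The private _page_lock/_page_unlock
above keep page_lock()'s core trick: the type-count reference and the
PGT_locked bit are updated in a single cmpxchg, so the lock bit can never
be held without a backing type reference. A rough user-space rendering of
the same loops in C11 atomics, with made-up PGT_* values rather than
Xen's real bit layout:

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative bit layout only; Xen's real PGT_* constants differ. */
#define PGT_locked     (1UL << 55)
#define PGT_validated  (1UL << 56)
#define PGT_count_mask ((1UL << 17) - 1)

struct demo_page {
    _Atomic unsigned long type_info;  /* stands in for u.inuse.type_info */
};

/* Mirror of _page_lock(): wait for PGT_locked to clear, then take a
 * type-count reference and set the lock bit in one compare-and-swap. */
static bool demo_page_lock(struct demo_page *page)
{
    unsigned long x, nx;

    do {
        while ( (x = atomic_load(&page->type_info)) & PGT_locked )
            ;                         /* cpu_relax() in the hypervisor */
        nx = x + (1 | PGT_locked);    /* +1 to the count, set PGT_locked */
        if ( !(x & PGT_validated) ||  /* type not validated */
             !(x & PGT_count_mask) || /* no existing type reference */
             !(nx & PGT_count_mask) ) /* reference count would overflow */
            return false;
    } while ( !atomic_compare_exchange_weak(&page->type_info, &x, nx) );

    return true;
}

/* Mirror of _page_unlock(): drop the reference and the lock bit in one
 * atomic step; on CAS failure y is refreshed, as in the Xen loop. */
static void demo_page_unlock(struct demo_page *page)
{
    unsigned long x, nx, y = atomic_load(&page->type_info);

    do {
        x = y;
        assert((x & PGT_count_mask) && (x & PGT_locked));
        nx = x - (1 | PGT_locked);
        assert(nx & PGT_count_mask);  /* must not drop the last reference */
    } while ( !atomic_compare_exchange_weak(&page->type_info, &y, nx) );
}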