From patchwork Fri Apr 12 04:29:30 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10897231
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Thu, 11 Apr 2019 22:29:30 -0600
Message-Id: <20190412042930.2867-2-tamas@tklengyel.com>
In-Reply-To: <20190412042930.2867-1-tamas@tklengyel.com>
References: <20190412042930.2867-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH 2/2] RFC: x86/mm: conditionally check page_lock/page_unlock ownership
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich, Roger Pau Monne

Patch cf4b30dca0a "Add debug code to detect illegal page_lock and
put_page_type ordering" added extra sanity checking to
page_lock/page_unlock for debug builds, with the assumption that no
hypervisor path ever locks two pages at once. This assumption doesn't
hold during memory sharing.

This is only an RFC, as I currently don't have a better idea of how to
resolve this issue while also keeping the sanity check in place.
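To illustrate the shape of the change, here is a minimal, self-contained sketch of how a lock_check parameter can gate the single-lock assertion. This is not the Xen code: struct page_info, the per-CPU tracking pointer, and the return convention are simplified stand-ins.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for Xen's struct page_info; only the field
 * this model needs is present. */
struct page_info { bool locked; };

/* Models the per-CPU "currently locked page" pointer consulted by the
 * debug-build ownership check. */
static struct page_info *current_locked_page;

/* Lock a page; the ownership assertion only fires when the caller
 * opts in via lock_check, mirroring how mem_sharing passes 0 to
 * tolerate holding two page locks at once. */
static int page_lock(struct page_info *page, bool lock_check)
{
    if ( lock_check )
        assert(current_locked_page == NULL); /* no page may already be held */
    page->locked = true;
    current_locked_page = page;
    return 1;
}

static void page_unlock(struct page_info *page, bool lock_check)
{
    if ( lock_check )
        assert(current_locked_page == page); /* must unlock what we locked */
    page->locked = false;
    current_locked_page = NULL;
}
```

With lock_check true (the PV PTE-update paths), taking a second page lock while one is held trips the assertion; the mem_sharing paths pass 0 so their nested locking is not flagged.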
Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
---
 xen/arch/x86/domain.c           |  4 ++--
 xen/arch/x86/mm.c               | 20 +++++++++++---------
 xen/arch/x86/mm/mem_sharing.c   |  4 ++--
 xen/arch/x86/pv/grant_table.c   | 12 ++++++------
 xen/arch/x86/pv/ro-page-fault.c |  6 +++---
 xen/include/asm-x86/mm.h        |  4 ++--
 6 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8d579e2cf9..93cda1ccdd 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -995,13 +995,13 @@ int arch_set_info_guest(
         {
             struct page_info *page = page_list_remove_head(&d->page_list);

-            if ( page_lock(page) )
+            if ( page_lock(page, 1) )
             {
                 if ( (page->u.inuse.type_info & PGT_type_mask) ==
                      PGT_l4_page_table )
                     done = !fill_ro_mpt(page_to_mfn(page));

-                page_unlock(page);
+                page_unlock(page, 1);
             }

             page_list_add_tail(page, &d->page_list);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a88cd9ce7c..ff734e362c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2030,11 +2030,12 @@ static inline bool current_locked_page_ne_check(struct page_info *page) {
 #define current_locked_page_ne_check(x) true
 #endif

-int page_lock(struct page_info *page)
+int page_lock(struct page_info *page, bool lock_check)
 {
     unsigned long x, nx;

-    ASSERT(current_locked_page_check(NULL));
+    if ( lock_check )
+        ASSERT(current_locked_page_check(NULL));

     do {
         while ( (x = page->u.inuse.type_info) & PGT_locked )
@@ -2051,11 +2052,12 @@ int page_lock(struct page_info *page)
     return 1;
 }

-void page_unlock(struct page_info *page)
+void page_unlock(struct page_info *page, bool lock_check)
 {
     unsigned long x, nx, y = page->u.inuse.type_info;

-    ASSERT(current_locked_page_check(page));
+    if ( lock_check )
+        ASSERT(current_locked_page_check(page));

     do {
         x = y;
@@ -3897,7 +3899,7 @@ long do_mmu_update(
             }
             va = _p(((unsigned long)va & PAGE_MASK) +
                     (req.ptr & ~PAGE_MASK));
-            if ( page_lock(page) )
+            if ( page_lock(page, 1) )
             {
                 switch ( page->u.inuse.type_info & PGT_type_mask )
                 {
@@ -3954,7 +3956,7 @@ long do_mmu_update(
                     rc = 0;
                     break;
                 }
-                page_unlock(page);
+                page_unlock(page, 1);
                 if ( rc == -EINTR )
                     rc = -ERESTART;
             }
@@ -4247,7 +4249,7 @@ static int __do_update_va_mapping(
     if ( unlikely(!gl1pg) )
         goto out;

-    if ( !page_lock(gl1pg) )
+    if ( !page_lock(gl1pg, 1) )
     {
         put_page(gl1pg);
         goto out;
@@ -4255,7 +4257,7 @@ static int __do_update_va_mapping(
     if ( (gl1pg->u.inuse.type_info & PGT_type_mask) != PGT_l1_page_table )
     {
-        page_unlock(gl1pg);
+        page_unlock(gl1pg, 1);
         put_page(gl1pg);
         goto out;
     }
@@ -4263,7 +4265,7 @@ static int __do_update_va_mapping(
     rc = mod_l1_entry(pl1e, val, mfn_x(gl1mfn), MMU_NORMAL_PT_UPDATE, v,
                       pg_owner);

-    page_unlock(gl1pg);
+    page_unlock(gl1pg, 1);
     put_page(gl1pg);

  out:
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 345a1778f9..777af7f7c7 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -118,7 +118,7 @@ static inline int mem_sharing_page_lock(struct page_info *pg)
     pg_lock_data_t *pld = &(this_cpu(__pld));

     page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
+    rc = page_lock(pg, 0);
     if ( rc )
     {
         preempt_disable();
@@ -135,7 +135,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
     page_sharing_mm_unlock(pld->mm_unlock_level,
                            &pld->recurse_count);
     preempt_enable();
-    page_unlock(pg);
+    page_unlock(pg, 0);
 }

 static inline shr_handle_t get_next_handle(void)
diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
index 5180334f42..be9bbe7c4c 100644
--- a/xen/arch/x86/pv/grant_table.c
+++ b/xen/arch/x86/pv/grant_table.c
@@ -101,7 +101,7 @@ int create_grant_pv_mapping(uint64_t addr, mfn_t frame,
         goto out_unmap;
     }

-    if ( !page_lock(page) )
+    if ( !page_lock(page, 1) )
         goto out_put;

     if ( (page->u.inuse.type_info & PGT_type_mask) != PGT_l1_page_table )
@@ -112,7 +112,7 @@ int create_grant_pv_mapping(uint64_t addr, mfn_t frame,
     rc = GNTST_okay;

 out_unlock:
-    page_unlock(page);
+    page_unlock(page, 1);
 out_put:
     put_page(page);
 out_unmap:
@@ -158,7 +158,7 @@ static bool steal_linear_address(unsigned long linear, l1_pgentry_t *out)
     if ( !page )
         goto out_unmap;

-    if ( !page_lock(page) )
+    if ( !page_lock(page, 1) )
         goto out_put;

     if ( (page->u.inuse.type_info & PGT_type_mask) != PGT_l1_page_table )
@@ -171,7 +171,7 @@ static bool steal_linear_address(unsigned long linear, l1_pgentry_t *out)
     *out = ol1e;

 out_unlock:
-    page_unlock(page);
+    page_unlock(page, 1);
 out_put:
     put_page(page);
 out_unmap:
@@ -264,7 +264,7 @@ int replace_grant_pv_mapping(uint64_t addr, mfn_t frame,
         goto out_unmap;
     }

-    if ( !page_lock(page) )
+    if ( !page_lock(page, 1) )
         goto out_put;

     if ( (page->u.inuse.type_info & PGT_type_mask) != PGT_l1_page_table )
@@ -297,7 +297,7 @@ int replace_grant_pv_mapping(uint64_t addr, mfn_t frame,
     rc = GNTST_okay;

 out_unlock:
-    page_unlock(page);
+    page_unlock(page, 1);
 out_put:
     put_page(page);
 out_unmap:
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index e7a7179dda..8e90e82f0b 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -277,7 +277,7 @@ static int ptwr_do_page_fault(struct x86_emulate_ctxt *ctxt,
     if ( !page )
         return X86EMUL_UNHANDLEABLE;

-    if ( !page_lock(page) )
+    if ( !page_lock(page, 1) )
     {
         put_page(page);
         return X86EMUL_UNHANDLEABLE;
@@ -285,7 +285,7 @@ static int ptwr_do_page_fault(struct x86_emulate_ctxt *ctxt,
     if ( (page->u.inuse.type_info & PGT_type_mask) != PGT_l1_page_table )
     {
-        page_unlock(page);
+        page_unlock(page, 1);
         put_page(page);
         return X86EMUL_UNHANDLEABLE;
     }
@@ -293,7 +293,7 @@ static int ptwr_do_page_fault(struct x86_emulate_ctxt *ctxt,
     ctxt->data = &ptwr_ctxt;
     rc = x86_emulate(ctxt, &ptwr_emulate_ops);

-    page_unlock(page);
+    page_unlock(page, 1);
     put_page(page);

     return rc;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6faa563167..dae52e8880 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -375,8 +375,8 @@ const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 * These two users (pte serialization and memory sharing) do not collide, since
 * sharing is only supported for hvm guests, which do not perform pv pte updates.
 */
-int page_lock(struct page_info *page);
-void page_unlock(struct page_info *page);
+int page_lock(struct page_info *page, bool lock_check);
+void page_unlock(struct page_info *page, bool lock_check);

 void put_page_type(struct page_info *page);
 int get_page_type(struct page_info *page, unsigned long type);