From patchwork Thu Sep 14 12:58:41 2017
X-Patchwork-Submitter: Wei Liu
X-Patchwork-Id: 9953117
From: Wei Liu
To: Xen-devel
Date: Thu, 14 Sep 2017 13:58:41 +0100
Message-ID: <20170914125852.22129-13-wei.liu2@citrix.com>
In-Reply-To: <20170914125852.22129-1-wei.liu2@citrix.com>
References: <20170914125852.22129-1-wei.liu2@citrix.com>
Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich
Subject: [Xen-devel] [PATCH v5 12/23] x86/mm: move and rename map_ldt_shadow_page
List-Id: Xen developer discussion
Sender: "Xen-devel"
Add the pv prefix to it and move it to pv/mm.c. Fix call sites. Take the
chance to change v to curr and d to currd.

Signed-off-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/mm.c           | 73 -------------------------------------------
 xen/arch/x86/pv/mm.c        | 75 +++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/traps.c        |  4 +--
 xen/include/asm-x86/mm.h    |  2 --
 xen/include/asm-x86/pv/mm.h |  4 +++
 5 files changed, 81 insertions(+), 77 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index bfdba34468..8e25d15631 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -526,27 +526,6 @@ void update_cr3(struct vcpu *v)
     make_cr3(v, cr3_mfn);
 }
 
-/*
- * Read the guest's l1e that maps this address, from the kernel-mode
- * page tables.
- */
-static l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
-{
-    struct vcpu *curr = current;
-    const bool user_mode = !(curr->arch.flags & TF_kernel_mode);
-    l1_pgentry_t l1e;
-
-    if ( user_mode )
-        toggle_guest_mode(curr);
-
-    l1e = guest_get_eff_l1e(linear);
-
-    if ( user_mode )
-        toggle_guest_mode(curr);
-
-    return l1e;
-}
-
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
     /*
@@ -615,58 +594,6 @@ static int alloc_segdesc_page(struct page_info *page)
     return i == 512 ? 0 : -EINVAL;
 }
 
-
-/*
- * Map a guest's LDT page (covering the byte at @offset from start of the LDT)
- * into Xen's virtual range. Returns true if the mapping changed, false
- * otherwise.
- */
-bool map_ldt_shadow_page(unsigned int offset)
-{
-    struct vcpu *v = current;
-    struct domain *d = v->domain;
-    struct page_info *page;
-    l1_pgentry_t gl1e, *pl1e;
-    unsigned long linear = v->arch.pv_vcpu.ldt_base + offset;
-
-    BUG_ON(unlikely(in_irq()));
-
-    /*
-     * Hardware limit checking should guarantee this property. NB. This is
-     * safe as updates to the LDT can only be made by MMUEXT_SET_LDT to the
-     * current vcpu, and vcpu_reset() will block until this vcpu has been
-     * descheduled before continuing.
-     */
-    ASSERT((offset >> 3) <= v->arch.pv_vcpu.ldt_ents);
-
-    if ( is_pv_32bit_domain(d) )
-        linear = (uint32_t)linear;
-
-    gl1e = guest_get_eff_kern_l1e(linear);
-    if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
-        return false;
-
-    page = get_page_from_gfn(d, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
-    if ( unlikely(!page) )
-        return false;
-
-    if ( unlikely(!get_page_type(page, PGT_seg_desc_page)) )
-    {
-        put_page(page);
-        return false;
-    }
-
-    pl1e = &pv_ldt_ptes(v)[offset >> PAGE_SHIFT];
-    l1e_add_flags(gl1e, _PAGE_RW);
-
-    spin_lock(&v->arch.pv_vcpu.shadow_ldt_lock);
-    l1e_write(pl1e, gl1e);
-    v->arch.pv_vcpu.shadow_ldt_mapcnt++;
-    spin_unlock(&v->arch.pv_vcpu.shadow_ldt_lock);
-
-    return true;
-}
-
 static int get_page_and_type_from_mfn(
     mfn_t mfn, unsigned long type, struct domain *d, int partial,
     int preemptible)
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 4bfa322788..6890e80efd 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -22,6 +22,9 @@
 #include
 #include
+#include
+
+#include "mm.h"
 
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef mfn_to_page
@@ -58,6 +61,78 @@ l1_pgentry_t *map_guest_l1e(unsigned long linear, mfn_t *gl1mfn)
     return (l1_pgentry_t *)map_domain_page(*gl1mfn) + l1_table_offset(linear);
 }
 
+/*
+ * Read the guest's l1e that maps this address, from the kernel-mode
+ * page tables.
+ */
+static l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
+{
+    struct vcpu *curr = current;
+    const bool user_mode = !(curr->arch.flags & TF_kernel_mode);
+    l1_pgentry_t l1e;
+
+    if ( user_mode )
+        toggle_guest_mode(curr);
+
+    l1e = guest_get_eff_l1e(linear);
+
+    if ( user_mode )
+        toggle_guest_mode(curr);
+
+    return l1e;
+}
+
+/*
+ * Map a guest's LDT page (covering the byte at @offset from start of the LDT)
+ * into Xen's virtual range. Returns true if the mapping changed, false
+ * otherwise.
+ */
+bool pv_map_ldt_shadow_page(unsigned int offset)
+{
+    struct vcpu *curr = current;
+    struct domain *currd = curr->domain;
+    struct page_info *page;
+    l1_pgentry_t gl1e, *pl1e;
+    unsigned long linear = curr->arch.pv_vcpu.ldt_base + offset;
+
+    BUG_ON(unlikely(in_irq()));
+
+    /*
+     * Hardware limit checking should guarantee this property. NB. This is
+     * safe as updates to the LDT can only be made by MMUEXT_SET_LDT to the
+     * current vcpu, and vcpu_reset() will block until this vcpu has been
+     * descheduled before continuing.
+     */
+    ASSERT((offset >> 3) <= curr->arch.pv_vcpu.ldt_ents);
+
+    if ( is_pv_32bit_domain(currd) )
+        linear = (uint32_t)linear;
+
+    gl1e = guest_get_eff_kern_l1e(linear);
+    if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
+        return false;
+
+    page = get_page_from_gfn(currd, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
+    if ( unlikely(!page) )
+        return false;
+
+    if ( unlikely(!get_page_type(page, PGT_seg_desc_page)) )
+    {
+        put_page(page);
+        return false;
+    }
+
+    pl1e = &pv_ldt_ptes(curr)[offset >> PAGE_SHIFT];
+    l1e_add_flags(gl1e, _PAGE_RW);
+
+    spin_lock(&curr->arch.pv_vcpu.shadow_ldt_lock);
+    l1e_write(pl1e, gl1e);
+    curr->arch.pv_vcpu.shadow_ldt_mapcnt++;
+    spin_unlock(&curr->arch.pv_vcpu.shadow_ldt_lock);
+
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index d84db4acda..d8feef2942 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1101,7 +1101,7 @@ static int handle_gdt_ldt_mapping_fault(unsigned long offset,
     /*
      * If the fault is in another vcpu's area, it cannot be due to
      * a GDT/LDT descriptor load. Thus we can reasonably exit immediately, and
-     * indeed we have to since map_ldt_shadow_page() works correctly only on
+     * indeed we have to since pv_map_ldt_shadow_page() works correctly only on
      * accesses to a vcpu's own area.
      */
     if ( vcpu_area != curr->vcpu_id )
@@ -1113,7 +1113,7 @@ static int handle_gdt_ldt_mapping_fault(unsigned long offset,
     if ( likely(is_ldt_area) )
     {
         /* LDT fault: Copy a mapping from the guest's LDT, if it is valid. */
-        if ( likely(map_ldt_shadow_page(offset)) )
+        if ( likely(pv_map_ldt_shadow_page(offset)) )
         {
             if ( guest_mode(regs) )
                 trace_trap_two_addr(TRC_PV_GDT_LDT_MAPPING_FAULT,
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index a48d75d434..8a56bed454 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -562,8 +562,6 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void));
 int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
-bool map_ldt_shadow_page(unsigned int);
-
 #define NIL(type) ((type *)-sizeof(type))
 #define IS_NIL(ptr) (!((uintptr_t)(ptr) + sizeof(*(ptr))))
diff --git a/xen/include/asm-x86/pv/mm.h b/xen/include/asm-x86/pv/mm.h
index 3ca24cc70a..47223e38eb 100644
--- a/xen/include/asm-x86/pv/mm.h
+++ b/xen/include/asm-x86/pv/mm.h
@@ -28,6 +28,8 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs);
 long pv_set_gdt(struct vcpu *d, unsigned long *frames, unsigned int entries);
 void pv_destroy_gdt(struct vcpu *d);
 
+bool pv_map_ldt_shadow_page(unsigned int off);
+
 #else
 
 #include
@@ -43,6 +45,8 @@ static inline long pv_set_gdt(struct vcpu *d, unsigned long *frames,
 { return -EINVAL; }
 static inline void pv_destroy_gdt(struct vcpu *d) {}
 
+static inline bool pv_map_ldt_shadow_page(unsigned int off) { return false; }
+
 #endif
 
 #endif /* __X86_PV_MM_H__ */