From patchwork Fri Aug 11 14:21:43 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9896171
From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 11 Aug 2017 15:21:43 +0100
Message-ID: <20170811142143.35787-13-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170811142143.35787-1-paul.durrant@citrix.com>
References: <20170811142143.35787-1-paul.durrant@citrix.com>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
 Ian Jackson, Tim Deegan, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v2 12/12] x86/hvm/ioreq: add a new mappable
 resource type...

... XENMEM_resource_ioreq_server

This patch adds support for a new resource type that can be mapped using
the XENMEM_acquire_resource memory op.

If an emulator makes use of this resource type then, instead of mapping
gfns, the IOREQ server will allocate pages from the heap. These pages
will never be present in the P2M of the guest at any point and so are not
vulnerable to any direct attack by the guest. They are only ever
accessible by Xen and any domain that has mapping privilege over the
guest (which may or may not be limited to the domain running the
emulator).

NOTE: Use of the new resource type is not compatible with use of
      XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
      set.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu
---
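For illustration, a minimal sketch (not part of this patch) of how an
emulator might use the new resource type instead of mapping gfns. The
memory_op() hypercall wrapper and acquire_ioreq_pages() helper are
hypothetical stand-ins for whatever the emulator's environment provides;
the structure fields follow the public header as modified below, with the
domid field assumed from the XENMEM_acquire_resource support added
earlier in this series:

    #include <xen/memory.h>  /* xen_mem_acquire_resource_t */

    static int acquire_ioreq_pages(domid_t domid, ioservid_t id)
    {
        xen_mem_acquire_resource_t xmar = {
            .domid = domid,                       /* target (guest) domain */
            .type = XENMEM_resource_ioreq_server, /* new resource type */
            .id = id,                             /* ioreq server id */
            .frame = 0,                           /* start of the resource */
            .nr_frames = 2,                       /* bufioreq and ioreq pages,
                                                     assuming buffered ioreqs
                                                     are in use */
        };

        /*
         * Per hvm_get_ioreq_server_frame() below, frame 0 is the buffered
         * ioreq page and frame 1 is the synchronous ioreq page.
         */
        return memory_op(XENMEM_acquire_resource, &xmar); /* hypothetical
                                                             wrapper */
    }

An emulator taking this path still needs XEN_DMOP_get_ioreq_server_info
to obtain the buffered ioreq event channel port but, per the NOTE above,
must then invoke it with the XEN_DMOP_no_gfns flag set.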
 xen/arch/x86/hvm/ioreq.c        | 136 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm.c               |  27 ++++++++
 xen/include/asm-x86/hvm/ioreq.h |   2 +
 xen/include/public/hvm/dm_op.h  |   4 ++
 xen/include/public/memory.h     |   3 +
 5 files changed, 172 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 795c198f95..9e6838dab6 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -231,6 +231,15 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /* Make sure the page has not been allocated */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -253,6 +262,60 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *currd = current->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /* Make sure the page has not been mapped */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    /*
+     * Allocated IOREQ server pages are assigned to the emulating
+     * domain, not the target domain. This is because the emulator is
+     * likely to be destroyed after the target domain has been torn
+     * down, and we must use MEMF_no_refcount otherwise page allocation
+     * could fail if the emulating domain has already reached its
+     * maximum allocation.
+     */
+    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
+    if ( !iorp->page )
+        return -ENOMEM;
+
+    get_page(iorp->page, currd);
+
+    iorp->va = __map_domain_page_global(iorp->page);
+    if ( !iorp->va )
+    {
+        put_page(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    clear_page(iorp->va);
+    return 0;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( !iorp->page )
+        return;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page(iorp->page);
+    iorp->page = NULL;
+}
+
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
@@ -457,6 +520,27 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }
 
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc = -ENOMEM;
+
+    rc = hvm_alloc_ioreq_mfn(s, false);
+
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
+        rc = hvm_alloc_ioreq_mfn(s, true);
+
+    if ( rc )
+        hvm_free_ioreq_mfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+{
+    hvm_free_ioreq_mfn(s, true);
+    hvm_free_ioreq_mfn(s, false);
+}
+
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
 {
     unsigned int i;
@@ -583,7 +667,18 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
+
+    /*
+     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
 
     return rc;
 }
@@ -593,6 +688,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
     hvm_ioreq_server_free_rangesets(s);
 }
 
@@ -745,6 +841,9 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
             rc = hvm_ioreq_server_map_pages(s);
             if ( rc )
                 break;
+
+            gdprintk(XENLOG_INFO, "d%d ioreq server %u using gfns\n",
+                     d->domain_id, s->id);
         }
 
         if ( ioreq_gfn )
@@ -767,6 +866,43 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
+mfn_t hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                                 unsigned int idx)
+{
+    struct hvm_ioreq_server *s;
+    mfn_t mfn = INVALID_MFN;
+
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server.list,
+                          list_entry )
+    {
+        int rc;
+
+        if ( s == d->arch.hvm_domain.default_ioreq_server )
+            continue;
+
+        if ( s->id != id )
+            continue;
+
+        rc = hvm_ioreq_server_alloc_pages(s);
+        if ( rc )
+            break;
+
+        if ( idx == 0 )
+            mfn = _mfn(page_to_mfn(s->bufioreq.page));
+        else if ( idx == 1 )
+            mfn = _mfn(page_to_mfn(s->ioreq.page));
+
+        break;
+    }
+
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    return mfn;
+}
+
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c0cd63689a..b135b22a9a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -122,6 +122,7 @@
 #include
 #include
 #include
+#include <asm/hvm/ioreq.h>
 
 /* Mapping of the fixmap space needed early. */
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
@@ -4727,6 +4728,27 @@ static int xenmem_acquire_grant_table(struct domain *d,
     return 0;
 }
 
+static int xenmem_acquire_ioreq_server(struct domain *d,
+                                       unsigned int id,
+                                       unsigned long frame,
+                                       unsigned long nr_frames,
+                                       unsigned long mfn_list[])
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn = hvm_get_ioreq_server_frame(d, id, frame + i);
+
+        if ( mfn_eq(mfn, INVALID_MFN) )
+            return -EINVAL;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+}
+
 static int xenmem_acquire_resource(xen_mem_acquire_resource_t *xmar)
 {
     struct domain *d, *currd = current->domain;
@@ -4761,6 +4783,11 @@ static int xenmem_acquire_resource(xen_mem_acquire_resource_t *xmar)
                                          mfn_list);
         break;
 
+    case XENMEM_resource_ioreq_server:
+        rc = xenmem_acquire_ioreq_server(d, xmar->id, xmar->frame,
+                                         xmar->nr_frames, mfn_list);
+        break;
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 1829fcf43e..032aeb6fa9 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -31,6 +31,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port);
+mfn_t hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                                 unsigned int idx);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end);
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 9677bd74e7..59b6006910 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -90,6 +90,10 @@ struct xen_dm_op_create_ioreq_server {
  * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
  * respectively. (If the IOREQ Server is not handling buffered emulation
  * only <ioreq_gfn> will be valid).
+ *
+ * NOTE: To access the synchronous ioreq structures and buffered ioreq
+ *       ring, it is preferable to use the XENMEM_acquire_resource memory
+ *       op specifying resource type XENMEM_resource_ioreq_server.
  */
 #define XEN_DMOP_get_ioreq_server_info 2
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 9bf58e7384..716941dc0c 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -664,10 +664,13 @@ struct xen_mem_acquire_resource {
     uint16_t type;
 
 #define XENMEM_resource_grant_table 0
+#define XENMEM_resource_ioreq_server 1
 
     /*
      * IN - a type-specific resource identifier, which must be zero
      *      unless stated otherwise.
+     *
+     * type == XENMEM_resource_ioreq_server -> id == ioreq server id
      */
     uint32_t id;
 
     /* IN - number of (4K) frames of the resource to be mapped */
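A usage note on the frame index convention: it is established implicitly
by hvm_get_ioreq_server_frame() above but never named. As a sketch, an
emulator might mirror it with constants like these (illustrative names
only, not defined by this patch or series):

    /* Frame indices within the XENMEM_resource_ioreq_server resource. */
    #define IOREQ_SERVER_FRAME_BUFIOREQ 0  /* maps s->bufioreq.page */
    #define IOREQ_SERVER_FRAME_IOREQ    1  /* maps s->ioreq.page */

Note also that hvm_get_ioreq_server_frame() calls
hvm_ioreq_server_alloc_pages(), so the heap pages come into existence
lazily, the first time the emulator acquires the resource.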