From patchwork Wed Jul 10 12:54:36 2019
X-Patchwork-Submitter: Norbert Manthey
X-Patchwork-Id: 11038449
From: Norbert Manthey
Date: Wed, 10 Jul 2019 14:54:36 +0200
Message-ID: <1562763277-11674-2-git-send-email-nmanthey@amazon.de>
In-Reply-To: <1562763277-11674-1-git-send-email-nmanthey@amazon.de>
References: <1562763277-11674-1-git-send-email-nmanthey@amazon.de>
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH L1TF MDS GT v2 1/2] common/grant_table: harden bound accesses
List-Id: Xen developer discussion
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Martin Pohlack, Pawel Wieczorkiewicz, Julien Grall,
    David Woodhouse, Jan Beulich, Martin Mazein, Bjoern Doebel,
    Norbert Manthey

Guests can issue grant table operations and provide guest-controlled data to
them. This data is used as an index for memory loads after bounds checks have
been done. To avoid speculative out-of-bounds accesses, we use the
array_index_nospec macro where applicable, or the block_speculation macro.

Note that the block_speculation macro is used on all paths in
shared_entry_header and nr_grant_entries. This way, after a call to such a
function, all bounds checks that happened before become architecturally
visible, so that no additional protection is required for the corresponding
array accesses.

As the way we introduce an lfence instruction might allow the compiler to
reload certain values from memory multiple times, we try to avoid
speculatively continuing execution with stale register data by moving the
relevant data into function-local variables.

Speculative execution is not blocked in case one of the following properties
is true:
 - the path cannot be triggered by the guest
 - the path does not return to the guest
 - the path does not result in an out-of-bounds access
 - the path cannot be executed repeatedly

Only a path for which none of these properties holds can actually leak
continuous chunks of memory. Therefore, we only add the penalty of protective
mechanisms to such paths.

This commit addresses only out-of-bounds accesses whose index is directly
controlled by the guest and checked beforehand.
Potential out-of-bounds accesses that are caused by speculatively evaluating
the version of the current table are not addressed in this commit. Hence,
speculative out-of-bounds accesses might still be possible, for example in
gnttab_get_status_frame_mfn: when calling gnttab_grow_table, the assertion
that the grant table version equals two might not hold under speculation.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey
---

Notes:
    v2: Mention version-based blocking for an upcoming commit
        Introduce a local variable as op->ref replacement
        Use array_index_nospec in gnttab_get_status_frame_mfn

 xen/common/grant_table.c | 80 +++++++++++++++++++++++++++++++++---------------
 1 file changed, 55 insertions(+), 25 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -912,6 +912,7 @@ map_grant_ref(
 {
     struct domain *ld, *rd, *owner = NULL;
     struct grant_table *lgt, *rgt;
+    grant_ref_t ref;
     struct vcpu *led;
     grant_handle_t handle;
     mfn_t mfn;
@@ -975,13 +976,15 @@ map_grant_ref(
     grant_read_lock(rgt);
 
     /* Bounds check on the grant ref */
-    if ( unlikely(op->ref >= nr_grant_entries(rgt)))
+    ref = op->ref;
+    if ( unlikely(ref >= nr_grant_entries(rgt)))
         PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
-                 op->ref, rgt->domain->domain_id);
+                 ref, rgt->domain->domain_id);
 
-    act = active_entry_acquire(rgt, op->ref);
-    shah = shared_entry_header(rgt, op->ref);
-    status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
+    /* This call also ensures the above check cannot be passed speculatively */
+    shah = shared_entry_header(rgt, ref);
+    status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, ref);
+    act = active_entry_acquire(rgt, ref);
 
     /* If already pinned, check the active domid and avoid refcnt overflow. */
     if ( act->pin &&
@@ -1004,8 +1007,8 @@ map_grant_ref(
     if ( !act->pin )
     {
         unsigned long gfn = rgt->gt_version == 1 ?
-                            shared_entry_v1(rgt, op->ref).frame :
-                            shared_entry_v2(rgt, op->ref).full_page.frame;
+                            shared_entry_v1(rgt, ref).frame :
+                            shared_entry_v2(rgt, ref).full_page.frame;
 
         rc = get_paged_frame(gfn, &mfn, &pg,
                              op->flags & GNTMAP_readonly, rd);
@@ -1018,7 +1021,7 @@ map_grant_ref(
             act->length = PAGE_SIZE;
             act->is_sub_page = false;
             act->trans_domain = rd;
-            act->trans_gref = op->ref;
+            act->trans_gref = ref;
         }
     }
 
@@ -1266,6 +1269,7 @@ unmap_common(
     domid_t dom;
     struct domain *ld, *rd;
     struct grant_table *lgt, *rgt;
+    grant_ref_t ref;
    struct active_grant_entry *act;
     s16 rc = 0;
     struct grant_mapping *map;
@@ -1319,6 +1323,7 @@ unmap_common(
 
     op->rd = rd;
     op->ref = map->ref;
+    ref = map->ref;
 
     /*
      * We can't assume there was no racing unmap for this maptrack entry,
@@ -1328,7 +1333,7 @@ unmap_common(
      * invalid lock.
      */
     smp_rmb();
-    if ( unlikely(op->ref >= nr_grant_entries(rgt)) )
+    if ( unlikely(ref >= nr_grant_entries(rgt)) )
     {
         gdprintk(XENLOG_WARNING, "Unstable d%d handle %#x\n",
                  rgt->domain->domain_id, op->handle);
@@ -1337,7 +1342,10 @@ unmap_common(
         goto unlock_out;
     }
 
-    act = active_entry_acquire(rgt, op->ref);
+    /* Make sure the above bound check cannot be bypassed speculatively */
+    block_speculation();
+
+    act = active_entry_acquire(rgt, ref);
 
     /*
      * Note that we (ab)use the active entry lock here to protect against
@@ -1350,7 +1358,7 @@ unmap_common(
     flags = read_atomic(&map->flags);
     smp_rmb();
     if ( unlikely(!flags) || unlikely(map->domid != dom) ||
-         unlikely(map->ref != op->ref) )
+         unlikely(map->ref != ref) )
     {
         gdprintk(XENLOG_WARNING, "Unstable handle %#x\n", op->handle);
         rc = GNTST_bad_handle;
@@ -1434,7 +1442,7 @@ unmap_common_complete(struct gnttab_unmap_common *op)
     struct page_info *pg;
     uint16_t *status;
 
-    if ( !op->done )
+    if ( evaluate_nospec(!op->done) )
    {
         /* unmap_common() didn't do anything - nothing to complete. */
         return;
 
@@ -2042,6 +2050,7 @@ gnttab_prepare_for_transfer(
         goto fail;
     }
 
+    /* This call also ensures the above check cannot be passed speculatively */
     raw_shah = (uint32_t *)shared_entry_header(rgt, ref);
     scombo.raw = ACCESS_ONCE(*raw_shah);
@@ -2091,6 +2100,7 @@ gnttab_transfer(
     struct page_info *page;
     int i;
     struct gnttab_transfer gop;
+    grant_ref_t ref;
     mfn_t mfn;
     unsigned int max_bitsize;
     struct active_grant_entry *act;
@@ -2237,8 +2247,14 @@ gnttab_transfer(
          */
         spin_unlock(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
+        ref = gop.ref;
 
-        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
+        /*
+         * Make sure the reference bound check in gnttab_prepare_for_transfer
+         * is respected and speculative execution is blocked accordingly
+         */
+        if ( unlikely(!evaluate_nospec(okay)) ||
+             unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
@@ -2268,11 +2284,11 @@ gnttab_transfer(
         /* Tell the guest about its new page frame. */
         grant_read_lock(e->grant_table);
-        act = active_entry_acquire(e->grant_table, gop.ref);
+        act = active_entry_acquire(e->grant_table, ref);
 
         if ( e->grant_table->gt_version == 1 )
         {
-            grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
+            grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, ref);
 
             guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0);
             if ( !paging_mode_translate(e) )
@@ -2280,14 +2296,14 @@ gnttab_transfer(
         }
         else
         {
-            grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
+            grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, ref);
 
             guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0);
             if ( !paging_mode_translate(e) )
                 sha->full_page.frame = mfn_x(mfn);
         }
         smp_wmb();
-        shared_entry_header(e->grant_table, gop.ref)->flags |=
+        shared_entry_header(e->grant_table, ref)->flags |=
             GTF_transfer_completed;
 
         active_entry_release(act);
@@ -2426,8 +2442,10 @@ acquire_grant_for_copy(
         PIN_FAIL(gt_unlock_out, GNTST_bad_gntref,
                  "Bad grant reference %#x\n", gref);
 
-    act = active_entry_acquire(rgt, gref);
+    /* This call also ensures the above check cannot be passed speculatively */
     shah = shared_entry_header(rgt, gref);
+    act = active_entry_acquire(rgt, gref);
+
     if ( rgt->gt_version == 1 )
     {
         sha2 = NULL;
@@ -2843,6 +2861,9 @@ static int gnttab_copy_buf(const struct gnttab_copy *op,
                        op->dest.offset, dest->ptr.offset,
                        op->len, dest->len);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     memcpy(dest->virt + op->dest.offset, src->virt + op->source.offset,
            op->len);
     gnttab_mark_dirty(dest->domain, dest->mfn);
@@ -2962,7 +2983,7 @@ gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
     struct grant_table *gt = currd->grant_table;
     grant_entry_v1_t reserved_entries[GNTTAB_NR_RESERVED_ENTRIES];
     int res;
-    unsigned int i;
+    unsigned int i, nr_ents;
 
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
@@ -2986,7 +3007,8 @@ gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
      * are allowed to be in use (xenstore/xenconsole keeps them mapped).
      * (You need to change the version number for e.g. kexec.)
      */
-    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_grant_entries(gt); i++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_ents; i++ )
     {
         if ( read_atomic(&_active_entry(gt, i).pin) != 0 )
         {
@@ -3228,6 +3250,9 @@ swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
     if ( unlikely(ref_b >= nr_grant_entries(d->grant_table)))
         PIN_FAIL(out, GNTST_bad_gntref, "Bad ref-b %#x\n", ref_b);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     /* Swapping the same ref is a no-op. */
     if ( ref_a == ref_b )
         goto out;
@@ -3697,13 +3722,14 @@ void grant_table_warn_active_grants(struct domain *d)
     struct grant_table *gt = d->grant_table;
     struct active_grant_entry *act;
     grant_ref_t ref;
-    unsigned int nr_active = 0;
+    unsigned int nr_active = 0, nr_ents;
 
 #define WARN_GRANT_MAX 10
 
     grant_read_lock(gt);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         act = active_entry_acquire(gt, ref);
         if ( !act->pin )
@@ -3853,7 +3879,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
             return -EINVAL;
     }
 
-    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
+    /* Make sure idx is bounded wrt nr_status_frames */
+    *mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, nr_status_frames(gt))]));
     return 0;
 }
 
@@ -3882,7 +3909,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
             return -EINVAL;
     }
 
-    *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
+    /* Make sure idx is bounded wrt nr_grant_frames */
+    *mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));
     return 0;
 }
 
@@ -3952,6 +3980,7 @@ static void gnttab_usage_print(struct domain *rd)
     int first = 1;
     grant_ref_t ref;
     struct grant_table *gt = rd->grant_table;
+    unsigned int nr_ents;
 
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
@@ -3964,7 +3993,8 @@ static void gnttab_usage_print(struct domain *rd)
            nr_grant_frames(gt), gt->max_grant_frames,
            nr_maptrack_frames(gt), gt->max_maptrack_frames);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         struct active_grant_entry *act;
         struct grant_entry_header *sha;