From patchwork Tue Aug 15 14:40:10 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9901963
Message-Id: <599323EA020000780016FECC@prv-mh.provo.novell.com>
In-Reply-To: <59931E0F020000780016FEA3@prv-mh.provo.novell.com>
Date: Tue, 15 Aug 2017 08:40:10 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan
Subject: [Xen-devel] [PATCH 3/8] gnttab: type adjustments
In particular use grant_ref_t and grant_handle_t where appropriate. Also
switch other nearby type uses to their canonical variants and introduce
INVALID_MAPTRACK_HANDLE.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -96,7 +96,7 @@ struct gnttab_unmap_common {
     int16_t status;
 
     /* Shared state beteen *_unmap and *_unmap_complete */
-    u16 done;
+    uint16_t done;
     unsigned long frame;
     struct domain *rd;
     grant_ref_t ref;
@@ -118,11 +118,11 @@ struct gnttab_unmap_common {
  * table of these, indexes into which are returned as a 'mapping handle'.
  */
 struct grant_mapping {
-    u32      ref;           /* grant ref */
-    u16      flags;         /* 0-4: GNTMAP_* ; 5-15: unused */
+    grant_ref_t ref;        /* grant ref */
+    uint16_t flags;         /* 0-4: GNTMAP_* ; 5-15: unused */
     domid_t  domid;         /* granting domain */
-    u32      vcpu;          /* vcpu which created the grant mapping */
-    u32      pad;           /* round size to a power of 2 */
+    uint32_t vcpu;          /* vcpu which created the grant mapping */
+    uint32_t pad;           /* round size to a power of 2 */
 };
 
 #define MAPTRACK_PER_PAGE (PAGE_SIZE / sizeof(struct grant_mapping))
@@ -158,10 +158,10 @@ shared_entry_header(struct grant_table *
 
 /* Active grant entry - used for shadowing GTF_permit_access grants. */
 struct active_grant_entry {
-    u32           pin;    /* Reference count information.             */
+    uint32_t      pin;    /* Reference count information.             */
     domid_t       domid;  /* Domain being granted access.             */
     struct domain *trans_domain;
-    uint32_t      trans_gref;
+    grant_ref_t   trans_gref;
     unsigned long frame;  /* Frame being granted.                     */
     unsigned long gfn;    /* Guest's idea of the frame being granted. */
     unsigned      is_sub_page:1; /* True if this is a sub-page grant. */
@@ -297,7 +297,9 @@ double_gt_unlock(struct grant_table *lgt
         grant_write_unlock(rgt);
 }
 
-static inline int
+#define INVALID_MAPTRACK_HANDLE UINT_MAX
+
+static inline grant_handle_t
 __get_maptrack_handle(
     struct grant_table *t,
     struct vcpu *v)
@@ -312,7 +314,7 @@ __get_maptrack_handle(
     if ( unlikely(head == MAPTRACK_TAIL) )
     {
         spin_unlock(&v->maptrack_freelist_lock);
-        return -1;
+        return INVALID_MAPTRACK_HANDLE;
     }
 
     /*
@@ -323,7 +325,7 @@ __get_maptrack_handle(
     if ( unlikely(next == MAPTRACK_TAIL) )
     {
         spin_unlock(&v->maptrack_freelist_lock);
-        return -1;
+        return INVALID_MAPTRACK_HANDLE;
     }
 
     prev_head = head;
@@ -345,8 +347,8 @@ __get_maptrack_handle(
  * each VCPU and to avoid two VCPU repeatedly stealing entries from
  * each other, the initial victim VCPU is selected randomly.
  */
-static int steal_maptrack_handle(struct grant_table *t,
-                                 const struct vcpu *curr)
+static grant_handle_t steal_maptrack_handle(struct grant_table *t,
+                                            const struct vcpu *curr)
 {
     const struct domain *currd = curr->domain;
     unsigned int first, i;
@@ -357,10 +359,10 @@ static int steal_maptrack_handle(struct
     do {
         if ( currd->vcpu[i] )
         {
-            int handle;
+            grant_handle_t handle;
 
             handle = __get_maptrack_handle(t, currd->vcpu[i]);
-            if ( handle != -1 )
+            if ( handle != INVALID_MAPTRACK_HANDLE )
             {
                 maptrack_entry(t, handle).vcpu = curr->vcpu_id;
                 return handle;
@@ -373,12 +375,12 @@ static int steal_maptrack_handle(struct
     } while ( i != first );
 
     /* No free handles on any VCPU. */
-    return -1;
+    return INVALID_MAPTRACK_HANDLE;
 }
 
 static inline void
 put_maptrack_handle(
-    struct grant_table *t, int handle)
+    struct grant_table *t, grant_handle_t handle)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
@@ -404,7 +406,7 @@ put_maptrack_handle(
     spin_unlock(&v->maptrack_freelist_lock);
 }
 
-static inline int
+static inline grant_handle_t
 get_maptrack_handle(
     struct grant_table *lgt)
 {
@@ -414,7 +416,7 @@ get_maptrack_handle(
     struct grant_mapping *new_mt = NULL;
 
     handle = __get_maptrack_handle(lgt, curr);
-    if ( likely(handle != -1) )
+    if ( likely(handle != INVALID_MAPTRACK_HANDLE) )
         return handle;
 
     spin_lock(&lgt->maptrack_lock);
@@ -439,8 +441,8 @@ get_maptrack_handle(
     if ( curr->maptrack_tail == MAPTRACK_TAIL )
     {
         handle = steal_maptrack_handle(lgt, curr);
-        if ( handle == -1 )
-            return -1;
+        if ( handle == INVALID_MAPTRACK_HANDLE )
+            return handle;
         spin_lock(&curr->maptrack_freelist_lock);
         maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
         curr->maptrack_tail = handle;
@@ -461,6 +463,7 @@ get_maptrack_handle(
 
     for ( i = 0; i < MAPTRACK_PER_PAGE; i++ )
     {
+        BUILD_BUG_ON(sizeof(new_mt->ref) < sizeof(handle));
         new_mt[i].ref = handle + i + 1;
         new_mt[i].vcpu = curr->vcpu_id;
     }
@@ -683,9 +686,9 @@ static int _set_status(unsigned gt_versi
 static int grant_map_exists(const struct domain *ld,
                             struct grant_table *rgt,
                             unsigned long mfn,
-                            unsigned int *ref_count)
+                            grant_ref_t *cur_ref)
 {
-    unsigned int ref, max_iter;
+    grant_ref_t ref, max_iter;
 
     /*
      * The remote grant table should be locked but the percpu rwlock
@@ -695,9 +698,9 @@ static int grant_map_exists(const struct
      * ASSERT(rw_is_locked(&rgt->lock));
      */
 
-    max_iter = min(*ref_count + (1 << GNTTABOP_CONTINUATION_ARG_SHIFT),
+    max_iter = min(*cur_ref + (1 << GNTTABOP_CONTINUATION_ARG_SHIFT),
                    nr_grant_entries(rgt));
-    for ( ref = *ref_count; ref < max_iter; ref++ )
+    for ( ref = *cur_ref; ref < max_iter; ref++ )
     {
         struct active_grant_entry *act;
         bool_t exists;
@@ -716,7 +719,7 @@ static int grant_map_exists(const struct
 
     if ( ref < nr_grant_entries(rgt) )
     {
-        *ref_count = ref;
+        *cur_ref = ref;
         return 1;
     }
 
@@ -773,7 +776,7 @@ __gnttab_map_grant_ref(
     struct domain *ld, *rd, *owner = NULL;
     struct grant_table *lgt, *rgt;
     struct vcpu *led;
-    int handle;
+    grant_handle_t handle;
     unsigned long frame = 0;
     struct page_info *pg = NULL;
     int rc = GNTST_okay;
@@ -822,7 +825,8 @@ __gnttab_map_grant_ref(
     }
 
     lgt = ld->grant_table;
-    if ( unlikely((handle = get_maptrack_handle(lgt)) == -1) )
+    handle = get_maptrack_handle(lgt);
+    if ( unlikely(handle == INVALID_MAPTRACK_HANDLE) )
     {
         rcu_unlock_domain(rd);
         gdprintk(XENLOG_INFO, "Failed to obtain maptrack handle.\n");
@@ -2038,7 +2042,7 @@ gnttab_transfer(
    type and reference counts. */
 static void
 __release_grant_for_copy(
-    struct domain *rd, unsigned long gref, int readonly)
+    struct domain *rd, grant_ref_t gref, bool readonly)
 {
     struct grant_table *rgt = rd->grant_table;
     grant_entry_header_t *sha;
@@ -2119,9 +2123,9 @@ static void __fixup_status_for_copy_pin(
    If there is any error, *page = NULL, no ref taken. */
 static int
 __acquire_grant_for_copy(
-    struct domain *rd, unsigned long gref, domid_t ldom, int readonly,
+    struct domain *rd, grant_ref_t gref, domid_t ldom, bool readonly,
     unsigned long *frame, struct page_info **page,
-    uint16_t *page_off, uint16_t *length, unsigned allow_transitive)
+    uint16_t *page_off, uint16_t *length, bool allow_transitive)
 {
     struct grant_table *rgt = rd->grant_table;
     grant_entry_v2_t *sha2;
@@ -2144,7 +2148,7 @@ __acquire_grant_for_copy(
 
     if ( unlikely(gref >= nr_grant_entries(rgt)) )
         PIN_FAIL(gt_unlock_out, GNTST_bad_gntref,
-                 "Bad grant reference %ld\n", gref);
+                 "Bad grant reference %#x\n", gref);
 
     act = active_entry_acquire(rgt, gref);
     shah = shared_entry_header(rgt, gref);
@@ -2211,7 +2215,8 @@ __acquire_grant_for_copy(
         rc = __acquire_grant_for_copy(td, trans_gref, rd->domain_id,
                                       readonly, &grant_frame, page,
-                                      &trans_page_off, &trans_length, 0);
+                                      &trans_page_off, &trans_length,
+                                      false);
 
         grant_read_lock(rgt);
         act = active_entry_acquire(rgt, gref);
@@ -2257,7 +2262,7 @@ __acquire_grant_for_copy(
         act->trans_domain = td;
         act->trans_gref = trans_gref;
         act->frame = grant_frame;
-        act->gfn = -1ul;
+        act->gfn = gfn_x(INVALID_GFN);
         /*
          * The actual remote remote grant may or may not be a sub-page,
          * but we always treat it as one because that blocks mappings of
@@ -2374,12 +2379,12 @@ struct gnttab_copy_buf {
     bool_t have_type;
 };
 
-static int gnttab_copy_lock_domain(domid_t domid, unsigned int gref_flag,
+static int gnttab_copy_lock_domain(domid_t domid, bool is_gref,
                                    struct gnttab_copy_buf *buf)
 {
     int rc;
 
-    if ( domid != DOMID_SELF && !gref_flag )
+    if ( domid != DOMID_SELF && !is_gref )
         PIN_FAIL(out, GNTST_permission_denied,
                  "only allow copy-by-mfn for DOMID_SELF.\n");
 
@@ -2480,7 +2485,7 @@ static int gnttab_copy_claim_buf(const s
                                      current->domain->domain_id,
                                      buf->read_only,
                                      &buf->frame, &buf->page,
-                                     &buf->ptr.offset, &buf->len, 1);
+                                     &buf->ptr.offset, &buf->len, true);
     if ( rc != GNTST_okay )
         goto out;
 
     buf->ptr.u.ref = ptr->u.ref;
@@ -2985,7 +2990,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
 }
 
 static int __gnttab_cache_flush(gnttab_cache_flush_t *cflush,
-                                unsigned int *ref_count)
+                                grant_ref_t *cur_ref)
 {
     struct domain *d, *owner;
     struct page_info *page;
@@ -3029,7 +3034,7 @@ static int __gnttab_cache_flush(gnttab_c
     {
         grant_read_lock(owner->grant_table);
 
-        ret = grant_map_exists(d, owner->grant_table, mfn, ref_count);
+        ret = grant_map_exists(d, owner->grant_table, mfn, cur_ref);
         if ( ret != 0 )
         {
             grant_read_unlock(owner->grant_table);
@@ -3061,7 +3066,7 @@ static int __gnttab_cache_flush(gnttab_c
 
 static long
 gnttab_cache_flush(XEN_GUEST_HANDLE_PARAM(gnttab_cache_flush_t) uop,
-                   unsigned int *ref_count,
+                   grant_ref_t *cur_ref,
                    unsigned int count)
 {
     unsigned int i;
@@ -3075,7 +3080,7 @@ gnttab_cache_flush(XEN_GUEST_HANDLE_PARA
             return -EFAULT;
 
         for ( ; ; )
         {
-            int ret = __gnttab_cache_flush(&op, ref_count);
+            int ret = __gnttab_cache_flush(&op, cur_ref);
 
             if ( ret < 0 )
                 return ret;
@@ -3084,7 +3089,7 @@ gnttab_cache_flush(XEN_GUEST_HANDLE_PARA
             if ( hypercall_preempt_check() )
                 return i;
         }
-        *ref_count = 0;
+        *cur_ref = 0;
         guest_handle_add_offset(uop, 1);
     }
     return 0;
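
The sentinel pattern the patch introduces (an all-ones value of the unsigned handle type instead of a signed -1, guarded by a BUILD_BUG_ON on the width of the field that stores it) can be sketched outside the hypervisor as follows. This is a minimal illustration only: the four-entry table, the field layout, and the lock-free single-list get path are simplified stand-ins, not Xen's actual maptrack implementation with its per-vCPU free lists and handle stealing.

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Stand-in for Xen's grant_handle_t: an unsigned 32-bit index, so the
 * all-ones value UINT_MAX can never collide with a valid table index and
 * makes a natural "no handle" sentinel, unlike the previous -1 returned
 * through a signed int. */
typedef uint32_t grant_handle_t;
#define INVALID_MAPTRACK_HANDLE UINT_MAX

/* Compile-time check in the spirit of Xen's BUILD_BUG_ON(): fails the
 * build with a negative array size when the condition is true. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

struct grant_mapping {
    uint32_t ref;                 /* doubles as the next-free link */
};

struct maptrack {
    struct grant_mapping map[4];  /* toy fixed-size maptrack table */
    grant_handle_t head;          /* first free entry */
};

static void maptrack_init(struct maptrack *t)
{
    grant_handle_t i;

    /* The link field must be wide enough to hold a handle, mirroring
     * the BUILD_BUG_ON the patch adds in get_maptrack_handle(). */
    BUILD_BUG_ON(sizeof(t->map[0].ref) < sizeof(grant_handle_t));
    for ( i = 0; i < 4; i++ )
        t->map[i].ref = i + 1;    /* link each entry to the next */
    t->map[3].ref = INVALID_MAPTRACK_HANDLE; /* terminate the list */
    t->head = 0;
}

/* Pop a handle off the free list, or the sentinel when exhausted. */
static grant_handle_t get_maptrack_handle(struct maptrack *t)
{
    grant_handle_t h = t->head;

    if ( h != INVALID_MAPTRACK_HANDLE )
        t->head = t->map[h].ref;
    return h;
}
```

Because the sentinel is of the handle's own type, callers compare against INVALID_MAPTRACK_HANDLE directly and no signed/unsigned conversion is involved, which is what lets the real functions return grant_handle_t instead of int.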