From patchwork Wed Sep 28 09:42:26 2016
X-Patchwork-Submitter: Konrad Rzeszutek Wilk
X-Patchwork-Id: 9353665
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, konrad@kernel.org
Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk
Date: Wed, 28 Sep 2016 05:42:26 -0400
Message-Id: <1475055746-22401-13-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 2.4.11
In-Reply-To: <1475055746-22401-1-git-send-email-konrad.wilk@oracle.com>
References: <1475055746-22401-1-git-send-email-konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.
List-Id: Xen developer discussion

These operations are used during the save process of migration. Instead of
doing 64 hypercalls, let's do just one.

We modify the 'struct tmem_client' structure (used in
XEN_SYSCTL_TMEM_OP_[GET|SET]_CLIENT_INFO) to have an extra field,
'nr_pools'. Armed with that, the code slurping up pages from the hypervisor
can allocate a structure (struct tmem_pool_info) big enough to contain all
the active pools, and then just iterate over each one and save it in the
stream.

In xc_tmem_[save|restore] we also add proper memory handling of 'buf' and
'pools'. Because of the loops, and to make the code as easy as possible to
review, we add a goto label and jump to it for almost all error conditions.

The include of inttypes.h is required for the PRId64 macro to work (which
is needed to compile this code under 32-bit).

Signed-off-by: Konrad Rzeszutek Wilk
Acked-by: Wei Liu
---
Cc: Ian Jackson
Cc: Wei Liu

v1: First submission.
---
 tools/libxc/xc_tmem.c       | 159 +++++++++++++++++++++++++-------------------
 xen/common/tmem.c           |   3 +
 xen/common/tmem_control.c   |  80 +++++++++++++---------
 xen/include/public/sysctl.h |  33 ++++++++-
 4 files changed, 174 insertions(+), 101 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 07fa101..cfa3fb6 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -18,6 +18,7 @@
  */

 #include "xc_private.h"
+#include <inttypes.h>
 #include
 #include
@@ -216,11 +217,11 @@ int xc_tmem_save(xc_interface *xch,
 {
     int marker = field_marker;
     int i, j, rc;
-    uint32_t flags;
     uint32_t minusone = -1;
-    uint32_t pool_id;
     struct tmem_handle *h;
     xen_sysctl_tmem_client_t info;
+    xen_sysctl_tmem_pool_info_t *pools;
+    char *buf = NULL;

     rc = xc_tmem_control(xch, 0, XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,
                          dom, 0 /* len*/ , live, NULL);
@@ -233,72 +234,82 @@ int xc_tmem_save(xc_interface *xch,
         return rc;
     }

-    if ( write_exact(io_fd, &marker, sizeof(marker)) )
-        return -1;
-
     if ( xc_tmem_control(xch, 0 /* pool_id */,
                          XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
                          dom /* cli_id */, sizeof(info), 0 /* arg */,
                          &info) < 0 )
         return -1;

-    if ( write_exact(io_fd, &info, sizeof(info)) )
+    /* Nothing to do. */
+    if ( !info.nr_pools )
+        return 0;
+
+    pools = calloc(info.nr_pools, sizeof(*pools));
+    if ( !pools )
         return -1;
+
+    rc = xc_tmem_control(xch, 0 /* pool_id is ignored. */,
+                         XEN_SYSCTL_TMEM_OP_GET_POOLS,
+                         dom /* cli_id */, sizeof(*pools) * info.nr_pools,
+                         0 /* arg */, pools);
+
+    if ( rc < 0 || (uint32_t)rc > info.nr_pools )
+        goto out_memory;
+
+    /* Update it - as we have less pools between the two hypercalls. */
+    info.nr_pools = (uint32_t)rc;
+
+    if ( write_exact(io_fd, &marker, sizeof(marker)) )
+        goto out_memory;
+
+    if ( write_exact(io_fd, &info, sizeof(info)) )
+        goto out_memory;
+
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
-        return -1;
-    for ( i = 0; i < info.maxpools; i++ )
+        goto out_memory;
+
+    for ( i = 0; i < info.nr_pools; i++ )
     {
-        uint64_t uuid[2];
-        uint32_t n_pages;
         uint32_t pagesize;
-        char *buf = NULL;
         int bufsize = 0;
         int checksum = 0;
+        xen_sysctl_tmem_pool_info_t *pool = &pools[i];

-        /* get pool id, flags, pagesize, n_pages, uuid */
-        flags = xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS,dom,0,0,NULL);
-        if ( flags != -1 )
+        if ( pool->flags.raw != -1 )
         {
-            pool_id = i;
-            n_pages = xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES,dom,0,0,NULL);
-            if ( !(flags & TMEM_POOL_PERSIST) )
-                n_pages = 0;
-            (void)xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID,dom,sizeof(uuid),0,&uuid);
-            if ( write_exact(io_fd, &pool_id, sizeof(pool_id)) )
-                return -1;
-            if ( write_exact(io_fd, &flags, sizeof(flags)) )
-                return -1;
-            if ( write_exact(io_fd, &n_pages, sizeof(n_pages)) )
-                return -1;
-            if ( write_exact(io_fd, &uuid, sizeof(uuid)) )
-                return -1;
-            if ( n_pages == 0 )
+            if ( !pool->flags.u.persist )
+                pool->n_pages = 0;
+
+            if ( write_exact(io_fd, pool, sizeof(*pool)) )
+                goto out_memory;
+
+            if ( !pool->flags.u.persist )
                 continue;

-            pagesize = 1 << (((flags >> TMEM_POOL_PAGESIZE_SHIFT) &
-                              TMEM_POOL_PAGESIZE_MASK) + 12);
+            pagesize = 1 << (pool->flags.u.pagebits + 12);
             if ( pagesize > bufsize )
             {
                 bufsize = pagesize + sizeof(struct tmem_handle);
                 if ( (buf = realloc(buf,bufsize)) == NULL )
-                    return -1;
+                    goto out_memory;
             }
-            for ( j = n_pages; j > 0; j-- )
+            for ( j = pool->n_pages; j > 0; j-- )
             {
                 int ret;
-                if ( (ret = xc_tmem_control(xch, pool_id,
+                if ( (ret = xc_tmem_control(xch, pool->id,
                                             XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE,
                                             dom, bufsize, 0, buf)) > 0 )
                 {
                     h = (struct tmem_handle *)buf;
                     if ( write_exact(io_fd, &h->oid, sizeof(h->oid)) )
-                        return -1;
+                        goto out_memory;
+
                     if ( write_exact(io_fd, &h->index, sizeof(h->index)) )
-                        return -1;
+                        goto out_memory;
+
                     h++;
                     checksum += *(char *)h;
                     if ( write_exact(io_fd, h, pagesize) )
-                        return -1;
+                        goto out_memory;
                 } else if ( ret == 0 ) {
                     continue;
                 } else {
@@ -306,14 +317,22 @@ int xc_tmem_save(xc_interface *xch,
                     h = (struct tmem_handle *)buf;
                     h->oid.oid[0] = h->oid.oid[1] = h->oid.oid[2] = -1L;
                     if ( write_exact(io_fd, &h->oid, sizeof(h->oid)) )
+                    {
+ out_memory:
+                        free(pools);
+                        free(buf);
                         return -1;
+                    }
                     break;
                 }
             }
-            DPRINTF("saved %d tmem pages for dom=%d pool=%d, checksum=%x\n",
-                    n_pages-j,dom,pool_id,checksum);
+            DPRINTF("saved %"PRId64" tmem pages for dom=%d pool=%d, checksum=%x\n",
+                    pool->n_pages - j, dom, pool->id, checksum);
         }
     }
+    free(pools);
+    free(buf);
+
     /* pool list terminator */
     minusone = -1;
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
@@ -382,14 +401,19 @@ static int xc_tmem_restore_new_pool(

 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 {
-    uint32_t pool_id;
     uint32_t minusone;
-    uint32_t flags;
     xen_sysctl_tmem_client_t info;
     int checksum = 0;
+    unsigned int i;
+    char *buf = NULL;

     if ( read_exact(io_fd, &info, sizeof(info)) )
         return -1;
+
+    /* We would never save if there weren't any pools! */
+    if ( !info.nr_pools )
+        return -1;
+
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN,dom,0,0,NULL) < 0 )
         return -1;
@@ -401,62 +425,63 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)

     if ( read_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
-    while ( read_exact(io_fd, &pool_id, sizeof(pool_id)) == 0 && pool_id != -1 )
+
+    for ( i = 0; i < info.nr_pools; i++ )
     {
-        uint64_t uuid[2];
-        uint32_t n_pages;
-        char *buf = NULL;
         int bufsize = 0, pagesize;
         int j;
+        xen_sysctl_tmem_pool_info_t pool;

-        if ( read_exact(io_fd, &flags, sizeof(flags)) )
-            return -1;
-        if ( read_exact(io_fd, &n_pages, sizeof(n_pages)) )
-            return -1;
-        if ( read_exact(io_fd, &uuid, sizeof(uuid)) )
-            return -1;
-        if ( xc_tmem_restore_new_pool(xch, dom, pool_id,
-                                      flags, uuid[0], uuid[1]) < 0)
-            return -1;
-        if ( n_pages <= 0 )
+        if ( read_exact(io_fd, &pool, sizeof(pool)) )
+            goto out_memory;
+
+        if ( xc_tmem_restore_new_pool(xch, dom, pool.id, pool.flags.raw,
+                                      pool.uuid[0], pool.uuid[1]) < 0 )
+            goto out_memory;
+
+        if ( pool.n_pages <= 0 )
             continue;

-        pagesize = 1 << (((flags >> TMEM_POOL_PAGESIZE_SHIFT) &
-                          TMEM_POOL_PAGESIZE_MASK) + 12);
+        pagesize = 1 << (pool.flags.u.pagebits + 12);
         if ( pagesize > bufsize )
        {
             bufsize = pagesize;
             if ( (buf = realloc(buf,bufsize)) == NULL )
-                return -1;
+                goto out_memory;
         }
-        for ( j = n_pages; j > 0; j-- )
+        for ( j = pool.n_pages; j > 0; j-- )
         {
             struct xen_tmem_oid oid;
             uint32_t index;
             int rc;
+
             if ( read_exact(io_fd, &oid, sizeof(oid)) )
-                return -1;
+                goto out_memory;
+
             if ( oid.oid[0] == -1L && oid.oid[1] == -1L && oid.oid[2] == -1L )
                 break;
             if ( read_exact(io_fd, &index, sizeof(index)) )
-                return -1;
+                goto out_memory;
+
             if ( read_exact(io_fd, buf, pagesize) )
-                return -1;
+                goto out_memory;
+
             checksum += *buf;
-            if ( (rc = xc_tmem_control_oid(xch, pool_id,
+            if ( (rc = xc_tmem_control_oid(xch, pool.id,
                                            XEN_SYSCTL_TMEM_OP_RESTORE_PUT_PAGE,
                                            dom, bufsize, index, oid, buf)) <= 0 )
             {
                 DPRINTF("xc_tmem_restore: putting page failed, rc=%d\n",rc);
+ out_memory:
+                free(buf);
                 return -1;
             }
         }
-        if ( n_pages )
-            DPRINTF("restored %d tmem pages for dom=%d pool=%d, check=%x\n",
-                    n_pages-j,dom,pool_id,checksum);
+        if ( pool.n_pages )
+            DPRINTF("restored %"PRId64" tmem pages for dom=%d pool=%d, check=%x\n",
+                    pool.n_pages - j, dom, pool.id, checksum);
     }
-    if ( pool_id != -1 )
-        return -1;
+    free(buf);

     return 0;
 }
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 294f0cd..79b853d 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -880,6 +880,7 @@ static void client_flush(struct client *client)
             continue;
         pool_flush(pool, client->cli_id);
         client->pools[i] = NULL;
+        client->info.nr_pools--;
     }
     client_free(client);
 }
@@ -1430,6 +1431,7 @@ static int do_tmem_destroy_pool(uint32_t pool_id)
         return 0;
     client->pools[pool_id] = NULL;
     pool_flush(pool, client->cli_id);
+    client->info.nr_pools--;
     return 1;
 }
@@ -1592,6 +1594,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,

 out:
     tmem_client_info("pool_id=%d\n", d_poolid);
+    client->info.nr_pools ++;
     return d_poolid;

 fail:
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index cfbc8b0..a9960b7 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -276,6 +276,8 @@ static int __tmemc_set_var(struct client *client,
     if ( info.maxpools > MAX_POOLS_PER_DOMAIN )
         return -ERANGE;

+    /* Ignore info.nr_pools. */
+
     if ( info.weight != client->info.weight )
     {
         old_weight = client->info.weight;
@@ -342,46 +344,63 @@ static int tmemc_get_client_info(int cli_id,
     return 0;
 }

-static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
-                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg)
+static int tmemc_get_pool(int cli_id,
+                          XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_pool_info_t) pools,
+                          uint32_t len)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
-    struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
-                   ? NULL : client->pools[pool_id];
-    int rc = -1;
+    unsigned int i, idx;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_sysctl_tmem_pool_info_t);
+
+    if ( len % sizeof(xen_sysctl_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr >= MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;

-    switch(subop)
+    if ( !client )
+        return -EINVAL;
+
+    for ( idx = 0, i = 0; i < MAX_POOLS_PER_DOMAIN; i++ )
     {
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
-        if ( pool == NULL )
-            break;
-        rc = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
+        struct tmem_pool *pool = client->pools[i];
+        xen_sysctl_tmem_pool_info_t out;
+
+        if ( pool == NULL )
+            continue;
+
+        out.flags.raw = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
              (pool->shared ? TMEM_POOL_SHARED : 0) |
              (POOL_PAGESHIFT << TMEM_POOL_PAGESIZE_SHIFT) |
              (TMEM_SPEC_VERSION << TMEM_POOL_VERSION_SHIFT);
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
-        if ( pool == NULL )
-            break;
-        rc = _atomic_read(pool->pgp_count);
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
-        if ( pool == NULL )
-            break;
-        rc = 0;
-        if ( copy_to_guest(guest_handle_cast(buf, void), pool->uuid, 2) )
+        out.n_pages = _atomic_read(pool->pgp_count);
+        out.uuid[0] = pool->uuid[0];
+        out.uuid[1] = pool->uuid[1];
+        out.id = i;
+
+        /* N.B. 'idx' != 'i'. */
+        if ( __copy_to_guest_offset(pools, idx, &out, 1) )
+        {
             rc = -EFAULT;
-        break;
-    default:
-        rc = -1;
+            break;
+        }
+        idx ++;
+        /* Don't try to put more than what was requested. */
+        if ( idx >= nr )
+            break;
     }
-    return rc;
+
+    /* And how many we have processed. */
+    return rc ? : idx;
 }

 int tmem_control(struct xen_sysctl_tmem_op *op)
 {
     int ret;
-    uint32_t pool_id = op->pool_id;
     uint32_t cmd = op->cmd;

     if ( op->pad != 0 )
@@ -414,11 +433,10 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
         ret = tmemc_get_client_info(op->cli_id,
                        guest_handle_cast(op->u.client, xen_sysctl_tmem_client_t));
         break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
-        ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->u.buf, void), op->arg);
+    case XEN_SYSCTL_TMEM_OP_GET_POOLS:
+        ret = tmemc_get_pool(op->cli_id,
+                             guest_handle_cast(op->u.pool, xen_sysctl_tmem_pool_info_t),
+                             op->len);
         break;
     default:
         ret = do_tmem_control(op);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index a9f95f8..8899c50 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -760,9 +760,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
 #define XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO        11
 #define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        12
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID     18
+#define XEN_SYSCTL_TMEM_OP_GET_POOLS              16
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
 #define XEN_SYSCTL_TMEM_OP_SAVE_END               21
@@ -790,6 +788,7 @@ struct tmem_client {
     uint32_t version;   /* If mismatched we will get XEN_ENOSPC. */
     uint32_t maxpools;  /* If greater than what hypervisor supports, will
                            get XEN_ERANGE. */
+    uint32_t nr_pools;  /* Current amount of pools. Ignored on SET*/
     union {             /* See TMEM_CLIENT_[COMPRESS,FROZEN] */
         uint32_t raw;
         struct {
@@ -803,6 +802,31 @@ struct tmem_client {
 typedef struct tmem_client xen_sysctl_tmem_client_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_client_t);

+/*
+ * XEN_SYSCTL_TMEM_OP_GET_POOLS uses the 'pool' array in
+ * xen_sysctl_tmem_op with this structure. The hypercall will
+ * return the number of entries in 'pool' or a negative value
+ * if an error was encountered.
+ */
+struct tmem_pool_info {
+    union {
+        uint32_t raw;
+        struct {
+            uint32_t persist:1,    /* See TMEM_POOL_PERSIST. */
+                     shared:1,     /* See TMEM_POOL_SHARED. */
+                     rsv:2,
+                     pagebits:8,   /* TMEM_POOL_PAGESIZE_[SHIFT,MASK]. */
+                     rsv2:12,
+                     version:8;    /* TMEM_POOL_VERSION_[SHIFT,MASK]. */
+        } u;
+    } flags;
+    uint32_t id;                  /* Less than tmem_client.maxpools. */
+    uint64_t n_pages;
+    uint64_t uuid[2];
+};
+typedef struct tmem_pool_info xen_sysctl_tmem_pool_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_pool_info_t);
+
 struct xen_sysctl_tmem_op {
     uint32_t cmd;       /* IN: XEN_SYSCTL_TMEM_OP_* . */
     int32_t pool_id;    /* IN: 0 by default unless _SAVE_*, RESTORE_* .*/
@@ -817,6 +841,9 @@ struct xen_sysctl_tmem_op {
         XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save/restore */
         XEN_GUEST_HANDLE_64(xen_sysctl_tmem_client_t) client; /* IN/OUT for */
                         /* XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT. */
+        XEN_GUEST_HANDLE_64(xen_sysctl_tmem_pool_info_t) pool; /* OUT for */
+                        /* XEN_SYSCTL_TMEM_OP_GET_POOLS. Must have 'len' */
+                        /* of them. */
     } u;
 };
 typedef struct xen_sysctl_tmem_op xen_sysctl_tmem_op_t;