From patchwork Wed Dec 9 16:09:49 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961817
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v3 1/8] xen/cpupool: support moving domain between cpupools
 with different granularity
Date: Wed, 9 Dec 2020 17:09:49 +0100
Message-Id: <20201209160956.32456-2-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

When moving a domain between cpupools with different scheduling
granularity the sched_units of the domain need to be adjusted.

Do that by allocating new sched_units and throwing away the old ones
in sched_move_domain().

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
 1 file changed, 90 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index a429fc7640..2a61c879b3 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -613,17 +613,45 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+/*
+ * Move a domain from one cpupool to another.
+ *
+ * A domain with any vcpu having temporary affinity settings will be denied
+ * to move. Hard and soft affinities will be reset.
+ *
+ * In order to support cpupools with different scheduling granularities all
+ * scheduling units are replaced by new ones.
+ *
+ * The complete move is done in the following steps:
+ * - check prerequisites (no vcpu with temporary affinities)
+ * - allocate all new data structures (scheduler specific domain data, unit
+ *   memory, scheduler specific unit data)
+ * - pause domain
+ * - temporarily move all (old) units to the same scheduling resource (this
+ *   makes the final resource assignment easier in case the new cpupool has
+ *   a larger granularity than the old one, as the scheduling locks for all
+ *   vcpus must be held for that operation)
+ * - remove old units from scheduling
+ * - set new cpupool and scheduler domain data pointers in struct domain
+ * - switch all vcpus to new units, still assigned to the old scheduling
+ *   resource
+ * - migrate all new units to scheduling resources of the new cpupool
+ * - unpause the domain
+ * - free the old memory (scheduler specific domain data, unit memory,
+ *   scheduler specific unit data)
+ */
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
-    struct sched_unit *unit;
+    struct sched_unit *unit, *old_unit;
+    struct sched_unit *new_units = NULL, *old_units;
+    struct sched_unit **unit_ptr = &new_units;
     unsigned int new_p, unit_idx;
-    void **unit_priv;
     void *domdata;
-    void *unitdata;
-    struct scheduler *old_ops;
+    struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    unsigned int n_units = DIV_ROUND_UP(d->max_vcpus, gran);
     int ret = 0;
 
     for_each_vcpu ( d, v )
@@ -641,53 +669,78 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         goto out;
     }
 
-    unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
-    if ( unit_priv == NULL )
+    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
     {
-        sched_free_domdata(c->sched, domdata);
-        ret = -ENOMEM;
-        goto out;
-    }
+        unit = sched_alloc_unit_mem();
+        if ( unit )
+        {
+            /* Initialize unit for sched_alloc_udata() to work. */
+            unit->domain = d;
+            unit->unit_id = unit_idx * gran;
+            unit->vcpu_list = d->vcpu[unit->unit_id];
+            unit->priv = sched_alloc_udata(c->sched, unit, domdata);
+            *unit_ptr = unit;
+        }
 
-    unit_idx = 0;
-    for_each_sched_unit ( d, unit )
-    {
-        unit_priv[unit_idx] = sched_alloc_udata(c->sched, unit, domdata);
-        if ( unit_priv[unit_idx] == NULL )
+        if ( !unit || !unit->priv )
         {
-            for ( unit_idx = 0; unit_priv[unit_idx]; unit_idx++ )
-                sched_free_udata(c->sched, unit_priv[unit_idx]);
-            xfree(unit_priv);
-            sched_free_domdata(c->sched, domdata);
+            old_units = new_units;
+            old_domdata = domdata;
             ret = -ENOMEM;
-            goto out;
+            goto out_free;
         }
-        unit_idx++;
+
+        unit_ptr = &unit->next_in_list;
     }
 
     domain_pause(d);
 
-    old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
+    new_p = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
+        spinlock_t *lock;
+
+        /*
+         * Temporarily move all units to same processor to make locking
+         * easier when moving the new units to the new processors.
+         */
+        lock = unit_schedule_lock_irq(unit);
+        sched_set_res(unit, get_sched_res(new_p));
+        spin_unlock_irq(lock);
+
         sched_remove_unit(old_ops, unit);
     }
 
+    old_units = d->sched_unit_list;
+
     d->cpupool = c;
     d->sched_priv = domdata;
 
+    unit = new_units;
+    for_each_vcpu ( d, v )
+    {
+        old_unit = v->sched_unit;
+        if ( unit->unit_id + gran == v->vcpu_id )
+            unit = unit->next_in_list;
+
+        unit->state_entry_time = old_unit->state_entry_time;
+        unit->runstate_cnt[v->runstate.state]++;
+        /* Temporarily use old resource assignment */
+        unit->res = get_sched_res(new_p);
+
+        v->sched_unit = unit;
+    }
+
+    d->sched_unit_list = new_units;
+
     new_p = cpumask_first(c->cpu_valid);
-    unit_idx = 0;
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
         unsigned int unit_p = new_p;
 
-        unitdata = unit->priv;
-        unit->priv = unit_priv[unit_idx];
-
         for_each_sched_unit_vcpu ( unit, v )
         {
             migrate_timer(&v->periodic_timer, new_p);
@@ -713,8 +766,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_insert_unit(c->sched, unit);
 
-        sched_free_udata(old_ops, unitdata);
-        unit_idx++;
     }
 
@@ -722,11 +773,19 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
-    sched_free_domdata(old_ops, old_domdata);
+ out_free:
+    for ( unit = old_units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(c->sched, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
 
-    xfree(unit_priv);
+    sched_free_domdata(old_ops, old_domdata);
 
-out:
+ out:
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
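As a standalone illustration of the vcpu-to-unit mapping used by the new
allocation loop (a sketch, not part of the patch; only DIV_ROUND_UP and the
unit_id arithmetic mirror the code above, the rest is hypothetical):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    unsigned int max_vcpus = 5, gran = 2;   /* e.g. core scheduling */
    unsigned int n_units = DIV_ROUND_UP(max_vcpus, gran);
    unsigned int v;

    /* Each unit covers gran consecutive vcpu ids: unit_id = unit_idx * gran. */
    printf("%u vcpus at granularity %u -> %u sched_units\n",
           max_vcpus, gran, n_units);
    for ( v = 0; v < max_vcpus; v++ )
        printf("vcpu %u -> unit_id %u\n", v, (v / gran) * gran);

    return 0;
}

This is also why the vcpu loop in the patch advances to the next unit exactly
when unit->unit_id + gran == v->vcpu_id.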
From patchwork Wed Dec 9 16:09:50 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961813
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
Date: Wed, 9 Dec 2020 17:09:50 +0100
Message-Id: <20201209160956.32456-3-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

The node specific write functions take a void user address handle as
parameter. As a write won't change the user memory, use a const_void
handle instead. This requires a new macro for casting a guest handle
to a const type.

Suggested-by: Jan Beulich
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V3:
- new patch
---
 xen/common/hypfs.c             | 17 +++++++++++------
 xen/include/xen/guest_access.h |  5 +++++
 xen/include/xen/hypfs.h        | 14 +++++++++-----
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 2e8e90591e..6f822ae097 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -344,7 +344,8 @@ static int hypfs_read(const struct hypfs_entry *entry,
 }
 
 int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     char *buf;
     int ret;
@@ -384,7 +385,8 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     bool buf;
 
@@ -405,7 +407,8 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned int ulen)
 {
     struct param_hypfs *p;
     char *buf;
@@ -439,13 +442,15 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     return -EACCES;
 }
 
 static int hypfs_write(struct hypfs_entry *entry,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned long ulen)
 {
     struct hypfs_entry_leaf *l;
 
@@ -497,7 +502,7 @@ long do_hypfs_op(unsigned int cmd,
         break;
 
     case XEN_HYPFS_OP_write_contents:
-        ret = hypfs_write(entry, arg3, arg4);
+        ret = hypfs_write(entry, guest_handle_const_cast(arg3, void), arg4);
         break;
 
     default:
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index f9b94cf1f4..5a50c3ccee 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -26,6 +26,11 @@
     type *_x = (hnd).p;                                     \
     (XEN_GUEST_HANDLE_PARAM(type)) { _x };                  \
 })
+/* Same for casting to a const type. */
+#define guest_handle_const_cast(hnd, type) ({               \
+    const type *_x = (const type *)((hnd).p);               \
+    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x };          \
+})
 
 /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({                 \
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 53f50772b4..99fd4b036d 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -38,7 +38,7 @@ struct hypfs_funcs {
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
-                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                 XEN_GUEST_HANDLE_PARAM(const_void) uaddr, unsigned int ulen);
     unsigned int (*getsize)(const struct hypfs_entry *entry);
     struct hypfs_entry *(*findentry)(const struct hypfs_entry_dir *dir,
                                      const char *name, unsigned int name_len);
@@ -154,13 +154,17 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
 int hypfs_read_leaf(const struct hypfs_entry *entry,
                     XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned int ulen);
 unsigned int hypfs_getsize(const struct hypfs_entry *entry);
 struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
                                          const char *name,
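A minimal stand-alone model of the cast pattern introduced above; the real
XEN_GUEST_HANDLE_PARAM types are macro-generated, so the plain structs here
are stand-ins (GNU C statement expressions assumed):

#include <stdio.h>

typedef struct { void *p; } handle_void;
typedef struct { const void *p; } handle_const_void;

/* Analogous to guest_handle_const_cast(hnd, void). */
#define handle_const_cast(hnd) ({ \
    const void *_x = (hnd).p;     \
    (handle_const_void){ _x };    \
})

int main(void)
{
    int v = 42;
    handle_void h = { &v };
    handle_const_void ch = handle_const_cast(h);

    /* The const handle permits reading the user buffer, never writing it. */
    printf("%d\n", *(const int *)ch.p);

    return 0;
}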
From patchwork Wed Dec 9 16:09:51 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961821
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
Date: Wed, 9 Dec 2020 17:09:51 +0100
Message-Id: <20201209160956.32456-4-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

In order to better support resource allocation and locking for dynamic
hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.

The enter() callback is called when entering a node during hypfs user
actions (traversing, reading or writing it), while the exit() callback
is called when leaving a node (accessing another node at the same or a
higher directory level, or when returning to the user).

To avoid recursion this requires a parent pointer in each node. Let
the enter() callback return the entry address, which is stored as the
last accessed node, in order to be able to use a template entry for
that purpose in case of dynamic entries.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- new patch
V3:
- add ASSERT(entry); (Jan Beulich)

Signed-off-by: Juergen Gross
---
 xen/common/hypfs.c      | 80 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h |  5 +++
 2 files changed, 85 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 6f822ae097..f04934db10 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
 
 const struct hypfs_funcs hypfs_dir_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_dir,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_dir_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_ro_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_leaf,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_bool_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_bool,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_custom_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_custom,
     .getsize = hypfs_getsize,
@@ -63,6 +73,8 @@ enum hypfs_lock_state {
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
 
+static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
+
 HYPFS_DIR_INIT(hypfs_root, "");
 
 static void hypfs_read_lock(void)
@@ -100,11 +112,59 @@ static void hypfs_unlock(void)
     }
 }
 
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
+{
+    return entry;
+}
+
+void hypfs_node_exit(const struct hypfs_entry *entry)
+{
+}
+
+static int node_enter(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    entry = entry->funcs->enter(entry);
+    if ( IS_ERR(entry) )
+        return PTR_ERR(entry);
+
+    ASSERT(entry);
+    ASSERT(!*last || *last == entry->parent);
+
+    *last = entry;
+
+    return 0;
+}
+
+static void node_exit(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    if ( !*last )
+        return;
+
+    ASSERT(*last == entry);
+    *last = entry->parent;
+
+    entry->funcs->exit(entry);
+}
+
+static void node_exit_all(void)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    while ( *last )
+        node_exit(*last);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
     struct hypfs_entry *e;
 
+    ASSERT(new->funcs->enter);
+    ASSERT(new->funcs->exit);
     ASSERT(new->funcs->read);
     ASSERT(new->funcs->write);
     ASSERT(new->funcs->getsize);
@@ -140,6 +200,7 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
         unsigned int sz = strlen(new->name);
 
         parent->e.size += DIRENTRY_SIZE(sz);
+        new->parent = &parent->e;
     }
 
     hypfs_unlock();
@@ -221,6 +282,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
     const char *end;
     struct hypfs_entry *entry;
     unsigned int name_len;
+    int ret;
 
     for ( ; ; )
     {
@@ -235,6 +297,10 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
             end = strchr(path, '\0');
         name_len = end - path;
 
+        ret = node_enter(&dir->e);
+        if ( ret )
+            return ERR_PTR(ret);
+
         entry = dir->e.funcs->findentry(dir, path, name_len);
         if ( IS_ERR(entry) || !*end )
             return entry;
@@ -265,6 +331,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
     const struct hypfs_entry_dir *d;
     const struct hypfs_entry *e;
     unsigned int size = entry->funcs->getsize(entry);
+    int ret;
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
@@ -276,12 +343,19 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
         unsigned int e_namelen = strlen(e->name);
         unsigned int e_len = DIRENTRY_SIZE(e_namelen);
 
+        ret = node_enter(e);
+        if ( ret )
+            return ret;
+
         direntry.e.pad = 0;
         direntry.e.type = e->type;
         direntry.e.encoding = e->encoding;
         direntry.e.content_len = e->funcs->getsize(e);
         direntry.e.max_write_len = e->max_size;
         direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+
+        node_exit(e);
+
         if ( copy_to_guest(uaddr, &direntry, 1) )
             return -EFAULT;
 
@@ -495,6 +569,10 @@ long do_hypfs_op(unsigned int cmd,
         goto out;
     }
 
+    ret = node_enter(entry);
+    if ( ret )
+        goto out;
+
     switch ( cmd )
     {
     case XEN_HYPFS_OP_read:
@@ -511,6 +589,8 @@ long do_hypfs_op(unsigned int cmd,
     }
 
  out:
+    node_exit_all();
+
     hypfs_unlock();
 
     return ret;
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 99fd4b036d..a6dfdb7d8e 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -35,6 +35,8 @@ struct hypfs_entry;
  * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b").
  */
 struct hypfs_funcs {
+    const struct hypfs_entry *(*enter)(const struct hypfs_entry *entry);
+    void (*exit)(const struct hypfs_entry *entry);
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
@@ -56,6 +58,7 @@ struct hypfs_entry {
     unsigned int size;
     unsigned int max_size;
     const char *name;
+    struct hypfs_entry *parent;
     struct list_head list;
     const struct hypfs_funcs *funcs;
 };
@@ -149,6 +152,8 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
+void hypfs_node_exit(const struct hypfs_entry *entry);
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_read_leaf(const struct hypfs_entry *entry,
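A stand-alone model of the node_enter()/node_exit() bookkeeping above: the
per-cpu "last entered" pointer behaves like a stack whose pop follows the
parent link. All names below are local to this sketch:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct node {
    const char *name;
    const struct node *parent;
};

static const struct node *last_entered;

static void enter(const struct node *n)
{
    /* Mirrors ASSERT(!*last || *last == entry->parent) in node_enter(). */
    assert(!last_entered || last_entered == n->parent);
    last_entered = n;
}

static void exit_one(const struct node *n)
{
    assert(last_entered == n);
    last_entered = n->parent;
}

static void exit_all(void)
{
    while ( last_entered )
        exit_one(last_entered);
}

int main(void)
{
    const struct node root = { "", NULL };
    const struct node dir = { "cpupool", &root };

    enter(&root);
    enter(&dir);
    exit_all();                 /* unwinds "cpupool", then the root */
    printf("all nodes exited: %d\n", last_entered == NULL);

    return 0;
}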
From patchwork Wed Dec 9 16:09:52 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961825
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v3 4/8] xen/hypfs: support dynamic hypfs nodes
Date: Wed, 9 Dec 2020 17:09:52 +0100
Message-Id: <20201209160956.32456-5-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

Add a HYPFS_DIR_INIT_FUNC() macro for initializing a directory
statically, taking a struct hypfs_funcs pointer as parameter in
addition to those of HYPFS_DIR_INIT().

Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
additional parameter, as this will be needed for dynamic entries.

To let the generic hypfs code continue to work on normal struct
hypfs_entry entities even for dynamic nodes, add some infrastructure
for allocating a working area for the current hypfs request in order
to store information needed for traversing the tree. This area is
anchored in a percpu pointer and can be retrieved by any level of the
dynamic entries. The normal way to handle allocation and freeing is
to allocate the data in the enter() callback of a node and to free it
in the related exit() callback.

Add a hypfs_add_dyndir() function for adding a dynamic directory
template to the tree, which is needed for having the correct reference
to its position in hypfs.

Signed-off-by: Juergen Gross
---
V2:
- switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
- carved out from previous patch
- use enter() and exit() callbacks for allocating and freeing dyndata
  memory
- add hypfs_add_dyndir()
V3:
- switch hypfs_alloc_dyndata() to be type safe (Jan Beulich)
- rename HYPFS_VARDIR_INIT() to HYPFS_DIR_INIT_FUNC() (Jan Beulich)

Signed-off-by: Juergen Gross
---
 xen/common/hypfs.c      | 31 +++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 29 +++++++++++++++++++----------
 2 files changed, 50 insertions(+), 10 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index f04934db10..8faf65cea0 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -72,6 +72,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata);
 
 static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
 
@@ -158,6 +159,30 @@ static void node_exit_all(void)
         node_exit(*last);
 }
 
+void *hypfs_alloc_dyndata_size(unsigned long size)
+{
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
+
+    per_cpu(hypfs_dyndata, cpu) = xzalloc_bytes(size);
+
+    return per_cpu(hypfs_dyndata, cpu);
+}
+
+void *hypfs_get_dyndata(void)
+{
+    ASSERT(this_cpu(hypfs_dyndata));
+
+    return this_cpu(hypfs_dyndata);
+}
+
+void hypfs_free_dyndata(void)
+{
+    XFREE(this_cpu(hypfs_dyndata));
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -219,6 +244,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
     return ret;
 }
 
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template)
+{
+    template->e.parent = &parent->e;
+}
+
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index a6dfdb7d8e..4c469cbeb4 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,7 +76,7 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                \
+#define HYPFS_DIR_INIT_FUNC(var, nam, fn)       \
     struct hypfs_entry_dir __read_mostly var = {\
         .e.type = XEN_HYPFS_TYPE_DIR,           \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,      \
@@ -84,22 +84,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                            \
         .e.max_size = 0,                        \
         .e.list = LIST_HEAD_INIT(var.e.list),   \
-        .e.funcs = &hypfs_dir_funcs,            \
+        .e.funcs = (fn),                        \
         .dirlist = LIST_HEAD_INIT(var.dirlist), \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
-    struct hypfs_entry_leaf __read_mostly var = { \
-        .e.type = (typ),                          \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
-        .e.name = (nam),                          \
-        .e.max_size = (msz),                      \
-        .e.funcs = &hypfs_leaf_ro_funcs,          \
+#define HYPFS_DIR_INIT(var, nam)                \
+    HYPFS_DIR_INIT_FUNC(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \
+    struct hypfs_entry_leaf __read_mostly var = {  \
+        .e.type = (typ),                           \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,         \
+        .e.name = (nam),                           \
+        .e.max_size = (msz),                       \
+        .e.funcs = (fn),                           \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)             \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
 const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
@@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
 struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
                                         const char *name,
                                         unsigned int name_len);
+void *hypfs_alloc_dyndata_size(unsigned long size);
+#define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))
+void *hypfs_get_dyndata(void);
+void hypfs_free_dyndata(void);
 
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
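A hedged sketch of how a node implementation might use the new dyndata
helpers (assumes compilation in the Xen context with xen/hypfs.h; the node
type and its fields are hypothetical, only the hypfs_*_dyndata calls and
the ERR_PTR convention come from the patch):

/* Hypothetical per-request state for some dynamic node. */
struct mynode_data {
    unsigned int id;
};

static const struct hypfs_entry *mynode_enter(const struct hypfs_entry *e)
{
    /* Allocate the request-local working area on node entry. */
    struct mynode_data *data = hypfs_alloc_dyndata(struct mynode_data);

    if ( !data )
        return ERR_PTR(-ENOMEM);

    data->id = 0;               /* set up whatever state is needed */

    return e;
}

static void mynode_exit(const struct hypfs_entry *e)
{
    /* Frees what mynode_enter() allocated. */
    hypfs_free_dyndata();
}

Intermediate levels of the tree would retrieve the same area via
hypfs_get_dyndata() while the request is being handled.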
From patchwork Wed Dec 9 16:09:53 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961829
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
Date: Wed, 9 Dec 2020 17:09:53 +0100
Message-Id: <20201209160956.32456-6-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as name.

The dynamic directory is based on a template specified by the user,
allowing the use of specific access functions and a predefined set of
entries in the directory.

Signed-off-by: Juergen Gross
---
V2:
- use macro for length of entry name (Jan Beulich)
- const attributes (Jan Beulich)
- use template name as format string (Jan Beulich)
- add hypfs_dynid_entry_size() helper (Jan Beulich)
- expect dyndir data having been allocated by enter() callback
V3:
- add a specific enter() callback returning the template pointer
- add data field to struct hypfs_dyndir_id
- rename hypfs_gen_dyndir_entry_id() (Jan Beulich)
- add comments regarding generated names to be kept in sync (Jan Beulich)

Signed-off-by: Juergen Gross
---
 xen/common/hypfs.c      | 98 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 18 ++++++++
 2 files changed, 116 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 8faf65cea0..087c63b92f 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -356,6 +356,104 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+/*
+ * Fill the direntry for a dynamically generated directory. Especially the
+ * generated name needs to be kept in sync with hypfs_gen_dyndir_id_entry().
+ */
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[HYPFS_DYNDIR_ID_NAMELEN];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
+    e_len = DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static const struct hypfs_entry *hypfs_dyndir_enter(
+    const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original enter function. */
+    return data->template->e.funcs->enter(&data->template->e);
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+/*
+ * Fill dyndata with a dynamically generated directory based on a template
+ * and a numerical id.
+ * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
+ * name generated.
+ */
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    dyndata->template = template;
+    dyndata->id = id;
+    dyndata->data = data;
+    snprintf(dyndata->name, sizeof(dyndata->name), template->e.name, id);
+    dyndata->dir = *template;
+    dyndata->dir.e.name = dyndata->name;
+    dyndata->dir.e.funcs = &dyndata->funcs;
+    dyndata->funcs = *template->e.funcs;
+    dyndata->funcs.enter = hypfs_dyndir_enter;
+    dyndata->funcs.findentry = hypfs_dyndir_findentry;
+    dyndata->funcs.read = hypfs_read_dyndir;
+
+    return &dyndata->dir.e;
+}
+
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id)
+{
+    return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id));
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 4c469cbeb4..34073faff8 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,6 +76,17 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;             /* Modified copy of template. */
+    struct hypfs_funcs funcs;               /* Dynamic functions. */
+    const struct hypfs_entry_dir *template; /* Template used. */
+#define HYPFS_DYNDIR_ID_NAMELEN 12
+    char name[HYPFS_DYNDIR_ID_NAMELEN];     /* Name of hypfs entry. */
+
+    unsigned int id;                        /* Numerical id. */
+    void *data;                             /* Data associated with id. */
+};
+
 #define HYPFS_DIR_INIT_FUNC(var, nam, fn)       \
     struct hypfs_entry_dir __read_mostly var = {\
         .e.type = XEN_HYPFS_TYPE_DIR,           \
@@ -186,6 +197,13 @@ void *hypfs_alloc_dyndata_size(unsigned long size);
 #define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))
 void *hypfs_get_dyndata(void);
 void hypfs_free_dyndata(void);
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data);
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id);
 
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
From patchwork Wed Dec 9 16:09:54 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11961819
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v3 6/8] xen/cpupool: add cpupool directories
Date: Wed, 9 Dec 2020 17:09:54 +0100
Message-Id: <20201209160956.32456-7-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be implemented.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V2:
- added const (Jan Beulich)
- call hypfs_add_dir() in helper (Dario Faggioli)
- switch locking to enter/exit callbacks
V3:
- use generic dyndirid enter function
- const for hypfs function vector (Jan Beulich)
- drop size calculation from cpupool_dir_read() (Jan Beulich)
- check cpupool id to not exceed UINT_MAX (Jan Beulich)
- coding style (#if/#else/#endif) (Jan Beulich)

Signed-off-by: Juergen Gross
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/common/sched/cpupool.c   | 104 +++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0db7d77219..f293ba0cc4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
 static cpumask_t cpupool_locked_cpus;
 
+/* This lock nests inside sysctl or hypfs lock. */
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
@@ -1003,12 +1006,113 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+
+static HYPFS_DIR_INIT(cpupool_pooldir, "%u");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);
+
+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len || id > UINT_MAX )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
+}
+
+static const struct hypfs_funcs cpupool_dir_funcs = {
+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+
+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+
+#else /* CONFIG_HYPFS */
+
+static void cpupool_hypfs_init(void)
+{
+}
+
+#endif /* CONFIG_HYPFS */
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
 
     cpupool_gran_init();
 
+    cpupool_hypfs_init();
+
     cpupool0 = cpupool_create(0, 0);
     BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
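The id parsing in cpupool_dir_findentry() above can be modelled stand-alone
as follows; strtoul stands in for Xen's simple_strtoul, with the same
rejection rules (whole name consumed, value fitting unsigned int):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int parse_pool_id(const char *name, unsigned int *id)
{
    char *end;
    unsigned long val = strtoul(name, &end, 10);

    if ( end != name + strlen(name) || val > UINT_MAX )
        return -1;              /* Xen returns ERR_PTR(-ENOENT) here */

    *id = val;

    return 0;
}

int main(void)
{
    unsigned int id;

    printf("\"0\"  -> %s\n", parse_pool_id("0", &id) ? "rejected" : "ok");
    printf("\"1x\" -> %s\n", parse_pool_id("1x", &id) ? "rejected" : "ok");

    return 0;
}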
by mail.kernel.org (Postfix) with ESMTPS id 434E323B51 for ; Wed, 9 Dec 2020 16:10:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 434E323B51 Authentication-Results: mail.kernel.org; dmarc=fail (p=quarantine dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.48431.85691 (Exim 4.92) (envelope-from ) id 1kn22f-0003Hj-Bo; Wed, 09 Dec 2020 16:10:13 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 48431.85691; Wed, 09 Dec 2020 16:10:13 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kn22f-0003HT-8Q; Wed, 09 Dec 2020 16:10:13 +0000 Received: by outflank-mailman (input) for mailman id 48431; Wed, 09 Dec 2020 16:10:11 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kn22d-0002Or-7D for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:11 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 0f6655d5-55ae-48c3-be13-b6ed1eccfdd3; Wed, 09 Dec 2020 16:10:01 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 9A041B278; Wed, 9 Dec 2020 16:10:00 +0000 (UTC) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 0f6655d5-55ae-48c3-be13-b6ed1eccfdd3 X-Virus-Scanned: by amavisd-new at test-mx.suse.de DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1607530200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=am94U0utN2Ue9jktg4hF/gzv3ykid5UM0GvnBsWgHyU=; b=EpPmOjNG0FFXPmBjiSvX6HmVSUhiHTbeOY9WCv7hSTNwt6+60G4OOwKdLUxtBkadZb9Jq2 KPksBVJ2V9xqu9NPhoW9ZqViGv+usvLqXbChx2kF42EwWQ2mOVgqqwmbKi66ZiU+ybwKV/ J2TZebCcU9fwBgnr8c9/6RI4ldDit7A= From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu , Dario Faggioli Subject: [PATCH v3 7/8] xen/cpupool: add scheduling granularity entry to cpupool entries Date: Wed, 9 Dec 2020 17:09:55 +0100 Message-Id: <20201209160956.32456-8-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201209160956.32456-1-jgross@suse.com> References: <20201209160956.32456-1-jgross@suse.com> MIME-Version: 1.0 Add a "sched-gran" entry to the per-cpupool hypfs directories. For now make this entry read-only and let it contain one of the strings "cpu", "core" or "socket". 
Signed-off-by: Juergen Gross
---
V2:
- added const (Jan Beulich)
- modify test in cpupool_gran_read() (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  4 ++
 xen/common/sched/cpupool.c   | 72 ++++++++++++++++++++++++++++++++++--
 2 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
 
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index f293ba0cc4..e2011367bd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN 8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1014,10 +1020,16 @@ static int cpupool_dir_read(const struct hypfs_entry *entry,
                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     int ret = 0;
-    const struct cpupool *c;
+    struct cpupool *c;
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
 
     list_for_each_entry(c, &cpupool_list, list)
     {
+        data->id = c->cpupool_id;
+        data->data = c;
+
         ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
                                          list_is_last(&c->list, &cpupool_list),
                                          &uaddr);
@@ -1080,6 +1092,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
     return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    if ( !*gran )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0;
+}
+
+static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    return strlen(gran) + 1;
+}
+
+static const struct hypfs_funcs cpupool_gran_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = cpupool_gran_read,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_gran_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static const struct hypfs_funcs cpupool_dir_funcs = {
     .enter = cpupool_dir_enter,
     .exit = cpupool_dir_exit,
@@ -1095,6 +1157,8 @@ static void cpupool_hypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
     hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 }
 
 #else /* CONFIG_HYPFS */
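For illustration (not part of the series): with the new leaf in place, a dom0
tool can read a pool's granularity through libxenhypfs. The function names
and signatures below (xenhypfs_open/xenhypfs_read/xenhypfs_close) are
assumptions to be checked against the library's public header,
tools/include/xenhypfs.h.

/*
 * Sketch: read /cpupool/0/sched-gran from dom0. The libxenhypfs calls
 * and their signatures are assumptions; verify against xenhypfs.h.
 */
#include <stdio.h>
#include <stdlib.h>
#include <xenhypfs.h>

int main(void)
{
    xenhypfs_handle *fshdl = xenhypfs_open(NULL, 0);
    char *gran;

    if ( !fshdl )
        return 1;

    /* Assumed to return a NUL-terminated string for string-type leaves. */
    gran = xenhypfs_read(fshdl, "/cpupool/0/sched-gran");
    if ( gran )
    {
        printf("Pool-0 scheduling granularity: %s\n", gran);
        free(gran);
    }

    xenhypfs_close(fshdl);

    return gran ? 0 : 1;
}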
From patchwork Wed Dec 9 16:09:56 2020
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
    Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v3 8/8] xen/cpupool: make per-cpupool sched-gran hypfs node
 writable
Date: Wed, 9 Dec 2020 17:09:56 +0100
Message-Id: <20201209160956.32456-9-jgross@suse.com>
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>

Make /cpupool/*/sched-gran in hypfs writable. This enables selecting the
scheduling granularity per cpupool.

Writing this node is allowed only when no cpu is assigned to the cpupool.
Allowed values are "cpu", "core" and "socket".

Signed-off-by: Juergen Gross
---
V2:
- test user parameters earlier (Jan Beulich)
V3:
- fix build without CONFIG_HYPFS on Arm (Andrew Cooper)
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and if the
+architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index e2011367bd..acd26f9449 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#elif CONFIG_HYPFS
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static const struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
 };
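A matching write-side sketch (again not part of the series): changing a
pool's granularity only succeeds while the pool has no cpus assigned,
mirroring the -EBUSY check in cpupool_gran_write() above. The libxenhypfs
signature of xenhypfs_write() and the example path are assumptions.

/*
 * Sketch: set an (empty) cpupool's granularity from dom0. The value is
 * passed as a NUL-terminated string, which is what cpupool_gran_write()
 * expects; xenhypfs_write()'s signature is an assumption.
 */
#include <xenhypfs.h>

static int set_pool_gran(const char *path, const char *gran)
{
    xenhypfs_handle *fshdl = xenhypfs_open(NULL, 0);
    int ret;

    if ( !fshdl )
        return -1;

    /* Rejected by the hypervisor while cpus are still assigned (-EBUSY). */
    ret = xenhypfs_write(fshdl, path, gran);

    xenhypfs_close(fshdl);

    return ret;
}

int main(void)
{
    /* "/cpupool/1/sched-gran" is a hypothetical example path. */
    return set_pool_gran("/cpupool/1/sched-gran", "core") ? 1 : 0;
}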