Message ID: 20201026091316.25680-11-jgross@suse.com
State: Superseded
Series: xen: support per-cpupool scheduling granularity
On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be
> implemented.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
So, I'm almost sold... Just one comment:

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>
>      cpupool_gran_init();
>
> +#ifdef CONFIG_HYPFS
> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
> +#endif
> +
What would you think about doing this in a helper function
(hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
empty stub if !CONFIG_HYPFS ?

That will save us from having the #ifdef-s again here.

I'm asking because it's certainly not critical and I don't have a too
strong opinion about it. But I do think the code would look better.

>      cpupool0 = cpupool_create(0, 0, &err);
>      BUG_ON(cpupool0 == NULL);
>      cpupool_put(cpupool0);
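For illustration, a minimal sketch of the shape being suggested (a sketch
only: the helper name hypfs_cpupool_init() comes from the suggestion above,
the hypfs_add_dir() call is taken verbatim from the patch, and the exact
placement in cpupool.c is assumed):

#ifdef CONFIG_HYPFS
/* Registration of the cpupool directory, kept behind CONFIG_HYPFS. */
static void __init hypfs_cpupool_init(void)
{
    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
}
#else
/* Empty stub when hypfs support is compiled out. */
static void __init hypfs_cpupool_init(void)
{
}
#endif

/*
 * cpupool_init() would then call hypfs_cpupool_init() unconditionally,
 * with no #ifdef at the call site.
 */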
On 11.11.2020 15:51, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be
>> implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> So, I'm almost sold... Just one comment:
>
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>
>>      cpupool_gran_init();
>>
>> +#ifdef CONFIG_HYPFS
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +#endif
>> +
> What would you think about doing this in a helper function
> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
> empty stub if !CONFIG_HYPFS ?
>
> That will save us from having the #ifdef-s again here.

Having a hypfs_add_dir() stub would also allow achieving this, and
then, going forward, perhaps also elsewhere.

Jan
On 11.11.20 15:51, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be
>> implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> So, I'm almost sold... Just one comment:
>
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>
>>      cpupool_gran_init();
>>
>> +#ifdef CONFIG_HYPFS
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +#endif
>> +
> What would you think about doing this in a helper function
> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
> empty stub if !CONFIG_HYPFS ?
>
> That will save us from having the #ifdef-s again here.
>
> I'm asking because it's certainly not critical and I don't have a too
> strong opinion about it. But I do think the code would look better.

I'm fine either way.

In case nobody objects I'll change it.

Juergen
On Wed, 2020-11-11 at 15:56 +0100, Jürgen Groß wrote:
> On 11.11.20 15:51, Dario Faggioli wrote:
>>
>> What would you think about doing this in a helper function
>> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and
>> as an empty stub if !CONFIG_HYPFS ?
>>
>> That will save us from having the #ifdef-s again here.
>>
>> I'm asking because it's certainly not critical and I don't have a
>> too strong opinion about it. But I do think the code would look better.
>
> I'm fine either way.
>
> In case nobody objects I'll change it.
>
Ok, cool. If you do that and resend, and that's the only change, you
can add:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
On 11.11.20 15:56, Jan Beulich wrote:
> On 11.11.2020 15:51, Dario Faggioli wrote:
>> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>>> dynamic, so the related hypfs access functions need to be
>>> implemented.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>> So, I'm almost sold... Just one comment:
>>
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>>
>>>      cpupool_gran_init();
>>>
>>> +#ifdef CONFIG_HYPFS
>>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>>> +#endif
>>> +
>> What would you think about doing this in a helper function
>> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
>> empty stub if !CONFIG_HYPFS ?
>>
>> That will save us from having the #ifdef-s again here.
>
> Having a hypfs_add_dir() stub would also allow achieving this, and
> then, going forward, perhaps also elsewhere.

I thought about that. This would require a macro for the stub case,
but I don't think this is a major problem.

Currently there are no other places requiring a stub, but in future
this might change.

Juergen
On Wed, 2020-11-11 at 16:00 +0100, Jürgen Groß wrote:
> On 11.11.20 15:56, Jan Beulich wrote:
>> On 11.11.2020 15:51, Dario Faggioli wrote:
>>>
>>
>> Having a hypfs_add_dir() stub would also allow achieving this, and
>> then, going forward, perhaps also elsewhere.
>
> I thought about that. This would require a macro for the stub case,
> but I don't think this is a major problem.
>
Yes, I thought about "stub-bing" at the hypfs_add_dir() level, but we
need a macro for that, for dealing with the parameters.

Also, it's not like having that stub would prevent having #ifdef-s at
all in this file. So that's why I thought about a local stub.

But sure, if you go for the generic macro stub, I'm fine with that too.

Regards
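A rough sketch of the generic-stub alternative being discussed, to make the
"needs to be a macro" point concrete. This is illustrative only: the real
declaration lives in xen/include/xen/hypfs.h, and the prototype shown here
is merely inferred from the call site in the patch.

#ifdef CONFIG_HYPFS
/* Real declaration (signature inferred from the call site in the patch). */
void hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
#else
/*
 * Stub as a macro rather than an inline function: the arguments are
 * dropped from the expansion, so call-site references such as
 * &cpupool_dir (an object that only exists under CONFIG_HYPFS) never
 * get compiled when hypfs support is disabled.
 */
#define hypfs_add_dir(parent, dir, nofault) ((void)0)
#endif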
On 26.10.2020 10:13, Juergen Gross wrote:
> @@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
>      .notifier_call = cpu_callback
> };
>
> +#ifdef CONFIG_HYPFS
> +static HYPFS_DIR_INIT(cpupool_pooldir, "id");

This "id" string won't appear anywhere, will it? I would have
expected this to act as the format string used when generating
names of the dynamic entries. This would e.g. allow CPU pools
to have decimal numbered names, but other entries also hex
ones, and then if so desired also e.g. with leading zeros.

> +static int cpupool_dir_read(const struct hypfs_entry *entry,
> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    int ret = 0;
> +    struct cpupool **q;

I was going to ask for const here, but the way for_each_cpupool()
works looks to prohibit this. Nevertheless I wonder whether the
extra level of indirection there wouldn't better be dropped. Of
the users, only cpupool_destroy() looks to need it, so open-coding
the loop there (or introducing an auxiliary variable) would allow
improvements here and elsewhere. (Actually I notice there's also a
similar use in cpupool_create(), but the general consideration
remains.)

> +    spin_lock(&cpupool_lock);
> +
> +    for_each_cpupool(q)
> +    {
> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
> +                                         !(*q)->next, &uaddr);
> +        if ( ret )
> +            break;
> +    }
> +
> +    spin_unlock(&cpupool_lock);
> +
> +    return ret;
> +}
> +
> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
> +{
> +    struct cpupool **q;
> +    unsigned int size = 0;
> +
> +    spin_lock(&cpupool_lock);
> +
> +    for_each_cpupool(q)
> +        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));

Beyond the remark above I consider this problematic: If the pool
ID was negative, the use of %d here would get things out of sync
with the %u uses in hypfs.c. I guess exposing
HYPFS_DIRENTRY_SIZE() isn't the right approach, and you instead
need another hypfs library function.

> +static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
> +                                                 const char *name,
> +                                                 unsigned int name_len)
> +{
> +    unsigned long id;
> +    const char *end;
> +    struct cpupool *cpupool;
> +
> +    id = simple_strtoul(name, &end, 10);
> +    if ( id > INT_MAX || end != name + name_len )

What does this INT_MAX match up with? Afaics
XEN_SYSCTL_CPUPOOL_OP_CREATE is fine with having an effectively
negative pool ID passed in (the public interface struct uses
uint32_t, but this gets converted to plain int first thing in
the sysctl handler).

> +        return ERR_PTR(-ENOENT);
> +
> +    spin_lock(&cpupool_lock);
> +
> +    cpupool = __cpupool_find_by_id(id, true);
> +
> +    spin_unlock(&cpupool_lock);
> +
> +    if ( !cpupool )
> +        return ERR_PTR(-ENOENT);
> +
> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);

At the latest, this one makes clear that cpupool_lock nests inside
the hypfs one. I think this wants spelling out next to the
definition of the former, as it implies that there are
restrictions on what can be done from inside cpupool-locked
regions. hypfs_read_dyndir_id_entry(), for example, has to
remain lock free for this reason.

Jan
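To make the %d/%u concern concrete, here is a small standalone C demo (not
Xen code) showing how a size computed with "%d" diverges from the length of
a name generated with "%u" once the pool ID is negative:

/* Standalone demo of the sizing mismatch for a negative pool ID. */
#include <stdio.h>

int main(void)
{
    int id = -1;

    /* Length the directory would be sized for, using "%d". */
    int sized_for = snprintf(NULL, 0, "%d", id);            /* 2: "-1" */
    /* Length of a name actually generated with an unsigned format. */
    int name_len  = snprintf(NULL, 0, "%u", (unsigned)id);  /* 10: "4294967295" */

    printf("sized for %d bytes, generated name needs %d bytes\n",
           sized_for, name_len);
    return 0;
}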
On 17.11.20 15:13, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
>>     .notifier_call = cpu_callback
>> };
>>
>> +#ifdef CONFIG_HYPFS
>> +static HYPFS_DIR_INIT(cpupool_pooldir, "id");
>
> This "id" string won't appear anywhere, will it? I would have
> expected this to act as the format string used when generating
> names of the dynamic entries. This would e.g. allow CPU pools
> to have decimal numbered names, but other entries also hex
> ones, and then if so desired also e.g. with leading zeros.

I like that idea.

>
>> +static int cpupool_dir_read(const struct hypfs_entry *entry,
>> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    int ret = 0;
>> +    struct cpupool **q;
>
> I was going to ask for const here, but the way for_each_cpupool()
> works looks to prohibit this. Nevertheless I wonder whether the
> extra level of indirection there wouldn't better be dropped. Of
> the users, only cpupool_destroy() looks to need it, so open-coding
> the loop there (or introducing an auxiliary variable) would allow
> improvements here and elsewhere. (Actually I notice there's also a
> similar use in cpupool_create(), but the general consideration
> remains.)

I'll have a look.

>
>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +    {
>> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
>> +                                         !(*q)->next, &uaddr);
>> +        if ( ret )
>> +            break;
>> +    }
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    return ret;
>> +}
>> +
>> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
>> +{
>> +    struct cpupool **q;
>> +    unsigned int size = 0;
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));
>
> Beyond the remark above I consider this problematic: If the pool
> ID was negative, the use of %d here would get things out of sync
> with the %u uses in hypfs.c. I guess exposing
> HYPFS_DIRENTRY_SIZE() isn't the right approach, and you instead
> need another hypfs library function.

Fine with me.

>
>> +static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>> +                                                 const char *name,
>> +                                                 unsigned int name_len)
>> +{
>> +    unsigned long id;
>> +    const char *end;
>> +    struct cpupool *cpupool;
>> +
>> +    id = simple_strtoul(name, &end, 10);
>> +    if ( id > INT_MAX || end != name + name_len )
>
> What does this INT_MAX match up with? Afaics
> XEN_SYSCTL_CPUPOOL_OP_CREATE is fine with having an effectively
> negative pool ID passed in (the public interface struct uses
> uint32_t, but this gets converted to plain int first thing in
> the sysctl handler).

Oh, this wants to be fixed.

>
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    cpupool = __cpupool_find_by_id(id, true);
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    if ( !cpupool )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>
> At the latest, this one makes clear that cpupool_lock nests inside
> the hypfs one. I think this wants spelling out next to the
> definition of the former, as it implies that there are
> restrictions on what can be done from inside cpupool-locked
> regions. hypfs_read_dyndir_id_entry(), for example, has to
> remain lock free for this reason.

Okay, I'll add a comment.

Juergen
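One possible shape for that comment, next to the lock definition. The
wording is only a suggestion, and the DEFINE_SPINLOCK() line merely assumes
the lock is defined that way in cpupool.c:

/*
 * Suggested wording only:
 *
 * cpupool_lock nests inside the hypfs lock.  Consequently, nothing that
 * acquires the hypfs lock may be called while holding cpupool_lock;
 * e.g. hypfs_read_dyndir_id_entry() has to remain lock free.
 */
static DEFINE_SPINLOCK(cpupool_lock);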
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 84f326ea63..8612ee5cf6 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+static HYPFS_DIR_INIT(cpupool_pooldir, "id");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    struct cpupool **q;
+
+    spin_lock(&cpupool_lock);
+
+    for_each_cpupool(q)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
+                                         !(*q)->next, &uaddr);
+        if ( ret )
+            break;
+    }
+
+    spin_unlock(&cpupool_lock);
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    struct cpupool **q;
+    unsigned int size = 0;
+
+    spin_lock(&cpupool_lock);
+
+    for_each_cpupool(q)
+        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));
+
+    spin_unlock(&cpupool_lock);
+
+    return size;
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
+                                                 const char *name,
+                                                 unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( id > INT_MAX || end != name + name_len )
+        return ERR_PTR(-ENOENT);
+
+    spin_lock(&cpupool_lock);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    spin_unlock(&cpupool_lock);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
+}
+
+static struct hypfs_funcs cpupool_dir_funcs = {
+    .read = cpupool_dir_read,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+#endif
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
@@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
 
     cpupool_gran_init();
 
+#ifdef CONFIG_HYPFS
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+#endif
+
     cpupool0 = cpupool_create(0, 0, &err);
     BUG_ON(cpupool0 == NULL);
     cpupool_put(cpupool0);
Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be implemented.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |  9 +++++
 xen/common/sched/cpupool.c   | 78 ++++++++++++++++++++++++++++++++++++
 2 files changed, 87 insertions(+)