Message ID | 1473874670-4986-3-git-send-email-joao.m.martins@oracle.com (mailing list archive)
State | New, archived
>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote:
> This patch introduces support for using TSC as platform time source, which is the highest resolution time and most performant to get. Though there are also several problems associated with its usage, and there isn't a complete (and architecturally defined) guarantee that all machines will provide reliable and monotonic TSC in all cases (I believe Intel to be the only one that can guarantee that?). For this reason it's set with less priority when compared to HPET unless the administrator changes the "clocksource" boot option to "tsc".

In the following sentence you removed the exclusive mentioning of HPET, but above you don't. Furthermore I don't think this sentence is in line with what the patch does: there's no priority given to it, and it won't be used at all when not requested on the command line.

> Initializing the TSC clocksource requires all CPUs to be up, so that the tsc reliability checks can be performed. init_xen_time is called before all CPUs are up, so for example we would start with HPET (or ACPI, PIT) at boot time, and switch later to TSC. The switch then happens in the verify_tsc_reliability initcall that is invoked when all CPUs are up. When attempting to initialize TSC we also check for time warps and for invariant TSC. Note that while we deem reliable a CONSTANT_TSC with no deep C-states, that might not always be the case, so we're conservative and allow TSC to be used as platform timer only with invariant TSC. Additionally we check that CPU hotplug isn't meant to be performed on the host, which is the case when max vcpus and num_present_cpus are the same. This is because a newly hotplugged CPU may not satisfy the condition of having all TSCs synchronized - so with the tsc clocksource in use we allow offlining CPUs but not onlining any back. Finally we prevent TSC from being used as clocksource on multiple sockets because it isn't guaranteed to be invariant. Further relaxing of this last requirement is added in a separate patch, such that we allow vendors with such a guarantee to use TSC as clocksource. In case any of these conditions is not met, we keep the clocksource that was previously initialized in init_xen_time.
>
> Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path") updates to cpu time use local stamps, which means the platform timer is only used to seed the initial cpu time. With clocksource=tsc there is no need to be in sync with another clocksource, so we reseed the local/master stamps to be values of TSC and update the platform time stamps accordingly. Time calibration is set to 1sec after we switch to TSC, thus these stamps are reseeded to also ensure monotonically increasing values right after the point we switch to TSC. This is also to avoid the possibility of having inconsistent readings in this short period (i.e. until calibration fires).

And within this one second, which may cover some of Dom0's booting up, it is okay to have inconsistencies?

> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -475,6 +475,50 @@ uint64_t ns_to_acpi_pm_tick(uint64_t ns)
>  }
>  
>  /************************************************************
> + * PLATFORM TIMER 4: TSC
> + */
> +
> +/*
> + * Called in verify_tsc_reliability() under reliable TSC conditions
> + * thus reusing all the checks already performed there.
> + */
> +static s64 __init init_tsc(struct platform_timesource *pts)
> +{
> +    u64 ret = pts->frequency;
> +
> +    if ( nr_cpu_ids != num_present_cpus() )
> +    {
> +        printk(XENLOG_INFO "TSC: CPU Hotplug intended\n");
> +        ret = 0;
> +    }
> +
> +    if ( nr_sockets > 1 )
> +    {
> +        printk(XENLOG_INFO "TSC: Not invariant across sockets\n");
> +        ret = 0;
> +    }
> +
> +    if ( !ret )
> +        printk(XENLOG_INFO "TSC: Not setting it as clocksource\n");

I think this last message is redundant with the former two. But since I also think that info level is too low for those earlier ones, perhaps keeping the latter (at info or even debug level) would be okay, once the others got bumped to warning level.

> +static struct platform_timesource __initdata plt_tsc =
> +{
> +    .id = "tsc",
> +    .name = "TSC",
> +    .read_counter = read_tsc,
> +    .counter_bits = 63,

Please add a brief comment explaining why this is not 64.

> @@ -604,7 +667,9 @@ static u64 __init init_platform_timer(void)
>      unsigned int i;
>      s64 rc = -1;
>  
> -    if ( opt_clocksource[0] != '\0' )
> +    /* clocksource=tsc is initialized via __initcalls (when CPUs are up). */
> +    if ( (opt_clocksource[0] != '\0') &&
> +         strcmp(opt_clocksource, "tsc") )

No real need to split this if() across two lines.

> +static void __init try_platform_timer_tail(void)
> +{
> +    init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
> +    plt_overflow(NULL);
> +
> +    platform_timer_stamp = plt_stamp64;
> +    stime_platform_stamp = NOW();
> +
> +    if ( !clocksource_is_tsc() )
> +        init_percpu_time();

This isn't really dependent on whether TSC is used as clocksource, but solely on the point in time at which the call gets made, is it? If so, I think an explicit boolean function parameter (named e.g. "late") would be better than abusing the predicate here.

> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
>          printk("TSC warp detected, disabling TSC_RELIABLE\n");
>          setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
>      }
> +    else if ( !strcmp(opt_clocksource, "tsc") &&
> +              (try_platform_timer(&plt_tsc) > 0) )
> +    {
> +        /*
> +         * Platform timer has changed and CPU time will only be updated
> +         * after we set again the calibration timer, which means we need to
> +         * seed again each local CPU time. At this stage TSC is known to be
> +         * reliable i.e. monotonically increasing across all CPUs so this
> +         * lets us remove the skew between platform timer and TSC, since
> +         * these are now effectively the same.
> +         */
> +        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
> +
> +        /* Finish platform timer switch. */
> +        try_platform_timer_tail();
> +
> +        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
> +               freq_string(plt_src.frequency));

This message should have the same log level as the one at the end of init_platform_timer().

Jan
On 09/19/2016 11:13 AM, Jan Beulich wrote:
>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote:
>> This patch introduces support for using TSC as platform time source, which is the highest resolution time and most performant to get. Though there are also several problems associated with its usage, and there isn't a complete (and architecturally defined) guarantee that all machines will provide reliable and monotonic TSC in all cases (I believe Intel to be the only one that can guarantee that?). For this reason it's set with less priority when compared to HPET unless the administrator changes the "clocksource" boot option to "tsc".
>
> In the following sentence you removed the exclusive mentioning of HPET, but above you don't. Furthermore I don't think this sentence is in line with what the patch does: there's no priority given to it, and it won't be used at all when not requested on the command line.

You're right, let me change this sentence to be: "For this reason it's not used unless the administrator changes the "clocksource" boot option to "tsc"."

>
>> Initializing the TSC clocksource requires all CPUs to be up, so that the tsc reliability checks can be performed. init_xen_time is called before all CPUs are up, so for example we would start with HPET (or ACPI, PIT) at boot time, and switch later to TSC. The switch then happens in the verify_tsc_reliability initcall that is invoked when all CPUs are up. When attempting to initialize TSC we also check for time warps and for invariant TSC. Note that while we deem reliable a CONSTANT_TSC with no deep C-states, that might not always be the case, so we're conservative and allow TSC to be used as platform timer only with invariant TSC. Additionally we check that CPU hotplug isn't meant to be performed on the host, which is the case when max vcpus and num_present_cpus are the same. This is because a newly hotplugged CPU may not satisfy the condition of having all TSCs synchronized - so with the tsc clocksource in use we allow offlining CPUs but not onlining any back. Finally we prevent TSC from being used as clocksource on multiple sockets because it isn't guaranteed to be invariant. Further relaxing of this last requirement is added in a separate patch, such that we allow vendors with such a guarantee to use TSC as clocksource. In case any of these conditions is not met, we keep the clocksource that was previously initialized in init_xen_time.
>>
>> Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path") updates to cpu time use local stamps, which means the platform timer is only used to seed the initial cpu time. With clocksource=tsc there is no need to be in sync with another clocksource, so we reseed the local/master stamps to be values of TSC and update the platform time stamps accordingly. Time calibration is set to 1sec after we switch to TSC, thus these stamps are reseeded to also ensure monotonically increasing values right after the point we switch to TSC. This is also to avoid the possibility of having inconsistent readings in this short period (i.e. until calibration fires).
>
> And within this one second, which may cover some of Dom0's booting up, it is okay to have inconsistencies?

It's not okay, which is why I am removing this possibility when switching to TSC. The inconsistencies in those readings (if I wasn't adjusting) would arise because we would be using (in that 1-sec) the cpu time tuples calculated by the previous calibration or platform time initialization (while HPET, ACPI, etc. was still the clocksource). Would you prefer me removing the "avoid" and instead changing it to "remove the possibility" in this last sentence?

>
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -475,6 +475,50 @@ uint64_t ns_to_acpi_pm_tick(uint64_t ns)
>>  }
>>  
>>  /************************************************************
>> + * PLATFORM TIMER 4: TSC
>> + */
>> +
>> +/*
>> + * Called in verify_tsc_reliability() under reliable TSC conditions
>> + * thus reusing all the checks already performed there.
>> + */
>> +static s64 __init init_tsc(struct platform_timesource *pts)
>> +{
>> +    u64 ret = pts->frequency;
>> +
>> +    if ( nr_cpu_ids != num_present_cpus() )
>> +    {
>> +        printk(XENLOG_INFO "TSC: CPU Hotplug intended\n");
>> +        ret = 0;
>> +    }
>> +
>> +    if ( nr_sockets > 1 )
>> +    {
>> +        printk(XENLOG_INFO "TSC: Not invariant across sockets\n");
>> +        ret = 0;
>> +    }
>> +
>> +    if ( !ret )
>> +        printk(XENLOG_INFO "TSC: Not setting it as clocksource\n");
>
> I think this last message is redundant with the former two. But since I also think that info level is too low for those earlier ones, perhaps keeping the latter (at info or even debug level) would be okay, once the others got bumped to warning level.

Makes sense, and one can infer that message from the lack of "Switched to ...". Let me change the first two to warning level and the last one to debug level, as you're suggesting.

>
>> +static struct platform_timesource __initdata plt_tsc =
>> +{
>> +    .id = "tsc",
>> +    .name = "TSC",
>> +    .read_counter = read_tsc,
>> +    .counter_bits = 63,
>
> Please add a brief comment explaining why this is not 64.

OK.

>
>> @@ -604,7 +667,9 @@ static u64 __init init_platform_timer(void)
>>      unsigned int i;
>>      s64 rc = -1;
>>  
>> -    if ( opt_clocksource[0] != '\0' )
>> +    /* clocksource=tsc is initialized via __initcalls (when CPUs are up). */
>> +    if ( (opt_clocksource[0] != '\0') &&
>> +         strcmp(opt_clocksource, "tsc") )
>
> No real need to split this if() across two lines.

OK.

>
>> +static void __init try_platform_timer_tail(void)
>> +{
>> +    init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
>> +    plt_overflow(NULL);
>> +
>> +    platform_timer_stamp = plt_stamp64;
>> +    stime_platform_stamp = NOW();
>> +
>> +    if ( !clocksource_is_tsc() )
>> +        init_percpu_time();
>
> This isn't really dependent on whether TSC is used as clocksource, but solely on the point in time at which the call gets made, is it? If so, I think an explicit boolean function parameter (named e.g. "late") would be better than abusing the predicate here.

Correct, I will introduce this boolean parameter. Not that it's critical, but should I probably add a likely(...) there too, since the late case only happens for clocksource=tsc?

>
>> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
>>          printk("TSC warp detected, disabling TSC_RELIABLE\n");
>>          setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
>>      }
>> +    else if ( !strcmp(opt_clocksource, "tsc") &&
>> +              (try_platform_timer(&plt_tsc) > 0) )
>> +    {
>> +        /*
>> +         * Platform timer has changed and CPU time will only be updated
>> +         * after we set again the calibration timer, which means we need to
>> +         * seed again each local CPU time. At this stage TSC is known to be
>> +         * reliable i.e. monotonically increasing across all CPUs so this
>> +         * lets us remove the skew between platform timer and TSC, since
>> +         * these are now effectively the same.
>> +         */
>> +        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
>> +
>> +        /* Finish platform timer switch. */
>> +        try_platform_timer_tail();
>> +
>> +        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
>> +               freq_string(plt_src.frequency));
>
> This message should have the same log level as the one at the end of init_platform_timer().

Agreed, but at the end of init_platform_timer there is a plain printk with an omitted log level. Or do you mean to remove XENLOG_INFO from this printk above or, instead, add XENLOG_INFO to the printk at the end of init_platform_timer()?

Joao
>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote:
> On 09/19/2016 11:13 AM, Jan Beulich wrote:
>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote:
>>> Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path") updates to cpu time use local stamps, which means the platform timer is only used to seed the initial cpu time. With clocksource=tsc there is no need to be in sync with another clocksource, so we reseed the local/master stamps to be values of TSC and update the platform time stamps accordingly. Time calibration is set to 1sec after we switch to TSC, thus these stamps are reseeded to also ensure monotonically increasing values right after the point we switch to TSC. This is also to avoid the possibility of having inconsistent readings in this short period (i.e. until calibration fires).
>>
>> And within this one second, which may cover some of Dom0's booting up, it is okay to have inconsistencies?
>
> It's not okay, which is why I am removing this possibility when switching to TSC. The inconsistencies in those readings (if I wasn't adjusting) would arise because we would be using (in that 1-sec) the cpu time tuples calculated by the previous calibration or platform time initialization (while HPET, ACPI, etc. was still the clocksource). Would you prefer me removing the "avoid" and instead changing it to "remove the possibility" in this last sentence?

Let's not do the 2nd step before the 1st, which is the question of what happens prior to, and what actually changes at, this first calibration (after 1 sec).

>>> +static void __init try_platform_timer_tail(void)
>>> +{
>>> +    init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
>>> +    plt_overflow(NULL);
>>> +
>>> +    platform_timer_stamp = plt_stamp64;
>>> +    stime_platform_stamp = NOW();
>>> +
>>> +    if ( !clocksource_is_tsc() )
>>> +        init_percpu_time();
>>
>> This isn't really dependent on whether TSC is used as clocksource, but solely on the point in time at which the call gets made, is it? If so, I think an explicit boolean function parameter (named e.g. "late") would be better than abusing the predicate here.
>
> Correct, I will introduce this boolean parameter. Not that it's critical, but should I probably add a likely(...) there too, since the late case only happens for clocksource=tsc?

Well, in __init code I prefer to avoid likely()/unlikely(), unless it's in e.g. a performance critical loop.

>>> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
>>>          printk("TSC warp detected, disabling TSC_RELIABLE\n");
>>>          setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
>>>      }
>>> +    else if ( !strcmp(opt_clocksource, "tsc") &&
>>> +              (try_platform_timer(&plt_tsc) > 0) )
>>> +    {
>>> +        /*
>>> +         * Platform timer has changed and CPU time will only be updated
>>> +         * after we set again the calibration timer, which means we need to
>>> +         * seed again each local CPU time. At this stage TSC is known to be
>>> +         * reliable i.e. monotonically increasing across all CPUs so this
>>> +         * lets us remove the skew between platform timer and TSC, since
>>> +         * these are now effectively the same.
>>> +         */
>>> +        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
>>> +
>>> +        /* Finish platform timer switch. */
>>> +        try_platform_timer_tail();
>>> +
>>> +        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
>>> +               freq_string(plt_src.frequency));
>>
>> This message should have the same log level as the one at the end of init_platform_timer().
>
> Agreed, but at the end of init_platform_timer there is a plain printk with an omitted log level. Or do you mean to remove XENLOG_INFO from this printk above or, instead, add XENLOG_INFO to the printk at the end of init_platform_timer()?

Well, info would again be too low a level for my taste. Hence either remove the level here (slightly preferred from my pov), or make both warning.

Jan
On 09/19/2016 05:25 PM, Jan Beulich wrote:
>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote:
>> On 09/19/2016 11:13 AM, Jan Beulich wrote:
>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote:
>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path") updates to cpu time use local stamps, which means the platform timer is only used to seed the initial cpu time. With clocksource=tsc there is no need to be in sync with another clocksource, so we reseed the local/master stamps to be values of TSC and update the platform time stamps accordingly. Time calibration is set to 1sec after we switch to TSC, thus these stamps are reseeded to also ensure monotonically increasing values right after the point we switch to TSC. This is also to avoid the possibility of having inconsistent readings in this short period (i.e. until calibration fires).
>>>
>>> And within this one second, which may cover some of Dom0's booting up, it is okay to have inconsistencies?
>>
>> It's not okay, which is why I am removing this possibility when switching to TSC. The inconsistencies in those readings (if I wasn't adjusting) would arise because we would be using (in that 1-sec) the cpu time tuples calculated by the previous calibration or platform time initialization (while HPET, ACPI, etc. was still the clocksource). Would you prefer me removing the "avoid" and instead changing it to "remove the possibility" in this last sentence?
>
> Let's not do the 2nd step before the 1st, which is the question of what happens prior to, and what actually changes at, this first calibration (after 1 sec).

The first calibration won't change much - this 1-sec was meant for when we have nop_rendezvous, which is the first time the platform timer would be used to set local cpu_time (I will adjust the mention above, as it's misleading for the reader since it doesn't refer to this patch). Though by reseeding the cpu times to boot_tsc_stamp prior to calibration (*without* the time latch values from the previous platform timer) we can ensure NOW/get_s_time or calls to update_vcpu_system_time() will see monotonically increasing values. Otherwise, keeping the previous ones, calibration would just add up the local TSC delta and any existing divergence wouldn't be resolved. In a later patch, when we set the stable bit (with nop_rendezvous), if these cpu times weren't adjusted, guests/Xen would still see a small divergence between CPUs until the first calibration fired (after we switched to TSC). And then the values wouldn't be consistent with the guarantees expected from this bit. But even not considering this bit (which is not the subject of this patch), the same guarantee is expected from get_s_time() calls within Xen.

>>>> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
>>>>          printk("TSC warp detected, disabling TSC_RELIABLE\n");
>>>>          setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
>>>>      }
>>>> +    else if ( !strcmp(opt_clocksource, "tsc") &&
>>>> +              (try_platform_timer(&plt_tsc) > 0) )
>>>> +    {
>>>> +        /*
>>>> +         * Platform timer has changed and CPU time will only be updated
>>>> +         * after we set again the calibration timer, which means we need to
>>>> +         * seed again each local CPU time. At this stage TSC is known to be
>>>> +         * reliable i.e. monotonically increasing across all CPUs so this
>>>> +         * lets us remove the skew between platform timer and TSC, since
>>>> +         * these are now effectively the same.
>>>> +         */
>>>> +        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
>>>> +
>>>> +        /* Finish platform timer switch. */
>>>> +        try_platform_timer_tail();
>>>> +
>>>> +        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
>>>> +               freq_string(plt_src.frequency));
>>>
>>> This message should have the same log level as the one at the end of init_platform_timer().
>>
>> Agreed, but at the end of init_platform_timer there is a plain printk with an omitted log level. Or do you mean to remove XENLOG_INFO from this printk above or, instead, add XENLOG_INFO to the printk at the end of init_platform_timer()?
>
> Well, info would again be too low a level for my taste. Hence either remove the level here (slightly preferred from my pov), or make both warning.

As your preference goes towards no log level, I will re-introduce it without one. Although I would find it clearer to use printk with a log level, as was advised in earlier reviews.

NB: My suggestion of info as the level is because my usual line of thought is to see warning as something potentially erroneous that the user should be warned about, and error as an actual error.

Joao
>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote:
> On 09/19/2016 05:25 PM, Jan Beulich wrote:
>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote:
>>> On 09/19/2016 11:13 AM, Jan Beulich wrote:
>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote:
>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path") updates to cpu time use local stamps, which means the platform timer is only used to seed the initial cpu time. With clocksource=tsc there is no need to be in sync with another clocksource, so we reseed the local/master stamps to be values of TSC and update the platform time stamps accordingly. Time calibration is set to 1sec after we switch to TSC, thus these stamps are reseeded to also ensure monotonically increasing values right after the point we switch to TSC. This is also to avoid the possibility of having inconsistent readings in this short period (i.e. until calibration fires).
>>>>
>>>> And within this one second, which may cover some of Dom0's booting up, it is okay to have inconsistencies?
>>>
>>> It's not okay, which is why I am removing this possibility when switching to TSC. The inconsistencies in those readings (if I wasn't adjusting) would arise because we would be using (in that 1-sec) the cpu time tuples calculated by the previous calibration or platform time initialization (while HPET, ACPI, etc. was still the clocksource). Would you prefer me removing the "avoid" and instead changing it to "remove the possibility" in this last sentence?
>>
>> Let's not do the 2nd step before the 1st, which is the question of what happens prior to, and what actually changes at, this first calibration (after 1 sec).
>
> The first calibration won't change much - this 1-sec was meant for when we have nop_rendezvous, which is the first time the platform timer would be used to set local cpu_time (I will adjust the mention above, as it's misleading for the reader since it doesn't refer to this patch).

So what makes it that it actually _is_ nop_rendezvous after that one second? (And yes, part of this may indeed be just bad placement of the description, as iirc nop_rendezvous gets introduced only in a later patch.)

> Though by reseeding the cpu times to boot_tsc_stamp prior to calibration (*without* the time latch values from the previous platform timer) we can ensure NOW/get_s_time or calls to update_vcpu_system_time() will see monotonically increasing values. Otherwise, keeping the previous ones, calibration would just add up the local TSC delta and any existing divergence wouldn't be resolved. In a later patch, when we set the stable bit (with nop_rendezvous), if these cpu times weren't adjusted, guests/Xen would still see a small divergence between CPUs until the first calibration fired (after we switched to TSC).

Right. But my concern is regarding the time window _between_ switching to TSC and running calibration the first time afterwards. Part of this indeed gets taken care of by the re-seeding (which, other than one could imply from what you say above, doesn't get done [immediately] prior to calibration, but upon switching to TSC). But is re-seeding without switching to nop_rendezvous really sufficient (or in other words, can the patches really be broken up like this)?

>>>>> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
>>>>>          printk("TSC warp detected, disabling TSC_RELIABLE\n");
>>>>>          setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
>>>>>      }
>>>>> +    else if ( !strcmp(opt_clocksource, "tsc") &&
>>>>> +              (try_platform_timer(&plt_tsc) > 0) )
>>>>> +    {
>>>>> +        /*
>>>>> +         * Platform timer has changed and CPU time will only be updated
>>>>> +         * after we set again the calibration timer, which means we need to
>>>>> +         * seed again each local CPU time. At this stage TSC is known to be
>>>>> +         * reliable i.e. monotonically increasing across all CPUs so this
>>>>> +         * lets us remove the skew between platform timer and TSC, since
>>>>> +         * these are now effectively the same.
>>>>> +         */
>>>>> +        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
>>>>> +
>>>>> +        /* Finish platform timer switch. */
>>>>> +        try_platform_timer_tail();
>>>>> +
>>>>> +        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
>>>>> +               freq_string(plt_src.frequency));
>>>>
>>>> This message should have the same log level as the one at the end of init_platform_timer().
>>>
>>> Agreed, but at the end of init_platform_timer there is a plain printk with an omitted log level. Or do you mean to remove XENLOG_INFO from this printk above or, instead, add XENLOG_INFO to the printk at the end of init_platform_timer()?
>>
>> Well, info would again be too low a level for my taste. Hence either remove the level here (slightly preferred from my pov), or make both warning.
>
> As your preference goes towards no log level, I will re-introduce it without one. Although I would find it clearer to use printk with a log level, as was advised in earlier reviews.
>
> NB: My suggestion of info as the level is because my usual line of thought is to see warning as something potentially erroneous that the user should be warned about, and error as an actual error.

I understand and mostly agree. There are a few cases though (and we're dealing with one here imo) where the absence of a log level is better: we want these messages present in the log by default (which they wouldn't be if they were info), but they're also not really warnings. Perhaps simply for documentation purposes we could add XENLOG_DEFAULT (expanding to an empty string), but that's not something to be done in this patch.

Jan
On 09/20/2016 08:13 AM, Jan Beulich wrote: >>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote: >> On 09/19/2016 05:25 PM, Jan Beulich wrote: >>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote: >>>> On 09/19/2016 11:13 AM, Jan Beulich wrote: >>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote: >>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in >>>>>> constant-TSC calibration fast path") updates to cpu time use local >>>>>> stamps, which means platform timer is only used to seed the initial >>>>>> cpu time. With clocksource=tsc there is no need to be in sync with >>>>>> another clocksource, so we reseed the local/master stamps to be values >>>>>> of TSC and update the platform time stamps accordingly. Time >>>>>> calibration is set to 1sec after we switch to TSC, thus these stamps >>>>>> are reseeded to also ensure monotonic returning values right after the >>>>>> point we switch to TSC. This is also to avoid the possibility of >>>>>> having inconsistent readings in this short period (i.e. until >>>>>> calibration fires). >>>>> >>>>> And within this one second, which may cover some of Dom0's >>>>> booting up, it is okay to have inconsistencies? >>>> It's not okay which is why I am removing this possibility when switching to TSC. >>>> The inconsistencies in those readings (if I wasn't adjusting) would be because >>>> we would be using (in that 1-sec) those cpu time tuples calculated by the >>>> previous calibration or platform time initialization (while still was HPET, >>>> ACPI, etc as clocksource). Would you prefer me removing the "avoid" and instead >>>> change it to "remove the possibility" in this last sentence? >>> >>> Let's not do the 2nd step before the 1st, which is the question of >>> what happens prior to and what actually changes at this first >>> calibration (after 1 sec). 
>> The first calibration won't change much - this 1-sec was meant when having >> nop_rendezvous which is the first time platform timer would be used to set local >> cpu_time (will adjust the mention above as it's misleading for the reader as it >> doesn't refer to this patch). > > So what makes it that it actually _is_ nop_rendezvous after that one > second? (And yes, part of this may indeed be just bad placement of > the description, as iirc nop_rendezvous gets introduced only in a later > patch.) Because with nop_rendezvous we will be using the platform timer to get a monotonic time tuple and *set* cpu_time as opposed to just adding up plain TSC delta as it is the case prior to b64438c7c. Thus the reseeding of the cpu times solves both ends of the problem, with nop_rendezvous until it is first calibration fixes it, and without nop_rendezvous to remove the latch adjustment from initial platform timer. >> Though reseeding the cpu times to boot_tsc_stamp >> prior to calibration (*without* the time latch values from previous platform >> timer) we can ensure NOW/get_s_time or calls to update_vcpu_system_time() will >> see monotonically increasing values. Otherwise keeping the previous ones, >> calibration would just add up local TSC delta and any existing divergence >> wouldn't be solved. On a later patch when we set the stable bit (with nop >> rendezvous), if these cpu times weren't adjusted guests/xen would still see >> small divergence between CPUs until the first calibration was fired (after we >> switched to TSC). > > Right. But my concern is regarding the time window _between_ > switching to TSC and running calibration the first time afterwards. > Part of this indeed gets taken care of by the re-seeding (which, > other than one could imply from what you say above, doesn't get > done [immediately] prior to calibration, but upon switching to TSC). Exactly. 
> But is re-seeding without switching to nop_rendezvous really > sufficient (or in other words, can the patches really be broken up > like this)? I think it is sufficient, unless I am missing something. In my opinion what makes it correct is that the tuple being (re-)written to cpu_time has a matching stamp + stime pair and refers to the same time reference as the next one written. Your doubts might originate from how std_rendezvous currently updates the time info, but even there, with your series, we fortunately add a monotonic delta (with a matching stamp and stime tuple) to cpu_time, and thus keep the monotonic property even without nop_rendezvous. I recall testing/validating this patch alone and didn't observe any divergence, though I must say the most significant portion of the testing was with the whole series. >>>>>> @@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void) >>>>>> printk("TSC warp detected, disabling TSC_RELIABLE\n"); >>>>>> setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE); >>>>>> } >>>>>> + else if ( !strcmp(opt_clocksource, "tsc") && >>>>>> + (try_platform_timer(&plt_tsc) > 0) ) >>>>>> + { >>>>>> + /* >>>>>> + * Platform timer has changed and CPU time will only be updated >>>>>> + * after we set again the calibration timer, which means we need to >>>>>> + * seed again each local CPU time. At this stage TSC is known to be >>>>>> + * reliable i.e. monotonically increasing across all CPUs so this >>>>>> + * lets us remove the skew between platform timer and TSC, since >>>>>> + * these are now effectively the same. >>>>>> + */ >>>>>> + on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1); >>>>>> + >>>>>> + /* Finish platform timer switch. 
*/ >>>>>> + try_platform_timer_tail(); >>>>>> + >>>>>> + printk(XENLOG_INFO "Switched to Platform timer %s TSC\n", >>>>>> + freq_string(plt_src.frequency)); >>>>> >>>>> This message should have the same log level as the one at the end >>>>> of init_platform_timer(). >>>> Agreed, but at the end of init_platform_timer there is a plain printk with an >>>> omitted log level. Or do you mean to remove XENLOG_INFO from this printk above >>>> or, instead add XENLOG_INFO to one printk at the end of >>>> init_platform_timer() ? >>> >>> Well, info would again be too low a level for my taste. Hence either >>> remove the level here (slightly preferred from my pov), or make both >>> warning. >> As your preference goes towards without the log level, I will re-introduce back >> without it. Although I would find clearer to use printk with a log level as it >> was advised in earlier reviews. >> >> NB: My suggestion of info as level is because my usual line of thought is to see >> warning as something potentially erroneous that user should be warned about, and >> error as being an actual error. > > I understand and mostly agree. There are a few cases though (and > we're dealing with one here imo) where the absence of a log level is > better: We want these messages present in the log by default (which > they wouldn't be if they were info), but they're also not really > warnings. Perhaps simply for documentation purposes we could add > XENLOG_DEFAULT (expanding to an empty string), but that's not > something to be done in this patch. I see, thanks for the clarification. Joao
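The re-seeding semantics discussed in this exchange can be sketched as a toy model. This is illustrative only: the struct, function names, and the flat ns-per-tick scale are assumptions made for the sketch; Xen's real cpu_time uses a mul/shift time scale and get_s_time_fixed().

```c
#include <stdint.h>
#include <assert.h>

/* Simplified model (not Xen code): each CPU keeps a (tsc, stime) stamp
 * pair, and system time is extrapolated from it, loosely mirroring what
 * get_s_time_fixed() does with the per-CPU cpu_time record. */
struct cpu_stamp {
    uint64_t local_tsc;    /* TSC value at the stamp */
    uint64_t local_stime;  /* system time (ns) at the stamp */
};

/* Extrapolate system time from the per-CPU stamp at a given TSC read. */
static uint64_t model_get_s_time(const struct cpu_stamp *s,
                                 uint64_t tsc_now, uint64_t ns_per_tick)
{
    return s->local_stime + (tsc_now - s->local_tsc) * ns_per_tick;
}

/* Re-seed every CPU from one shared TSC reference, as reset_percpu_time()
 * does with boot_tsc_stamp: afterwards any two CPUs reading the same TSC
 * value compute the same stime, so cross-CPU reads stay monotonic. */
static void model_reseed(struct cpu_stamp *cpus, int n,
                         uint64_t ref_tsc, uint64_t ns_per_tick)
{
    for (int i = 0; i < n; i++) {
        cpus[i].local_tsc = ref_tsc;
        cpus[i].local_stime = ref_tsc * ns_per_tick; /* stime derived from
                                                        the shared stamp */
    }
}
```

In this model, two CPUs whose stamps diverged under the previous platform timer agree again after the re-seed, which is the property the patch relies on for the window until the first calibration fires.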
On 09/20/2016 11:15 AM, Joao Martins wrote: > On 09/20/2016 08:13 AM, Jan Beulich wrote: >>>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote: >>> On 09/19/2016 05:25 PM, Jan Beulich wrote: >>>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote: >>>>> On 09/19/2016 11:13 AM, Jan Beulich wrote: >>>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote: >>>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in >>>>>>> constant-TSC calibration fast path") updates to cpu time use local >>>>>>> stamps, which means platform timer is only used to seed the initial >>>>>>> cpu time. With clocksource=tsc there is no need to be in sync with >>>>>>> another clocksource, so we reseed the local/master stamps to be values >>>>>>> of TSC and update the platform time stamps accordingly. Time >>>>>>> calibration is set to 1sec after we switch to TSC, thus these stamps >>>>>>> are reseeded to also ensure monotonic returning values right after the >>>>>>> point we switch to TSC. This is also to avoid the possibility of >>>>>>> having inconsistent readings in this short period (i.e. until >>>>>>> calibration fires). >>>>>> >>>>>> And within this one second, which may cover some of Dom0's >>>>>> booting up, it is okay to have inconsistencies? >>>>> It's not okay which is why I am removing this possibility when switching to TSC. >>>>> The inconsistencies in those readings (if I wasn't adjusting) would be because >>>>> we would be using (in that 1-sec) those cpu time tuples calculated by the >>>>> previous calibration or platform time initialization (while still was HPET, >>>>> ACPI, etc as clocksource). Would you prefer me removing the "avoid" and instead >>>>> change it to "remove the possibility" in this last sentence? >>>> >>>> Let's not do the 2nd step before the 1st, which is the question of >>>> what happens prior to and what actually changes at this first >>>> calibration (after 1 sec). 
>>> The first calibration won't change much - this 1-sec was meant when having >>> nop_rendezvous which is the first time platform timer would be used to set local >>> cpu_time (will adjust the mention above as it's misleading for the reader as it >>> doesn't refer to this patch). >> >> So what makes it that it actually _is_ nop_rendezvous after that one >> second? (And yes, part of this may indeed be just bad placement of >> the description, as iirc nop_rendezvous gets introduced only in a later >> patch.) > Because with nop_rendezvous we will be using the platform timer to get a > monotonic time tuple and *set* cpu_time as opposed to just adding up plain TSC > delta as it is the case prior to b64438c7c. Thus the reseeding of the cpu times > solves both ends of the problem, with nop_rendezvous until it is first > calibration fixes it, and without nop_rendezvous to remove the latch adjustment > from initial platform timer. The part "until it is the first calibration fixes it" is very confusing/redundant, I am sorry. I meant: "with nop_rendezvous, which otherwise would be the first calibration fixing it". The previous part was hinting that there's a problem, when it is in fact fixed by the reseeding.
>>> On 20.09.16 at 12:15, <joao.m.martins@oracle.com> wrote: > On 09/20/2016 08:13 AM, Jan Beulich wrote: >>>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote: >>> On 09/19/2016 05:25 PM, Jan Beulich wrote: >>>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote: >>>>> On 09/19/2016 11:13 AM, Jan Beulich wrote: >>>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote: >>>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in >>>>>>> constant-TSC calibration fast path") updates to cpu time use local >>>>>>> stamps, which means platform timer is only used to seed the initial >>>>>>> cpu time. With clocksource=tsc there is no need to be in sync with >>>>>>> another clocksource, so we reseed the local/master stamps to be values >>>>>>> of TSC and update the platform time stamps accordingly. Time >>>>>>> calibration is set to 1sec after we switch to TSC, thus these stamps >>>>>>> are reseeded to also ensure monotonic returning values right after the >>>>>>> point we switch to TSC. This is also to avoid the possibility of >>>>>>> having inconsistent readings in this short period (i.e. until >>>>>>> calibration fires). >>>>>> >>>>>> And within this one second, which may cover some of Dom0's >>>>>> booting up, it is okay to have inconsistencies? >>>>> It's not okay which is why I am removing this possibility when switching to TSC. >>>>> The inconsistencies in those readings (if I wasn't adjusting) would be because >>>>> we would be using (in that 1-sec) those cpu time tuples calculated by the >>>>> previous calibration or platform time initialization (while still was HPET, >>>>> ACPI, etc as clocksource). Would you prefer me removing the "avoid" and instead >>>>> change it to "remove the possibility" in this last sentence? >>>> >>>> Let's not do the 2nd step before the 1st, which is the question of >>>> what happens prior to and what actually changes at this first >>>> calibration (after 1 sec). 
>>> The first calibration won't change much - this 1-sec was meant when having >>> nop_rendezvous which is the first time platform timer would be used to set local >>> cpu_time (will adjust the mention above as it's misleading for the reader as it >>> doesn't refer to this patch). >> >> So what makes it that it actually _is_ nop_rendezvous after that one >> second? (And yes, part of this may indeed be just bad placement of >> the description, as iirc nop_rendezvous gets introduced only in a later >> patch.) > Because with nop_rendezvous we will be using the platform timer to get a > monotonic time tuple and *set* cpu_time as opposed to just adding up plain TSC > delta as it is the case prior to b64438c7c. Thus the reseeding of the cpu times > solves both ends of the problem, with nop_rendezvous until it is first > calibration fixes it, and without nop_rendezvous to remove the latch adjustment > from initial platform timer. So am I getting you right (together with the second part of your reply further down) that you escape answering the question raised by saying that it doesn't really matter which rendezvous function gets used, when TSC is the clock source? I.e. the introduction of nop_rendezvous is really just to avoid unnecessary overhead? In which case it should probably be a separate patch, saying so in its description. Jan
On 09/20/2016 02:55 PM, Jan Beulich wrote: >>>> On 20.09.16 at 12:15, <joao.m.martins@oracle.com> wrote: >> On 09/20/2016 08:13 AM, Jan Beulich wrote: >>>>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote: >>>> On 09/19/2016 05:25 PM, Jan Beulich wrote: >>>>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote: >>>>>> On 09/19/2016 11:13 AM, Jan Beulich wrote: >>>>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote: >>>>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in >>>>>>>> constant-TSC calibration fast path") updates to cpu time use local >>>>>>>> stamps, which means platform timer is only used to seed the initial >>>>>>>> cpu time. With clocksource=tsc there is no need to be in sync with >>>>>>>> another clocksource, so we reseed the local/master stamps to be values >>>>>>>> of TSC and update the platform time stamps accordingly. Time >>>>>>>> calibration is set to 1sec after we switch to TSC, thus these stamps >>>>>>>> are reseeded to also ensure monotonic returning values right after the >>>>>>>> point we switch to TSC. This is also to avoid the possibility of >>>>>>>> having inconsistent readings in this short period (i.e. until >>>>>>>> calibration fires). >>>>>>> >>>>>>> And within this one second, which may cover some of Dom0's >>>>>>> booting up, it is okay to have inconsistencies? >>>>>> It's not okay which is why I am removing this possibility when switching to TSC. >>>>>> The inconsistencies in those readings (if I wasn't adjusting) would be because >>>>>> we would be using (in that 1-sec) those cpu time tuples calculated by the >>>>>> previous calibration or platform time initialization (while still was HPET, >>>>>> ACPI, etc as clocksource). Would you prefer me removing the "avoid" and instead >>>>>> change it to "remove the possibility" in this last sentence? 
>>>>> >>>>> Let's not do the 2nd step before the 1st, which is the question of >>>>> what happens prior to and what actually changes at this first >>>>> calibration (after 1 sec). >>>> The first calibration won't change much - this 1-sec was meant when having >>>> nop_rendezvous which is the first time platform timer would be used to set local >>>> cpu_time (will adjust the mention above as it's misleading for the reader as it >>>> doesn't refer to this patch). >>> >>> So what makes it that it actually _is_ nop_rendezvous after that one >>> second? (And yes, part of this may indeed be just bad placement of >>> the description, as iirc nop_rendezvous gets introduced only in a later >>> patch.) >> Because with nop_rendezvous we will be using the platform timer to get a >> monotonic time tuple and *set* cpu_time as opposed to just adding up plain TSC >> delta as it is the case prior to b64438c7c. Thus the reseeding of the cpu times >> solves both ends of the problem, with nop_rendezvous until it is first >> calibration fixes it, and without nop_rendezvous to remove the latch adjustment >> from initial platform timer. > > So am I getting you right (together with the second part of your reply > further down) that you escape answering the question raised by saying > that it doesn't really matter which rendezvous function gets used, when > TSC is the clock source? Correct and in my defense I wasn't escaping the question, as despite unfortunate mis-mention in the patch (or bad English) I think the above explains it. During that time window, we now just need to ensure that we will get monotonic results solely based on the individual CPU time (i.e. calls to get_s_time or anything that uses cpu_time). Unless the calibration function is doing something wrong/fishy, I don't see a reason for this to go wrong. > I.e. the introduction of nop_rendezvous is > really just to avoid unnecessary overhead? 
Yes, but note that it's only the case since recent commit b64438c7c where cpu_time stime is now incremented with TSC based deltas with a matching TSC stamp. Before it wasn't the case. The main difference with nop_rendezvous (other than the significant overhead) versus std_rendezvous is that we use a single global tuple propagated to all cpus, whereas with std_rendezvous each tuple is different and will vary according to when it rendezvous with cpu 0. > In which case it should > probably be a separate patch, saying so in its description. OK, will move that out of Patch 4 into its own while keeping the same logic. Joao
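The distinction described above, one global tuple propagated to all CPUs versus per-CPU tuples taken at slightly different rendezvous points, can be illustrated with a small model. None of this is the actual Xen rendezvous code; the names and the 1 ns-per-tick scale are assumptions for the sketch.

```c
#include <stdint.h>
#include <assert.h>

/* Toy model: a std_rendezvous-like scheme hands every CPU its own
 * calibration tuple, taken when that CPU meets CPU0, so tuples differ
 * slightly; a nop_rendezvous-style scheme writes one global tuple to all
 * CPUs. With identical tuples, two CPUs sampling the same TSC value can
 * never disagree. */
struct stamp { uint64_t tsc, stime; };

static uint64_t stime_at(const struct stamp *s, uint64_t tsc_now)
{
    return s->stime + (tsc_now - s->tsc); /* 1 ns per tick for simplicity */
}

/* Largest cross-CPU disagreement when all CPUs sample the same TSC;
 * any nonzero value here is a potential (tiny) observable warp. */
static uint64_t max_cross_cpu_skew(const struct stamp *cpus, int n,
                                   uint64_t tsc_now)
{
    uint64_t lo = UINT64_MAX, hi = 0;
    for (int i = 0; i < n; i++) {
        uint64_t t = stime_at(&cpus[i], tsc_now);
        if (t < lo) lo = t;
        if (t > hi) hi = t;
    }
    return hi - lo;
}
```

The model matches the ~100ns-scale warps reported later in the thread: small per-CPU tuple differences translate directly into a small, bounded cross-CPU skew.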
On 09/20/2016 05:17 PM, Joao Martins wrote: > On 09/20/2016 02:55 PM, Jan Beulich wrote: >>>>> On 20.09.16 at 12:15, <joao.m.martins@oracle.com> wrote: >>> On 09/20/2016 08:13 AM, Jan Beulich wrote: >>>>>>> On 19.09.16 at 19:54, <joao.m.martins@oracle.com> wrote: >>>>> On 09/19/2016 05:25 PM, Jan Beulich wrote: >>>>>>>>> On 19.09.16 at 18:11, <joao.m.martins@oracle.com> wrote: >>>>>>> On 09/19/2016 11:13 AM, Jan Beulich wrote: >>>>>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@oracle.com> wrote: >>>>>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in >>>>>>>>> constant-TSC calibration fast path") updates to cpu time use local >>>>>>>>> stamps, which means platform timer is only used to seed the initial >>>>>>>>> cpu time. With clocksource=tsc there is no need to be in sync with >>>>>>>>> another clocksource, so we reseed the local/master stamps to be values >>>>>>>>> of TSC and update the platform time stamps accordingly. Time >>>>>>>>> calibration is set to 1sec after we switch to TSC, thus these stamps >>>>>>>>> are reseeded to also ensure monotonic returning values right after the >>>>>>>>> point we switch to TSC. This is also to avoid the possibility of >>>>>>>>> having inconsistent readings in this short period (i.e. until >>>>>>>>> calibration fires). >>>>>>>> >>>>>>>> And within this one second, which may cover some of Dom0's >>>>>>>> booting up, it is okay to have inconsistencies? >>>>>>> It's not okay which is why I am removing this possibility when switching to TSC. >>>>>>> The inconsistencies in those readings (if I wasn't adjusting) would be because >>>>>>> we would be using (in that 1-sec) those cpu time tuples calculated by the >>>>>>> previous calibration or platform time initialization (while still was HPET, >>>>>>> ACPI, etc as clocksource). Would you prefer me removing the "avoid" and instead >>>>>>> change it to "remove the possibility" in this last sentence? 
>>>>>> >>>>>> Let's not do the 2nd step before the 1st, which is the question of >>>>>> what happens prior to and what actually changes at this first >>>>>> calibration (after 1 sec). >>>>> The first calibration won't change much - this 1-sec was meant when having >>>>> nop_rendezvous which is the first time platform timer would be used to set local >>>>> cpu_time (will adjust the mention above as it's misleading for the reader as it >>>>> doesn't refer to this patch). >>>> >>>> So what makes it that it actually _is_ nop_rendezvous after that one >>>> second? (And yes, part of this may indeed be just bad placement of >>>> the description, as iirc nop_rendezvous gets introduced only in a later >>>> patch.) >>> Because with nop_rendezvous we will be using the platform timer to get a >>> monotonic time tuple and *set* cpu_time as opposed to just adding up plain TSC >>> delta as it is the case prior to b64438c7c. Thus the reseeding of the cpu times >>> solves both ends of the problem, with nop_rendezvous until it is first >>> calibration fixes it, and without nop_rendezvous to remove the latch adjustment >>> from initial platform timer. >> >> So am I getting you right (together with the second part of your reply >> further down) that you escape answering the question raised by saying >> that it doesn't really matter which rendezvous function gets used, when >> TSC is the clock source? > Correct and in my defense I wasn't escaping the question, as despite > unfortunate mis-mention in the patch (or bad English) I think the above > explains it. During that time window, we now just need to ensure that we will > get monotonic results solely based on the individual CPU time (i.e. calls to > get_s_time or anything that uses cpu_time). Unless the calibration function is > doing something wrong/fishy, I don't see a reason for this to go wrong. > >> I.e. the introduction of nop_rendezvous is >> really just to avoid unnecessary overhead? 
> Yes, but note that it's only the case since recent commit b64438c7c where > cpu_time stime is now incremented with TSC based deltas with a matching TSC > stamp. Before it wasn't the case. The main difference with nop_rendezvous (other > than the significant overhead) versus std_rendezvous is that we use a single > global tuple propagated to all cpus, whereas with std_rendezvous each tuple is > different and will vary according to when it rendezvous with cpu 0. > >> In which case it should >> probably be a separate patch, saying so in its description. > OK, will move that out of Patch 4 into its own while keeping the same logic. I have to take back my comment: having redouble-checked on a test run overnight with std_rendezvous and stable bit, and I saw time going backwards a few times (~100ns) but only after a few hours (initially there were none - probably why I was led into error). This is in contrast to nop_rendezvous where I see none in weeks. Joao P.S. If you received similar earlier response but my mailer was misbehaving - please ignore and sorry for the noise.
>>> On 21.09.16 at 11:20, <joao.m.martins@oracle.com> wrote: > On 09/20/2016 05:17 PM, Joao Martins wrote: >> On 09/20/2016 02:55 PM, Jan Beulich wrote: >>> I.e. the introduction of nop_rendezvous is >>> really just to avoid unnecessary overhead? >> Yes, but note that it's only the case since recent commit b64438c7c where >> cpu_time stime is now incremented with TSC based deltas with a matching TSC >> stamp. Before it wasn't the case. The main difference with nop_rendezvous (other >> than the significant overhead) versus std_rendezvous is that we use a single >> global tuple propagated to all cpus, whereas with std_rendezvous each tuple is >> different and will vary according to when it rendezvous with cpu 0. >> >>> In which case it should >>> probably be a separate patch, saying so in its description. >> OK, will move that out of Patch 4 into its own while keeping the same logic. > I have to take back my comment: having redouble-checked on a test run overnight > with std_rendezvous and stable bit, and I saw time going backwards a few times > (~100ns) but only after a few hours (initially there were none - probably why I > was led into error). This is in contrast to nop_rendezvous where I see none in > weeks. Hmm, that would then seem to call for the introduction of nop_rendezvous to be pulled ahead in the series (presumably into the very patch we're discussing here). Jan
On 09/21/2016 10:45 AM, Jan Beulich wrote: >>>> On 21.09.16 at 11:20, <joao.m.martins@oracle.com> wrote: >> On 09/20/2016 05:17 PM, Joao Martins wrote: >>> On 09/20/2016 02:55 PM, Jan Beulich wrote: >>>> I.e. the introduction of nop_rendezvous is >>>> really just to avoid unnecessary overhead? >>> Yes, but note that it's only the case since recent commit b64438c7c where >>> cpu_time stime is now incremented with TSC based deltas with a matching TSC >>> stamp. Before it wasn't the case. The main difference with nop_rendezvous (other >>> than the significant overhead) versus std_rendezvous is that we use a single >>> global tuple propagated to all cpus, whereas with std_rendezvous each tuple is >>> different and will vary according to when it rendezvous with cpu 0. >>> >>>> In which case it should >>>> probably be a separate patch, saying so in its description. >>> OK, will move that out of Patch 4 into its own while keeping the same logic. >> I have to take back my comment: having redouble-checked on a test run overnight >> with std_rendezvous and stable bit, and I saw time going backwards a few times >> (~100ns) but only after a few hours (initially there were none - probably why I >> was led into error). This is in contrast to nop_rendezvous where I see none in >> weeks. > > Hmm, that would then seem to call for the introduction of > nop_rendezvous to be pulled ahead in the series (presumably into > the very patch we're discussing here). Seems like it. I will move it into this patch, in which case patch 3 needs to be moved before this one (since it's a prerequisite patch). Joao
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown index 3a250cb..f92fb3f 100644 --- a/docs/misc/xen-command-line.markdown +++ b/docs/misc/xen-command-line.markdown @@ -264,9 +264,13 @@ minimum of 32M, subject to a suitably aligned and sized contiguous region of memory being available. ### clocksource -> `= pit | hpet | acpi` +> `= pit | hpet | acpi | tsc` If set, override Xen's default choice for the platform timer. +Having TSC as platform timer requires being explicitly set. This is because +TSC can only be safely used if CPU hotplug isn't performed on the system. In +some platforms, "maxcpus" parameter may require further adjustment to the +number of online cpus. ### cmci-threshold > `= <integer>` diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c index 780f22d..0879e19 100644 --- a/xen/arch/x86/platform_hypercall.c +++ b/xen/arch/x86/platform_hypercall.c @@ -631,7 +631,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op) if ( ret ) break; - if ( cpu >= nr_cpu_ids || !cpu_present(cpu) ) + if ( cpu >= nr_cpu_ids || !cpu_present(cpu) || + clocksource_is_tsc() ) { ret = -EINVAL; break; diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c index 0c1ad45..e5001d5 100644 --- a/xen/arch/x86/time.c +++ b/xen/arch/x86/time.c @@ -475,6 +475,50 @@ uint64_t ns_to_acpi_pm_tick(uint64_t ns) } /************************************************************ + * PLATFORM TIMER 4: TSC + */ + +/* + * Called in verify_tsc_reliability() under reliable TSC conditions + * thus reusing all the checks already performed there. 
+ */
+static s64 __init init_tsc(struct platform_timesource *pts)
+{
+    u64 ret = pts->frequency;
+
+    if ( nr_cpu_ids != num_present_cpus() )
+    {
+        printk(XENLOG_INFO "TSC: CPU Hotplug intended\n");
+        ret = 0;
+    }
+
+    if ( nr_sockets > 1 )
+    {
+        printk(XENLOG_INFO "TSC: Not invariant across sockets\n");
+        ret = 0;
+    }
+
+    if ( !ret )
+        printk(XENLOG_INFO "TSC: Not setting it as clocksource\n");
+
+    return ret;
+}
+
+static u64 read_tsc(void)
+{
+    return rdtsc_ordered();
+}
+
+static struct platform_timesource __initdata plt_tsc =
+{
+    .id = "tsc",
+    .name = "TSC",
+    .read_counter = read_tsc,
+    .counter_bits = 63,
+    .init = init_tsc,
+};
+
+/************************************************************
 * GENERIC PLATFORM TIMER INFRASTRUCTURE
 */
@@ -576,6 +620,21 @@ static void resume_platform_timer(void)
     plt_stamp = plt_src.read_counter();
 }

+static void __init reset_platform_timer(void)
+{
+    /* Deactivate any timers running */
+    kill_timer(&plt_overflow_timer);
+    kill_timer(&calibration_timer);
+
+    /* Reset counters and stamps */
+    spin_lock_irq(&platform_timer_lock);
+    plt_stamp = 0;
+    plt_stamp64 = 0;
+    platform_timer_stamp = 0;
+    stime_platform_stamp = 0;
+    spin_unlock_irq(&platform_timer_lock);
+}
+
 static s64 __init try_platform_timer(struct platform_timesource *pts)
 {
     s64 rc = pts->init(pts);
@@ -583,6 +642,10 @@ static s64 __init try_platform_timer(struct platform_timesource *pts)
     if ( rc <= 0 )
         return rc;

+    /* We have a platform timesource already so reset it */
+    if ( plt_src.counter_bits != 0 )
+        reset_platform_timer();
+
     plt_mask = (u64)~0ull >> (64 - pts->counter_bits);

     set_time_scale(&plt_scale, pts->frequency);
@@ -604,7 +667,9 @@ static u64 __init init_platform_timer(void)
     unsigned int i;
     s64 rc = -1;

-    if ( opt_clocksource[0] != '\0' )
+    /* clocksource=tsc is initialized via __initcalls (when CPUs are up). */
+    if ( (opt_clocksource[0] != '\0') &&
+         strcmp(opt_clocksource, "tsc") )
     {
         for ( i = 0; i < ARRAY_SIZE(plt_timers); i++ )
         {
@@ -1463,6 +1528,31 @@ static void __init tsc_check_writability(void)
         disable_tsc_sync = 1;
 }

+static void __init reset_percpu_time(void *unused)
+{
+    struct cpu_time *t = &this_cpu(cpu_time);
+
+    t->stamp.local_tsc = boot_tsc_stamp;
+    t->stamp.local_stime = 0;
+    t->stamp.local_stime = get_s_time_fixed(boot_tsc_stamp);
+    t->stamp.master_stime = t->stamp.local_stime;
+}
+
+static void __init try_platform_timer_tail(void)
+{
+    init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
+    plt_overflow(NULL);
+
+    platform_timer_stamp = plt_stamp64;
+    stime_platform_stamp = NOW();
+
+    if ( !clocksource_is_tsc() )
+        init_percpu_time();
+
+    init_timer(&calibration_timer, time_calibration, NULL, 0);
+    set_timer(&calibration_timer, NOW() + EPOCH);
+}
+
 /* Late init function, after all cpus have booted */
 static int __init verify_tsc_reliability(void)
 {
@@ -1480,6 +1570,25 @@ static int __init verify_tsc_reliability(void)
         printk("TSC warp detected, disabling TSC_RELIABLE\n");
         setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
     }
+    else if ( !strcmp(opt_clocksource, "tsc") &&
+              (try_platform_timer(&plt_tsc) > 0) )
+    {
+        /*
+         * Platform timer has changed and CPU time will only be updated
+         * after we set again the calibration timer, which means we need to
+         * seed again each local CPU time. At this stage TSC is known to be
+         * reliable i.e. monotonically increasing across all CPUs so this
+         * lets us remove the skew between platform timer and TSC, since
+         * these are now effectively the same.
+         */
+        on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);
+
+        /* Finish platform timer switch. */
+        try_platform_timer_tail();
+
+        printk(XENLOG_INFO "Switched to Platform timer %s TSC\n",
+               freq_string(plt_src.frequency));
+    }
     }

     return 0;
@@ -1505,15 +1614,7 @@ int __init init_xen_time(void)
     do_settime(get_cmos_time(), 0, NOW());

     /* Finish platform timer initialization. */
-    init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
-    plt_overflow(NULL);
-    platform_timer_stamp = plt_stamp64;
-    stime_platform_stamp = NOW();
-
-    init_percpu_time();
-
-    init_timer(&calibration_timer, time_calibration, NULL, 0);
-    set_timer(&calibration_timer, NOW() + EPOCH);
+    try_platform_timer_tail();

     return 0;
 }
@@ -1527,6 +1628,7 @@ void __init early_time_init(void)

     preinit_pit();
     tmp = init_platform_timer();
+    plt_tsc.frequency = tmp;

     set_time_scale(&t->tsc_scale, tmp);

     t->stamp.local_tsc = boot_tsc_stamp;
@@ -1775,6 +1877,11 @@ void pv_soft_rdtsc(struct vcpu *v, struct cpu_user_regs *regs, int rdtscp)
         (d->arch.tsc_mode == TSC_MODE_PVRDTSCP) ? d->arch.incarnation : 0;
 }

+bool clocksource_is_tsc(void)
+{
+    return plt_src.read_counter == read_tsc;
+}
+
 int host_tsc_is_safe(void)
 {
     return boot_cpu_has(X86_FEATURE_TSC_RELIABLE);
diff --git a/xen/include/asm-x86/time.h b/xen/include/asm-x86/time.h
index 971883a..6d704b4 100644
--- a/xen/include/asm-x86/time.h
+++ b/xen/include/asm-x86/time.h
@@ -69,6 +69,7 @@ void tsc_get_info(struct domain *d, uint32_t *tsc_mode, uint64_t *elapsed_nsec,

 void force_update_vcpu_system_time(struct vcpu *v);

+bool clocksource_is_tsc(void);
 int host_tsc_is_safe(void);
 void cpuid_time_leaf(uint32_t sub_idx, uint32_t *eax, uint32_t *ebx,
                      uint32_t *ecx, uint32_t *edx);
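The hunks above lean on Xen's mul/shift time-scale machinery (set_time_scale()/scale_delta() in xen/arch/x86/time.c) to turn a tick delta into nanoseconds without a 64-bit division on every read. As a rough, self-contained sketch of that scheme — simplified from the real code, with plain stdint types instead of Xen's and a GCC/Clang `__int128` standing in for the 64x32->96-bit multiply:

```c
#include <assert.h>
#include <stdint.h>

struct time_scale {
    int shift;
    uint32_t mul_frac;   /* fraction of 2^32 */
};

/* mul_frac = 2^32 * dividend / divisor (divisor must exceed dividend) */
static uint32_t div_frac(uint32_t dividend, uint32_t divisor)
{
    return (uint32_t)(((uint64_t)dividend << 32) / divisor);
}

/* Pick shift/mul_frac so scale_delta(ticks) ~= ticks * 1e9 / ticks_per_sec. */
static void set_time_scale(struct time_scale *ts, uint64_t ticks_per_sec)
{
    uint64_t tps64 = ticks_per_sec;
    uint32_t tps32;
    int shift = 0;

    /* Bring the frequency into (1e9, 2e9] so mul_frac fits in 32 bits. */
    while (tps64 > 2000000000ULL) {
        tps64 >>= 1;
        shift--;
    }
    tps32 = (uint32_t)tps64;
    while (tps32 <= 1000000000U) {
        tps32 <<= 1;
        shift++;
    }

    ts->mul_frac = div_frac(1000000000U, tps32);
    ts->shift = shift;
}

/* Convert a tick delta to ns: ((delta << shift) * mul_frac) >> 32. */
static uint64_t scale_delta(uint64_t delta, const struct time_scale *ts)
{
    if (ts->shift < 0)
        delta >>= -ts->shift;
    else
        delta <<= ts->shift;
    return (uint64_t)(((unsigned __int128)delta * ts->mul_frac) >> 32);
}
```

With a 2.4 GHz TSC, 2400000000 ticks come out within a few ns of one second — which is why early_time_init() can seed plt_tsc.frequency from the same calibrated value used for tsc_scale.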
Recent x86/time changes improved the monotonicity of Xen timekeeping considerably, making it much harder to observe time going backwards. However, the platform timer can't be expected to be perfectly in sync with the TSC, so get_s_time() is still not guaranteed to always return monotonically increasing values across CPUs. This is the case on some of the boxes I am testing with, where I sometimes observe ~100 warps (of very few nanoseconds each) after a few hours.

This patch introduces support for using the TSC as platform time source, which is the highest-resolution time source and the cheapest to read. There are, however, several problems associated with its usage: there is no complete (and architecturally defined) guarantee that all machines will provide a reliable and monotonic TSC in all cases (I believe Intel is the only vendor that can guarantee that?). For this reason it is only used when the administrator sets the "clocksource" boot option to "tsc".

Initializing the TSC clocksource requires all CPUs to be up so that the TSC reliability checks can be performed. init_xen_time() is called before all CPUs are up, so we start with, for example, HPET (or ACPI, PIT) at boot time and switch to TSC later. The switch happens in the verify_tsc_reliability() initcall, which is invoked once all CPUs are up. When attempting to initialize the TSC we also check for time warps and for invariant TSC. Note that while we deem a CONSTANT_TSC with no deep C-states reliable, that might not always be the case, so we are conservative and allow the TSC to be used as platform timer only with invariant TSC.

Additionally we check that no CPU hotplug is intended on the host, which is the case when nr_cpu_ids and num_present_cpus() are the same. This is because a newly hotplugged CPU may not satisfy the condition of having all TSCs synchronized - so with the tsc clocksource in use we allow offlining CPUs, but not onlining any back.
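The reliability test the paragraph above refers to boils down to watching for the TSC going backwards between observations taken across CPUs. A minimal, hypothetical model of that check (the real check_tsc_warp() uses locked cross-CPU sampling; here a pre-collected trace of samples in observation order stands in for that):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Given TSC samples in the order they were observed (interleaved across
 * CPUs), report whether any sample is behind the one observed before it.
 * A single backwards step means the TSC cannot be trusted as clocksource.
 */
static bool tsc_warp_detected(const uint64_t *samples, size_t n)
{
    size_t i;

    for (i = 1; i < n; i++)
        if (samples[i] < samples[i - 1])
            return true;   /* time went backwards between observations */

    return false;
}
```

In the patch, a detected warp clears X86_FEATURE_TSC_RELIABLE, and the clocksource=tsc switch is only attempted when that feature survives the check.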
Finally, we prevent the TSC from being used as clocksource on hosts with multiple sockets, because it isn't guaranteed to be invariant there. A separate patch further relaxes this last requirement, so that vendors providing such a guarantee can use the TSC as clocksource. If any of these conditions is not met, we keep the clocksource that was previously initialized in init_xen_time().

Since b64438c7c ("x86/time: use correct (local) time stamp in constant-TSC calibration fast path"), updates to CPU time use local stamps, which means the platform timer is only used to seed the initial CPU time. With clocksource=tsc there is no need to stay in sync with another clocksource, so we reseed the local/master stamps with TSC values and update the platform time stamps accordingly. Time calibration is set to fire 1s after we switch to TSC; the stamps are reseeded to also ensure monotonically increasing return values right after the point we switch to TSC, and to avoid the possibility of inconsistent readings in this short period (i.e. until calibration fires).

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

Changes since v3:
 - Really fix the "HPET switching to TSC" comment. Despite being mentioned in the previous version, the change wasn't there.
 - Remove parentheses around the function call in init_platform_timer.
 - Merge the if in verify_tsc_reliability with the opt_clocksource check.
 - Removed the comment above ".init = init_tsctimer".
 - Fold the docs update into this patch.
 - Move host_tsc_is_clocksource() and the CPU hotplug possibility check into this patch.
 - s/host_tsc_is_clocksource/clocksource_is_tsc
 - Use bool instead of bool_t.
 - Add a comment above the init_tsctimer() declaration mentioning the reliable-TSC checks in verify_tsc_reliability(), under which the function is invoked.
 - Prevent clocksource=tsc on platforms with multiple sockets.
   Further relaxing of this requirement is added in a separate patch, as an extension of the "tsc" boot parameter.
 - Removed the control group to update cpu_time; do it instead with on_selected_cpus() to avoid any potential races.
 - Accommodate the common path between init_xen_time and the TSC switch in try_platform_timer_tail(), such that finishing the platform timer initialization is done in the same place (including the platform timer overflow handling, which had been removed in previous versions).
 - Changed TSC counter_bits to 63 to avoid mishandling of TSC counter wrap-around in the platform timer overflow timer.
 - Moved the CPU hotplug paragraph from the last patch and added a note to the commit message about multiple-socket TSC sync.
 - s/init_tsctimer/init_tsc/g to be consistent with the other TSC platform timer functions.

Changes since v2:
 - Suggest "HPET switching to TSC" only as an example, as it would otherwise be misleading on platforms not having one.
 - Change init_tsctimer to skip all the tests and assume it's called only under reliable TSC conditions with no warps observed. Tidy up initialization in verify_tsc_reliability as suggested by Konrad.
 - Removed the CONSTANT_TSC and max_cstate <= 2 case; only allow the tsc clocksource on invariant-TSC boxes.
 - Prefer omitting != 0 in init_platform_timer for the tsc case.
 - Change the comment in init_platform_timer.
 - Add a comment above the plt_tsc declaration.
 - Reinit CPU time for all online CPUs instead of just CPU 0.
 - Use rdtsc_ordered() as opposed to rdtsc().
 - Remove the tsc_freq variable and set the plt_tsc clocksource frequency from the refined TSC calibration.
 - Rework the commit message a bit.

Changes since v1:
 - s/printk/printk(XENLOG_INFO
 - Remove extra space inside inner brackets.
 - Add missing spaces around brackets.
 - Defer TSC initialization until all CPUs are up.

Changes since RFC:
 - Spelling fixes in the commit message.
 - Remove the unused clocksource_is_tsc variable and introduce it instead in the patch that uses it.
 - Move plt_tsc from second to last in the available clocksources.
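The counter_bits = 63 change mentioned above interacts with the platform timer's mask arithmetic: elapsed counts are computed modulo 2^counter_bits via plt_mask, so keeping the top bit out of the accounting avoids the overflow timer misreading a full-width counter. A hypothetical, simplified illustration of that modular delta (plt_elapsed is an invented name; the real logic lives in plt_stamp/plt_overflow handling):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Elapsed count since the last stamp, computed modulo 2^counter_bits,
 * mirroring plt_mask = (u64)~0ull >> (64 - counter_bits). With
 * counter_bits = 63 a wrap past 2^63 - 1 still yields the small
 * positive delta rather than a huge bogus one.
 */
static uint64_t plt_elapsed(uint64_t now, uint64_t stamp,
                            unsigned int counter_bits)
{
    uint64_t mask = ~0ULL >> (64 - counter_bits);   /* e.g. 2^63 - 1 */

    return (now - stamp) & mask;
}
```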
---
 docs/misc/xen-command-line.markdown |   6 +-
 xen/arch/x86/platform_hypercall.c   |   3 +-
 xen/arch/x86/time.c                 | 127 +++++++++++++++++++++++++++++++++---
 xen/include/asm-x86/time.h          |   1 +
 4 files changed, 125 insertions(+), 12 deletions(-)