| Message ID | 20191218160926.12538-1-pdurrant@amazon.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | x86/save: reserve HVM save record numbers that have been consumed... |
On Wed, Dec 18, 2019 at 04:09:25PM +0000, Paul Durrant wrote:
> ...for patches not (yet) upstream.
>
> This patch is simply reserving save record number space to avoid the
> risk of clashes between existent downstream changes made by Amazon and
> future upstream changes which may be incompatible.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Wei Liu <wl@xen.org>
On 18/12/2019 16:09, Paul Durrant wrote:
> ...for patches not (yet) upstream.
>
> This patch is simply reserving save record number space to avoid the
> risk of clashes between existent downstream changes made by Amazon and
> future upstream changes which may be incompatible.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Is this "you've already used some of these", or you plan to?

~Andrew
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Andrew Cooper
> Sent: 18 December 2019 19:45
> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
> that have been consumed...
>
> On 18/12/2019 16:09, Paul Durrant wrote:
> > ...for patches not (yet) upstream.
> >
> > This patch is simply reserving save record number space to avoid the
> > risk of clashes between existent downstream changes made by Amazon and
> > future upstream changes which may be incompatible.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>
> Is this "you've already used some of these", or you plan to?

Already used in code that has been deployed, although I have left some headroom because I know there is code in development which is using new ones.

Where records can be upstreamed in a way that is compatible with downstream use, we will keep the existing number. If incompatible changes are necessary to get the code upstream then we will have to use a new number and maintain downstream compatibility patches.

  Paul
On 18.12.2019 17:09, Paul Durrant wrote:
> --- a/xen/include/public/arch-x86/hvm/save.h
> +++ b/xen/include/public/arch-x86/hvm/save.h
> @@ -639,10 +639,12 @@ struct hvm_msr {
>
>  #define CPU_MSR_CODE 20
>
> +/* Range 22 - 40 reserved for Amazon */
> +
>  /*
>   * Largest type-code in use
>   */
> -#define HVM_SAVE_CODE_MAX 20
> +#define HVM_SAVE_CODE_MAX 40

I'm not overly happy to see the respective in-Xen array double its
size for no use at all. I also don't think out-of-tree extensions
should have been added using numbers consecutive to the upstream
ones. Instead, an Amazon range should have been picked high up in
the number space (e.g. with the upper byte being ASCII 'A').

Jan
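[Editorial note: the "in-Xen array" Jan refers to is the per-typecode handler table in xen/arch/x86/hvm/save.c, which is statically sized by HVM_SAVE_CODE_MAX. Below is a rough, self-contained sketch of that shape; the field names and handler types are approximations, not a verbatim copy of the Xen source.]

```c
#include <stddef.h>

/* Approximation of the constant from Xen's public save.h; the value shown
 * is the one after the proposed bump (it was 20 before this patch). */
#define HVM_SAVE_CODE_MAX 40

/* Placeholder handler types; the real ones live in the hypervisor tree. */
typedef int (*hvm_save_handler)(void *d, void *h);
typedef int (*hvm_load_handler)(void *d, void *h);

/* Sketch of the per-typecode handler table: one slot per possible save
 * record code.  Raising HVM_SAVE_CODE_MAX from 20 to 40 roughly doubles
 * this static array even though upstream registers nothing in the new
 * slots, which is the waste Jan objects to. */
static struct {
    hvm_save_handler save;   /* writes the record on save    */
    hvm_load_handler load;   /* parses the record on restore */
    const char *name;        /* human-readable record name   */
    size_t size;             /* expected record payload size */
    int kind;                /* per-vcpu or per-domain       */
} hvm_sv_handlers[HVM_SAVE_CODE_MAX + 1];
```

Jan's alternative would have placed the downstream codes high in the 16-bit typecode space (e.g. starting at 0x4100, with 'A' in the upper byte), well clear of upstream allocations, though such codes could no longer index a small dense table directly.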
On 19/12/2019 08:52, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>> Andrew Cooper
>> Sent: 18 December 2019 19:45
>> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
>> <roger.pau@citrix.com>
>> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
>> that have been consumed...
>>
>> On 18/12/2019 16:09, Paul Durrant wrote:
>>> ...for patches not (yet) upstream.
>>>
>>> This patch is simply reserving save record number space to avoid the
>>> risk of clashes between existent downstream changes made by Amazon and
>>> future upstream changes which may be incompatible.
>>>
>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>> Is this "you've already used some of these", or you plan to?
> Already used in code that has been deployed, although I have left some headroom because I know there is code in development which is using new ones.
>
> Where records can be upstreamed in a way that is compatible with downstream use, we will keep the existing number. If incompatible changes are necessary to get the code upstream then we will have to use a new number and maintain downstream compatibility patches.

Every bump to this number is more wasted memory in Xen.

It is not fair or reasonable to include extra headroom in a "oh dear we
screwed up - will the community be kind enough to help us work around
our ABI problems" scenario.

For now, I'd just leave it as a comment, and strictly only covering the
ones you have used. There is no need to actually bump the structure
sizes in xen for now - simply that the ones you have currently used
don't get allocated for something different in the future.

~Andrew
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 December 2019 10:33
> To: Durrant, Paul <pdurrant@amazon.com>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [PATCH] x86/save: reserve HVM save record numbers that have
> been consumed...
>
> On 18.12.2019 17:09, Paul Durrant wrote:
> > --- a/xen/include/public/arch-x86/hvm/save.h
> > +++ b/xen/include/public/arch-x86/hvm/save.h
> > @@ -639,10 +639,12 @@ struct hvm_msr {
> >
> >  #define CPU_MSR_CODE 20
> >
> > +/* Range 22 - 40 reserved for Amazon */
> > +
> >  /*
> >   * Largest type-code in use
> >   */
> > -#define HVM_SAVE_CODE_MAX 20
> > +#define HVM_SAVE_CODE_MAX 40
>
> I'm not overly happy to see the respective in-Xen array double its
> size for no use at all. I also don't think out-of-tree extensions
> should have been added using numbers consecutive to the upstream
> ones. Instead, an Amazon range should have been picked high up in
> the number space (e.g. with the upper byte being ASCII 'A').
>

Totally agreed, but we don't have a time machine handy.

  Paul

> Jan
> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 19 December 2019 10:52
> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
> that have been consumed...
>
> On 19/12/2019 08:52, Durrant, Paul wrote:
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> >> Andrew Cooper
> >> Sent: 18 December 2019 19:45
> >> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> >> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau
> Monné
> >> <roger.pau@citrix.com>
> >> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record
> numbers
> >> that have been consumed...
> >>
> >> On 18/12/2019 16:09, Paul Durrant wrote:
> >>> ...for patches not (yet) upstream.
> >>>
> >>> This patch is simply reserving save record number space to avoid the
> >>> risk of clashes between existent downstream changes made by Amazon and
> >>> future upstream changes which may be incompatible.
> >>>
> >>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >> Is this "you've already used some of these", or you plan to?
> > Already used in code that has been deployed, although I have left some
> headroom because I know there is code in development which is using new
> ones.
> >
> > Where records can be upstreamed in a way that is compatible with
> downstream use, we will keep the existing number. If incompatible changes
> are necessary to get the code upstream then we will have to use a new
> number and maintain downstream compatibility patches.
>
> Every bump to this number is more wasted memory in Xen.
>

How much more memory?

> It is not fair or reasonable to include extra headroom in a "oh dear we
> screwed up - will the community be kind enough to help us work around
> our ABI problems" scenario.
>

I would have thought all the pain you went through to keep XenServer going with its non-upstreamed hypercall numbers would have made you a little more sympathetic to dealing with past mistakes.

> For now, I'd just leave it as a comment, and strictly only covering the
> ones you have used. There is no need to actually bump the structure
> sizes in xen for now - simply that the ones you have currently used
> don't get allocated for something different in the future.
>

Ok, we can defer actually bumping HVM_SAVE_CODE_MAX, but it's almost certain to happen eventually.

  Paul
On 19.12.2019 12:06, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 19 December 2019 10:52
>> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
>> <roger.pau@citrix.com>
>> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
>> that have been consumed...
>>
>> On 19/12/2019 08:52, Durrant, Paul wrote:
>>>> -----Original Message-----
>>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>>>> Andrew Cooper
>>>> Sent: 18 December 2019 19:45
>>>> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
>>>> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau
>> Monné
>>>> <roger.pau@citrix.com>
>>>> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record
>> numbers
>>>> that have been consumed...
>>>>
>>>> On 18/12/2019 16:09, Paul Durrant wrote:
>>>>> ...for patches not (yet) upstream.
>>>>>
>>>>> This patch is simply reserving save record number space to avoid the
>>>>> risk of clashes between existent downstream changes made by Amazon and
>>>>> future upstream changes which may be incompatible.
>>>>>
>>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>>> Is this "you've already used some of these", or you plan to?
>>> Already used in code that has been deployed, although I have left some
>> headroom because I know there is code in development which is using new
>> ones.
>>>
>>> Where records can be upstreamed in a way that is compatible with
>> downstream use, we will keep the existing number. If incompatible changes
>> are necessary to get the code upstream then we will have to use a new
>> number and maintain downstream compatibility patches.
>>
>> Every bump to this number is more wasted memory in Xen.
>
> How much more memory?

It is, btw, not just memory, but also a higher number of loop iterations.

Jan
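[Editorial note: to put rough numbers on both points — each reserved-but-unused code costs one empty slot in the handler table (a few tens of bytes per slot, so on the order of several hundred bytes for twenty codes) plus one extra iteration of the save/load walks on every domain save or restore. A minimal sketch of the kind of loop Jan means, reusing the placeholder hvm_sv_handlers table from the earlier sketch rather than Xen's actual hvm_save() code:]

```c
/* Sketch of the save-side walk over all possible typecodes (modelled on,
 * but not copied from, Xen's hvm_save()).  Every code up to
 * HVM_SAVE_CODE_MAX is visited, so reserved-but-unused codes still cost
 * an iteration on every domain save. */
static int save_all_records(void *d, void *h)
{
    for ( unsigned int code = 1; code <= HVM_SAVE_CODE_MAX; code++ )
    {
        if ( hvm_sv_handlers[code].save == NULL )
            continue;                       /* empty slot, still visited */

        if ( hvm_sv_handlers[code].save(d, h) != 0 )
            return -1;                      /* propagate handler failure */
    }

    return 0;
}
```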
On 19/12/2019 11:06, Durrant, Paul wrote:
>> It is not fair or reasonable to include extra headroom in a "oh dear we
>> screwed up - will the community be kind enough to help us work around
>> our ABI problems" scenario.
>>
> I would have thought all the pain you went through to keep XenServer going with its non-upstreamed hypercall numbers would have made you a little more sympathetic to dealing with past mistakes.

I could object to the principle of the patch, if you'd prefer :)

If you recall for the legacy window PV driver ABI breakages, I didn't actually reserve any numbers upstream in the end. All compatibility was handled locally.

~Andrew
> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 19 December 2019 11:30
> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
> that have been consumed...
>
> On 19/12/2019 11:06, Durrant, Paul wrote:
> >> It is not fair or reasonable to include extra headroom in a "oh dear we
> >> screwed up - will the community be kind enough to help us work around
> >> our ABI problems" scenario.
> >>
> > I would have thought all the pain you went through to keep XenServer
> going with its non-upstreamed hypercall numbers would have made you a
> little more sympathetic to dealing with past mistakes.
>
> I could object to the principle of the patch, if you'd prefer :)
>
> If you recall for the legacy window PV driver ABI breakages, I didn't
> actually reserve any numbers upstream in the end. All compatibility was
> handled locally.

And I remember how nasty the hacks were ;-)

Given we don't yet have a clash that requires such nastiness, I just want to avoid it happening before we manage to dispense with the downstream-only legacy code.

>
> >> For now, I'd just leave it as a comment, and strictly only covering the
> >> ones you have used. There is no need to actually bump the structure
> >> sizes in xen for now - simply that the ones you have currently used
> >> don't get allocated for something different in the future.
> >>
> > Ok, we can defer actually bumping HVM_SAVE_CODE_MAX, but it's almost
> certain to happen eventually.
>
> That's fine.
>

Ok. I'll send a v2 with just the comment (and assume Wei's R-b still stands).

  Paul
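[Editorial note: a hypothetical sketch of what a comment-only v2 could look like in xen/include/public/arch-x86/hvm/save.h — the reserved range is recorded so upstream never hands those codes out, but HVM_SAVE_CODE_MAX (and therefore the in-Xen handler array) stays at 20. The actual v2 may differ, in particular in how narrowly the reserved range is drawn.]

```c
#define CPU_MSR_CODE 20

/*
 * Range 22 - 40 reserved for Amazon: already consumed by deployed
 * downstream save records, so upstream must not reuse these numbers.
 */

/*
 * Largest type-code in use
 */
#define HVM_SAVE_CODE_MAX 20    /* deliberately left unchanged */
```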
On 19/12/2019 11:45, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 19 December 2019 11:30
>> To: Durrant, Paul <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
>> <roger.pau@citrix.com>
>> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
>> that have been consumed...
>>
>> On 19/12/2019 11:06, Durrant, Paul wrote:
>>>> It is not fair or reasonable to include extra headroom in a "oh dear we
>>>> screwed up - will the community be kind enough to help us work around
>>>> our ABI problems" scenario.
>>>>
>>> I would have thought all the pain you went through to keep XenServer
>> going with its non-upstreamed hypercall numbers would have made you a
>> little more sympathetic to dealing with past mistakes.
>>
>> I could object to the principle of the patch, if you'd prefer :)
>>
>> If you recall for the legacy window PV driver ABI breakages, I didn't
>> actually reserve any numbers upstream in the end. All compatibility was
>> handled locally.
> And I remember how nasty the hacks were ;-)

Like I'm ever going to forget those...

For anyone else reading this thread and is a little confused,
https://github.com/xenserver/xen.pg/blob/XS-7.1.x/master/xen-legacy-win-xenmapspace-quirks.patch
ought to clear some things up.

~Andrew
```diff
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index b2ad3fcd74..9c7b86678e 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -639,10 +639,12 @@ struct hvm_msr {
 
 #define CPU_MSR_CODE 20
 
+/* Range 22 - 40 reserved for Amazon */
+
 /*
  * Largest type-code in use
  */
-#define HVM_SAVE_CODE_MAX 20
+#define HVM_SAVE_CODE_MAX 40
 
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
```
```
...for patches not (yet) upstream.

This patch is simply reserving save record number space to avoid the
risk of clashes between existent downstream changes made by Amazon and
future upstream changes which may be incompatible.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/include/public/arch-x86/hvm/save.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
```