[v2,10/10] xen/arm: add reserved-memory regions to the dom0 memory node

Message ID 1556658172-8824-10-git-send-email-sstabellini@kernel.org (mailing list archive)
State Superseded
Series [v2,01/10] xen: add a p2mt parameter to map_mmio_regions

Commit Message

Stefano Stabellini April 30, 2019, 9:02 p.m. UTC
Reserved memory regions are automatically remapped to dom0. Their device
tree nodes are also added to the dom0 device tree. However, the dom0 memory
node is not currently extended to cover the reserved memory regions'
ranges, as required by the spec. This commit fixes it.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
 xen/arch/arm/domain_build.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)
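
As an illustration of what the reserved-memory binding requires and what this
patch produces (a sketch only; node names, addresses and sizes are made up,
not taken from this series): a region reserved on the host must also be
covered by the memory node that Xen generates for dom0.

    /* Host device tree (illustrative): a region reserved for a driver */
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        fpga-buffer@60000000 {
            reg = <0x0 0x60000000 0x0 0x1000000>;   /* 16MB reserved region */
        };
    };

    /* Dom0 device tree generated by Xen with this patch applied: the
     * memory node now covers both the RAM bank allocated to dom0 and
     * the reserved region, as the binding expects reserved regions to
     * be part of the memory described in /memory. */
    memory@40000000 {
        device_type = "memory";
        reg = <0x0 0x40000000 0x0 0x20000000>,   /* dom0 RAM bank */
              <0x0 0x60000000 0x0 0x1000000>;    /* reserved-memory bank */
    };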

Comments

Julien Grall May 7, 2019, 8:15 p.m. UTC | #1
Hi Stefano,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> Reserved memory regions are automatically remapped to dom0. Their device
> tree nodes are also added to dom0 device tree. However, the dom0 memory
> node is not currently extended to cover the reserved memory regions
> ranges as required by the spec.  This commit fixes it.

AFAICT, this does not cover the problem mentioned by Amit in [1].

But I am still not happy with the approach taken for the reserved-memory 
regions in this series. As I pointed out before, they are just normal 
memory that was reserved for other purposes (CMA, framebuffer...).

Treating them as "device" from Xen POV is a clear abuse of the meaning 
and I don't believe it is a viable solution long term.

Indeed, some of the regions may have a property "reusable" allowing the 
OS to use them until they are claimed by the device driver owning the 
region. I don't know how Linux (or any other OS) is using it today, but 
I don't see what would prevent it from using them as hypercall buffers. 
This would obviously not work because they are not actual RAM from Xen's POV.

On a similar topic, because they are normal memory, I don't think 
XEN_DOMCTL_memory_mapping should be able to map reserved-memory regions. 
So the iomem rangeset should not contain them.

Cheers,

[1] <CABHD4K-z-x=3joJWcOb_x9m7zsjzhskDQweNBr+paLS=PFEY9Q@mail.gmail.com>
Stefano Stabellini May 10, 2019, 8:51 p.m. UTC | #2
On Tue, 7 May 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> > Reserved memory regions are automatically remapped to dom0. Their device
> > tree nodes are also added to dom0 device tree. However, the dom0 memory
> > node is not currently extended to cover the reserved memory regions
> > ranges as required by the spec.  This commit fixes it.
> 
> AFAICT, this does not cover the problem mention by Amit in [1].

What do you think is required to fix Amit's problem?


> But I am still not happy with the approach taken for the reserved-memory
> regions in this series. As I pointed out before, they are just normal memory
> that was reserved for other purpose (CMA, framebuffer...).
> 
> Treating them as "device" from Xen POV is a clear abuse of the meaning and I
> don't believe it is a viable solution long term.

If we don't consider "reusable" memory regions as part of the
discussion, the distinction becomes more philosophical than practical:

- Xen is not supposed to use them for anything
- they are only given to the VM configured for them

I don't see much of a difference with MMIO regions, except for the
expected pagetable attributes: i.e. cacheable, not-cacheable. But even
in that case, there could be reasonable use cases for non-cacheable
mappings of reserved-memory regions, even if reserved-memory regions are
"normal" memory.

Could you please help me understand why you see them so differently, as
far as to say that "treating them as "device" from Xen POV is a clear
abuse of the meaning"?


> Indeed, some of the regions may have a property "reusable" allowing the the OS
> to use them until they are claimed by the device driver owning the region. I
> don't know how Linux (or any other OS) is using it today, but I don't see what
> would prevent it to use them as hypercall buffer. This would obviously not
> work because they are not actual RAM from Xen POV.

I haven't attempted to handle "reusable" reserved-memory regions
because I don't have a test environment and/or a use-case for them. In
other words, I don't have any "reusable" reserved-memory regions in any
of the boards (Xilinx and non-Xilinx) I have access to. I could add a
warning if we find a "reusable" reserved-memory region at boot.
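
For reference, here is a minimal sketch of the kind of node such a warning
would have to flag, following the standard reserved-memory binding (the node
name and size are made up):

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        linux,cma {
            compatible = "shared-dma-pool";
            reusable;                 /* OS may use the region until the owner claims it */
            size = <0x0 0x4000000>;   /* 64MB, placed dynamically by the OS */
        };
    };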

Nonetheless, if you have a concrete suggestion which doesn't require a
complete rework of this series, I can try to put in extra effort to handle
this case, even if it is not a benefit to my employer. I am also open to
the possibility of dropping patches 6-10 from the series.

There is also the option of going to devicetree.org to request a new
binding different from reserved-memory. If reserved-memory regions are
expected to be treated as normal memory for all intents and purposes
except for being reserved sometimes, then reserved-memory might not be the
right binding to describe Xilinx hardware, which requires fully dedicated
memory regions with both cacheable and non-cacheable mappings for the
purpose of communicating with foreign CPUs.

As a maintainer, even if the approach might be considered not ideal, my
opinion is that this series is still an improvement over what we have
today.


> On a similar topic, because they are normal memory, I don't think
> XEN_DOMCTL_memory_mapping should be able to map reserved-regions. So the iomem
> rangeset should not contain them.

What hypercall do you suggest should be used instead?


> Cheers,
> 
> [1] <CABHD4K-z-x=3joJWcOb_x9m7zsjzhskDQweNBr+paLS=PFEY9Q@mail.gmail.com>
> 
> -- 
> Julien Grall
>
Julien Grall May 10, 2019, 9:43 p.m. UTC | #3
Hi,

On 10/05/2019 21:51, Stefano Stabellini wrote:
> On Tue, 7 May 2019, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
>>> Reserved memory regions are automatically remapped to dom0. Their device
>>> tree nodes are also added to dom0 device tree. However, the dom0 memory
>>> node is not currently extended to cover the reserved memory regions
>>> ranges as required by the spec.  This commit fixes it.
>>
>> AFAICT, this does not cover the problem mention by Amit in [1].
> 
> What do you think is required to fix Amit's problem?

I haven't fully investigated the problem to be able to answer the 
question here, although I provided some insights in:

<b293d89c-9ed1-2033-44e5-227643ae1b0c@arm.com>

> 
> 
>> But I am still not happy with the approach taken for the reserved-memory
>> regions in this series. As I pointed out before, they are just normal memory
>> that was reserved for other purpose (CMA, framebuffer...).
>>
>> Treating them as "device" from Xen POV is a clear abuse of the meaning and I
>> don't believe it is a viable solution long term.
> 
> If we don't consider "reusable" memory regions as part of the
> discussion, the distinction becomes more philosophical than practical:
> 
> - Xen is not supposed to use them for anything
> - only given them to the VM configured for it
> 
> I don't see much of a difference with MMIO regions, except for the
> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
> in that case, there could be reasonable use cases for non-cacheable
> mappings of reserved-memory regions, even if reserved-memory regions are
> "normal" memory.
> 
> Could you please help me understand why you see them so differently, as
> far as to say that "treating them as "device" from Xen POV is a clear
> abuse of the meaning"?

Obviously if you take half of the picture, then it makes things easier.
However, we are not here to discuss half of the picture but the full one 
(even if in the end you only implement half of it).

>> Indeed, some of the regions may have a property "reusable" allowing the the OS
>> to use them until they are claimed by the device driver owning the region. I
>> don't know how Linux (or any other OS) is using it today, but I don't see what
>> would prevent it to use them as hypercall buffer. This would obviously not
>> work because they are not actual RAM from Xen POV.
> 
> I haven't attempted at handling "reusable" reserved-memory regions
> because I don't have a test environment and/or a use-case for them. In
> other words, I don't have any "reusable" reserved-memory regions in any
> of the boards (Xilinx and not Xilinx) I have access to. I could add a
> warning if we find a "reusable" reserved-memory region at boot.

Don't get me wrong, I am not asking for the implementation now, so a warning 
would be fine here. However, you at least need to show me some grounds 
that re-usable memory can be supported with your solution, or that it is 
not a concern for Xen at all.

> 
> Nonetheless, if you have a concrete suggestion which doesn't require a
> complete rework of this series, I can try to put extra effort to handle
> this case even if it is not a benefit to my employer. I am also open to
> the possibility of dropping patches 6-10 from the series.
I don't think the series as it is would allow us to support re-usable 
memory. However, I haven't spent enough time to understand how this 
could possibly be dealt with, so I am happy to be proved wrong.

> 
> There is also the option of going to devicetree.org to request a new
> binding different from reserved-memory. If reserved-memory regions are
> expected to be treated as normal memory for all intents and purposes
> except for being reserved sometimes, then they might not be the right
> bindings to describe Xilinx hardware, which requires fully dedicated
> memory regions with both cacheable and non-cacheable mappings for the
> purpose of communicating with foreign CPUs.
> 
> As a maintainer, even if the approach might considered not-ideal, my
> opinion is that this series is still an improvement over what we have
> today.

Well, yes, it is an improvement compared to what we have today. However, I 
don't think the problem of the reserved-memory regions has been fully 
thought through so far. I am worried that your suggestion is going to put 
us into a corner, making it impossible to expand (such as for re-usable 
regions) in the future without breaking backward compatibility.

Maybe your solution is correct and we will be able to expand it for 
re-usable regions, or at least add that in a backward-compatible way. But 
for that, I need a solid explanation from your side that it would be possible.

>> On a similar topic, because they are normal memory, I don't think
>> XEN_DOMCTL_memory_mapping should be able to map reserved-regions. So the iomem
>> rangeset should not contain them.
> 
> What hypercall do you suggest should be used instead?

Let's talk about that once we agree on the overall approach for 
reserved-memory.

Cheers,
Julien Grall May 11, 2019, 12:40 p.m. UTC | #4
On 5/10/19 10:43 PM, Julien Grall wrote:
> Hi,
> 
> On 10/05/2019 21:51, Stefano Stabellini wrote:
>> On Tue, 7 May 2019, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
>>>> Reserved memory regions are automatically remapped to dom0. Their 
>>>> device
>>>> tree nodes are also added to dom0 device tree. However, the dom0 memory
>>>> node is not currently extended to cover the reserved memory regions
>>>> ranges as required by the spec.  This commit fixes it.
>>>
>>> AFAICT, this does not cover the problem mention by Amit in [1].
>>
>> What do you think is required to fix Amit's problem?
> 
> I haven't fully investigated the problem to be able to answer the 
> question here. Although I provided some insights in:
> 
> <b293d89c-9ed1-2033-44e5-227643ae1b0c@arm.com>
> 
>>
>>
>>> But I am still not happy with the approach taken for the reserved-memory
>>> regions in this series. As I pointed out before, they are just normal 
>>> memory
>>> that was reserved for other purpose (CMA, framebuffer...).
>>>
>>> Treating them as "device" from Xen POV is a clear abuse of the 
>>> meaning and I
>>> don't believe it is a viable solution long term.
>>
>> If we don't consider "reusable" memory regions as part of the
>> discussion, the distinction becomes more philosophical than practical:
>>
>> - Xen is not supposed to use them for anything
>> - only given them to the VM configured for it
>>
>> I don't see much of a difference with MMIO regions, except for the
>> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
>> in that case, there could be reasonable use cases for non-cacheable
>> mappings of reserved-memory regions, even if reserved-memory regions are
>> "normal" memory.
>>
>> Could you please help me understand why you see them so differently, as
>> far as to say that "treating them as "device" from Xen POV is a clear
>> abuse of the meaning"?
> 
> Obviously if you take half of the picture, then it makes things easier.
> However, we are not here to discuss half of the picture but the full one 
> (even if at the end you only implement half of it).
> 
>>> Indeed, some of the regions may have a property "reusable" allowing 
>>> the the OS
>>> to use them until they are claimed by the device driver owning the 
>>> region. I
>>> don't know how Linux (or any other OS) is using it today, but I don't 
>>> see what
>>> would prevent it to use them as hypercall buffer. This would 
>>> obviously not
>>> work because they are not actual RAM from Xen POV.
>>
>> I haven't attempted at handling "reusable" reserved-memory regions
>> because I don't have a test environment and/or a use-case for them. In
>> other words, I don't have any "reusable" reserved-memory regions in any
>> of the boards (Xilinx and not Xilinx) I have access to. I could add a
>> warning if we find a "reusable" reserved-memory region at boot.
> 
> Don't get me wrong, I don't ask for the implementation now, so a warning 
> would be fine here. However, you need at least to show me some ground 
> that re-usable memory can be implemented with your solution or they are 
> not a concern for Xen at all.
> 
>>
>> Nonetheless, if you have a concrete suggestion which doesn't require a
>> complete rework of this series, I can try to put extra effort to handle
>> this case even if it is not a benefit to my employer. I am also open to
>> the possibility of dropping patches 6-10 from the series.
> I don't think the series as it is would allow us to support re-usable 
> memory. However as I haven't spent enough time to understand how this 
> could be possibly dealt. So I am happy to be proved wrong.

I thought a bit more about this series during the night. I do agree that 
we need to improve the support of reserved-memory today, as we may give 
memory to the allocator that could be exposed to a guest via a different 
method (iomem). So carving out the reserved-memory regions from the memory 
allocator is the first step to go.

Now we have to differentiate the hardware domain from the other guests. 
I don't have any objection regarding the way reserved-memory regions are 
mapped to the hardware domain because this is completely internal to 
Xen. However, I have some objections to the current interface for DomU:
    1) It is still unclear how the "reusable" property would fit in that story
    2) It is definitely not possible for a user to use 'iomem' for 
reserved-memory regions today because the partial Device-Tree doesn't 
allow you to create a /reserved-memory node nor /memory
    3) AFAIK, there is no way to prevent the hardware domain from using 
the reserved region (status = "disabled" would not work).

So, IMHO, the guest support for reserved-memory is not in shape yet. I 
think it would be best if we don't permit the reserved-memory regions in 
the iomem rangeset. This would avoid tying us to an interface until we 
figure out the correct plan for guests.

With that in place, I don't have a strong objection to patches 6-10.

In any case I think you should clearly spell out in the commit message 
what kinds of reserved-memory regions are supported. For instance, just 
going through the binding, I have the feeling that these properties are 
not actually supported (example nodes are sketched after the list):
     1) "no-map" - It is used to tell the OS to not create a virtual 
memory mapping of the region as part of its standard mapping of system 
memory, nor permit speculative access to it under any circumstances other 
than under the control of the device driver using the region. On Arm64, 
Xen will map reserved-memory as part of xenheap (i.e. the direct mapping), 
but carving it out from xenheap would not be sufficient as we use 1GB 
blocks for the mapping, so the regions may still be covered. I would 
assume this is used for memory that needs to be mapped non-cacheable, so 
it is potentially critical as Xen would map them cacheable in the stage-1 
hypervisor page-tables.
     2) "alloc-ranges": It is used to specify regions of memory where it 
is acceptable to allocate memory from. This may not play well with the 
Dom0 memory allocator.
     3) "reusable": I mention it here only for completeness. My 
understanding is it could potentially be used for hypercall buffers. This 
needs to be investigated.
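
For completeness, here is a sketch of nodes using the first two properties, 
as described by the binding (addresses, sizes and node names are made up):

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* 1) statically placed region that must never be mapped
         *    (or speculatively accessed) outside the owning driver */
        secure-fw@70000000 {
            reg = <0x0 0x70000000 0x0 0x200000>;
            no-map;
        };

        /* 2) dynamically placed region: the OS chooses the location
         *    within the window given by alloc-ranges */
        multimedia-pool {
            compatible = "shared-dma-pool";
            size = <0x0 0x4000000>;
            alloc-ranges = <0x0 0x40000000 0x0 0x10000000>;
        };
    };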

Cheers,
Stefano Stabellini May 20, 2019, 9:26 p.m. UTC | #5
On Sat, 11 May 2019, Julien Grall wrote:
> > > > But I am still not happy with the approach taken for the reserved-memory
> > > > regions in this series. As I pointed out before, they are just normal
> > > > memory
> > > > that was reserved for other purpose (CMA, framebuffer...).
> > > > 
> > > > Treating them as "device" from Xen POV is a clear abuse of the meaning
> > > > and I
> > > > don't believe it is a viable solution long term.
> > > 
> > > If we don't consider "reusable" memory regions as part of the
> > > discussion, the distinction becomes more philosophical than practical:
> > > 
> > > - Xen is not supposed to use them for anything
> > > - only given them to the VM configured for it
> > > 
> > > I don't see much of a difference with MMIO regions, except for the
> > > expected pagetable attributes: i.e. cacheable, not-cacheable. But even
> > > in that case, there could be reasonable use cases for non-cacheable
> > > mappings of reserved-memory regions, even if reserved-memory regions are
> > > "normal" memory.
> > > 
> > > Could you please help me understand why you see them so differently, as
> > > far as to say that "treating them as "device" from Xen POV is a clear
> > > abuse of the meaning"?
> > 
> > Obviously if you take half of the picture, then it makes things easier.
> > However, we are not here to discuss half of the picture but the full one
> > (even if at the end you only implement half of it).
> > 
> > > > Indeed, some of the regions may have a property "reusable" allowing the
> > > > the OS
> > > > to use them until they are claimed by the device driver owning the
> > > > region. I
> > > > don't know how Linux (or any other OS) is using it today, but I don't
> > > > see what
> > > > would prevent it to use them as hypercall buffer. This would obviously
> > > > not
> > > > work because they are not actual RAM from Xen POV.
> > > 
> > > I haven't attempted at handling "reusable" reserved-memory regions
> > > because I don't have a test environment and/or a use-case for them. In
> > > other words, I don't have any "reusable" reserved-memory regions in any
> > > of the boards (Xilinx and not Xilinx) I have access to. I could add a
> > > warning if we find a "reusable" reserved-memory region at boot.
> > 
> > Don't get me wrong, I don't ask for the implementation now, so a warning
> > would be fine here. However, you need at least to show me some ground that
> > re-usable memory can be implemented with your solution or they are not a
> > concern for Xen at all.
> > 
> > > 
> > > Nonetheless, if you have a concrete suggestion which doesn't require a
> > > complete rework of this series, I can try to put extra effort to handle
> > > this case even if it is not a benefit to my employer. I am also open to
> > > the possibility of dropping patches 6-10 from the series.
> > I don't think the series as it is would allow us to support re-usable
> > memory. However as I haven't spent enough time to understand how this could
> > be possibly dealt. So I am happy to be proved wrong.
> 
> I thought a bit more about this series during the night. I do agree that we
> need to improve the support of the reserved-memory today as we may give memory
> to the allocator that are could be exposed to a guest via a different method
> (iomem). So carving out the reserved-memory region from the memory allocator
> is the first step to go.
> 
> Now we have to differentiate the hardware domain from the other guests. I
> don't have any objection regarding the way to map reserved-memory region to
> the hardware domain because this is completely internal to Xen. However, I
> have some objections with the current interface for DomU:
>    1) It is still unclear how "reusable" property would fit in that story
>    2) It is definitely not possible for a user to use 'iomem' for
> reserved-memory region today because the partial Device-Tree doesn't allow you
> to create /reserved-memory node nor /memory
>    3) AFAIK, there are no way for to prevent the hardware domain to use the
> reserved-region (status = "disabled" would not work).
> So, IHMO, the guest support for reserved-memory is not in shape. So I think it
> would be best if we don't permit the reserved-memory region in the iomem
> rangeset. This would avoid us to tie us in an interface until we figure out
> the correct plan for guest.

Wouldn't proper documentation be enough? (See below for where the
documentation should live.)

This is not about privilege over the system: whoever will make the
decision to ask the hypervisor to map the page will have all the
necessary rights to do it.  If the user wants to map a given region,
either because she knows what she is doing, because she is
experimenting, or for whatever reason, I think she should be allowed. In
fact, she can always do it by reverting the patch. So why make it
inconvenient for her?


> With that in place, I don't have a strong objection with patches 6-10.
> 
> In any case I think you should clearly spell out in the commit message what
> kind of reserved-memory region is supported.

Yes, this makes sense. I am thinking of adding a note to SUPPORT.md. Are 
there any other places where I should write it down, aside from the commit 
messages?


> For instance, by just going through the binding, I have the feeling
> that those properties are not actually supported:
>     1) "no-map" - It is used to tell the OS to not create a virtual memory of
> the region as part of its standard mapping of system memory, nor permit
> speculative access to it under any circumstances other than under the control
> of the device driver using the region. On Arm64, Xen will map reserved-memory
> as part of xenheap (i.e the direct mapping), but carving out from xenheap
> would not be sufficient as we use 1GB block for the mapping. So they may still
> be covered. I would assume this is used for memory that needs to be mapped
> non-cacheable, so it is potentially critical as Xen would map them cacheable
> in the stage-1 hypervisor page-tables.
>     2) "alloc-ranges": it is used to specify regions of memory where it is
> acceptable to allocate memory from. This may not play well with the Dom0
> memory allocator.
>     3) "reusable": I mention here only for completeness. My understanding is
> it could potentially be used for hypercall buffer. This needs to be
> investigated.

Yes, you are right about these properties not being properly supported.
Do you think that I should list them in SUPPORT.md under a new iomem
section? Or do you prefer a longer document under docs/? Or both?
Julien Grall May 20, 2019, 10:38 p.m. UTC | #6
Hi Stefano,

On 20/05/2019 22:26, Stefano Stabellini wrote:
> On Sat, 11 May 2019, Julien Grall wrote:
>>>>> But I am still not happy with the approach taken for the reserved-memory
>>>>> regions in this series. As I pointed out before, they are just normal
>>>>> memory
>>>>> that was reserved for other purpose (CMA, framebuffer...).
>>>>>
>>>>> Treating them as "device" from Xen POV is a clear abuse of the meaning
>>>>> and I
>>>>> don't believe it is a viable solution long term.
>>>>
>>>> If we don't consider "reusable" memory regions as part of the
>>>> discussion, the distinction becomes more philosophical than practical:
>>>>
>>>> - Xen is not supposed to use them for anything
>>>> - only given them to the VM configured for it
>>>>
>>>> I don't see much of a difference with MMIO regions, except for the
>>>> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
>>>> in that case, there could be reasonable use cases for non-cacheable
>>>> mappings of reserved-memory regions, even if reserved-memory regions are
>>>> "normal" memory.
>>>>
>>>> Could you please help me understand why you see them so differently, as
>>>> far as to say that "treating them as "device" from Xen POV is a clear
>>>> abuse of the meaning"?
>>>
>>> Obviously if you take half of the picture, then it makes things easier.
>>> However, we are not here to discuss half of the picture but the full one
>>> (even if at the end you only implement half of it).
>>>
>>>>> Indeed, some of the regions may have a property "reusable" allowing the
>>>>> the OS
>>>>> to use them until they are claimed by the device driver owning the
>>>>> region. I
>>>>> don't know how Linux (or any other OS) is using it today, but I don't
>>>>> see what
>>>>> would prevent it to use them as hypercall buffer. This would obviously
>>>>> not
>>>>> work because they are not actual RAM from Xen POV.
>>>>
>>>> I haven't attempted at handling "reusable" reserved-memory regions
>>>> because I don't have a test environment and/or a use-case for them. In
>>>> other words, I don't have any "reusable" reserved-memory regions in any
>>>> of the boards (Xilinx and not Xilinx) I have access to. I could add a
>>>> warning if we find a "reusable" reserved-memory region at boot.
>>>
>>> Don't get me wrong, I don't ask for the implementation now, so a warning
>>> would be fine here. However, you need at least to show me some ground that
>>> re-usable memory can be implemented with your solution or they are not a
>>> concern for Xen at all.
>>>
>>>>
>>>> Nonetheless, if you have a concrete suggestion which doesn't require a
>>>> complete rework of this series, I can try to put extra effort to handle
>>>> this case even if it is not a benefit to my employer. I am also open to
>>>> the possibility of dropping patches 6-10 from the series.
>>> I don't think the series as it is would allow us to support re-usable
>>> memory. However as I haven't spent enough time to understand how this could
>>> be possibly dealt. So I am happy to be proved wrong.
>>
>> I thought a bit more about this series during the night. I do agree that we
>> need to improve the support of the reserved-memory today as we may give memory
>> to the allocator that are could be exposed to a guest via a different method
>> (iomem). So carving out the reserved-memory region from the memory allocator
>> is the first step to go.
>>
>> Now we have to differentiate the hardware domain from the other guests. I
>> don't have any objection regarding the way to map reserved-memory region to
>> the hardware domain because this is completely internal to Xen. However, I
>> have some objections with the current interface for DomU:
>>     1) It is still unclear how "reusable" property would fit in that story
>>     2) It is definitely not possible for a user to use 'iomem' for
>> reserved-memory region today because the partial Device-Tree doesn't allow you
>> to create /reserved-memory node nor /memory
>>     3) AFAIK, there are no way for to prevent the hardware domain to use the
>> reserved-region (status = "disabled" would not work).
>> So, IHMO, the guest support for reserved-memory is not in shape. So I think it
>> would be best if we don't permit the reserved-memory region in the iomem
>> rangeset. This would avoid us to tie us in an interface until we figure out
>> the correct plan for guest.
> 
> Wouldn't be proper documentation be enough? (See below for where the
> documentation should live.)
> 
> This is not about privilege over the system: whoever will make the
> decision to ask the hypervisor to map the page will have all the
> necessary rights to do it.  If the user wants to map a given region,
> either because she knows what she is doing, because she is
> experimenting, or for whatever reason, I think she should be allowed. In
> fact, she can always do it by reverting the patch. So why make it
> inconvenient for her?
TBH, I am getting very frustrated reviewing this series. We spent our 
previous f2f meetings discussing reserved-memory at length. We also 
agreed on a plan (see below), but now we are back to square one again...

Yes, a user will need to revert the patch. But then, as you said, the user 
would know what he/she is doing. So reverting a patch is not going to be 
a complication.

However, I already pointed out multiple times that giving permission is 
not going to be enough. So I still don't see the value of having that in 
Xen without an easy way to use it.

As a reminder, you agreed on the following split of the series into 3 parts:
    - Part 1: Extend iomem to support cacheability
    - Part 2: Partially support reserved-memory for Dom0 and don't give 
iomem permission on them
    - Part 3: reserved-memory for guests

I agreed to merge parts 1 and 2. Part 3 will be the start of a discussion 
on how this should be supported for guests. I also pointed out that Xilinx 
can carry part 3 in their tree if they feel like it.

> 
> 
>> With that in place, I don't have a strong objection with patches 6-10.
>>
>> In any case I think you should clearly spell out in the commit message what
>> kind of reserved-memory region is supported.
> 
> Yes, this makes sense. I am thinking of adding a note to SUPPORT.md. Any
> other places where I should write it down aside from commit messages?
> 
> 
>> For instance, by just going through the binding, I have the feeling
>> that those properties are not actually supported:
>>      1) "no-map" - It is used to tell the OS to not create a virtual memory of
>> the region as part of its standard mapping of system memory, nor permit
>> speculative access to it under any circumstances other than under the control
>> of the device driver using the region. On Arm64, Xen will map reserved-memory
>> as part of xenheap (i.e the direct mapping), but carving out from xenheap
>> would not be sufficient as we use 1GB block for the mapping. So they may still
>> be covered. I would assume this is used for memory that needs to be mapped
>> non-cacheable, so it is potentially critical as Xen would map them cacheable
>> in the stage-1 hypervisor page-tables.
>>      2) "alloc-ranges": it is used to specify regions of memory where it is
>> acceptable to allocate memory from. This may not play well with the Dom0
>> memory allocator.
>>      3) "reusable": I mention here only for completeness. My understanding is
>> it could potentially be used for hypercall buffer. This needs to be
>> investigated.
> 
> Yes, you are right about these properties not being properly supported.
> Do you think that I should list them in SUPPORT.md under a new iomem
> section? Or do you prefer a longer document under docs/? Or both?

The properties have nothing to do with iomem, so it would clearly be the 
wrong place to put them under. Instead, this should be a separate section.

Cheers,
Julien Grall June 5, 2019, 4:30 p.m. UTC | #7
Hi,

On 20/05/2019 23:38, Julien Grall wrote:
> On 20/05/2019 22:26, Stefano Stabellini wrote:
>> On Sat, 11 May 2019, Julien Grall wrote:
>> This is not about privilege over the system: whoever will make the
>> decision to ask the hypervisor to map the page will have all the
>> necessary rights to do it.  If the user wants to map a given region,
>> either because she knows what she is doing, because she is
>> experimenting, or for whatever reason, I think she should be allowed. In
>> fact, she can always do it by reverting the patch. So why make it
>> inconvenient for her?
> TBH, I am getting very frustrated on reviewing this series. We spent our 
> previous f2f meetings discussing reserved-memory in lengthy way. We also agreed 
> on a plan (see below), but now we are back on square one again...
> 
> Yes, a user will need to revert the patch. But then as you said the user would 
> know what he/she is doing. So reverting a patch is not going to be a complication.
> 
> However, I already pointed out multiple time that giving permission is not going 
> to be enough. So I still don't see the value of having that in Xen without an 
> easy way to use it.
> 
> For reminder, you agreed on the following splitting the series in 3 parts:
>     - Part 1: Extend iomem to support cacheability
>     - Part 2: Partially support reserved-memory for Dom0 and don't give iomem 
> permission on them
>     - Part 3: reserved-memory for guest
> 
> I agreed to merge part 1 and 2. Part 3 will be a start for a discussion how this 
> should be supported for guest. I also pointed out that Xilinx can carry part 3 
> in their tree if they feel like too.

I just wanted to bump this as I haven't got any feedback on the way forward here.
It should be possible to get parts 1 and 2 merged for Xen 4.13.

Cheers,
Stefano Stabellini June 21, 2019, 11:47 p.m. UTC | #8
On Wed, 5 Jun 2019, Julien Grall wrote:
> Hi,
> 
> On 20/05/2019 23:38, Julien Grall wrote:
> > On 20/05/2019 22:26, Stefano Stabellini wrote:
> > > On Sat, 11 May 2019, Julien Grall wrote:
> > > This is not about privilege over the system: whoever will make the
> > > decision to ask the hypervisor to map the page will have all the
> > > necessary rights to do it.  If the user wants to map a given region,
> > > either because she knows what she is doing, because she is
> > > experimenting, or for whatever reason, I think she should be allowed. In
> > > fact, she can always do it by reverting the patch. So why make it
> > > inconvenient for her?
> > TBH, I am getting very frustrated on reviewing this series. We spent our
> > previous f2f meetings discussing reserved-memory in lengthy way. We also
> > agreed on a plan (see below), but now we are back on square one again...
> > 
> > Yes, a user will need to revert the patch. But then as you said the user
> > would know what he/she is doing. So reverting a patch is not going to be a
> > complication.
> > 
> > However, I already pointed out multiple time that giving permission is not
> > going to be enough. So I still don't see the value of having that in Xen
> > without an easy way to use it.
> > 
> > For reminder, you agreed on the following splitting the series in 3 parts:
> >     - Part 1: Extend iomem to support cacheability
> >     - Part 2: Partially support reserved-memory for Dom0 and don't give
> > iomem permission on them
> >     - Part 3: reserved-memory for guest
> > 
> > I agreed to merge part 1 and 2. Part 3 will be a start for a discussion how
> > this should be supported for guest. I also pointed out that Xilinx can carry
> > part 3 in their tree if they feel like too.
> 
> I just wanted to bump this as I haven't got any feedback on the way forward
> here.
> It should be possible get part 1 and 2 merged for Xen 4.13.

I am about to send an update with Part 2 only. I tried to address all
comments; the only one I didn't address (splitting a function into two)
I have mentioned explicitly. Apologies if I missed anything, it wasn't
intentional.

Patch

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e5d488d..fa1ca20 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -643,7 +643,8 @@  static int __init make_memory_node(const struct domain *d,
 {
     int res, i;
     int reg_size = addrcells + sizecells;
-    int nr_cells = reg_size*kinfo->mem.nr_banks;
+    int nr_cells = reg_size * (kinfo->mem.nr_banks + (is_hardware_domain(d) ?
+                               bootinfo.reserved_mem.nr_banks : 0));
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -673,6 +674,20 @@  static int __init make_memory_node(const struct domain *d,
         dt_child_set_range(&cells, addrcells, sizecells, start, size);
     }
 
+    if ( is_hardware_domain(d) )
+    {
+        for ( i = 0; i < bootinfo.reserved_mem.nr_banks; i++ )
+        {
+            u64 start = bootinfo.reserved_mem.bank[i].start;
+            u64 size = bootinfo.reserved_mem.bank[i].size;
+
+            dt_dprintk("  Bank %d: %#"PRIx64"->%#"PRIx64"\n",
+                    i, start, start + size);
+
+            dt_child_set_range(&cells, addrcells, sizecells, start, size);
+        }
+    }
+
     res = fdt_property(fdt, "reg", reg, nr_cells * sizeof(*reg));
     if ( res )
         return res;