
[Linaro-acpi,RFC,part1,1/7] ACPI: Make ACPI core run without PCI on ARM64

Message ID 52A5C024.5050702@linaro.org (mailing list archive)
State New, archived

Commit Message

Hanjun Guo Dec. 9, 2013, 1:05 p.m. UTC
On 2013-12-9 19:50, Catalin Marinas wrote:
> On Mon, Dec 09, 2013 at 04:12:24AM +0000, Hanjun Guo wrote:
>> On 2013-12-7 1:23, Arnd Bergmann wrote:
>>> On Friday 06 December 2013, Tomasz Nowicki wrote:
>>>> On 05.12.2013 23:04, Arnd Bergmann wrote:
>>>>> On Wednesday 04 December 2013, Hanjun Guo wrote:
>>>>>> On 2013-12-04 00:41, Matthew Garrett wrote:
>>>>>>> Given the number of #ifdefs you're adding, wouldn't it make more sense
>>>>>>> to just add stub functions to include/linux/pci.h?
>>>>>>
>>>>>> Thanks for the suggestion :)
>>>>>>
>>>>>> I can add stub functions in include/linux/pci.h for raw_pci_read()/
>>>>>> raw_pci_write(), and then remove the #ifdefs for acpi_os_read/write_pci_configuration().
>>>>>
>>>>> Actually I wonder about the usefulness of this patch in either form: Since ACPI
>>>>> on ARM64 is only for servers, I would very much expect them to always come with
>>>>> PCI, either physical host bridges with attached devices, or logical PCI functions
>>>>> used to describe the on-SoC I/O devices. Even in case of virtual machines, you'd
>>>>> normally use PCI as the method to communicate data about the virtio channels.
>>>>>
>>>>> Can you name a realistic use-case where you'd want ACPI but not PCI?
>>>>
>>>> Yes, you can describe SoC I/O devices using logical PCI functions only if
>>>> they are on PCI, correct me if I am wrong. Also, devices can be placed
>>>> only on IOMEM (like on ARM SoCs) and it is hard to predict which way
>>>> vendors will choose. So why don't we let it be configurable? The ACPI spec
>>>> says nothing about PCI being required for ACPI, AFAIK.
>>>
>>> You are right that today's ARM SoCs basically never use PCI to describe
>>> internal devices (IIRC VIA VT8500 is an exception, but their PCI was
>>> just a software fabrication).
>>>
>>> However, when we're talking about ACPI on ARM64, that is nothing like classic
>>> ARM SoCs: As Jon Masters mentioned, this is about new server hardware following
>>> a (still secret, but hopefully not much longer) hardware specification that is
>>> explicitly designed to allow interoperability between vendors, so they
>>> must have put some thought into how to make the hardware discoverable. It
>>> seems that they are modeling things after how it's done on x86, and the
>>> only sensible way to have discoverable hardware there is PCI. This is
>>> also what all x86 SoCs do.
>>
>> I think the concern here is whether ACPI is only for server platforms or not.
>>
>> Since ACPI has lots of content related to power management, I think ACPI
>> can be used for mobile devices and other platforms too, not only for ARM
>> servers, and with this patch, we can support both requirements.
> 
> 'Can be used' is one thing, will it really be used is another? I don't
> think so, it was (well, is) difficult enough to make the transition to
> FDT, I don't see how ACPI would solve the current issues.
> 
> I see ACPI as a server distro requirement and there are indeed benefits
> in abstracting the hardware behind standard description, AML. Of course,
> this would work even better with probe-able buses like PCIe and I'm
> pretty sure this would be the case on high-end servers. But even if a
> server distro like RHEL supports a SoC without PCIe, I would expect them
> to only provide a single binary Image with CONFIG_PCI enabled.
> 
> This patch is small enough and allows ACPI build with !CONFIG_PCI for
> the time being but longer term I would expect such SoCs without PCI to
> be able to run on a kernel with CONFIG_PCI enabled.

Yes, we will support PCI in ACPI in the long run, and we just make PCI
optional for ACPI in this patch.

Actually, I have reworked this patch so that it makes only minimal
changes to the ACPI core code:


Not all ARM64 targets that use ACPI have PCI, so introduce some stub
functions to make the ACPI core run without CONFIG_PCI on ARM64.

pcibios_penalize_isa_irq() is arch dependent, so introduce asm/pci.h on
ARM64 to provide it.

Since ACPI on X86 and IA64 depends on PCI, this patch will not break
X86 or IA64.

Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
---
 arch/arm64/include/asm/pci.h |   13 +++++++++++++
 drivers/acpi/Makefile        |    2 +-
 drivers/acpi/internal.h      |    5 +++++
 include/linux/pci.h          |   32 +++++++++++++++++++++++---------
 4 files changed, 42 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm64/include/asm/pci.h


--
Hanjun

Comments

Arnd Bergmann Dec. 9, 2013, 4:35 p.m. UTC | #1
On Monday 09 December 2013, Hanjun Guo wrote:
> On 2013-12-9 19:50, Catalin Marinas wrote:
> > On Mon, Dec 09, 2013 at 04:12:24AM +0000, Hanjun Guo wrote:
> >>
> >> I think the concern here is whether ACPI is only for server platforms or not.
> >>
> >> Since ACPI has lots of content related to power management, I think ACPI
> >> can be used for mobile devices and other platforms too, not only for ARM
> >> servers, and with this patch, we can support both requirements.
> > 
> > 'Can be used' is one thing, will it really be used is another? I don't
> > think so, it was (well, is) difficult enough to make the transition to
> > FDT, I don't see how ACPI would solve the current issues.

Exactly. In particular we don't want people to get the wrong idea about
where we are heading, so making it possible to use this code on embedded
systems for me is a reason *not* to take the patch.

> > I see ACPI as a server distro requirement and there are indeed benefits
> > in abstracting the hardware behind standard description, AML. Of course,
> > this would work even better with probe-able buses like PCIe and I'm
> > pretty sure this would be the case on high-end servers. But even if a
> > server distro like RHEL supports a SoC without PCIe, I would expect them
> > to only provide a single binary Image with CONFIG_PCI enabled.
> > 
> > This patch is small enough and allows ACPI build with !CONFIG_PCI for
> > the time being but longer term I would expect such SoCs without PCI to
> > be able to run on a kernel with CONFIG_PCI enabled.
> 
> Yes, we will support PCI in ACPI in the long run, and we just make PCI
> optional for ACPI in this patch.

Do you mean there is a problem running your code with PCI /enabled/ at the
moment? If so, I'd suggest fixing that instead since you will have to fix
it anyway.

	Arnd
Catalin Marinas Dec. 9, 2013, 4:55 p.m. UTC | #2
On Mon, Dec 09, 2013 at 04:35:04PM +0000, Arnd Bergmann wrote:
> On Monday 09 December 2013, Hanjun Guo wrote:
> > On 2013-12-9 19:50, Catalin Marinas wrote:
> > > On Mon, Dec 09, 2013 at 04:12:24AM +0000, Hanjun Guo wrote:
> > >>
> > >> I think the concern here is whether ACPI is only for server platforms or not.
> > >>
> > >> Since ACPI has lots of content related to power management, I think ACPI
> > >> can be used for mobile devices and other platforms too, not only for ARM
> > >> servers, and with this patch, we can support both requirements.
> > > 
> > > 'Can be used' is one thing, will it really be used is another? I don't
> > > think so, it was (well, is) difficult enough to make the transition to
> > > FDT, I don't see how ACPI would solve the current issues.
> 
> Exactly. In particular we don't want people to get the wrong idea about
> where we are heading, so making it possible to use this code on embedded
> systems for me is a reason *not* to take the patch.

I agree.

> > > I see ACPI as a server distro requirement and there are indeed benefits
> > > in abstracting the hardware behind standard description, AML. Of course,
> > > this would work even better with probe-able buses like PCIe and I'm
> > > pretty sure this would be the case on high-end servers. But even if a
> > > server distro like RHEL supports a SoC without PCIe, I would expect them
> > > to only provide a single binary Image with CONFIG_PCI enabled.
> > > 
> > > This patch is small enough and allows ACPI build with !CONFIG_PCI for
> > > the time being but longer term I would expect such SoCs without PCI to
> > > be able to run on a kernel with CONFIG_PCI enabled.
> > 
> > Yes, we will support PCI in ACPI in the long run, and we just make PCI
> > optional for ACPI in this patch.
> 
> Do you mean there is a problem running your code with PCI /enabled/ at the
> moment? If so, I'd suggest fixing that instead since you will have to fix
> it anyway.

CONFIG_PCI does not exist on arm64 yet (we have some internal patches
but may not be ready to be posted before the holidays; they try to share
code with other archs, so more discussions before merging). We could add
CONFIG_PCI and some dummy functions on arm64 for development (not to be
upstreamed) or Hanjun could continue to use the current patch before we
get PCI working. In the order of priorities, we'll have to merge PCI
before ACPI anyway.
Matthew Garrett Dec. 9, 2013, 5:06 p.m. UTC | #3
On Mon, Dec 09, 2013 at 05:35:04PM +0100, Arnd Bergmann wrote:

> Exactly. In particular we don't want people to get the wrong idea about
> where we are heading, so making it possible to use this code on embedded
> systems for me is a reason *not* to take the patch.

People are trying to deploy ACPI-based embedded x86, and most of the 
ACPI/DT integration discussion seems to have been based on the idea that 
this is a worthwhile thing to support. If we're not interested in doing 
so then we should probably make that a whole kernel decision rather than 
a per architecture one.
Arnd Bergmann Dec. 9, 2013, 5:20 p.m. UTC | #4
On Monday 09 December 2013, Catalin Marinas wrote:
> CONFIG_PCI does not exist on arm64 yet (we have some internal patches
> but may not be ready to be posted before the holidays; they try to share
> code with other archs, so more discussions before merging). We could add
> CONFIG_PCI and some dummy functions on arm64 for development (not to be
> upstreamed) or Hanjun could continue to use the current patch before we
> get PCI working. In the order of priorities, we'll have to merge PCI
> before ACPI anyway.

Well, lack of PCI support on ARM64 is a much better reason for accepting
the patch than potential use on non-server platforms of course.

What is the status of the PCI work though? I suspect it won't be all
that hard to add minimal PCI support for a simple mmconfig plus
fixed I/O space based host of the kind that qemu can easily provide.
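
For illustration, here is a minimal sketch of such a fixed-base "mmconfig"
(ECAM) config space read, assuming a hypothetical ECAM window that has
already been mapped with ioremap(); the names and the accessor itself are
illustrative only, not the kernel's actual implementation:

#include <linux/errno.h>
#include <linux/io.h>

/* Hypothetical ECAM window, mapped elsewhere with ioremap(). */
static void __iomem *ecam_base;

/* ECAM: config offset = (bus << 20) | (devfn << 12) | register. */
static int ecam_config_read(unsigned int bus, unsigned int devfn,
			    int reg, int len, u32 *val)
{
	void __iomem *addr = ecam_base + ((bus << 20) | (devfn << 12) | reg);

	switch (len) {
	case 1:
		*val = readb(addr);
		break;
	case 2:
		*val = readw(addr);
		break;
	case 4:
		*val = readl(addr);
		break;
	default:
		return -EINVAL;
	}
	return 0;
}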

The hard part that we want to share code with other architectures is
supporting pluggable host controllers, and I think we can defer that
a bit.

	Arnd
Catalin Marinas Dec. 9, 2013, 6:01 p.m. UTC | #5
On Mon, Dec 09, 2013 at 05:20:22PM +0000, Arnd Bergmann wrote:
> On Monday 09 December 2013, Catalin Marinas wrote:
> > CONFIG_PCI does not exist on arm64 yet (we have some internal patches
> > but may not be ready to be posted before the holidays; they try to share
> > code with other archs, so more discussions before merging). We could add
> > CONFIG_PCI and some dummy functions on arm64 for development (not to be
> > upstreamed) or Hanjun could continue to use the current patch before we
> > get PCI working. In the order of priorities, we'll have to merge PCI
> > before ACPI anyway.
> 
> Well, lack of PCI support on ARM64 is a much better reason for accepting
> the patch than potential use on non-server platforms of course.

As I said above about priorities, we are not in a hurry to merge ACPI
for arm64 before PCI is supported.

> What is the status of the PCI work though? I suspect it won't be all
> that hard to add minimal PCI support for a simple mmconfig plus
> fixed I/O space based host of the kind that qemu can easily provide.

Liviu (ARM engineer) has been working on generalising the microblaze
code (which is very similar to powerpc) and enabling it on arm64. The
patches will be posted soon (though may slip into the new year) but
there will be many discussions on how to do this best, so I don't expect
a quick merge.

In parallel, Will is looking at getting PCI to work with kvmtool and
that's something we could merge sooner (but again, in the new year).

> The hard part that we want to share code with other architectures is
> supporting pluggable host controllers, and I think we can defer that
> a bit.

Indeed, this would take time.
Hanjun Guo Dec. 10, 2013, 1:52 a.m. UTC | #6
On 2013-12-10 1:06, Matthew Garrett wrote:
> On Mon, Dec 09, 2013 at 05:35:04PM +0100, Arnd Bergmann wrote:
> 
>> Exactly. In particular we don't want people to get the wrong idea about
>> where we are heading, so making it possible to use this code on embedded
>> systems for me is a reason *not* to take the patch.
> 
> People are trying to deploy ACPI-based embedded x86, and most of the 
> ACPI/DT integration discussion seems to have been based on the idea that 
> this is a worthwhile thing to support. If we're not interested in doing 
> so then we should probably make that a whole kernel decision rather than 
> a per architecture one.

I agree, thanks for this information.
Hanjun Guo Dec. 10, 2013, 2:53 a.m. UTC | #7
On 2013-12-10 0:55, Catalin Marinas wrote:
> On Mon, Dec 09, 2013 at 04:35:04PM +0000, Arnd Bergmann wrote:
>> On Monday 09 December 2013, Hanjun Guo wrote:
>>> On 2013-12-9 19:50, Catalin Marinas wrote:
>>>> On Mon, Dec 09, 2013 at 04:12:24AM +0000, Hanjun Guo wrote:
>>>>>
>>>>> I think the concern here is whether ACPI is only for server platforms or not.
>>>>>
>>>>> Since ACPI has lots of content related to power management, I think ACPI
>>>>> can be used for mobile devices and other platforms too, not only for ARM
>>>>> servers, and with this patch, we can support both requirements.
>>>>
>>>> 'Can be used' is one thing, will it really be used is another? I don't
>>>> think so, it was (well, is) difficult enough to make the transition to
>>>> FDT, I don't see how ACPI would solve the current issues.
>>
>> Exactly. In particular we don't want people to get the wrong idea about
>> where we are heading, so making it possible to use this code on embedded
>> systems for me is a reason *not* to take the patch.
> 
> I agree.
> 
>>>> I see ACPI as a server distro requirement and there are indeed benefits
>>>> in abstracting the hardware behind standard description, AML. Of course,
>>>> this would work even better with probe-able buses like PCIe and I'm
>>>> pretty sure this would be the case on high-end servers. But even if a
>>>> server distro like RHEL supports a SoC without PCIe, I would expect them
>>>> to only provide a single binary Image with CONFIG_PCI enabled.
>>>>
>>>> This patch is small enough and allows ACPI build with !CONFIG_PCI for
>>>> the time being but longer term I would expect such SoCs without PCI to
>>>> be able to run on a kernel with CONFIG_PCI enabled.
>>>
>>> Yes, we will support PCI in ACPI in the long run, and we just make PCI
>>> optional for ACPI in this patch.
>>
>> Do you mean there is a problem running your code with PCI /enabled/ at the
>> moment? If so, I'd suggest fixing that instead since you will have to fix
>> it anyway.
> 
> CONFIG_PCI does not exist on arm64 yet (we have some internal patches
> but may not be ready to be posted before the holidays; they try to share
> code with other archs, so more discussions before merging). We could add
> CONFIG_PCI and some dummy functions on arm64 for development (not to be
> upstreamed) or Hanjun could continue to use the current patch before we
> get PCI working. 

Thanks for the suggestion, I will continue to use the current patch, and
I will rework or rebase this one when PCI is working.

Hanjun
Arnd Bergmann Dec. 10, 2013, 3:28 a.m. UTC | #8
On Monday 09 December 2013, Matthew Garrett wrote:
> 
> On Mon, Dec 09, 2013 at 05:35:04PM +0100, Arnd Bergmann wrote:
> 
> > Exactly. In particular we don't want people to get the wrong idea about
> > where we are heading, so making it possible to use this code on embedded
> > systems for me is a reason not to take the patch.
> 
> People are trying to deploy ACPI-based embedded x86, and most of the 
> ACPI/DT integration discussion seems to have been based on the idea that 
> this is a worthwhile thing to support. If we're not interested in doing 
> so then we should probably make that a whole kernel decision rather than 
> a per architecture one.

Well, except it's not an architecture independent decision. An embedded
x86 SoC will still be very much like a PC, just with a few things added
in and some other bits left out, and you can already describe it mostly
with plain ACPI-5.0. Also, there are only a couple of different non-PC style
devices that Intel is integrating into their SoCs, so we're talking
about a few dozen device drivers here.

The embedded ARM SoCs we have are very much unlike a PC in lots of ways
and there are orders of magnitude more SoCs and on-chip devices that
are potentially impacted by this, so it's definitely not the same thing.

ARM developers are still licking the wounds from a painful migration
from board files to DT, and we will probably spend at least one or
two more years tying up the loose ends from that before we can actually
call that done. We are not ready to go through the same process (or worse)
again any time soon just because x86 does it, and the only reason we're
talking about this for servers is the promise that this is contained to
server-class systems with hardware and firmware people that know what
they are doing and who can make this work as easily as on x86 servers
without adding a whole lot of complexity into the kernel.

	Arnd
Linus Walleij Dec. 10, 2013, 9:56 a.m. UTC | #9
On Mon, Dec 9, 2013 at 6:06 PM, Matthew Garrett <mjg59@srcf.ucam.org> wrote:
> On Mon, Dec 09, 2013 at 05:35:04PM +0100, Arnd Bergmann wrote:
>
>> Exactly. In particular we don't want people to get the wrong idea about
>> where we are heading, so making it possible to use this code on embedded
>> systems for me is a reason *not* to take the patch.
>
> People are trying to deploy ACPI-based embedded x86, and most of the
> ACPI/DT integration discussion seems to have been based on the idea that
> this is a worthwhile thing to support.

I have only seen Intel doing this; are there more people doing that?

As noted on patch [0/7], I still get patches for embedded x86 which
use ISA-style probing, e.g.:
http://marc.info/?l=linux-gpio&m=138559852307673&w=2

At the same time some people are refining SFI (Simple Firmware
Interface) support for GPIO, although I think that was for older
embedded x86 parts.

Yours,
Linus Walleij
Mark Brown Dec. 10, 2013, 7:22 p.m. UTC | #10
On Tue, Dec 10, 2013 at 04:28:52AM +0100, Arnd Bergmann wrote:
> On Monday 09 December 2013, Matthew Garrett wrote:

> > People are trying to deploy ACPI-based embedded x86, and most of the 
> > ACPI/DT integration discussion seems to have been based on the idea that 
> > this is a worthwhile thing to support. If we're not interested in doing 
> > so then we should probably make that a whole kernel decision rather than 
> > a per architecture one.

> Well, except it's not an architecture independent decision. An embedded
> x86 SoC will still be very much like a PC, just with a few things added
> in and some other bits left out, and you can already describe it mostly

It's not just the SoC, it's also the rest of the board.  The patches the
Intel guys are submitting at the minute are mainly for the off-SoC
devices at least as far as I noticed.  This'll impact anyone who ends up
using ACPI, we need to at least pay attention to what's going on there.

> with plain ACPI-5.0. Also, there are only a couple of different non-PC style
> devices that Intel is integrating into their SoCs, so we're talking
> about a few dozen device drivers here.

It's going to be way more than that for the whole system, and you can't
assume that all the system integrators are going to pay a blind bit of
notice to the reference designs.  Some will just clone them but others
will bin them and do their own thing.
Arnd Bergmann Dec. 10, 2013, 8 p.m. UTC | #11
On Tuesday 10 December 2013, Mark Brown wrote:
> On Tue, Dec 10, 2013 at 04:28:52AM +0100, Arnd Bergmann wrote:
> > On Monday 09 December 2013, Matthew Garrett wrote:
> 
> > > People are trying to deploy ACPI-based embedded x86, and most of the 
> > > ACPI/DT integration discussion seems to have been based on the idea that 
> > > this is a worthwhile thing to support. If we're not interested in doing 
> > > so then we should probably make that a whole kernel decision rather than 
> > > a per architecture one.
> 
> > Well, except it's not an architecture independent decision. An embedded
> > x86 SoC will still be very much like a PC, just with a few things added
> > in and some other bits left out, and you can already describe it mostly
> 
> It's not just the SoC, it's also the rest of the board.  The patches the
> Intel guys are submitting at the minute are mainly for the off-SoC
> devices at least as far as I noticed.  This'll impact anyone who ends up
> using ACPI, we need to at least pay attention to what's going on there.

Yes, but I'm not that worried about off-soc stuff, which tends to be
of the much simpler variety: a few MMIO or PIO registers, IRQs,
GPIOs or (with ACPI-5.0) devices on i2c and spi buses.

> > with plain ACPI-5.0. Also, there are only a couple of different non-PC style
> > devices that Intel is integrating into their SoCs, so we're talking
> > about a few dozen device drivers here.
> 
> It's going to be way more than that for the whole system, and you can't
> assume that all the system integrators are going to pay a blind bit of
> notice to the reference designs.  Some will just clone them but others
> will bin them and do their own thing.

They won't be able to change the on-chip components for obvious reasons.

	Arnd
Mark Brown Dec. 10, 2013, 8:23 p.m. UTC | #12
On Tue, Dec 10, 2013 at 09:00:20PM +0100, Arnd Bergmann wrote:
> On Tuesday 10 December 2013, Mark Brown wrote:

> > It's not just the SoC, it's also the rest of the board.  The patches the
> > Intel guys are submitting at the minute are mainly for the off-SoC
> > devices at least as far as I noticed.  This'll impact anyone who ends up
> > using ACPI, we need to at least pay attention to what's going on there.

> Yes, but I'm not that worried about off-soc stuff, which tends to be
> of the much simpler variety: a few MMIO or PIO registers, IRQs,
> GPIOs or (with ACPI-5.0) devices on i2c and spi buses.

That's not my experience especially once you get into phone type
hardware - there's not much complexity difference when gluing things
into the system and the fact that it's connected by the board increases
the amount of flexibility that has to be coped with.  I don't see a
substantial difference between the two cases.  To be honest I'm a bit
concerned about what we're going to see given where ACPI's at as a spec.
Arnd Bergmann Dec. 11, 2013, 3:07 a.m. UTC | #13
On Tuesday 10 December 2013, Mark Brown wrote:
> On Tue, Dec 10, 2013 at 09:00:20PM +0100, Arnd Bergmann wrote:
> > On Tuesday 10 December 2013, Mark Brown wrote:
> 
> > > It's not just the SoC, it's also the rest of the board.  The patches the
> > > Intel guys are submitting at the minute are mainly for the off-SoC
> > > devices at least as far as I noticed.  This'll impact anyone who ends up
> > > using ACPI, we need to at least pay attention to what's going on there.
> 
> > Yes, but I'm not that worried about off-soc stuff, which tends to be
> > of the much simpler variety: a few MMIO or PIO registers, IRQs,
> > GPIOs or (with ACPI-5.0) devices on i2c and spi buses.
> 
> That's not my experience especially once you get into phone type
> hardware - there's not much complexity difference when gluing things
> into the system and the fact that it's connected by the board increases
> the amount of flexibility that has to be coped with.

Yes, that is probably right. The only argument that one can make about
the mobile phone case is that these devices are so complex that nobody
even bothers any more running upstream kernels on them on any CPU
architecture. If the kernel code is kept out of the mainline tree,
it doesn't matter to us what they use, and the developers don't gain
much by following any of the available firmware models either.

	Arnd
Mark Brown Dec. 11, 2013, 11:02 a.m. UTC | #14
On Wed, Dec 11, 2013 at 04:07:27AM +0100, Arnd Bergmann wrote:
> On Tuesday 10 December 2013, Mark Brown wrote:

> > That's not my experience especially once you get into phone type
> > hardware - there's not much complexity difference when gluing things
> > into the system and the fact that it's connected by the board increases
> > the amount of flexibility that has to be coped with.

> Yes, that is probably right. The only argument that one can make about
> the mobile phone case is that these devices are so complex that nobody
> even bothers any more running upstream kernels on them on any CPU
> architecture. If the kernel code is kept out of the mainline tree,
> it doesn't matter to us what they use, and the developers don't gain
> much by following any of the available firmware models either.

It's more of a commercial thing than a complexity thing (complexity adds
a barrier but it's not fundamental) - the designs for phones aren't
meaningfully different from those for tablets, and looking at both things
like the ARM Chromebooks and what the low-power Haswell stuff is doing,
laptops are looking an awful lot like tablets these days.
Graeme Gregory Dec. 16, 2013, 8:51 p.m. UTC | #15
On Mon, Dec 09, 2013 at 06:01:55PM +0000, Catalin Marinas wrote:
> On Mon, Dec 09, 2013 at 05:20:22PM +0000, Arnd Bergmann wrote:
> > On Monday 09 December 2013, Catalin Marinas wrote:
> > > CONFIG_PCI does not exist on arm64 yet (we have some internal patches
> > > but may not be ready to be posted before the holidays; they try to share
> > > code with other archs, so more discussions before merging). We could add
> > > CONFIG_PCI and some dummy functions on arm64 for development (not to be
> > > upstreamed) or Hanjun could continue to use the current patch before we
> > > get PCI working. In the order of priorities, we'll have to merge PCI
> > > before ACPI anyway.
> > 
> > Well, lack of PCI support on ARM64 is a much better reason for accepting
> > the patch than potential use on non-server platforms of course.
> 
> As I said above about priorities, we are not in a hurry to merge ACPI
> for arm64 before PCI is supported.
> 
> > What is the status of the PCI work though? I suspect it won't be all
> > that hard to add minimal PCI support for a simple mmconfig plus
> > fixed I/O space based host of the kind that qemu can easily provide.
> 
> Liviu (ARM engineer) has been working on generalising the microblaze
> code (which is very similar to powerpc) and enabling it on arm64. The
> patches will be posted soon (though may slip into the new year) but
> there will be many discussions on how to do this best, so I don't expect
> a quick merge.
> 
> In parallel, Will is looking at getting PCI to work with kvmtool and
> that's something we could merge sooner (but again, in the new year).
> 
> > The hard part that we want to share code with other architectures is
> > supporting pluggable host controllers, and I think we can defer that
> > a bit.
> 
> Indeed, this would take time.
> 
Hi Catalin,

So the real question now is how do we progress with these ACPI patches? After
repeated incorrect accusations of developing behind closed doors I am loath
to disappear back into Linaro with them for another few months.

Also as Mark Brown has already pointed out the bigger the patchset gets
while developed in Linaro trees the more strain it is going to put on
maintainers for review.

We have worked to try and keep the patchset as self contained as possible
and to affect arch/arm64 in a minimal way. It should not affect it at all
in the !CONFIG_ACPI case.

Currently Hanjun is busy preparing a v2 PATCH series which contains amendments
for all the technical issues found in review so far. Should we continue with
this process until all the necessary Acks are in place?

Graeme
Catalin Marinas Dec. 17, 2013, 11:29 a.m. UTC | #16
Hi Graeme,

On Mon, Dec 16, 2013 at 08:51:33PM +0000, Graeme Gregory wrote:
> So the real question now is how do we progress with these ACPI patches? After
> repeated incorrect accusations of developing behind closed doors I am loath
> to disappear back into Linaro with them for another few months.

Well, just follow the Linux community process, no need to disappear
back. There was feedback that needs to be addressed; work on getting
acks from maintainers. The first version was only posted two weeks
ago, so I don't see any reason to panic ;).

> Also as Mark Brown has already pointed out the bigger the patchset gets
> while developed in Linaro trees the more strain it is going to put on
> maintainers for review.

Yes, that's correct, so just gather maintainers' acks in smaller steps.

> We have worked to try and keep the patchset as self contained as possible
> and to affect arch/arm64 in a minimal way. It should not affect it at all
> in the !CONFIG_ACPI case.

And this is great, I really don't have any complaints here.

> Currently Hanjun is busy preparing a v2 PATCH series which contains amendments
> for all the technical issues found in review so far. Should we continue with
> this process until all the necessary Acks are in place?

Reviews/acks are the first step and you are on the right track here. The
following step would be upstreaming with good arguments on why and when
the code needs to be merged. Code quality on its own is not an argument
for merging. Backlog in Linaro's trees is not an argument either. You
could of course start upstreaming clean-up code that is necessary
whether you have ACPI on arm64 or not.

So while waiting to debate the good arguments for when to merge the code
(once reviewed), I have several concerns which I want addressed before
enabling ACPI for arm64:

- Does anyone have a wider view of what ACPI on ARM will look like? There is
  a lot of effort going into the next version of ACPI but for now I
  don't see how we can enable a feature and hope we sort it out later.
- Who is coordinating the non-standard ACPI descriptors being pushed to
  various drivers in the kernel? Do we trust the hw vendors to do the
  right thing (and also talk to each other)?
- What if two hw vendors have different descriptors for the same device?
- Have we agreed what we do about clocks, voltage regulators?
- Do we actually have a real platform which requires ACPI at this point?

Just to be clear, I'm not against ACPI for arm64 and I am aware of
hardware vendors requiring this. But I'm looking forward to them being
more open and explaining what (rather than why) they need, because I don't
think ACPI solves anything for the ARM kernel community. It's rather a
favour we do for them and the OS distros.
Graeme Gregory Dec. 19, 2013, 11:30 a.m. UTC | #17
On Tue, Dec 17, 2013 at 11:29:14AM +0000, Catalin Marinas wrote:
> Hi Graeme,
> 
> On Mon, Dec 16, 2013 at 08:51:33PM +0000, Graeme Gregory wrote:
> > So the real question now is how do we progress with these ACPI patches? After
> > repeated incorrect accusations of developing behind closed doors I am loath
> > to disappear back into Linaro with them for another few months.
> 
> Well, just follow the Linux community process, no need to disappear
> back. There was feedback that needs to be addressed; work on getting
> acks from maintainers. The first version was only posted two weeks
> ago, so I don't see any reason to panic ;).
> 

Ok, thanks for that, we will continue to work on v2, v3, ... as normal then

> Reviews/acks are the first step and you are on the right track here. The
> following step would be upstreaming with good arguments on why and when
> the code needs to be merged. Code quality on its own is not an argument
> for merging. Backlog in Linaro's trees is not an argument either. You
> could of course start upstreaming clean-up code that is necessary
> whether you have ACPI on arm64 or not.
>

Yes, coming out of the reviews, some of the patches which we initially thought
were ARM64 work turned out to be general cleanups, and they will go via
the appropriate channel.

> So while waiting to debate the good arguments for when to merge the code
> (once reviewed), I have several concerns which I want addressed before
> enabling ACPI for arm64:
> 
> - Does anyone have a wider view of what ACPI on ARM will look like? There is
>   a lot of effort going into the next version of ACPI but for now I
>   don't see how we can enable a feature and hope we sort it out later.
> - Who is coordinating the non-standard ACPI descriptors being pushed to
>   various drivers in the kernel? Do we trust the hw vendors to do the
>   right thing (and also talk to each other)?
> - What if two hw vendors have different descriptors for the same device?
> - Have we agreed what we do about clocks, voltage regulators?
> - Do we actually have a real platform which requires ACPI at this point?
> 
> Just to be clear, I'm not against ACPI for arm64 and I am aware of
> hardware vendors requiring this. But I'm looking forward to them being
> more open and explaining what (rather than why) they need, because I don't
> think ACPI solves anything for the ARM kernel community. It's rather a
> favour we do for them and the OS distros.
> 

You have some good points here. Obviously we are currently doing preparation
work based on the RTSM/FVP (whatever they are called next week) models, which
currently are not a good representation of an armv8 server.

Hopefully the documentation of what a real armv8 server architecture will look
like will come in the new year. Things like regulators and clocks I do not
have answers to yet, as obviously in the Intel world these things are hidden
from view; I do not know what the plan is for armv8 silicon/motherboards.

On the multiple-vendors-same-hardware issue, I guess the Intel guys must have
already seen this happen. We shall have to ask them what their solution was.

Graeme
Arnd Bergmann Dec. 19, 2013, 2:01 p.m. UTC | #18
On Thursday 19 December 2013, Graeme Gregory wrote:
> Hopefully the documentation of what a real armv8 server architecture will look
> like will come in the new year. Things like regulators and clocks I do not
> have answers to yet, as obviously in the Intel world these things are hidden
> from view; I do not know what the plan is for armv8 silicon/motherboards.

The clocks and regulators (and a handful of other subsystems) are
the key thing to work out IMHO. For all I know these are either completely
static (turned on by firmware at boot time) on current servers, or they
are done in a way that each device can manage itself using power states
in the PCI configuration space. If you have on-chip devices that do not
look like PCI devices to software, or that interact with other on-chip
controllers at run-time as on typical arm32 embedded SoCs, you are in
trouble to start with, and there are two possible ways to deal with this
in theory:

a) Hide all the register-level setup behind AML code and make Linux only
   aware of the possible device states that it can ask for, which would
   make this look similar to today's servers.

b) Model all the soc-internal registers as devices and write OS-specific
   SoC-specific device drivers for them, using yet-to-be-defined ACPI
   extensions to describe the interactions between devices. This would
   be modeled along the lines of what we do today with DT, and what Intel
   wants to do on their embedded SoCs with ACPI in the future.

I think anybody would agree that we should not try to mix the two models
in a single system, as that would create an endless source of bugs when
you have two drivers fighting over the same hardware. There is also a
rough consensus that we really only want a) and not b) on ARM, but there
have been indications that people are already working on b), which I
think is a bit worrying. I would argue that anyone who wants b) on 
ARM should not use ACPI at all but rather describe the hardware using
DT as we do today. This could possibly change if someone shows that a)
is actually not a realistic model at all, but I also think that doing b)
properly will depend on doing a major ACPI-6.0 or ACPI-7.0 release
to actually specify a standard model for the extra subsystems.
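
To make option a) concrete, here is a hedged sketch (all names hypothetical)
of what a driver would do in that model: it only requests ACPI D-states via
acpi_bus_set_power() and leaves all clock/regulator sequencing to the
firmware's _PS0/_PS3 AML methods. This is illustrative only, not code from
any posted patch set:

#include <linux/acpi.h>

/*
 * Model a): the driver never touches clocks or regulators directly.  It
 * only asks ACPI to move the device between D-states, and the firmware's
 * AML (_PS0/_PS3 under the device object) performs the SoC-specific
 * register writes.  'handle' is the device's ACPI handle, obtained
 * elsewhere (hypothetical helper functions).
 */
static int example_power_up(acpi_handle handle)
{
	return acpi_bus_set_power(handle, ACPI_STATE_D0);
}

static int example_power_down(acpi_handle handle)
{
	return acpi_bus_set_power(handle, ACPI_STATE_D3);
}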

> On the multiple-vendors-same-hardware issue, I guess the Intel guys must have
> already seen this happen. We shall have to ask them what their solution was.

There is basically only one SoC vendor on x86, which makes this a lot
easier. Off-chip devices on the board are typically PCI based and
don't need any special treatment because the PCI vendor/device ID
pair is enough to identify the hardware. Anything that does not fall
into these categories (e.g. vendor specific laptop extensions) is
handled with drivers in drivers/platform/x86/. This works fine
because that code is only needed for _optional_ features such as
multimedia buttons or sensors, and the total amount of code for
all the platforms is fairly contained.

The main concern for ARM is that if we need to do the same, it ends up
as a direct replacement for the "board files" that we just spent years
on making obsolete. We can do this as a workaround for the oddball broken
firmware in shipping products, but we should not go back to having to
add platform-specific code that is only meant to interface with how
a random vendor decided to expose standard hardware in their ACPI BIOS.

	Arnd
Catalin Marinas Dec. 19, 2013, 3:43 p.m. UTC | #19
On Thu, Dec 19, 2013 at 02:01:26PM +0000, Arnd Bergmann wrote:
> On Thursday 19 December 2013, Graeme Gregory wrote:
> > Hopefully the documentation of what a real armv8 server architecture will look
> > like will come in the new year. Things like regulators and clocks I do not
> > have answers to yet, as obviously in the Intel world these things are hidden
> > from view; I do not know what the plan is for armv8 silicon/motherboards.
> 
> The clocks and regulators (and a handful of other subsystems) are
> the key thing to work out IMHO. For all I know these are either completely
> static (turned on by firmware at boot time) on current servers, or they
> are done in a way that each device can manage itself using power states
> in the PCI configuration space. If you have on-chip devices that do not
> look like PCI devices to software, or that interact with other on-chip
> controllers at run-time as on typical arm32 embedded SoCs, you are in
> trouble to start with, and there are two possible ways to deal with this
> in theory:
> 
> a) Hide all the register-level setup behind AML code and make Linux only
>    aware of the possible device states that it can ask for, which would
>    make this look similar to today's servers.
> 
> b) Model all the soc-internal registers as devices and write OS-specific
>    SoC-specific device drivers for them, using yet-to-be-defined ACPI
>    extensions to describe the interactions between devices. This would
>    be modeled along the lines of what we do today with DT, and what Intel
>    wants to do on their embedded SoCs with ACPI in the future.
> 
> I think anybody would agree that we should not try to mix the two models
> in a single system, as that would create an endless source of bugs when
> you have two drivers fighting over the same hardware. There is also a
> rough consensus that we really only want a) and not b) on ARM, but there
> have been indications that people are already working on b), which I
> think is a bit worrying. I would argue that anyone who wants b) on 
> ARM should not use ACPI at all but rather describe the hardware using
> DT as we do today. This could possibly change if someone shows that a)
> is actually not a realistic model at all, but I also think that doing b)
> properly will depend on doing a major ACPI-6.0 or ACPI-7.0 release
> to actually specify a standard model for the extra subsystems.

I'm inclined to say that (ARM) Linux should only support stuff captured
in an ACPI spec but I'm not familiar enough with this to assess its
feasibility.

Choosing between a) and b) depends on where you place the maintenance
burden. Point a) pretty much leaves this with the hw vendors. They get a
distro with a kernel supporting ACPI-x and the (PCI) device drivers they
need, but anything else SoC-specific is handled by firmware or AML. It is their
responsibility to work on firmware and AML until getting it right
without changing the kernel (well, unless they find genuine bugs with
the code).

Point b) is simpler for kernel developers as we know how to debug and
maintain kernel code but I agree with you that we should rather use FDT
here than duplicate the effort just for the sake of ACPI.

Waiting for OS distros and vendors to clarify, but I think RH are mainly
looking at a). My (mis)understanding is based on pro-ACPI
arguments I heard, like being able to use newer hardware with older
kernels (and b) would always require new SoC drivers and bindings).
Mark Brown Dec. 20, 2013, 7:55 p.m. UTC | #20
On Tue, Dec 17, 2013 at 11:29:14AM +0000, Catalin Marinas wrote:

> - What if two hw vendors have different descriptors for the same device?

This one at least is already handled - ACPI ID tables are lists of IDs
just the same as everything else so we can have as many different
bindings for the same device as the hardware vendors see fit to bless us
with.
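
For reference, a minimal sketch of what that looks like in a driver, with
made-up IDs (everything below is hypothetical); a single acpi_device_id table
simply lists every ID the vendors define for the device:

#include <linux/acpi.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical IDs: two vendors shipping the same IP with different _HIDs. */
static const struct acpi_device_id example_acpi_ids[] = {
	{ "VNDA0001", 0 },
	{ "VNDB0001", 0 },
	{ }
};
MODULE_DEVICE_TABLE(acpi, example_acpi_ids);

static struct platform_driver example_driver = {
	.driver = {
		.name = "example",
		.acpi_match_table = ACPI_PTR(example_acpi_ids),
	},
	/* .probe and .remove omitted from this sketch */
};
module_platform_driver(example_driver);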

Patch

diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
new file mode 100644
index 0000000..e682c25
--- /dev/null
+++ b/arch/arm64/include/asm/pci.h
@@ -0,0 +1,13 @@ 
+#ifndef ASMARM_PCI_H
+#define ASMARM_PCI_H
+
+#ifdef __KERNEL__
+
+static inline void pcibios_penalize_isa_irq(int irq, int active)
+{
+	/* We don't do dynamic PCI IRQ allocation */
+}
+
+#endif /* __KERNEL__ */
+
+#endif
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index 0331f91..d8cebe3 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -38,7 +38,7 @@  acpi-y				+= acpi_processor.o
 acpi-y				+= processor_core.o
 acpi-y				+= ec.o
 acpi-$(CONFIG_ACPI_DOCK)	+= dock.o
-acpi-y				+= pci_root.o pci_link.o pci_irq.o
+acpi-$(CONFIG_PCI)		+= pci_root.o pci_link.o pci_irq.o
 acpi-$(CONFIG_X86_INTEL_LPSS)	+= acpi_lpss.o
 acpi-y				+= acpi_platform.o
 acpi-y				+= power.o
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index b125fdb..b1ef8fa 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -26,8 +26,13 @@ 
 acpi_status acpi_os_initialize1(void);
 int init_acpi_device_notify(void);
 int acpi_scan_init(void);
+#ifdef CONFIG_PCI
 void acpi_pci_root_init(void);
 void acpi_pci_link_init(void);
+#else
+static inline void acpi_pci_root_init(void) {}
+static inline void acpi_pci_link_init(void) {}
+#endif /* CONFIG_PCI */
 void acpi_processor_init(void);
 void acpi_platform_init(void);
 int acpi_sysfs_init(void);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 1084a15..28334dd 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -541,15 +541,6 @@  struct pci_ops {
 	int (*write)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val);
 };

-/*
- * ACPI needs to be able to access PCI config space before we've done a
- * PCI bus scan and created pci_bus structures.
- */
-int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
-		 int reg, int len, u32 *val);
-int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
-		  int reg, int len, u32 val);
-
 struct pci_bus_region {
 	resource_size_t start;
 	resource_size_t end;
@@ -1280,6 +1271,15 @@  typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode,
 		      unsigned int command_bits, u32 flags);
 void pci_register_set_vga_state(arch_set_vga_state_t func);

+/*
+ * ACPI needs to be able to access PCI config space before we've done a
+ * PCI bus scan and created pci_bus structures.
+ */
+int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
+		 int reg, int len, u32 *val);
+int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
+		  int reg, int len, u32 val);
+
 #else /* CONFIG_PCI is not enabled */

 /*
@@ -1476,6 +1476,20 @@  static inline int pci_domain_nr(struct pci_bus *bus)
 static inline struct pci_dev *pci_dev_get(struct pci_dev *dev)
 { return NULL; }

+static inline struct pci_bus *pci_find_bus(int domain, int busnr)
+{ return NULL; }
+
+static inline int pci_bus_write_config_byte(struct pci_bus *bus,
+			unsigned int devfn, int where, u8 val)
+{ return -ENODEV; }
+
+static inline int raw_pci_read(unsigned int domain, unsigned int bus,
+		unsigned int devfn, int reg, int len, u32 *val)
+{ return -EINVAL; }
+static inline int raw_pci_write(unsigned int domain, unsigned int bus,
+		unsigned int devfn, int reg, int len, u32 val)
+{ return -EINVAL; }
+
 #define dev_is_pci(d) (false)
 #define dev_is_pf(d) (false)
 #define dev_num_vf(d) (0)