
[1/4] arm64: topology: Implement basic CPU topology support

Message ID 1392037324-5069-1-git-send-email-broonie@kernel.org (mailing list archive)
State New, archived

Commit Message

Mark Brown Feb. 10, 2014, 1:02 p.m. UTC
From: Mark Brown <broonie@linaro.org>

Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton.  This patch does not
implement any topology discovery support since that should be based on
information from firmware; it merely implements the scaffolding for
integrating topology support into the architecture.

The goal is to separate the architecture hookup for providing topology
information from the DT parsing, in order to ease review and to avoid
blocking the architecture code (which other work will build on) on the
DT code review, by providing something simple and basic.

A following patch will implement support for parsing the DT topology
bindings for ARM; similar patches will be needed for ACPI.

Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/Kconfig                | 24 +++++++++++
 arch/arm64/include/asm/topology.h | 39 +++++++++++++++++
 arch/arm64/kernel/Makefile        |  1 +
 arch/arm64/kernel/smp.c           | 11 +++++
 arch/arm64/kernel/topology.c      | 91 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 166 insertions(+)
 create mode 100644 arch/arm64/include/asm/topology.h
 create mode 100644 arch/arm64/kernel/topology.c

Comments

Catalin Marinas Feb. 10, 2014, 4:22 p.m. UTC | #1
On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
> +static void update_siblings_masks(unsigned int cpuid)
> +{
> +	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> +	int cpu;
> +
> +	/* update core and thread sibling masks */
> +	for_each_possible_cpu(cpu) {
> +		cpu_topo = &cpu_topology[cpu];
> +
> +		if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
> +			continue;
> +
> +		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
> +		if (cpu != cpuid)
> +			cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
> +
> +		if (cpuid_topo->core_id != cpu_topo->core_id)
> +			continue;
> +
> +		cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
> +		if (cpu != cpuid)
> +			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
> +	}
> +	smp_wmb();

I now noticed there are a couple of smp_wmb() calls in this patch. What
are they for?
Mark Brown Feb. 10, 2014, 4:46 p.m. UTC | #2
On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:

> > +		if (cpu != cpuid)
> > +			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
> > +	}
> > +	smp_wmb();

> I now noticed there are a couple of smp_wmb() calls in this patch. What
> are they for?

To be honest I mostly cargo culted them from the ARM implementation; I
did look a bit but didn't fully dig into it - it seemed they were
required to ensure that the updates for the new CPU are visible over all
CPUs.  Vincent?
Vincent Guittot Feb. 11, 2014, 8:15 a.m. UTC | #3
On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
> On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
>> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
>
>> > +           if (cpu != cpuid)
>> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
>> > +   }
>> > +   smp_wmb();
>
>> I now noticed there are a couple of smp_wmb() calls in this patch. What
>> are they for?
>
> To be honest I mostly cargo culted them from the ARM implementation; I
> did look a bit but didn't fully dig into it - it seemed they were
> required to ensure that the updates for the new CPU are visible over all
> CPUs.  Vincent?

Yes, that's it. We must ensure that updates are made visible to other CPUs.
Will Deacon Feb. 11, 2014, 10:34 a.m. UTC | #4
On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
> >
> >> > +           if (cpu != cpuid)
> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
> >> > +   }
> >> > +   smp_wmb();
> >
> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
> >> are they for?
> >
> > To be honest I mostly cargo culted them from the ARM implementation; I
> > did look a bit but didn't fully dig into it - it seemed they were
> > required to ensure that the updates for the new CPU are visible over all
> > CPUs.  Vincent?
> 
> Yes that's it. we must ensure that updates are made visible to other CPUs

In relation to what? The smp_* barriers ensure ordering of observability
between a number of independent accesses, so you must be ensuring
ordering against something else. Also, you need to guarantee ordering on the
read-side too -- how is this achieved? I can't see any smp_rmb calls from a
quick grep, so I assume you're making use of address dependencies?

/confused

Will
Vincent Guittot Feb. 11, 2014, 1:18 p.m. UTC | #5
On 11 February 2014 11:34, Will Deacon <will.deacon@arm.com> wrote:
> On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
>> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
>> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
>> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
>> >
>> >> > +           if (cpu != cpuid)
>> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
>> >> > +   }
>> >> > +   smp_wmb();
>> >
>> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
>> >> are they for?
>> >
>> > To be honest I mostly cargo culted them from the ARM implementation; I
>> > did look a bit but didn't fully dig into it - it seemed they were
>> > required to ensure that the updates for the new CPU are visible over all
>> > CPUs.  Vincent?
>>
>> Yes that's it. we must ensure that updates are made visible to other CPUs
>
> In relation to what? The smp_* barriers ensure ordering of observability
> between a number of independent accesses, so you must be ensuring
> ordering against something else. Also, you need to guarantee ordering on the
> read-side too -- how is this achieved? I can't see any smp_rmb calls from a
> quick grep, so I assume you're making use of address dependencies?

The boot sequence ensures the rmb

Vincent

>
> /confused
>
> Will
Catalin Marinas Feb. 11, 2014, 2:07 p.m. UTC | #6
On Tue, Feb 11, 2014 at 01:18:56PM +0000, Vincent Guittot wrote:
> On 11 February 2014 11:34, Will Deacon <will.deacon@arm.com> wrote:
> > On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
> >> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
> >> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
> >> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
> >> >
> >> >> > +           if (cpu != cpuid)
> >> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
> >> >> > +   }
> >> >> > +   smp_wmb();
> >> >
> >> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
> >> >> are they for?
> >> >
> >> > To be honest I mostly cargo culted them from the ARM implementation; I
> >> > did look a bit but didn't fully dig into it - it seemed they were
> >> > required to ensure that the updates for the new CPU are visible over all
> >> > CPUs.  Vincent?
> >>
> >> Yes that's it. we must ensure that updates are made visible to other CPUs
> >
> > In relation to what? The smp_* barriers ensure ordering of observability
> > between a number of independent accesses, so you must be ensuring
> > ordering against something else. Also, you need to guarantee ordering on the
> > read-side too -- how is this achieved? I can't see any smp_rmb calls from a
> > quick grep, so I assume you're making use of address dependencies?
> 
> The boot sequence ensures the rmb

As Will said, smp_*mb() do not ensure absolute visibility, only relative
to subsequent memory accesses on the same processor. So just placing a
barrier at the end of a function does not mean much; it only shows half
of the problem it is trying to solve.

How are the secondary CPUs using this information? AFAICT, secondaries
call smp_store_cpu_info(), which also goes through each CPU in
update_siblings_masks(). Is there any race here that smp_wmb() is trying
to solve?

I guess for secondaries you could move the barrier just before
set_cpu_online(); this way it is clear that we want any previous writes
to become visible when this CPU is marked online. For the primary, any
memory writes should become visible before the CPU is started. One
synchronisation point is the pen release, depending on the smp_ops. I
think that's already covered by code like arch/arm/mach-*/platsmp.c.

So my proposal is to remove the smp_wmb() from topology.c and add it
where it is relevant as described above. If we have some race in
topology.c (like for example we may later decide to start more
secondaries at the same time), it needs to be solved using spinlocks.
Vincent Guittot Feb. 11, 2014, 2:46 p.m. UTC | #7
On 11 February 2014 15:07, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Feb 11, 2014 at 01:18:56PM +0000, Vincent Guittot wrote:
>> On 11 February 2014 11:34, Will Deacon <will.deacon@arm.com> wrote:
>> > On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
>> >> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
>> >> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
>> >> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
>> >> >
>> >> >> > +           if (cpu != cpuid)
>> >> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
>> >> >> > +   }
>> >> >> > +   smp_wmb();
>> >> >
>> >> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
>> >> >> are they for?
>> >> >
>> >> > To be honest I mostly cargo culted them from the ARM implementation; I
>> >> > did look a bit but didn't fully dig into it - it seemed they were
>> >> > required to ensure that the updates for the new CPU are visible over all
>> >> > CPUs.  Vincent?
>> >>
>> >> Yes that's it. we must ensure that updates are made visible to other CPUs
>> >
>> > In relation to what? The smp_* barriers ensure ordering of observability
>> > between a number of independent accesses, so you must be ensuring
>> > ordering against something else. Also, you need to guarantee ordering on the
>> > read-side too -- how is this achieved? I can't see any smp_rmb calls from a
>> > quick grep, so I assume you're making use of address dependencies?
>>
>> The boot sequence ensures the rmb
>
> As Will said, smp_*mb() do not ensure absolute visibility, only relative
> to subsequent memory accesses on the same processor. So just placing a
> barrier at the end of a function does not mean much, it only shows half
> of the problem it is trying to solve.

OK, that's probably the shortcut that has been made: we want to drain
the write buffer to make modifications available to other CPUs, and I
thought smp_wmb and the associated mb(ishst) were there for that
purpose.

Vincent

>
> How are the secondary CPUs using this information? AFAICT, secondaries
> call smp_store_cpu_info() which also go through each CPU in
> update_siblings_mask(). Is there any race here that smp_wmb() is trying
> to solve?
>
> I guess for secondaries you could move the barrier just before
> set_cpu_online(), this way it is clear that we want any previous writes
> to become visible when this CPU is marked online. For the primary, any
> memory writes should become visible before the CPU is started. One
> synchronisation point is the pen release, depending on the smp_ops. I
> think that's already covered by code like arch/arm/mach-*/platsmp.c.
>
> So my proposal is to remove the smp_wmb() from topology.c and add it
> where it is relevant as described above. If we have some race in
> topology.c (like for example we may later decide to start more
> secondaries at the same time), it needs to be solved using spinlocks.
>
> --
> Catalin
Mark Brown Feb. 11, 2014, 10:04 p.m. UTC | #8
On Tue, Feb 11, 2014 at 03:46:04PM +0100, Vincent Guittot wrote:
> On 11 February 2014 15:07, Catalin Marinas <catalin.marinas@arm.com> wrote:

> > As Will said, smp_*mb() do not ensure absolute visibility, only relative
> > to subsequent memory accesses on the same processor. So just placing a
> > barrier at the end of a function does not mean much, it only shows half
> > of the problem it is trying to solve.

> OK, that's probably the shortcut that has been made, we want to drain
> the write buffer to make modification available to other cpus and I
> though smp_wmb and the associated mb(ishst) was there for that
> purpose.

It also explains how far I got trying to figure out what exactly the
mechanism was!

> > So my proposal is to remove the smp_wmb() from topology.c and add it
> > where it is relevant as described above. If we have some race in
> > topology.c (like for example we may later decide to start more
> > secondaries at the same time), it needs to be solved using spinlocks.

I'll just repost a version dropping them from the topology series and
separately add the barriers elsewhere.  The 32 bit ARM implementation
probably ought to be fixed as well.
Vincent Guittot Feb. 12, 2014, 8:04 a.m. UTC | #9
On 11 February 2014 15:07, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Feb 11, 2014 at 01:18:56PM +0000, Vincent Guittot wrote:
>> On 11 February 2014 11:34, Will Deacon <will.deacon@arm.com> wrote:
>> > On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
>> >> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
>> >> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
>> >> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
>> >> >
>> >> >> > +           if (cpu != cpuid)
>> >> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
>> >> >> > +   }
>> >> >> > +   smp_wmb();
>> >> >
>> >> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
>> >> >> are they for?
>> >> >
>> >> > To be honest I mostly cargo culted them from the ARM implementation; I
>> >> > did look a bit but didn't fully dig into it - it seemed they were
>> >> > required to ensure that the updates for the new CPU are visible over all
>> >> > CPUs.  Vincent?
>> >>
>> >> Yes that's it. we must ensure that updates are made visible to other CPUs
>> >
>> > In relation to what? The smp_* barriers ensure ordering of observability
>> > between a number of independent accesses, so you must be ensuring
>> > ordering against something else. Also, you need to guarantee ordering on the
>> > read-side too -- how is this achieved? I can't see any smp_rmb calls from a
>> > quick grep, so I assume you're making use of address dependencies?
>>
>> The boot sequence ensures the rmb
>
> As Will said, smp_*mb() do not ensure absolute visibility, only relative
> to subsequent memory accesses on the same processor. So just placing a

It's my turn to be a bit confused: if smp_*mb() do not ensure absolute
visibility on other CPUs, how can we ensure that?

> barrier at the end of a function does not mean much, it only shows half
> of the problem it is trying to solve.
>
> How are the secondary CPUs using this information? AFAICT, secondaries
> call smp_store_cpu_info() which also go through each CPU in
> update_siblings_mask(). Is there any race here that smp_wmb() is trying
> to solve?

The fields will be used to construct the topology, so we must ensure
their visibility.

Vincent
>
> I guess for secondaries you could move the barrier just before
> set_cpu_online(), this way it is clear that we want any previous writes
> to become visible when this CPU is marked online. For the primary, any
> memory writes should become visible before the CPU is started. One
> synchronisation point is the pen release, depending on the smp_ops. I
> think that's already covered by code like arch/arm/mach-*/platsmp.c.
>
> So my proposal is to remove the smp_wmb() from topology.c and add it
> where it is relevant as described above. If we have some race in
> topology.c (like for example we may later decide to start more
> secondaries at the same time), it needs to be solved using spinlocks.
>
> --
> Catalin
Catalin Marinas Feb. 12, 2014, 10:27 a.m. UTC | #10
On Wed, Feb 12, 2014 at 08:04:54AM +0000, Vincent Guittot wrote:
> On 11 February 2014 15:07, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Feb 11, 2014 at 01:18:56PM +0000, Vincent Guittot wrote:
> >> On 11 February 2014 11:34, Will Deacon <will.deacon@arm.com> wrote:
> >> > On Tue, Feb 11, 2014 at 08:15:19AM +0000, Vincent Guittot wrote:
> >> >> On 10 February 2014 17:46, Mark Brown <broonie@kernel.org> wrote:
> >> >> > On Mon, Feb 10, 2014 at 04:22:31PM +0000, Catalin Marinas wrote:
> >> >> >> On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:
> >> >> >
> >> >> >> > +           if (cpu != cpuid)
> >> >> >> > +                   cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
> >> >> >> > +   }
> >> >> >> > +   smp_wmb();
> >> >> >
> >> >> >> I now noticed there are a couple of smp_wmb() calls in this patch. What
> >> >> >> are they for?
> >> >> >
> >> >> > To be honest I mostly cargo culted them from the ARM implementation; I
> >> >> > did look a bit but didn't fully dig into it - it seemed they were
> >> >> > required to ensure that the updates for the new CPU are visible over all
> >> >> > CPUs.  Vincent?
> >> >>
> >> >> Yes that's it. we must ensure that updates are made visible to other CPUs
> >> >
> >> > In relation to what? The smp_* barriers ensure ordering of observability
> >> > between a number of independent accesses, so you must be ensuring
> >> > ordering against something else. Also, you need to guarantee ordering on the
> >> > read-side too -- how is this achieved? I can't see any smp_rmb calls from a
> >> > quick grep, so I assume you're making use of address dependencies?
> >>
> >> The boot sequence ensures the rmb
> >
> > As Will said, smp_*mb() do not ensure absolute visibility, only relative
> > to subsequent memory accesses on the same processor. So just placing a
> 
> It's my time to be a bit confused, if smp_*mb() do not ensure absolute
> visibility on other CPUs, how can we ensure that ?

smp_wmb()/smp_rmb() do not provide any waiting; they are not
synchronisation primitives. You have to use spinlocks or some other
polling (and, of course, barriers for relative ordering of memory
reads/writes).

> > barrier at the end of a function does not mean much, it only shows half
> > of the problem it is trying to solve.
> >
> > How are the secondary CPUs using this information? AFAICT, secondaries
> > call smp_store_cpu_info() which also go through each CPU in
> > update_siblings_mask(). Is there any race here that smp_wmb() is trying
> > to solve?
> 
> The fields will be used to construct topology so we must ensure their
> visibility

I wonder whether you need spinlocks around the topology updating code.
Mark Brown Feb. 12, 2014, 12:34 p.m. UTC | #11
On Wed, Feb 12, 2014 at 10:27:16AM +0000, Catalin Marinas wrote:
> On Wed, Feb 12, 2014 at 08:04:54AM +0000, Vincent Guittot wrote:

> > The fields will be used to construct topology so we must ensure their
> > visibility

> I wonder whether you need spinlocks around the topology updating code.

It certainly feels a lot safer to have them; if we can't convince
ourselves that the code is safe without them, then there's very little
cost in having them, so we may as well err on the side of doing that.
Does that seem reasonable?
Lorenzo Pieralisi Feb. 21, 2014, 3:01 p.m. UTC | #12
On Mon, Feb 10, 2014 at 01:02:01PM +0000, Mark Brown wrote:

[...]

> +void store_cpu_topology(unsigned int cpuid)
> +{
> +	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
> +
> +	/* DT should have been parsed by the time we get here */
> +	if (cpuid_topo->core_id == -1)
> +		pr_info("CPU%u: No topology information configured\n", cpuid);
> +	else
> +		update_siblings_masks(cpuid);

If the DT does not contain a proper topology, the scheduler seems to go
for a toss. I tried to track it down, and it seems it expects the
topology cpumasks to be initialized regardless (e.g. to the possible
mask); they cannot be left empty.

Either update_siblings_masks is called regardless, or the possible mask
must be copied to the topology masks.

I will have a thorough look to check if the scheduler has a fall-back
mechanism.

On top of that, the pr_info message is quite annoying and should
probably be downgraded or removed altogether.

Furthermore, leaving core_id as -1 should be fine, but I will have to
take a proper look into the scheduler topology code to double-check that.

Lorenzo
Mark Brown Feb. 22, 2014, 2:06 a.m. UTC | #13
On Fri, Feb 21, 2014 at 03:01:40PM +0000, Lorenzo Pieralisi wrote:

> If the DT does not contain a proper topology the scheduler seem to go for a
> toss. I tried to track it down and it seems it expects topology cpumasks to be
> initialized regardless (eg to possible mask), they cannot be left empty.

Could you be more specific, please?  I didn't notice anything in
particular in my testing, but then the fact that it's a model does
obscure a lot of things.

> On top of that, the pr_info message is quite annoying and should be
> probably downgraded or removed altogether.

This was deliberate: since we are not willing to use the MPIDR
information to discover the topology, we need to get the information into
the DT bindings in order to discover it.  Even in an SMP system there is
a difference in how closely attached the cores are, so it seems like we
should be expecting a description of the topology.
Lorenzo Pieralisi Feb. 22, 2014, 12:26 p.m. UTC | #14
On Sat, Feb 22, 2014 at 02:06:02AM +0000, Mark Brown wrote:
> On Fri, Feb 21, 2014 at 03:01:40PM +0000, Lorenzo Pieralisi wrote:
> 
> > If the DT does not contain a proper topology the scheduler seem to go for a
> > toss. I tried to track it down and it seems it expects topology cpumasks to be
> > initialized regardless (eg to possible mask), they cannot be left empty.
> 
> Could you be more specific please?  I didn't notice anything particular
> in my testing but then the fact that it's a model does obscure a lot of
> things.

I will send you a backtrace, config file, commit.

The problem is hit when CONFIG_SCHED_SMT is on and there is no cpu-map
in the dts.

What's a model? And if you mean the processor model, what can it possibly
obscure as far as this patch is concerned?

> > On top of that, the pr_info message is quite annoying and should be
> > probably downgraded or removed altogether.
> 
> This was deliberate - since we are not willing to use the MPIDR
> information to discover the topology we need to get the information into
> the DT bindings in order to discover it.  Even in a SMP system there is
> a difference in how closely attached the cores are so it seems like we
> should be expecting a description of the topology.

We have to make a decision. Either we rely on MPIDR_EL1 as a fallback,
or we barf on missing topology nodes (or we just set up a flat topology
if the cpu-map is missing and do not log anything).

Thanks,
Lorenzo
Mark Brown Feb. 23, 2014, 2:09 a.m. UTC | #15
On Sat, Feb 22, 2014 at 12:26:48PM +0000, Lorenzo Pieralisi wrote:
> On Sat, Feb 22, 2014 at 02:06:02AM +0000, Mark Brown wrote:
> > On Fri, Feb 21, 2014 at 03:01:40PM +0000, Lorenzo Pieralisi wrote:

> > > If the DT does not contain a proper topology the scheduler seem to go for a
> > > toss. I tried to track it down and it seems it expects topology cpumasks to be
> > > initialized regardless (eg to possible mask), they cannot be left empty.

> > Could you be more specific please?  I didn't notice anything particular
> > in my testing but then the fact that it's a model does obscure a lot of
> > things.

> I will send you a backtrace, config file, commit.

> Problem is hit when CONFIG_SCHED_SMT is on and there is no cpu-map in
> the dts.

Interesting...  I did test this incrementally during development and
didn't see any issues, though there have been many stylistic updates
since, so I have to confess it's probably been a while since I ran that
test.  I'll take a look when I have a stable enough network connection
to run the models (I'm on a Shinkansen at the minute, so my connection
keeps dropping out).

> What's a model ? And if you mean the processor model, what can it possibly
> obscure as long as this patch is concerned ?

A Fast or Foundation model.  Your only description was "went for a toss",
so I had no idea what the problem was; I was guessing that this was some
sort of issue with performance, like only using one core or something.
If it's a crash then using the models won't make a difference, but for
performance issues the emulation means it's not always apparent when
using the system whether the kernel is performing poorly or the
emulation is just slow.

> > > On top of that, the pr_info message is quite annoying and should be
> > > probably downgraded or removed altogether.

> > This was deliberate - since we are not willing to use the MPIDR
> > information to discover the topology we need to get the information into
> > the DT bindings in order to discover it.  Even in a SMP system there is
> > a difference in how closely attached the cores are so it seems like we
> > should be expecting a description of the topology.

> We have to make a decision. Either we rely on MPIDR_EL1 as a fallback or
> we barf on missing topology nodes (or we just set-up a flat topology if
> the cpu-map is missing and do not log anything).

Indeed.  My personal preference would be that we fall back to MPIDR if
we don't have topology information from the firmware (since it's always
possible that the silicon does the right thing), or, failing that, that
we insist on topology information from the firmware.  Once systems are
out in the wild it's potentially painful to get the data added to DTs,
so pushing for the information in case we need it in the future seems
like the safest approach in cases like this, where it's not going to be
too much work to provide.

Patch

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 27bbcfc7202a..fea7b477676b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -164,6 +164,30 @@  config SMP
 
 	  If you don't know what to do here, say N.
 
+config CPU_TOPOLOGY
+	bool "Support CPU topology definition"
+	depends on SMP
+	default y
+	help
+	  Support CPU topology definition, based on configuration
+	  provided by the firmware.
+
+config SCHED_MC
+	bool "Multi-core scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Multi-core scheduler support improves the CPU scheduler's decision
+	  making when dealing with multi-core CPU chips at a cost of slightly
+	  increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+	bool "SMT scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Improves the CPU scheduler's decision making when dealing with
+	  MultiThreading at a cost of slightly increased overhead in some
+	  places. If unsure say N here.
+
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 000000000000..c8a47e8f452b
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@ 
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+	int thread_id;
+	int core_id;
+	int cluster_id;
+	cpumask_t thread_sibling;
+	cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu)	(cpu_topology[cpu].cluster_id)
+#define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable()	(cpu_topology[0].cluster_id != -1)
+#define smt_capable()	(cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2d4554b13410..252b62181532 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -20,6 +20,7 @@  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND)	+= sleep.o suspend.o
 arm64-obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY)	+= topology.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 7cfb92a4ab66..9660750f34ba 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@  int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	return ret;
 }
 
+static void smp_store_cpu_info(unsigned int cpuid)
+{
+	store_cpu_topology(cpuid);
+}
+
 /*
  * This is the secondary CPU boot entry.  We're using this CPUs
  * idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@  asmlinkage void secondary_start_kernel(void)
 	 */
 	notify_cpu_starting(cpu);
 
+	smp_store_cpu_info(cpu);
+
 	/*
 	 * OK, now it's safe to let the boot CPU continue.  Wait for
 	 * the CPU migration code to notice that the CPU is online
@@ -390,6 +397,10 @@  void __init smp_prepare_cpus(unsigned int max_cpus)
 	int err;
 	unsigned int cpu, ncores = num_possible_cpus();
 
+	init_cpu_topology();
+
+	smp_store_cpu_info(smp_processor_id());
+
 	/*
 	 * are we trying to boot more cores than exist?
 	 */
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 000000000000..a6d4ed2d69c0
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,91 @@ 
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+	int cpu;
+
+	/* update core and thread sibling masks */
+	for_each_possible_cpu(cpu) {
+		cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+		if (cpuid_topo->core_id != cpu_topo->core_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+	}
+	smp_wmb();
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+
+	/* DT should have been parsed by the time we get here */
+	if (cpuid_topo->core_id == -1)
+		pr_info("CPU%u: No topology information configured\n", cpuid);
+	else
+		update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+	unsigned int cpu;
+
+	/* init core mask and power*/
+	for_each_possible_cpu(cpu) {
+		struct cpu_topology *cpu_topo = &cpu_topology[cpu];
+
+		cpu_topo->thread_id = -1;
+		cpu_topo->core_id =  -1;
+		cpu_topo->cluster_id = -1;
+		cpumask_clear(&cpu_topo->core_sibling);
+		cpumask_clear(&cpu_topo->thread_sibling);
+	}
+	smp_wmb();
+}