
[-next,for,tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()

Message ID 20200928124457.27289-1-lukas.bulwahn@gmail.com (mailing list archive)
State New, archived
Series [-next,for,tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()

Commit Message

Lukas Bulwahn Sept. 28, 2020, 12:44 p.m. UTC
Commit b6724f118d44 ("prctl: Hook L1D flushing in via prctl") added
validity checks to enable_l1d_flush_for_task(), but introduced some
superfluous local variables in that implementation.

Running make clang-analyzer on an x86_64 tinyconfig build caught my attention with:

  arch/x86/mm/tlb.c:332:2: warning: Value stored to 'cpu' is never read \
  [clang-analyzer-deadcode.DeadStores]
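
[For context, the flagged dead store is the first statement of
enable_l1d_flush_for_task(), visible in the diff below:

	cpu = get_cpu();

The value assigned to 'cpu' is never read afterwards.]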

Compilers detect these superfluous local variables and assignments and
optimize them away anyway, so the resulting binary is identical before
and after this change.

Simplify the code and remove superfluous local variables to make
clang-analyzer happy.

No functional change. No change in binary with supported compilers.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
---
applies cleanly on next-20200925

Balbir, please review and ack.
Thomas, please pick this minor non-urgent clean-up patch into the x86/pti
branch of tip as follow-up to: 
https://lore.kernel.org/lkml/160026187842.15536.285514864386042510.tip-bot2@tip-bot2/

I quickly confirmed that the binary did not change with this edit to the
source code; the hash of tlb.o remained the same before and after the change.

So, in my setup:
md5sum tlb.o
7c7e096bab0fd87bd2c8437d8c7dc3fa  tlb.o

linux-safety, please verify and validate this change.

 arch/x86/mm/tlb.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Nathan Chancellor Sept. 28, 2020, 8:43 p.m. UTC | #1
On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> [...]

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>



Peter Zijlstra Sept. 29, 2020, 7:12 a.m. UTC | #2
On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6b0f4c88b07c..90515c04d90a 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
>  
>  int enable_l1d_flush_for_task(struct task_struct *tsk)
>  {
> -	int cpu, ret = 0, i;
> +	int i;
>  
>  	/*
>  	 * Do not enable L1D_FLUSH_OUT if
> @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
>  		return -EINVAL;
>  
> -	cpu = get_cpu();
> +	get_cpu();
>  
>  	for_each_cpu(i, &tsk->cpus_mask) {
>  		if (cpu_data(i).smt_active == true) {
> @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  
>  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
>  	put_cpu();
> -	return ret;
> +	return 0;
>  }

If you don't use the return value of get_cpu(), then change it over to
preempt_{dis,en}able(). But this got me looking at the function; wtf is
that garbage supposed to do in the first place?

What do we need to disable preemption for?

Please explain the desired semantics against sched_setaffinity().
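
[For reference, a minimal sketch of the substitution suggested above,
leaving the deeper semantic questions aside; it mirrors the function
from the patch with {get,put}_cpu() swapped for preempt_{dis,en}able():

int enable_l1d_flush_for_task(struct task_struct *tsk)
{
	int i;

	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
	    !static_cpu_has(X86_FEATURE_FLUSH_L1D))
		return -EINVAL;

	/* preempt_disable() stands in for the former get_cpu() here */
	preempt_disable();

	for_each_cpu(i, &tsk->cpus_mask) {
		if (cpu_data(i).smt_active) {
			preempt_enable();
			return -EINVAL;
		}
	}

	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
	preempt_enable();
	return 0;
}
]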


Lukas Bulwahn Sept. 29, 2020, 8:33 a.m. UTC | #3
On Tue, 29 Sep 2020, Peter Zijlstra wrote:

> On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> > [...]
> 
> If you don't use the return value of get_cpu(), then change it over to
> preempt_{dis,en}able(). But this got me looking at the function; wtf is
> that garbage supposed to do in the first place?

I also thought that preempt_{dis,en}able() would do, but figured maybe
Balbir just considered {get,put}_cpu() stylistically nicer... so I stayed
with the functions as-is.

> 
> What do we need to disable preemption for?
>

I have no clue... not a good premise for touching the code, but I just 
wanted to make clang-analyzer happy without modifying any semantics.

Balbir, can you help out here? What was your intent?
 
> Please explain the desired semantics against sched_setaffinity().
> 

I am happy to send a proper v2 once I understand whether disabling
preemption is required and whether the preempt_{dis,en}able() functions
are preferred to the {get,put}_cpu() functions.

Lukas


Peter Zijlstra Sept. 29, 2020, 8:37 a.m. UTC | #4
On Tue, Sep 29, 2020 at 09:12:11AM +0200, Peter Zijlstra wrote:
> On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> > [...]
> 
> If you don't use the return value of get_cpu(), then change it over to
> preempt_{dis,en}able(). But this got me looking at the function; wtf is
> that garbage supposed to do in the first place?
> 
> What do we need to disable preemption for?
> 
> Please explain the desired semantics against sched_setaffinity().

Here, I fixed it..

---
 arch/x86/mm/tlb.c | 24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..f02a2f1909da 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,31 +316,13 @@ EXPORT_SYMBOL_GPL(leave_mm);
 
 int enable_l1d_flush_for_task(struct task_struct *tsk)
 {
-	int cpu, ret = 0, i;
-
-	/*
-	 * Do not enable L1D_FLUSH_OUT if
-	 * b. The CPU is not affected by the L1TF bug
-	 * c. The CPU does not have L1D FLUSH feature support
-	 * c. The task's affinity is on cores with SMT on.
-	 */
-
 	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
+	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D) ||
+	    sched_smt_active())
 		return -EINVAL;
 
-	cpu = get_cpu();
-
-	for_each_cpu(i, &tsk->cpus_mask) {
-		if (cpu_data(i).smt_active == true) {
-			put_cpu();
-			return -EINVAL;
-		}
-	}
-
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-	put_cpu();
-	return ret;
+	return 0;
 }
 
 int disable_l1d_flush_for_task(struct task_struct *tsk)


Thomas Gleixner Sept. 30, 2020, 3:40 p.m. UTC | #5
On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> Here, I fixed it..

Well, no. What Balbir is trying to do here is to establish whether a
task runs on a !SMT core. sched_smt_active() is system wide, but their
setup is to have a bunch of SMT enabled cores and cores where SMT is off
because the sibling is offlined. They affine these processes to non-SMT
cores, and the check there validates that before it enables that flush
thingy.

Of course this is best effort voodoo because if all CPUs in the mask are
offlined then the task is moved to an SMT enabled one where L1D flush is
useless. Though offlining their workhorse CPUs is probably not the daily
business for obvious raisins.

Thanks,

        tglx
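
[Concretely, the per-task check being described is the loop from the
patch; here wrapped as a hypothetical helper, for illustration only:

static bool task_on_smt_free_cores(struct task_struct *tsk)
{
	int i;

	/* Fail if any CPU the task may run on has an active SMT sibling */
	for_each_cpu(i, &tsk->cpus_mask)
		if (cpu_data(i).smt_active)
			return false;
	return true;
}
]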




Lukas Bulwahn Sept. 30, 2020, 4:53 p.m. UTC | #6
On Wed, 30 Sep 2020, Thomas Gleixner wrote:

> On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> > Here, I fixed it..
> 
> Well, no. What Balbir is trying to do here is to establish whether a
> task runs on a !SMT core. sched_smt_active() is system wide, but their
> setup is to have a bunch of SMT enabled cores and cores where SMT is off
> because the sibling is offlined. They affine these processes to non-SMT
> cores, and the check there validates that before it enables that flush
> thingy.
> 
> Of course this is best effort voodoo because if all CPUs in the mask are
> offlined then the task is moved to an SMT enabled one where L1D flush is
> useless. Though offlining their workhorse CPUs is probably not the daily
> business for obvious raisins.
>

Thanks, Thomas.

So, I will keep the semantics as-is, clean up the patch with 
preempt_{dis,en}able() and send out a v2.

Lukas 


Peter Zijlstra Sept. 30, 2020, 5:03 p.m. UTC | #7
On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> > Here, I fixed it..
> 
> Well, no. What Balbir is trying to do here is to establish whether a
> task runs on a !SMT core. sched_smt_active() is system wide, but their
> setup is to have a bunch of SMT enabled cores and cores where SMT is off
> because the sibling is offlined. They affine these processes to non-SMT
> cores, and the check there validates that before it enables that flush
> thingy.

Yes, I see that it does that. But it's still complete shit.

> Of course this is best effort voodoo because if all CPUs in the mask are
> offlined then the task is moved to an SMT enabled one where L1D flush is
> useless. Though offlining their workhorse CPUs is probably not the daily
> business for obvious raisins.

Not only hotplug, you can trivially change the affinity after this
check.

Also, that preempt_disable() in there doesn't actually do anything.
Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
static_cpu_has() and boot_cpu_has() in the same bloody condition and has
a pointless ret variable.

It's shoddy code that only works if you align the planets right. We
really shouldn't provide interfaces that are this bad.

Its correct operation is only by accident.


Thomas Gleixner Sept. 30, 2020, 6 p.m. UTC | #8
On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> Also, that preempt_disable() in there doesn't actually do anything.
> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
> a pointless ret variable.

I absolutely agree and I really missed it when looking at it before
merging. cpus_read_lock()/unlock() is the right thing to do if at all.

> It's shoddy code that only works if you align the planets right. We
> really shouldn't provide interfaces that are this bad.
>
> Its correct operation is only by accident.

True :(

I understand Balbir's problem and it makes some sense to provide a
solution. We can:

    1) reject set_affinity() if the task has that flush muck enabled
       and user space tries to move it to an SMT enabled core

    2) disable the muck if it detects that it suddenly runs on an SMT
       enabled core (hotplug says hello)

       This one is nasty because there is no feedback to user space
       about the wreckage.

Thanks,

        tglx


Peter Zijlstra Sept. 30, 2020, 6:35 p.m. UTC | #9
On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
> > On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> > Also, that preempt_disable() in there doesn't actually do anything.
> > Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
> > static_cpu_has() and boot_cpu_has() in the same bloody condition and has
> > a pointless ret variable.

Also, I forgot to add, it accesses ->cpus_mask without the proper
locking, so it could be reading intermediate state from whatever cpumask
operation that's in progress.

> I absolutely agree and I really missed it when looking at it before
> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
> 
> > It's shoddy code that only works if you align the planets right. We
> > really shouldn't provide interfaces that are this bad.
> >
> > Its correct operation is only by accident.
> 
> True :(
> 
> I understand Balbir's problem and it makes some sense to provide a
> solution. We can:
> 
>     1) reject set_affinity() if the task has that flush muck enabled
>        and user space tries to move it to an SMT enabled core
> 
>     2) disable the muck if it detects that it suddenly runs on an SMT
>        enabled core (hotplug says hello)
> 
>        This one is nasty because there is no feedback to user space
>        about the wreckage.

That's AND, right, not OR, because 1) deals with sched_setaffinity()
and 2) deals with hotplug.

Now 1) requires an arch hook in sched_setaffinity(), something I'm not
keen on providing; once we provide it, who knows what strange and
wonderful things archs will dream up.

And 2) also happens on hot-unplug, when the task's affinity gets
forced because it became empty. No user feedback there either, and
information is lost.


I suppose we can do 2) but send a signal. That would cover all cases and
keep it in arch code. But yes, that's pretty terrible too.


Thomas Gleixner Sept. 30, 2020, 9:38 p.m. UTC | #10
On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> > On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> > Also, that preempt_disable() in there doesn't actually do anything.
>> > Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> > static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> > a pointless ret variable.
>
> Also, I forgot to add, it accesses ->cpus_mask without the proper
> locking, so it could be reading intermediate state from whatever cpumask
> operation that's in progress.

Yes. I saw that after hitting send. :(

>> I absolutely agree and I really missed it when looking at it before
>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>> 
>> > It's shoddy code that only works if you align the planets right. We
>> > really shouldn't provide interfaces that are this bad.
>> >
>> > Its correct operation is only by accident.
>> 
>> True :(
>> 
>> I understand Balbir's problem and it makes some sense to provide a
>> solution. We can:
>> 
>>     1) reject set_affinity() if the task has that flush muck enabled
>>        and user space tries to move it to an SMT enabled core
>> 
>>     2) disable the muck if it detects that it suddenly runs on an SMT
>>        enabled core (hotplug says hello)
>> 
>>        This one is nasty because there is no feedback to user space
>>        about the wreckage.
>
> That's AND, right, not OR, because 1) deals with sched_setaffinity()
> and 2) deals with hotplug.

It was meant as AND of course.

> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
> keen on providing; once we provide it, who knows what strange and
> wonderful things archs will dream up.

I don't think so. We can have that magic in core:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
        	return true;
        /* Do magic stuff */
        return res;
}
#else
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	return true;
}
#endif

It's a pretty well-defined problem, and having the magic in core code
avoids an arch hook, which would allow abuse of all sorts.
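
[For illustration, a core-code call site could then look roughly like
this; the placement inside sched_setaffinity() is hypothetical:

	if (!paranoid_l1d_valid(p, new_mask))
		return -EINVAL;
]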

And the same applies to enable_l1d_flush_for_task(). The only
architecture-specific nonsense is checking whether the CPU bug is
present and whether the hardware supports L1D flushing.

So we can have:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
int paranoid_l1d_enable(struct task_struct *tsk)
{
        /* Do the SMT validation under the proper locks */
        if (!res)
        	set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
        return res;
}
#endif

> And 2) also happens on hot-unplug, when the task's affinity gets
> forced because it became empty. No user feedback there either, and
> information is lost.

Of course. It's both that suddenly SMT gets enabled on a core which was
isolated and when the last isolated core in the task's CPU mask goes
offline.

> I suppose we can do 2) but send a signal. That would cover all cases and
> keep it in arch code. But yes, that's pretty terrible too.

Bah. I just looked at the condition to flush:

        if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
                (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

That fails to flush when SMT is disabled globally. Balbir?

Of course this should be:

        if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

Now we can make this:

        if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
                if (!this_cpu_read(cpu_info.smt_active))
                        l1d_flush_hw();
                else
                        task_work_add(...);
        }

And that task work clears the flag and sends a signal. We're not going
to send a signal from switch_mm() ....

Thanks,

        tglx


Singh, Balbir via lists.elisa.tech Sept. 30, 2020, 10:46 p.m. UTC | #11
On 1/10/20 4:00 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> Also, that preempt_disable() in there doesn't actually do anything.
>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> a pointless ret variable.
> 

I was being a bit crazy in mixing the two, considering that there might
be CPUs that do not support L1D flush (others in the same system might,
which is insane).

> I absolutely agree and I really missed it when looking at it before
> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
> 

It seems like the right thing to do; get_cpu() was the old method.
The idea is to use cpu_data(i) in a hotplug-safe manner.

>> It's shoddy code that only works if you align the planets right. We
>> really shouldn't provide interfaces that are this bad.
>>
>> Its correct operation is only by accident.



> 
> True :(
> 
> I understand Balbir's problem and it makes some sense to provide a
> solution. We can:
> 
>     1) reject set_affinity() if the task has that flush muck enabled
>        and user space tries to move it to an SMT enabled core

I thought of this, and it would be difficult for users to debug:
taskset -c would not work on applications that flush, etc.

> 
>     2) disable the muck if it detects that it suddenly runs on an SMT
>        enabled core (hotplug says hello)
> 
>        This one is nasty because there is no feedback to user space
>        about the wreckage.

Yes, agreed.

> 

Trying to look at the concerns, I wonder if this can still be saved

-       if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-                       !static_cpu_has(X86_FEATURE_FLUSH_L1D))
+       if (!static_cpu_has(X86_BUG_L1TF) ||
+               !static_cpu_has(X86_FEATURE_FLUSH_L1D))
                return -EINVAL;

-       cpu = get_cpu();

+       cpus_read_lock();
        for_each_cpu(i, &tsk->cpus_mask) {
                if (cpu_data(i).smt_active == true) {
-                       put_cpu();
+                       cpus_read_unlock();
                        return -EINVAL;
                }
        }
+       cpus_read_unlock();

        set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-       put_cpu();
        return ret;
 }

I don't like the idea of iterating over the CPUs in the cpumask to check
that they all have SMT disabled, but that is a requirement for the flush.

Balbir


Singh, Balbir via lists.elisa.tech Sept. 30, 2020, 10:59 p.m. UTC | #12
On 1/10/20 7:38 am, Thomas Gleixner wrote:
> [...]
>
> Bah. I just looked at the condition to flush:
> 
>         if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
>                 (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> That fails to flush when SMT is disabled globally. Balbir?

It should have been

!sched_smt_active() || (cond)

and not

sched_smt_active() && (cond)

I'll fix that up, but your simplification below works as well.
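
[Spelled out, the intended condition would read, as a sketch:

	if ((!sched_smt_active() || !this_cpu_read(cpu_info.smt_active)) &&
	    (prev_mm & LAST_USER_MM_L1D_FLUSH))
		l1d_flush_hw();
]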




> 
> Of course this should be:
> 
>         if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> Now we can make this:
> 
>         if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>                 if (!this_cpu_read(cpu_info.smt_active))
>                         l1d_flush_hw();
>                 else
>                         task_work_add(...);
>         }
> 
> And that task work clears the flag and sends a signal. We're not going
> to send a signal from switch_mm() ....
> 

Yes, I see MCE handling uses a similar pattern, so SIGBUS for a task
that migrates/moves to an SMT enabled core?

Thanks,
Balbir



Singh, Balbir via lists.elisa.tech Sept. 30, 2020, 11:49 p.m. UTC | #13
On 1/10/20 7:38 am, Thomas Gleixner wrote:

> [...]
>
> Now we can make this:
> 
>         if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>                 if (!this_cpu_read(cpu_info.smt_active))
>                         l1d_flush_hw();
>                 else
>                         task_work_add(...);
>         }
> 
> And that task work clears the flag and sends a signal. We're not going
> to send a signal from switch_mm() ....
> 
> Thanks,
> 


So this is the change I am playing with. I don't like the idea of killing the task, but it's better than silently not flushing; I guess system administrators will learn with time not to change the affinity of tasks flushing L1D.
For the affinity bits, not being able to change the affinity is better, but not being able to provide feedback as to why is a bit weird as well. I wonder if there are other cases where we might want to lock the affinity of a task for its lifetime.

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..6b0d0a9cd447 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -320,26 +320,15 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
 
 	/*
 	 * Do not enable L1D_FLUSH_OUT if
-	 * b. The CPU is not affected by the L1TF bug
-	 * c. The CPU does not have L1D FLUSH feature support
-	 * c. The task's affinity is on cores with SMT on.
+	 * a. The CPU is not affected by the L1TF bug
+	 * b. The CPU does not have L1D FLUSH feature support
 	 */
 
 	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
+		!boot_cpu_has(X86_FEATURE_FLUSH_L1D))
 		return -EINVAL;
 
-	cpu = get_cpu();
-
-	for_each_cpu(i, &tsk->cpus_mask) {
-		if (cpu_data(i).smt_active == true) {
-			put_cpu();
-			return -EINVAL;
-		}
-	}
-
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-	put_cpu();
 	return ret;
 }
 
@@ -349,6 +338,12 @@ int disable_l1d_flush_for_task(struct task_struct *tsk)
 	return 0;
 }
 
+static void l1d_flush_kill(struct callback_head *ch)
+{
+	clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
+	force_signal(SIGBUS);
+}
+
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	       struct task_struct *tsk)
 {
@@ -443,12 +438,15 @@ static void cond_mitigation(struct task_struct *next)
 	}
 
 	/*
-	 * Flush only if SMT is disabled as per the contract, which is checked
-	 * when the feature is enabled.
+	 * Flush only if SMT is disabled, if flushing is enabled
+	 * and we are on an SMT enabled core, kill the task
 	 */
-	if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
-		(prev_mm & LAST_USER_MM_L1D_FLUSH))
-		l1d_flush_hw();
+	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
+		if (!this_cpu_read(cpu_info.smt_active))
+			l1d_flush_hw();
+		else
+			task_work_add(prev, l1d_flush_kill, true);
+	}
 
 	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);
 }


Singh, Balbir via lists.elisa.tech Oct. 1, 2020, 12:48 a.m. UTC | #14
On 1/10/20 9:49 am, Singh, Balbir wrote:
> [...]
>
> +	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
> +		if (!this_cpu_read(cpu_info.smt_active))
> +			l1d_flush_hw();
> +		else
> +			task_work_add(prev, l1d_flush_kill, true);
> +	}

We have no access to the previous task, and mm->owner depends on MEMCG :)
We can do the magic in mm_mangle_tif_spec_bits(), I suppose.

Balbir




Thomas Gleixner Oct. 1, 2020, 8:17 a.m. UTC | #15
On Thu, Oct 01 2020 at 10:48, Balbir Singh wrote:
> On 1/10/20 9:49 am, Singh, Balbir wrote:
>> [...]
>> +	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>> +		if (!this_cpu_read(cpu_info.smt_active))
>> +			l1d_flush_hw();
>> +		else
>> +			task_work_add(prev, l1d_flush_kill, true);
>> +	}
>
> We have no access to the previous task, and mm->owner depends on MEMCG :)
> We can do the magic in mm_mangle_tif_spec_bits(), I suppose.

No, because we don't have access to prev task there either. Interesting
problem to solve.

Thanks,

        tglx




Peter Zijlstra Oct. 1, 2020, 8:19 a.m. UTC | #16
Your MUA is having trouble wrapping text at 78 chars.

On Thu, Oct 01, 2020 at 09:49:30AM +1000, Singh, Balbir wrote:

> So this is the change I am playing with. I don't like the idea of
> killing the task, but it's better than silently not flushing; I guess
> system administrators will learn with time not to change the
> affinity of tasks flushing L1D. For the affinity bits, not being able
> to change the affinity is better, but not being able to provide
> feedback as to why is a bit weird as well. I wonder if there are
> other cases where we might want to lock the affinity of a task for
> its lifetime.

You can't really do that: hot-unplug can (and will) destroy any affinity
setting, and if the task/admin wants to recover, it needs to be able to
re-set affinity after that.



Patch

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..90515c04d90a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,7 +316,7 @@  EXPORT_SYMBOL_GPL(leave_mm);
 
 int enable_l1d_flush_for_task(struct task_struct *tsk)
 {
-	int cpu, ret = 0, i;
+	int i;
 
 	/*
 	 * Do not enable L1D_FLUSH_OUT if
@@ -329,7 +329,7 @@  int enable_l1d_flush_for_task(struct task_struct *tsk)
 			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
 		return -EINVAL;
 
-	cpu = get_cpu();
+	get_cpu();
 
 	for_each_cpu(i, &tsk->cpus_mask) {
 		if (cpu_data(i).smt_active == true) {
@@ -340,7 +340,7 @@  int enable_l1d_flush_for_task(struct task_struct *tsk)
 
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
 	put_cpu();
-	return ret;
+	return 0;
 }
 
 int disable_l1d_flush_for_task(struct task_struct *tsk)