[RFC,33/37] mm: enable speculative fault handling only for multithreaded user space

Message ID 20210407014502.24091-34-michel@lespinasse.org (mailing list archive)
State New, archived
Series [RFC,01/37] mmap locking API: mmap_lock_is_contended returns a bool

Commit Message

Michel Lespinasse April 7, 2021, 1:44 a.m. UTC
Performance tuning: single-threaded userspace does not benefit from
speculative page faults, so we turn them off to avoid any related
(small) extra overheads.

Signed-off-by: Michel Lespinasse <michel@lespinasse.org>
---
 arch/x86/mm/fault.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Matthew Wilcox April 7, 2021, 2:48 a.m. UTC | #1
On Tue, Apr 06, 2021 at 06:44:58PM -0700, Michel Lespinasse wrote:
> +	/* Only try spf for multithreaded user space faults. */

This comment is misleading ... mm_users will also be incremented for
ptraced programs as well as programs that are having their /proc/$pid/maps
examined, etc.  Maybe:

	/* No need to try spf for single-threaded programs */

Also, please, can we not use an acronym for this feature?  It's not a
speculative page fault.  The page fault is really happening.  We're
trying to handle it under RCU protection (if anything the faultaround
code is the speculative page fault code ...)  This is unlocked page
fault handling, perhaps?

> +	if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
> +		goto no_spf;
> +
>  	count_vm_event(SPF_ATTEMPT);
>  	seq = mmap_seq_read_start(mm);
>  	if (seq & 1)
> @@ -1351,6 +1355,7 @@ void do_user_addr_fault(struct pt_regs *regs,
>  
>  spf_abort:
>  	count_vm_event(SPF_ABORT);
> +no_spf:
>  
>  	/*
>  	 * Kernel-mode access to the user address space should only occur
> -- 
> 2.20.1
> 
>
Patch

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 48b86911a6df..b1a07ca82d59 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1318,6 +1318,10 @@  void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
+	/* Only try spf for multithreaded user space faults. */
+	if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+		goto no_spf;
+
 	count_vm_event(SPF_ATTEMPT);
 	seq = mmap_seq_read_start(mm);
 	if (seq & 1)
@@ -1351,6 +1355,7 @@  void do_user_addr_fault(struct pt_regs *regs,
 
 spf_abort:
 	count_vm_event(SPF_ABORT);
+no_spf:
 
 	/*
 	 * Kernel-mode access to the user address space should only occur