
mm: ksm: fix data-race in __ksm_enter / run_store

Message ID 20220802151550.159076-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series mm: ksm: fix data-race in __ksm_enter / run_store

Commit Message

Kefeng Wang Aug. 2, 2022, 3:15 p.m. UTC
Abhishek reported a data-race issue:

BUG: KCSAN: data-race in __ksm_enter / run_store
write to 0xffffffff881edae0 of 8 bytes by task 6542 on cpu 0:
 run_store+0x19a/0x2d0 mm/ksm.c:2897
 kobj_attr_store+0x44/0x60 lib/kobject.c:824
 sysfs_kf_write+0x16f/0x1a0 fs/sysfs/file.c:136
 kernfs_fop_write_iter+0x2ae/0x370 fs/kernfs/file.c:291
 call_write_iter include/linux/fs.h:2050 [inline]
 new_sync_write fs/read_write.c:504 [inline]
 vfs_write+0x779/0x900 fs/read_write.c:591
 ksys_write+0xde/0x190 fs/read_write.c:644
 __do_sys_write fs/read_write.c:656 [inline]
 __se_sys_write fs/read_write.c:653 [inline]
 __x64_sys_write+0x43/0x50 fs/read_write.c:653
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

read to 0xffffffff881edae0 of 8 bytes by task 6541 on cpu 1:
 __ksm_enter+0x114/0x260 mm/ksm.c:2501
 ksm_madvise+0x291/0x350 mm/ksm.c:2451
 madvise_vma_behavior mm/madvise.c:1039 [inline]
 madvise_walk_vmas mm/madvise.c:1221 [inline]
 do_madvise+0x656/0xeb0 mm/madvise.c:1399
 __do_sys_madvise mm/madvise.c:1412 [inline]
 __se_sys_madvise mm/madvise.c:1410 [inline]
 __x64_sys_madvise+0x64/0x70 mm/madvise.c:1410
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 6541 Comm: syz-executor2-n Not tainted 5.18.0-rc5+ #107
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014

ksm_run is already protected by ksm_thread_mutex in run_store(); we can
take the same lock around the read in __ksm_enter() to avoid the above
issue.

Reported-and-tested-by: Abhishek Shah <abhishek.shah@columbia.edu>
Cc: Gabriel Ryan <gabe@cs.columbia.edu>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/ksm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
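
For context, a sketch of the writer side of the race, abbreviated from
run_store() in v5.18 mm/ksm.c (the reader side is visible in the patch at
the bottom of this page); sysfs parsing and declarations are elided:

	mutex_lock(&ksm_thread_mutex);
	wait_while_offlining();
	if (ksm_run != flags) {
		ksm_run = flags;	/* the plain write flagged by KCSAN */
		if (flags & KSM_RUN_UNMERGE) {
			set_current_oom_origin();
			err = unmerge_and_remove_all_rmap_items();
			clear_current_oom_origin();
			if (err) {
				ksm_run = KSM_RUN_STOP;
				count = err;
			}
		}
	}
	mutex_unlock(&ksm_thread_mutex);

Nothing on the reader side in __ksm_enter() takes ksm_thread_mutex, so the
mutex orders writers against each other but not against that read.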

Comments

Matthew Wilcox Aug. 2, 2022, 3:44 p.m. UTC | #1
On Tue, Aug 02, 2022 at 11:15:50PM +0800, Kefeng Wang wrote:
> ksm_run is already protected by ksm_thread_mutex in run_store(); we can
> take the same lock around the read in __ksm_enter() to avoid the above
> issue.

I don't think this is a great fix.  Why not protect the store with
ksm_mmlist_lock?  ie:

        mutex_lock(&ksm_thread_mutex);
        wait_while_offlining();
        if (ksm_run != flags) {
+		spin_lock(&ksm_mmlist_lock);
                ksm_run = flags;
+		spin_unlock(&ksm_mmlist_lock);
                if (flags & KSM_RUN_UNMERGE) {
                        set_current_oom_origin();
                        err = unmerge_and_remove_all_rmap_items();
                        clear_current_oom_origin();
                        if (err) {
+				spin_lock(&ksm_mmlist_lock);
				ksm_run = KSM_RUN_STOP;
+				spin_unlock(&ksm_mmlist_lock);
...

(I also don't think this is a real bug, because the call to
unmerge_and_remove_all_rmap_items() will "cure" the misplacement of
items in the list, but there's value in shutting up the tools, I suppose)
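
(One reading of why ksm_mmlist_lock is the natural pick here: the racing
read in __ksm_enter() already sits inside a ksm_mmlist_lock critical
section, as the patch context at the bottom of this page shows. A sketch
of the v5.18 reader, comments elided:

	spin_lock(&ksm_mmlist_lock);
	insert_to_mm_slots_hash(mm, mm_slot);
	if (ksm_run & KSM_RUN_UNMERGE)	/* the plain read flagged by KCSAN */
		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
	else
		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
	spin_unlock(&ksm_mmlist_lock);

So moving the writes in run_store() under the same spinlock, as sketched
above, would order both sides without adding any locking to the reader.)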
Gabriel Ryan Aug. 2, 2022, 5:20 p.m. UTC | #2
Hi Matthew,

I don't believe unmerge_and_remove_all_rmap_items() is guaranteed to
execute after an mm has been misplaced.

Consider the following interleaving:

1. Thread A executes __ksm_enter() with KSM_RUN_MERGE set, passing through
   the check at
   https://elixir.bootlin.com/linux/v5.18-rc5/source/mm/ksm.c#L2501
2. Thread B executes run_store(), sets KSM_RUN_UNMERGE, and then also
   executes unmerge_and_remove_all_rmap_items() at
   https://elixir.bootlin.com/linux/v5.18-rc5/source/mm/ksm.c#L2900
3. Thread A completes __ksm_enter() and misplaces the mm behind the
   scanning cursor, since it is still on the KSM_RUN_MERGE path at
   https://elixir.bootlin.com/linux/v5.18-rc5/source/mm/ksm.c#L2504

I also noticed, through manual inspection, another check of the
KSM_RUN_UNMERGE flag that appears racy, at
https://elixir.bootlin.com/linux/v5.18-rc5/source/mm/ksm.c#L2563

Best,

Gabe
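
Laying that interleaving out as a timeline may help; note that
unmerge_and_remove_all_rmap_items() reuses ksm_scan.mm_slot as its walk
cursor, and that nothing orders thread B's plain write to ksm_run against
thread A's plain read (a sketch):

	Thread A (__ksm_enter)             Thread B (run_store)
	----------------------             --------------------
	                                   ksm_run = KSM_RUN_UNMERGE;
	                                   unmerge_and_remove_all_rmap_items()
	                                   starts walking, advancing
	                                   ksm_scan.mm_slot ...
	spin_lock(&ksm_mmlist_lock);
	reads ksm_run; the read is
	unordered against B's write, so
	it can still observe KSM_RUN_MERGE
	list_add_tail(&mm_slot->mm_list,
	              &ksm_scan.mm_slot->mm_list);
	(inserted behind B's walk cursor)
	spin_unlock(&ksm_mmlist_lock);
	                                   ... walk finishes without ever
	                                   visiting the newly added mm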

Andrew Morton Aug. 11, 2022, 11 p.m. UTC | #3
On Tue, 2 Aug 2022 23:15:50 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> Abhishek reported a data-race issue,

OK, but it would be better to perform an analysis of the alleged bug,
describe the potential effects if the race is hit, etc.

> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
>  {
>  	struct mm_slot *mm_slot;
>  	int needs_wakeup;
> +	bool ksm_run_unmerge;
>  
>  	mm_slot = alloc_mm_slot();
>  	if (!mm_slot)
> @@ -2515,6 +2516,10 @@ int __ksm_enter(struct mm_struct *mm)
>  	/* Check ksm_run too?  Would need tighter locking */
>  	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
>  
> +	mutex_lock(&ksm_thread_mutex);
> +	ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
> +	mutex_unlock(&ksm_thread_mutex);
> +
>  	spin_lock(&ksm_mmlist_lock);
>  	insert_to_mm_slots_hash(mm, mm_slot);
>  	/*
> @@ -2527,7 +2532,7 @@ int __ksm_enter(struct mm_struct *mm)
>  	 * scanning cursor, otherwise KSM pages in newly forked mms will be
>  	 * missed: then we might as well insert at the end of the list.
>  	 */
> -	if (ksm_run & KSM_RUN_UNMERGE)
> +	if (ksm_run_unmerge)

run_store() can alter ksm_run right here, so __ksm_enter() is still
acting on the old setting?

>  		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
>  	else
>  		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
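
If the race is ultimately judged benign (per Matthew's parenthetical
upthread), the minimal way to quiet KCSAN without new locking would be to
mark the unsynchronized accesses; a hypothetical sketch, not what this
patch does:

	/* run_store() */
	WRITE_ONCE(ksm_run, flags);

	/* __ksm_enter() */
	if (READ_ONCE(ksm_run) & KSM_RUN_UNMERGE)
		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);

That documents the race rather than closing it; actually closing it needs
the read and the writes to happen under one common lock, e.g.
ksm_mmlist_lock as suggested upthread, since sampling ksm_run earlier
under ksm_thread_mutex still leaves the window Andrew describes.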

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index 2f315c69fa2c..3f1908946a6f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
 {
 	struct mm_slot *mm_slot;
 	int needs_wakeup;
+	bool ksm_run_unmerge;
 
 	mm_slot = alloc_mm_slot();
 	if (!mm_slot)
@@ -2515,6 +2516,10 @@ int __ksm_enter(struct mm_struct *mm)
 	/* Check ksm_run too?  Would need tighter locking */
 	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
 
+	mutex_lock(&ksm_thread_mutex);
+	ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
+	mutex_unlock(&ksm_thread_mutex);
+
 	spin_lock(&ksm_mmlist_lock);
 	insert_to_mm_slots_hash(mm, mm_slot);
 	/*
@@ -2527,7 +2532,7 @@ int __ksm_enter(struct mm_struct *mm)
 	 * scanning cursor, otherwise KSM pages in newly forked mms will be
 	 * missed: then we might as well insert at the end of the list.
 	 */
-	if (ksm_run & KSM_RUN_UNMERGE)
+	if (ksm_run_unmerge)
 		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
 	else
 		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);