From patchwork Thu Apr  7 18:42:54 2022
X-Patchwork-Submitter: Nico Pache
X-Patchwork-Id: 12805604
From: Nico Pache
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Rafael Aquini, Waiman Long, Baoquan He, Christoph von Recklinghausen,
    Don Dutile, "Herton R. Krzesinski", David Rientjes, Michal Hocko,
    Andrea Arcangeli, Andrew Morton, Davidlohr Bueso, Thomas Gleixner,
    Peter Zijlstra, Ingo Molnar, Joel Savitz, Darren Hart
Subject: [PATCH v6] oom_kill.c: futex: Don't OOM reap the VMA containing
 the robust_list_head
Date: Thu, 7 Apr 2022 14:42:54 -0400
Message-Id: <20220407184254.3612387-1-npache@redhat.com>

The pthread struct is allocated on
PRIVATE|ANONYMOUS memory [1] which can be targeted by the oom reaper.
This mapping is used to store the futex robust list head; the kernel
does not keep a copy of the robust list and instead references a
userspace address to maintain the robustness during a process death.

A race can occur between exit_mm and the oom reaper that allows the oom
reaper to free the memory of the futex robust list before the exit path
has handled the futex death:

    CPU1                                CPU2
    --------------------------------------------------------------------
    page_fault
    do_exit "signal"
    wake_oom_reaper
                                        oom_reaper
                                        oom_reap_task_mm (invalidates mm)
    exit_mm
    exit_mm_release
    futex_exit_release
    futex_cleanup
    exit_robust_list
    get_user (EFAULT- can't access memory)

If the get_user EFAULT's, the kernel will be unable to recover the
waiters on the robust_list, leaving userspace mutexes hung indefinitely.

Use the robust_list address stored in the kernel to skip the VMA that
holds it, allowing a successful futex_cleanup.

Theoretically a failure can still occur if there are locks mapped as
PRIVATE|ANON; however, the robust futexes are a best-effort approach.
This patch only strengthens that best-effort. The following case can
still fail:

    robust head (skipped) -> private lock (reaped) -> shared lock (skipped)

Reproducer: https://gitlab.com/jsavitz/oom_futex_reproducer

[1] https://elixir.bootlin.com/glibc/latest/source/nptl/allocatestack.c#L370

Fixes: 212925802454 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
Cc: Rafael Aquini
Cc: Waiman Long
Cc: Baoquan He
Cc: Christoph von Recklinghausen
Cc: Don Dutile
Cc: Herton R. Krzesinski
Cc: David Rientjes
Cc: Michal Hocko
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Davidlohr Bueso
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Joel Savitz
Cc: Darren Hart
Co-developed-by: Joel Savitz
Signed-off-by: Joel Savitz
Signed-off-by: Nico Pache
---
 include/linux/oom.h |  3 ++-
 mm/mmap.c           |  3 ++-
 mm/oom_kill.c       | 14 +++++++++++---
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/oom.h b/include/linux/oom.h
index 2db9a1432511..580c95a0541d 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -106,7 +106,8 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
 	return 0;
 }
 
-bool __oom_reap_task_mm(struct mm_struct *mm);
+bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
+			__user *robust_list);
 
 long oom_badness(struct task_struct *p, unsigned long totalpages);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 3aa839f81e63..c14fe6f8e9a5 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3126,7 +3126,8 @@ void exit_mmap(struct mm_struct *mm)
		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
		 * __oom_reap_task_mm() will not block.
		 */
-		(void)__oom_reap_task_mm(mm);
+		(void)__oom_reap_task_mm(mm, current->robust_list);
+
		set_bit(MMF_OOM_SKIP, &mm->flags);
	}

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 7ec38194f8e1..727cfc3bd284 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -509,9 +509,11 @@ static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
 static struct task_struct *oom_reaper_list;
 static DEFINE_SPINLOCK(oom_reaper_lock);
 
-bool __oom_reap_task_mm(struct mm_struct *mm)
+bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
+			__user *robust_list)
 {
 	struct vm_area_struct *vma;
+	unsigned long head = (unsigned long) robust_list;
 	bool ret = true;
 
 	/*
@@ -526,6 +528,11 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
 			continue;
 
+		if (vma->vm_start <= head && vma->vm_end > head) {
+			pr_info("oom_reaper: skipping vma, contains robust_list");
+			continue;
+		}
+
 		/*
 		 * Only anonymous pages have a good chance to be dropped
 		 * without additional steps which we cannot afford as we
@@ -587,7 +594,7 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	trace_start_task_reaping(tsk->pid);
 
 	/* failed to reap part of the address space. Try again later */
-	ret = __oom_reap_task_mm(mm);
+	ret = __oom_reap_task_mm(mm, tsk->robust_list);
 	if (!ret)
 		goto out_finish;
 
@@ -1190,7 +1197,8 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
	 * Check MMF_OOM_SKIP again under mmap_read_lock protection to ensure
	 * possible change in exit_mmap is seen
	 */
-	if (!test_bit(MMF_OOM_SKIP, &mm->flags) && !__oom_reap_task_mm(mm))
+	if (!test_bit(MMF_OOM_SKIP, &mm->flags) &&
+	    !__oom_reap_task_mm(mm, p->robust_list))
		ret = -EAGAIN;
	mmap_read_unlock(mm);