From patchwork Fri Jun  9 01:38:17 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13273089
Date: Thu, 8 Jun 2023 18:38:17 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox,
    David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman,
    Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
    Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park, Lorenzo Stoakes,
    Huang Ying, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
    Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual, Pasha Tatashin,
    Miaohe Lin, Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
    Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 24/32] mm/migrate_device: allow pte_offset_map_lock() to fail
Message-ID: <1131be62-2e84-da2f-8f45-807b2cbeeec5@google.com>
MIME-Version: 1.0
migrate_vma_collect_pmd(): remove the pmd_trans_unstable() handling
after splitting huge zero pmd, and the pmd_none() handling after
successfully splitting huge page: those are now managed inside
pte_offset_map_lock(), and by "goto again" when it fails.

But the skip after unsuccessful split_huge_page() must stay: it avoids
an endless loop.  The skip when pmd_bad()?  Remove that: it will be
treated as a hole rather than a skip once cleared by
pte_offset_map_lock(), but with different timing that would be so
anyway; and it's arguably best to leave the pmd_bad() handling
centralized there.

migrate_vma_insert_page(): remove comment on the old pte_offset_map()
and old locking limitations; remove the pmd_trans_unstable() check and
just proceed to pte_offset_map_lock(), aborting when it fails (page has
been charged to memcg, but as in other cases, it's uncharged when
freed).

Signed-off-by: Hugh Dickins
Reviewed-by: Alistair Popple
---
 mm/migrate_device.c | 31 ++++---------------------------
 1 file changed, 4 insertions(+), 27 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index d30c9de60b0d..a14af6b12b04 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -83,9 +83,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		if (is_huge_zero_page(page)) {
 			spin_unlock(ptl);
 			split_huge_pmd(vma, pmdp, addr);
-			if (pmd_trans_unstable(pmdp))
-				return migrate_vma_collect_skip(start, end,
-								walk);
 		} else {
 			int ret;
 
@@ -100,16 +97,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			if (ret)
 				return migrate_vma_collect_skip(start, end,
 								walk);
-			if (pmd_none(*pmdp))
-				return migrate_vma_collect_hole(start, end, -1,
-								walk);
 		}
 	}
 
-	if (unlikely(pmd_bad(*pmdp)))
-		return migrate_vma_collect_skip(start, end, walk);
-
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	if (!ptep)
+		goto again;
 	arch_enter_lazy_mmu_mode();
 
 	for (; addr < end; addr += PAGE_SIZE, ptep++) {
@@ -595,27 +588,10 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-
 	if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp))
 		goto abort;
-
-	/*
-	 * Use pte_alloc() instead of pte_alloc_map().  We can't run
-	 * pte_offset_map() on pmds where a huge pmd might be created
-	 * from a different thread.
-	 *
-	 * pte_alloc_map() is safe to use under mmap_write_lock(mm) or when
-	 * parallel threads are excluded by other means.
-	 *
-	 * Here we only have mmap_read_lock(mm).
-	 */
 	if (pte_alloc(mm, pmdp))
 		goto abort;
-
-	/* See the comment in pte_alloc_one_map() */
-	if (unlikely(pmd_trans_unstable(pmdp)))
-		goto abort;
-
 	if (unlikely(anon_vma_prepare(vma)))
 		goto abort;
 	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL))
@@ -650,7 +626,8 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
-
+	if (!ptep)
+		goto abort;
 	if (check_stable_address_space(mm))
 		goto unlock_abort;
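
[ Not part of the patch, just an illustrative sketch of the convention
  relied on above.  migrate_vma_collect_pmd() already has an "again:"
  label at its top, and pte_offset_map_lock() in this series returns
  NULL when *pmdp is no longer a plain page table; "goto again" then
  re-reads the pmd, and a pmd cleared in the meantime is reported as a
  hole, as the commit message notes.  Abbreviated, with most locals and
  the huge-pmd handling omitted:

	again:
		if (pmd_none(*pmdp))
			return migrate_vma_collect_hole(start, end, -1, walk);

		/* ... handle or split a huge pmd, then fall through ... */

		ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
		if (!ptep)
			goto again;	/* pmd changed under us: re-evaluate it */

		arch_enter_lazy_mmu_mode();
		/* ... scan the pte entries under ptl ... */

  migrate_vma_insert_page() does not retry: it simply aborts when
  pte_offset_map_lock() fails, and the memcg charge taken just before is
  dropped when the page is freed. ]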