From patchwork Fri Oct 22 07:46:19 2021
X-Patchwork-Submitter: Andrea Righi <andrea.righi@canonical.com>
X-Patchwork-Id: 12577271
From: Andrea Righi <andrea.righi@canonical.com>
To: Andrew Morton
Cc: Yang Shi, Minchan Kim, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH] mm: fix sleeping copy_huge_page called from atomic context
Date: Fri, 22 Oct 2021 09:46:19 +0200
Message-Id: <20211022074619.57355-1-andrea.righi@canonical.com>
X-Mailer: git-send-email 2.32.0

copy_huge_page() can be called with mapping->private_lock held from
__buffer_migrate_page() -> migrate_page_copy(), so it is not safe to
call cond_resched() in this context.

Introduce migrate_page_copy_nowait() and copy_huge_page_nowait()
variants that can be used from atomic context.

The downside of this change is that we may experience temporary soft
lockups when copying large huge pages on very slow systems, but this
allows us to prevent potential deadlocks.
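To make the intended usage concrete, here is a minimal, illustrative
sketch (not code from this patch; the mapping/newpage/page variables
simply stand in for the ones used in __buffer_migrate_page()) of when
each variant applies:

	/*
	 * Illustrative sketch only, assuming the same locking situation
	 * as __buffer_migrate_page(): the copy runs while a spinlock is
	 * held, so the non-sleeping variant must be used there.
	 */
	spin_lock(&mapping->private_lock);
	/*
	 * Atomic context: cond_resched() here would trigger a "sleeping
	 * function called from invalid context" splat, so use the
	 * non-sleeping copy.
	 */
	migrate_page_copy_nowait(newpage, page);
	spin_unlock(&mapping->private_lock);

	/*
	 * Process context with no spinlocks held: keep the sleeping
	 * variant, so copying very large huge pages still offers
	 * reschedule points and does not cause soft lockups.
	 */
	migrate_page_copy(newpage, page);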
Link: https://syzkaller.appspot.com/bug?id=683b472eb7539d56da69de85f4bfb4b9af67f7ec
Fixes: 79789db03fdd ("mm: Make copy_huge_page() always available")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
 include/linux/migrate.h | 10 +++++++++-
 include/linux/mm.h      | 10 +++++++++-
 mm/migrate.c            |  8 ++++----
 mm/util.c               |  5 +++--
 4 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c8077e936691..3dc6dab9a3f7 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -52,7 +52,15 @@ extern struct page *alloc_migration_target(struct page *page, unsigned long priv
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 
 extern void migrate_page_states(struct page *newpage, struct page *page);
-extern void migrate_page_copy(struct page *newpage, struct page *page);
+extern void __migrate_page_copy(struct page *newpage, struct page *page, bool atomic);
+static inline void migrate_page_copy(struct page *newpage, struct page *page)
+{
+	return __migrate_page_copy(newpage, page, false);
+}
+static inline void migrate_page_copy_nowait(struct page *newpage, struct page *page)
+{
+	return __migrate_page_copy(newpage, page, true);
+}
 extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page);
 extern int migrate_page_move_mapping(struct address_space *mapping,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..1c96bb084366 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -907,7 +907,15 @@ void __put_page(struct page *page);
 void put_pages_list(struct list_head *pages);
 
 void split_page(struct page *page, unsigned int order);
-void copy_huge_page(struct page *dst, struct page *src);
+void __copy_huge_page(struct page *dst, struct page *src, bool atomic);
+static inline void copy_huge_page(struct page *dst, struct page *src)
+{
+	__copy_huge_page(dst, src, false);
+}
+static inline void copy_huge_page_nowait(struct page *dst, struct page *src)
+{
+	__copy_huge_page(dst, src, true);
+}
 
 /*
  * Compound pages have a destructor function.  Provide a
diff --git a/mm/migrate.c b/mm/migrate.c
index 1852d787e6ab..d8bc0586d157 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -613,16 +613,16 @@ void migrate_page_states(struct page *newpage, struct page *page)
 }
 EXPORT_SYMBOL(migrate_page_states);
 
-void migrate_page_copy(struct page *newpage, struct page *page)
+void __migrate_page_copy(struct page *newpage, struct page *page, bool atomic)
 {
 	if (PageHuge(page) || PageTransHuge(page))
-		copy_huge_page(newpage, page);
+		__copy_huge_page(newpage, page, atomic);
 	else
 		copy_highpage(newpage, page);
 
 	migrate_page_states(newpage, page);
 }
-EXPORT_SYMBOL(migrate_page_copy);
+EXPORT_SYMBOL(__migrate_page_copy);
 
 /************************************************************
  *                    Migration functions
@@ -755,7 +755,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
 	} while (bh != head);
 
 	if (mode != MIGRATE_SYNC_NO_COPY)
-		migrate_page_copy(newpage, page);
+		migrate_page_copy_nowait(newpage, page);
 	else
 		migrate_page_states(newpage, page);
 
diff --git a/mm/util.c b/mm/util.c
index bacabe446906..f84e65643d1d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -750,12 +750,13 @@ int __page_mapcount(struct page *page)
 }
 EXPORT_SYMBOL_GPL(__page_mapcount);
 
-void copy_huge_page(struct page *dst, struct page *src)
+void __copy_huge_page(struct page *dst, struct page *src, bool atomic)
 {
 	unsigned i, nr = compound_nr(src);
 
 	for (i = 0; i < nr; i++) {
-		cond_resched();
+		if (!atomic)
+			cond_resched();
 		copy_highpage(nth_page(dst, i), nth_page(src, i));
 	}
 }
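For readers skimming the diff, this is how the copy helper ends up
behaving after the patch. The snippet below is reconstructed from the
mm/util.c hunk above with explanatory comments added; it is not a
verbatim copy of the file:

	/* Reconstructed from the mm/util.c hunk above, comments added. */
	void __copy_huge_page(struct page *dst, struct page *src, bool atomic)
	{
		/* e.g. 512 subpages for a 2MB transparent huge page on x86-64 */
		unsigned i, nr = compound_nr(src);

		for (i = 0; i < nr; i++) {
			/*
			 * Only offer a reschedule point when the caller is
			 * not in atomic context: atomic callers accept the
			 * risk of a temporary soft lockup on very large
			 * pages rather than sleeping under a spinlock.
			 */
			if (!atomic)
				cond_resched();
			copy_highpage(nth_page(dst, i), nth_page(src, i));
		}
	}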