From patchwork Thu Sep  2 21:54:18 2021
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 12472995
Date: Thu, 02 Sep 2021 14:54:18 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, hughd@google.com,
 kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, linux-mm@kvack.org,
 mhocko@suse.com, mike.kravetz@oracle.com, mm-commits@vger.kernel.org,
 riel@surriel.com, shakeelb@google.com, shy828301@gmail.com,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 083/212] huge tmpfs: fix fallocate(vanilla) advance over huge pages
Message-ID: <20210902215418.P3lH1STOG%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: Hugh Dickins <hughd@google.com>
Subject: huge tmpfs: fix fallocate(vanilla) advance over huge pages

Patch series "huge tmpfs: shmem_is_huge() fixes and cleanups".

A series of huge tmpfs fixes and cleanups.

This patch (of 9):

shmem_fallocate() goes to a lot of trouble to leave its newly allocated
pages !Uptodate, partly to identify and undo them on failure, partly to
leave the overhead of clearing them until later.  But the huge page case
did not skip to the end of the extent: it walked through the tail pages
one by one and appeared to work just fine, but in doing so it cleared and
Uptodated the huge page, so there was no way to undo it on failure.  And
by setting Uptodate too soon, it messed up both its nr_falloced and
nr_unswapped counts, so that the intended "time to give up" heuristic did
not work at all.

Now advance immediately to the end of the huge extent, with a comment on
why this is more than just an optimization.  But although this speeds up
huge tmpfs fallocation, it does leave the clearing until first use, and
some users may have come to appreciate slow fallocate but fast first use:
if they complain, then we can consider adding a pass to clear at the end.

Link: https://lkml.kernel.org/r/da632211-8e3e-6b1-aee-ab24734429a0@google.com
Link: https://lkml.kernel.org/r/16201bd2-70e-37e2-e89b-5f929430da@google.com
Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/shmem.c |   19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

--- a/mm/shmem.c~huge-tmpfs-fix-fallocatevanilla-advance-over-huge-pages
+++ a/mm/shmem.c
@@ -2719,7 +2719,7 @@ static long shmem_fallocate(struct file
 	inode->i_private = &shmem_falloc;
 	spin_unlock(&inode->i_lock);
 
-	for (index = start; index < end; index++) {
+	for (index = start; index < end; ) {
 		struct page *page;
 
 		/*
@@ -2742,13 +2742,26 @@ static long shmem_fallocate(struct file
 			goto undone;
 		}
 
+		index++;
+		/*
+		 * Here is a more important optimization than it appears:
+		 * a second SGP_FALLOC on the same huge page will clear it,
+		 * making it PageUptodate and un-undoable if we fail later.
+		 */
+		if (PageTransCompound(page)) {
+			index = round_up(index, HPAGE_PMD_NR);
+			/* Beware 32-bit wraparound */
+			if (!index)
+				index--;
+		}
+
 		/*
 		 * Inform shmem_writepage() how far we have reached.
 		 * No need for lock or barrier: we have the page lock.
 		 */
-		shmem_falloc.next++;
 		if (!PageUptodate(page))
-			shmem_falloc.nr_falloced++;
+			shmem_falloc.nr_falloced += index - shmem_falloc.next;
+		shmem_falloc.next = index;
 
 		/*
 		 * If !PageUptodate, leave it that way so that freeable pages
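
A minimal illustration of the index arithmetic in the hunk above, for
readers following along: this is a userspace sketch, not kernel code.
HPAGE_PMD_NR and round_up() below are local stand-ins assumed to behave
like the kernel's definitions (512 assumes 4K base pages and 2M huge
pages; the round_up() macro matches the kernel's for power-of-two
alignment).

#include <stdio.h>

/* Stand-ins for the kernel definitions (assumptions, see above). */
#define HPAGE_PMD_NR	512UL
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	/*
	 * Suppose the page just allocated sits at index 1000, inside
	 * the huge page covering indices [512, 1024).
	 */
	unsigned long index = 1000;

	index++;				/* as in the loop body: 1001 */
	index = round_up(index, HPAGE_PMD_NR);	/* skip the tail pages: 1024 */

	/*
	 * On 32-bit, an extent reaching the top of the index space
	 * rounds up to 0, and "index < end" would then never become
	 * false; clamping to ULONG_MAX lets the loop terminate.
	 */
	if (!index)
		index--;

	printf("next index: %lu\n", index);	/* prints 1024 */
	return 0;
}

With index advanced to the huge-page boundary in one step, none of the
tail pages is touched individually, and nr_falloced is bumped by the
whole stride (index - shmem_falloc.next) rather than one page at a time,
which is what keeps the "time to give up" heuristic honest.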