From patchwork Sat Sep 30 03:25:38 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404926
Date: Fri, 29 Sep 2023 20:25:38 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 1/8] shmem: shrink shmem_inode_info: dir_offsets in a union
Message-ID: <86ebb4b-c571-b9e8-27f5-cb82ec50357e@google.com>

Shave 32 bytes off (the 64-bit) shmem_inode_info. There was a 4-byte
pahole after stop_eviction, better filled by fsflags. And the 24-byte
dir_offsets can only be used by directories, whereas shrinklist and
swaplist only by shmem_mapping() inodes (regular files or long symlinks):
so put those into a union. No change in mm/shmem.c is required for this.

Signed-off-by: Hugh Dickins
Reviewed-by: Chuck Lever
Reviewed-by: Jan Kara
---
 include/linux/shmem_fs.h | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 6b0c626620f5..2caa6b86106a 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -23,18 +23,22 @@ struct shmem_inode_info {
 	unsigned long		flags;
 	unsigned long		alloced;	/* data pages alloced to file */
 	unsigned long		swapped;	/* subtotal assigned to swap */
-	pgoff_t			fallocend;	/* highest fallocate endindex */
-	struct list_head	shrinklist;	/* shrinkable hpage inodes */
-	struct list_head	swaplist;	/* chain of maybes on swap */
+	union {
+		struct offset_ctx	dir_offsets;	/* stable directory offsets */
+		struct {
+			struct list_head	shrinklist;	/* shrinkable hpage inodes */
+			struct list_head	swaplist;	/* chain of maybes on swap */
+		};
+	};
+	struct timespec64	i_crtime;	/* file creation time */
 	struct shared_policy	policy;		/* NUMA memory alloc policy */
 	struct simple_xattrs	xattrs;		/* list of xattrs */
+	pgoff_t			fallocend;	/* highest fallocate endindex */
+	unsigned int		fsflags;	/* for FS_IOC_[SG]ETFLAGS */
 	atomic_t		stop_eviction;	/* hold when working on inode */
-	struct timespec64	i_crtime;	/* file creation time */
-	unsigned int		fsflags;	/* flags for FS_IOC_[SG]ETFLAGS */
 #ifdef CONFIG_TMPFS_QUOTA
 	struct dquot		*i_dquot[MAXQUOTAS];
 #endif
-	struct offset_ctx	dir_offsets;	/* stable entry offsets */
 	struct inode		vfs_inode;
 };
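
The saving is pure overlap: dir_offsets is live only for directories,
while shrinklist and swaplist are live only for shmem_mapping() inodes,
so an anonymous union lets the two sets share storage. A minimal
standalone sketch of the idea, using simplified stand-in types rather
than the kernel's real definitions:

/*
 * Stand-in types only (NOT the kernel's): sizes differ, but the
 * union-overlap effect on sizeof() is the same in kind.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };	/* 16 bytes on LP64 */
struct offset_ctx { void *xa; long next_offset; };	/* 16-byte stand-in */

struct separate {		/* directory and file fields side by side */
	struct offset_ctx dir_offsets;
	struct list_head shrinklist;
	struct list_head swaplist;
};

struct overlapped {		/* never used together, so share storage */
	union {
		struct offset_ctx dir_offsets;
		struct {
			struct list_head shrinklist;
			struct list_head swaplist;
		};
	};
};

int main(void)
{
	printf("separate: %zu bytes, overlapped: %zu bytes\n",
	       sizeof(struct separate), sizeof(struct overlapped));
	return 0;	/* e.g. 48 vs 32 with these stand-in types */
}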
From patchwork Sat Sep 30 03:26:53 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404927
Date: Fri, 29 Sep 2023 20:26:53 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 2/8] shmem: remove vma arg from shmem_get_folio_gfp()

The vma is already there in vmf->vma, so no need for a separate arg.

Signed-off-by: Hugh Dickins
Reviewed-by: Jan Kara
---
 mm/shmem.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 69595d341882..824eb55671d2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1921,14 +1921,13 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache.
  *
- * vma, vmf, and fault_type are only supplied by shmem_fault:
- * otherwise they are NULL.
+ * vmf and fault_type are only supplied by shmem_fault: otherwise they are NULL.
  */
 static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
-		struct vm_area_struct *vma, struct vm_fault *vmf,
-		vm_fault_t *fault_type)
+		struct vm_fault *vmf, vm_fault_t *fault_type)
 {
+	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
@@ -2141,7 +2140,7 @@ int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
 		enum sgp_type sgp)
 {
 	return shmem_get_folio_gfp(inode, index, foliop, sgp,
-			mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
+			mapping_gfp_mask(inode->i_mapping), NULL, NULL);
 }
 
 /*
@@ -2225,7 +2224,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	}
 
 	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
-				  gfp, vma, vmf, &ret);
+				  gfp, vmf, &ret);
 	if (err)
 		return vmf_error(err);
 	if (folio)
@@ -4897,7 +4896,7 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 	BUG_ON(!shmem_mapping(mapping));
 
 	error = shmem_get_folio_gfp(inode, index, &folio, SGP_CACHE,
-				    gfp, NULL, NULL, NULL);
+				    gfp, NULL, NULL);
 	if (error)
 		return ERR_PTR(error);
From patchwork Sat Sep 30 03:27:53 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404928
Date: Fri, 29 Sep 2023 20:27:53 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 3/8] shmem: factor shmem_falloc_wait() out of shmem_fault()
Message-ID: <6fe379a4-6176-9225-9263-fe60d2633c0@google.com>

That Trinity livelock shmem_falloc avoidance block is unlikely, and a
distraction from the proper business of shmem_fault(): separate it out.
(This used to help compilers save stack on the fault path too, but both
gcc and clang nowadays seem to make better choices anyway.)

Signed-off-by: Hugh Dickins
Reviewed-by: Jan Kara
---
 mm/shmem.c | 126 +++++++++++++++++++++++++++++------------------------
 1 file changed, 69 insertions(+), 57 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 824eb55671d2..5501a5bc8d8c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2148,87 +2148,99 @@ int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
  * entry unconditionally - even if something else had already woken the
  * target.
  */
-static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *key)
+static int synchronous_wake_function(wait_queue_entry_t *wait,
+			unsigned int mode, int sync, void *key)
 {
 	int ret = default_wake_function(wait, mode, sync, key);
 	list_del_init(&wait->entry);
 	return ret;
 }
 
+/*
+ * Trinity finds that probing a hole which tmpfs is punching can
+ * prevent the hole-punch from ever completing: which in turn
+ * locks writers out with its hold on i_rwsem. So refrain from
+ * faulting pages into the hole while it's being punched. Although
+ * shmem_undo_range() does remove the additions, it may be unable to
+ * keep up, as each new page needs its own unmap_mapping_range() call,
+ * and the i_mmap tree grows ever slower to scan if new vmas are added.
+ *
+ * It does not matter if we sometimes reach this check just before the
+ * hole-punch begins, so that one fault then races with the punch:
+ * we just need to make racing faults a rare case.
+ *
+ * The implementation below would be much simpler if we just used a
+ * standard mutex or completion: but we cannot take i_rwsem in fault,
+ * and bloating every shmem inode for this unlikely case would be sad.
+ */
+static vm_fault_t shmem_falloc_wait(struct vm_fault *vmf, struct inode *inode)
+{
+	struct shmem_falloc *shmem_falloc;
+	struct file *fpin = NULL;
+	vm_fault_t ret = 0;
+
+	spin_lock(&inode->i_lock);
+	shmem_falloc = inode->i_private;
+	if (shmem_falloc &&
+	    shmem_falloc->waitq &&
+	    vmf->pgoff >= shmem_falloc->start &&
+	    vmf->pgoff < shmem_falloc->next) {
+		wait_queue_head_t *shmem_falloc_waitq;
+		DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
+
+		ret = VM_FAULT_NOPAGE;
+		fpin = maybe_unlock_mmap_for_io(vmf, NULL);
+		shmem_falloc_waitq = shmem_falloc->waitq;
+		prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
+				TASK_UNINTERRUPTIBLE);
+		spin_unlock(&inode->i_lock);
+		schedule();
+
+		/*
+		 * shmem_falloc_waitq points into the shmem_fallocate()
+		 * stack of the hole-punching task: shmem_falloc_waitq
+		 * is usually invalid by the time we reach here, but
+		 * finish_wait() does not dereference it in that case;
+		 * though i_lock needed lest racing with wake_up_all().
+		 */
+		spin_lock(&inode->i_lock);
+		finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
+	}
+	spin_unlock(&inode->i_lock);
+	if (fpin) {
+		fput(fpin);
+		ret = VM_FAULT_RETRY;
+	}
+	return ret;
+}
+
 static vm_fault_t shmem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
-	struct inode *inode = file_inode(vma->vm_file);
+	struct inode *inode = file_inode(vmf->vma->vm_file);
 	gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
 	struct folio *folio = NULL;
+	vm_fault_t ret = 0;
 	int err;
-	vm_fault_t ret = VM_FAULT_LOCKED;
 
 	/*
 	 * Trinity finds that probing a hole which tmpfs is punching can
-	 * prevent the hole-punch from ever completing: which in turn
-	 * locks writers out with its hold on i_rwsem. So refrain from
-	 * faulting pages into the hole while it's being punched. Although
-	 * shmem_undo_range() does remove the additions, it may be unable to
-	 * keep up, as each new page needs its own unmap_mapping_range() call,
-	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
-	 *
-	 * It does not matter if we sometimes reach this check just before the
-	 * hole-punch begins, so that one fault then races with the punch:
-	 * we just need to make racing faults a rare case.
-	 *
-	 * The implementation below would be much simpler if we just used a
-	 * standard mutex or completion: but we cannot take i_rwsem in fault,
-	 * and bloating every shmem inode for this unlikely case would be sad.
+	 * prevent the hole-punch from ever completing: noted in i_private.
 	 */
 	if (unlikely(inode->i_private)) {
-		struct shmem_falloc *shmem_falloc;
-
-		spin_lock(&inode->i_lock);
-		shmem_falloc = inode->i_private;
-		if (shmem_falloc &&
-		    shmem_falloc->waitq &&
-		    vmf->pgoff >= shmem_falloc->start &&
-		    vmf->pgoff < shmem_falloc->next) {
-			struct file *fpin;
-			wait_queue_head_t *shmem_falloc_waitq;
-			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
-
-			ret = VM_FAULT_NOPAGE;
-			fpin = maybe_unlock_mmap_for_io(vmf, NULL);
-			if (fpin)
-				ret = VM_FAULT_RETRY;
-
-			shmem_falloc_waitq = shmem_falloc->waitq;
-			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
-					TASK_UNINTERRUPTIBLE);
-			spin_unlock(&inode->i_lock);
-			schedule();
-
-			/*
-			 * shmem_falloc_waitq points into the shmem_fallocate()
-			 * stack of the hole-punching task: shmem_falloc_waitq
-			 * is usually invalid by the time we reach here, but
-			 * finish_wait() does not dereference it in that case;
-			 * though i_lock needed lest racing with wake_up_all().
-			 */
-			spin_lock(&inode->i_lock);
-			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
-			spin_unlock(&inode->i_lock);
-
-			if (fpin)
-				fput(fpin);
+		ret = shmem_falloc_wait(vmf, inode);
+		if (ret)
 			return ret;
-		}
-		spin_unlock(&inode->i_lock);
 	}
 
+	WARN_ON_ONCE(vmf->page != NULL);
 	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
 				  gfp, vmf, &ret);
 	if (err)
 		return vmf_error(err);
-	if (folio)
+	if (folio) {
 		vmf->page = folio_file_page(folio, vmf->pgoff);
+		ret |= VM_FAULT_LOCKED;
+	}
 	return ret;
 }
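
To make the guarded scenario concrete, here is an illustrative userspace
sketch (not part of the patch, and not a reliable livelock reproducer)
of the access pattern the comment describes: one thread repeatedly
punches a hole in a tmpfs-backed memfd while another keeps faulting the
same range back in, each new fault costing the punching task another
unmap_mapping_range() pass.

/* Build with: cc -pthread punch-demo.c; error handling omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN	(16UL << 20)		/* 16MB region */

static char *map;
static volatile int stop;

static void *faulter(void *arg)
{
	(void)arg;
	while (!stop)			/* keep re-populating the hole */
		for (unsigned long off = 0; off < LEN; off += 4096)
			map[off] = 1;
	return NULL;
}

int main(void)
{
	int fd = memfd_create("punch-demo", 0);	/* tmpfs-backed file */
	pthread_t t;

	ftruncate(fd, LEN);
	map = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(map, 1, LEN);

	pthread_create(&t, NULL, faulter, NULL);
	for (int i = 0; i < 100; i++)	/* each punch races with the faults */
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  0, LEN);
	stop = 1;
	pthread_join(t, NULL);
	return 0;
}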
From patchwork Sat Sep 30 03:28:50 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404929
Date: Fri, 29 Sep 2023 20:28:50 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 4/8] shmem: trivial tidyups, removing extra blank lines, etc

Mostly removing a few superfluous blank lines, joining short arglines,
imposing some 80-column observance, correcting a couple of comments.
None of it more interesting than deleting a repeated INIT_LIST_HEAD().

Signed-off-by: Hugh Dickins
Reviewed-by: Jan Kara
---
 mm/shmem.c | 56 ++++++++++++++++++++----------------------------------
 1 file changed, 21 insertions(+), 35 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 5501a5bc8d8c..caee8ba841f7 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -756,7 +756,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*
- * Like filemap_add_folio, but error if expected item has gone.
+ * Somewhat like filemap_add_folio, but error if expected item has gone.
  */
 static int shmem_add_to_page_cache(struct folio *folio,
 				   struct address_space *mapping,
@@ -825,7 +825,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
 }
 
 /*
- * Like delete_from_page_cache, but substitutes swap for @folio.
+ * Somewhat like filemap_remove_folio, but substitutes swap for @folio.
  */
 static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
 {
@@ -887,7 +887,6 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 			cond_resched_rcu();
 		}
 	}
-
 	rcu_read_unlock();
 
 	return swapped << PAGE_SHIFT;
@@ -1213,7 +1212,6 @@ static int shmem_setattr(struct mnt_idmap *idmap,
 	if (i_uid_needs_update(idmap, attr, inode) ||
 	    i_gid_needs_update(idmap, attr, inode)) {
 		error = dquot_transfer(idmap, inode, attr);
-
 		if (error)
 			return error;
 	}
@@ -2456,7 +2454,6 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
 	if (err)
 		return ERR_PTR(err);
 
-
 	inode = new_inode(sb);
 	if (!inode) {
 		shmem_free_inode(sb, 0);
@@ -2481,11 +2478,10 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
 		shmem_set_inode_flags(inode, info->fsflags);
 	INIT_LIST_HEAD(&info->shrinklist);
 	INIT_LIST_HEAD(&info->swaplist);
-	INIT_LIST_HEAD(&info->swaplist);
-	if (sbinfo->noswap)
-		mapping_set_unevictable(inode->i_mapping);
 	simple_xattrs_init(&info->xattrs);
 	cache_no_acl(inode);
+	if (sbinfo->noswap)
+		mapping_set_unevictable(inode->i_mapping);
 	mapping_set_large_folios(inode->i_mapping);
 
 	switch (mode & S_IFMT) {
@@ -2697,7 +2693,6 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	}
 
 	ret = shmem_get_folio(inode, index, &folio, SGP_WRITE);
-
 	if (ret)
 		return ret;
 
@@ -3229,8 +3224,7 @@ shmem_mknod(struct mnt_idmap *idmap, struct inode *dir,
 	error = simple_acl_create(dir, inode);
 	if (error)
 		goto out_iput;
-	error = security_inode_init_security(inode, dir,
-					     &dentry->d_name,
+	error = security_inode_init_security(inode, dir, &dentry->d_name,
 					     shmem_initxattrs, NULL);
 	if (error && error != -EOPNOTSUPP)
 		goto out_iput;
@@ -3259,14 +3253,11 @@ shmem_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
 	int error;
 
 	inode = shmem_get_inode(idmap, dir->i_sb, dir, mode, 0, VM_NORESERVE);
-
 	if (IS_ERR(inode)) {
 		error = PTR_ERR(inode);
 		goto err_out;
 	}
-
-	error = security_inode_init_security(inode, dir,
-					     NULL,
+	error = security_inode_init_security(inode, dir, NULL,
 					     shmem_initxattrs, NULL);
 	if (error && error != -EOPNOTSUPP)
 		goto out_iput;
@@ -3303,7 +3294,8 @@ static int shmem_create(struct mnt_idmap *idmap, struct inode *dir,
 /*
  * Link a file..
  */
-static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry)
+static int shmem_link(struct dentry *old_dentry, struct inode *dir,
+		      struct dentry *dentry)
 {
 	struct inode *inode = d_inode(old_dentry);
 	int ret = 0;
@@ -3334,7 +3326,7 @@ static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentr
 	inode_inc_iversion(dir);
 	inc_nlink(inode);
 	ihold(inode);	/* New dentry reference */
-	dget(dentry);	/* Extra pinning count for the created dentry */
+	dget(dentry);	/* Extra pinning count for the created dentry */
 	d_instantiate(dentry, inode);
 out:
 	return ret;
@@ -3354,7 +3346,7 @@ static int shmem_unlink(struct inode *dir, struct dentry *dentry)
 				  inode_set_ctime_current(inode));
 	inode_inc_iversion(dir);
 	drop_nlink(inode);
-	dput(dentry);	/* Undo the count from "create" - this does all the work */
+	dput(dentry);	/* Undo the count from "create" - does all the work */
 	return 0;
 }
 
@@ -3464,7 +3456,6 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
 
 	inode = shmem_get_inode(idmap, dir->i_sb, dir, S_IFLNK | 0777, 0,
 				VM_NORESERVE);
-
 	if (IS_ERR(inode))
 		return PTR_ERR(inode);
 
@@ -3518,8 +3509,7 @@ static void shmem_put_link(void *arg)
 	folio_put(arg);
 }
 
-static const char *shmem_get_link(struct dentry *dentry,
-				  struct inode *inode,
+static const char *shmem_get_link(struct dentry *dentry, struct inode *inode,
 				  struct delayed_call *done)
 {
 	struct folio *folio = NULL;
@@ -3593,8 +3583,7 @@ static int shmem_fileattr_set(struct mnt_idmap *idmap,
  * Callback for security_inode_init_security() for acquiring xattrs.
  */
 static int shmem_initxattrs(struct inode *inode,
-			    const struct xattr *xattr_array,
-			    void *fs_info)
+			    const struct xattr *xattr_array, void *fs_info)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
@@ -3778,7 +3767,6 @@ static struct dentry *shmem_find_alias(struct inode *inode)
 	return alias ?: d_find_any_alias(inode);
 }
 
-
 static struct dentry *shmem_fh_to_dentry(struct super_block *sb,
 		struct fid *fid, int fh_len, int fh_type)
 {
@@ -4362,8 +4350,8 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
 	}
 #endif /* CONFIG_TMPFS_QUOTA */
 
-	inode = shmem_get_inode(&nop_mnt_idmap, sb, NULL, S_IFDIR | sbinfo->mode, 0,
-				VM_NORESERVE);
+	inode = shmem_get_inode(&nop_mnt_idmap, sb, NULL,
+				S_IFDIR | sbinfo->mode, 0, VM_NORESERVE);
 	if (IS_ERR(inode)) {
 		error = PTR_ERR(inode);
 		goto failed;
@@ -4666,11 +4654,9 @@ static ssize_t shmem_enabled_show(struct kobject *kobj,
 
 	for (i = 0; i < ARRAY_SIZE(values); i++) {
 		len += sysfs_emit_at(buf, len,
-				shmem_huge == values[i] ? "%s[%s]" : "%s%s",
-				i ? " " : "",
-				shmem_format_huge(values[i]));
+				     shmem_huge == values[i] ? "%s[%s]" : "%s%s",
+				     i ? " " : "", shmem_format_huge(values[i]));
 	}
-
 	len += sysfs_emit_at(buf, len, "\n");
 
 	return len;
@@ -4767,8 +4753,9 @@ EXPORT_SYMBOL_GPL(shmem_truncate_range);
 #define shmem_acct_size(flags, size)		0
 #define shmem_unacct_size(flags, size)		do {} while (0)
 
-static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block *sb, struct inode *dir,
-					    umode_t mode, dev_t dev, unsigned long flags)
+static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap,
+				struct super_block *sb, struct inode *dir,
+				umode_t mode, dev_t dev, unsigned long flags)
 {
 	struct inode *inode = ramfs_get_inode(sb, dir, mode, dev);
 	return inode ? inode : ERR_PTR(-ENOSPC);
@@ -4778,8 +4765,8 @@ static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct supe
 
 /* common code */
 
-static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name, loff_t size,
-				       unsigned long flags, unsigned int i_flags)
+static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name,
+			loff_t size, unsigned long flags, unsigned int i_flags)
 {
 	struct inode *inode;
 	struct file *res;
@@ -4798,7 +4785,6 @@ static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name, l
 
 	inode = shmem_get_inode(&nop_mnt_idmap, mnt->mnt_sb, NULL,
 				S_IFREG | S_IRWXUGO, 0, flags);
-
 	if (IS_ERR(inode)) {
 		shmem_unacct_size(flags, size);
 		return ERR_CAST(inode);
From patchwork Sat Sep 30 03:30:03 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404933
Date: Fri, 29 Sep 2023 20:30:03 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 5/8] shmem: shmem_acct_blocks() and shmem_inode_acct_blocks()
Message-ID: <9124094-e4ab-8be7-ef80-9a87bdc2e4fc@google.com>

By historical accident, shmem_acct_block() and shmem_inode_acct_block()
were never pluralized when the pages argument was added, despite their
complements being shmem_unacct_blocks() and shmem_inode_unacct_blocks()
all along. It has been an irritation: fix their naming at last.

Signed-off-by: Hugh Dickins
Reviewed-by: Jan Kara
---
 mm/shmem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index caee8ba841f7..63ba6037b23a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -189,10 +189,10 @@ static inline int shmem_reacct_size(unsigned long flags,
 /*
  * ... whereas tmpfs objects are accounted incrementally as
  * pages are allocated, in order to allow large sparse files.
- * shmem_get_folio reports shmem_acct_block failure as -ENOSPC not -ENOMEM,
+ * shmem_get_folio reports shmem_acct_blocks failure as -ENOSPC not -ENOMEM,
  * so that a failure on a sparse tmpfs mapping will give SIGBUS not OOM.
  */
-static inline int shmem_acct_block(unsigned long flags, long pages)
+static inline int shmem_acct_blocks(unsigned long flags, long pages)
 {
 	if (!(flags & VM_NORESERVE))
 		return 0;
@@ -207,13 +207,13 @@ static inline void shmem_unacct_blocks(unsigned long flags, long pages)
 		vm_unacct_memory(pages * VM_ACCT(PAGE_SIZE));
 }
 
-static int shmem_inode_acct_block(struct inode *inode, long pages)
+static int shmem_inode_acct_blocks(struct inode *inode, long pages)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	int err = -ENOSPC;
 
-	if (shmem_acct_block(info->flags, pages))
+	if (shmem_acct_blocks(info->flags, pages))
 		return err;
 
 	might_sleep();	/* when quotas */
@@ -447,7 +447,7 @@ bool shmem_charge(struct inode *inode, long pages)
 {
 	struct address_space *mapping = inode->i_mapping;
 
-	if (shmem_inode_acct_block(inode, pages))
+	if (shmem_inode_acct_blocks(inode, pages))
 		return false;
 
 	/* nrpages adjustment first, then shmem_recalc_inode() when balanced */
@@ -1671,7 +1671,7 @@ static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 		huge = false;
 	nr = huge ? HPAGE_PMD_NR : 1;
 
-	err = shmem_inode_acct_block(inode, nr);
+	err = shmem_inode_acct_blocks(inode, nr);
 	if (err)
 		goto failed;
 
@@ -2572,7 +2572,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 	int ret;
 	pgoff_t max_off;
 
-	if (shmem_inode_acct_block(inode, 1)) {
+	if (shmem_inode_acct_blocks(inode, 1)) {
 		/*
 		 * We may have got a page, returned -ENOENT triggering a retry,
 		 * and now we find ourselves with -ENOMEM. Release the page, to
Date: Fri, 29 Sep 2023 20:31:27 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 6/8] shmem: move memcg charge out of shmem_add_to_page_cache()
Message-ID: <4b2143c5-bf32-64f0-841-81a81158dac@google.com>
MIME-Version: 1.0

Extract shmem's memcg charging out of shmem_add_to_page_cache(): it is
misleadingly done there, because many calls are dealing with a swapcache
page, whose memcg is nowadays always remembered while swapped out, then
the charge re-levied when it's brought back into swapcache.

Temporarily move it back up to the shmem_get_folio_gfp() level, where the
memcg was charged before v5.8; but the next commit goes on to move it
back down to a new home.

In making this change, it becomes clear that shmem_swapin_folio() does
not need to know the vma, just the fault mm (if any): call it fault_mm
rather than charge_mm - let mem_cgroup_charge() decide whom to charge.
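To illustrate where the charge now sits, here is a minimal sketch of the
resulting order of operations (a simplified model, not the kernel source:
get_folio_sketch() and alloc_locked_folio() are invented stand-ins for the
real allocation path):

    /*
     * Sketch: after this patch, a freshly allocated folio is charged to
     * the faulting mm's memcg first, and only then inserted into the
     * page cache.  A swapin folio skips the charge entirely: its memcg
     * was remembered at swapout and re-levied at swapin.
     */
    static int get_folio_sketch(struct inode *inode, pgoff_t index,
                                struct mm_struct *fault_mm, gfp_t gfp)
    {
        struct folio *folio = alloc_locked_folio(gfp); /* invented stub */
        int error;

        if (!folio)
            return -ENOMEM;
        error = mem_cgroup_charge(folio, fault_mm, gfp); /* moved up here */
        if (!error)
            error = shmem_add_to_page_cache(folio, inode->i_mapping,
                                            index, NULL, gfp);
        if (error) {
            folio_unlock(folio);
            folio_put(folio);
        }
        return error;
    }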
Signed-off-by: Hugh Dickins
Reviewed-by: Jan Kara
---
 mm/shmem.c | 68 +++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 39 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 63ba6037b23a..0a7f7b567b80 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -146,9 +146,8 @@ static unsigned long shmem_default_max_inodes(void)
 #endif
 
 static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
-			      struct folio **foliop, enum sgp_type sgp,
-			      gfp_t gfp, struct vm_area_struct *vma,
-			      vm_fault_t *fault_type);
+			      struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
+			      struct mm_struct *fault_mm, vm_fault_t *fault_type);
 
 static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
 {
@@ -760,12 +759,10 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
  */
 static int shmem_add_to_page_cache(struct folio *folio,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected, gfp_t gfp,
-				   struct mm_struct *charge_mm)
+				   pgoff_t index, void *expected, gfp_t gfp)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
 	long nr = folio_nr_pages(folio);
-	int error;
 
 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
@@ -776,16 +773,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
 	folio->mapping = mapping;
 	folio->index = index;
 
-	if (!folio_test_swapcache(folio)) {
-		error = mem_cgroup_charge(folio, charge_mm, gfp);
-		if (error) {
-			if (folio_test_pmd_mappable(folio)) {
-				count_vm_event(THP_FILE_FALLBACK);
-				count_vm_event(THP_FILE_FALLBACK_CHARGE);
-			}
-			goto error;
-		}
-	}
+	gfp &= GFP_RECLAIM_MASK;
 	folio_throttle_swaprate(folio, gfp);
 
 	do {
@@ -813,15 +801,12 @@ static int shmem_add_to_page_cache(struct folio *folio,
 	} while (xas_nomem(&xas, gfp));
 
 	if (xas_error(&xas)) {
-		error = xas_error(&xas);
-		goto error;
+		folio->mapping = NULL;
+		folio_ref_sub(folio, nr);
+		return xas_error(&xas);
 	}
 
 	return 0;
-error:
-	folio->mapping = NULL;
-	folio_ref_sub(folio, nr);
-	return error;
 }
 
 /*
@@ -1324,10 +1309,8 @@ static int shmem_unuse_swap_entries(struct inode *inode,
 		if (!xa_is_value(folio))
 			continue;
-		error = shmem_swapin_folio(inode, indices[i],
-					   &folio, SGP_CACHE,
-					   mapping_gfp_mask(mapping),
-					   NULL, NULL);
+		error = shmem_swapin_folio(inode, indices[i], &folio, SGP_CACHE,
+					   mapping_gfp_mask(mapping), NULL, NULL);
 		if (error == 0) {
 			folio_unlock(folio);
 			folio_put(folio);
@@ -1810,12 +1793,11 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
  */
 static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			      struct folio **foliop, enum sgp_type sgp,
-			      gfp_t gfp, struct vm_area_struct *vma,
+			      gfp_t gfp, struct mm_struct *fault_mm,
 			      vm_fault_t *fault_type)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
 	swp_entry_t swap;
@@ -1843,7 +1825,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
-			count_memcg_event_mm(charge_mm, PGMAJFAULT);
+			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
 		/* Here we actually start the io */
 		folio = shmem_swapin(swap, gfp, info, index);
@@ -1880,8 +1862,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	}
 
 	error = shmem_add_to_page_cache(folio, mapping, index,
-					swp_to_radix_entry(swap), gfp,
-					charge_mm);
+					swp_to_radix_entry(swap), gfp);
 	if (error)
 		goto failed;
 
@@ -1929,7 +1910,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
-	struct mm_struct *charge_mm;
+	struct mm_struct *fault_mm;
 	struct folio *folio;
 	pgoff_t hindex;
 	gfp_t huge_gfp;
@@ -1946,7 +1927,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	sbinfo = SHMEM_SB(inode->i_sb);
-	charge_mm = vma ? vma->vm_mm : NULL;
+	fault_mm = vma ? vma->vm_mm : NULL;
 
 	folio = filemap_get_entry(mapping, index);
 	if (folio && vma && userfaultfd_minor(vma)) {
@@ -1958,7 +1939,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 
 	if (xa_is_value(folio)) {
 		error = shmem_swapin_folio(inode, index, &folio,
-					   sgp, gfp, vma, fault_type);
+					   sgp, gfp, fault_mm, fault_type);
 		if (error == -EEXIST)
 			goto repeat;
 
@@ -2044,9 +2025,16 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	if (sgp == SGP_WRITE)
 		__folio_set_referenced(folio);
 
-	error = shmem_add_to_page_cache(folio, mapping, hindex,
-					NULL, gfp & GFP_RECLAIM_MASK,
-					charge_mm);
+	error = mem_cgroup_charge(folio, fault_mm, gfp);
+	if (error) {
+		if (folio_test_pmd_mappable(folio)) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto unacct;
+	}
+
+	error = shmem_add_to_page_cache(folio, mapping, hindex, NULL, gfp);
 	if (error)
 		goto unacct;
 
@@ -2644,8 +2632,10 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 		if (unlikely(pgoff >= max_off))
 			goto out_release;
 
-	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK, dst_vma->vm_mm);
+	ret = mem_cgroup_charge(folio, dst_vma->vm_mm, gfp);
+	if (ret)
+		goto out_release;
+	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL, gfp);
 	if (ret)
 		goto out_release;

From patchwork Sat Sep 30 03:32:40 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404935
Date: Fri, 29 Sep 2023 20:32:40 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 7/8] shmem: _add_to_page_cache() before shmem_inode_acct_blocks()
Message-ID: <22ddd06-d919-33b-1219-56335c1bf28e@google.com>
MIME-Version: 1.0

There has been a recurring problem: when a tmpfs volume is being filled
by racing threads, some fail with ENOSPC (or consequent SIGBUS or EFAULT)
even though all allocations were within the permitted size.

This has been a problem since the early days, but magnified and
complicated by the addition of huge pages. We have often worked around it
by adding some slop to the tmpfs size, but it's hard to say how much is
needed, and some users prefer not to do that: e.g. keeping sparse files
in a tightly tailored tmpfs helps to prevent accidental writing to holes.

This comes from the allocation sequence:
1. check page cache for existing folio
2. check and reserve from vm_enough_memory
3. check and account from size of tmpfs
4. if huge, check page cache for overlapping folio
5. allocate physical folio, huge or small
6. check and charge from mem cgroup limit
7. add to page cache (but maybe another folio already got in).

Concurrent tasks allocating at the same position could deplete the size
allowance and fail. Doing vm_enough_memory and size checks before the
folio allocation was intentional (to limit the load on the page allocator
from this source) and still has some virtue; but memory cgroup never did
that, so I think it's better reordered to favour predictable behaviour
(a condensed sketch of the result follows below):
1. check page cache for existing folio
2. if huge, check page cache for overlapping folio
3. allocate physical folio, huge or small
4. check and charge from mem cgroup limit
5. add to page cache (but maybe another folio already got in)
6. check and reserve from vm_enough_memory
7. check and account from size of tmpfs.

The folio lock held from allocation onwards ensures that the !uptodate
folio cannot be used by others, and can safely be deleted from the cache
if checks 6 or 7 subsequently fail (and those waiting on folio lock
already check that the folio was not truncated once they get the lock);
and the early addition to page cache ensures that racers find it before
they try to duplicate the accounting.

Seize the opportunity to tidy up shmem_get_folio_gfp()'s ENOSPC retrying,
which can be combined inside the new shmem_alloc_and_add_folio(): doing
2 splits twice (once huge, once nonhuge) is not exactly equivalent to
trying 5 splits (and giving up early on huge), but let's keep it simple
unless more complication proves necessary.

Userfaultfd is a foreign country: they do things differently there, and
for good reason - to avoid mmap_lock deadlock. Leave ordering in
shmem_mfill_atomic_pte() untouched for now, but I would rather like to
mesh it better with shmem_get_folio_gfp() in the future.
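The reordered sequence and its unwind on late failure can be condensed
into a sketch like this (an illustrative model of what
shmem_alloc_and_add_folio() does, with the huge-page conflict check and
the shrink-and-retry reclaim omitted; not the patch itself):

    static struct folio *alloc_and_add_sketch(struct inode *inode,
                    pgoff_t index, struct mm_struct *fault_mm, gfp_t gfp)
    {
        struct folio *folio = shmem_alloc_folio(gfp, SHMEM_I(inode), index);
        int error;

        if (!folio)
            return ERR_PTR(-ENOMEM);
        __folio_set_locked(folio);
        __folio_set_swapbacked(folio);

        error = mem_cgroup_charge(folio, fault_mm, gfp);        /* step 4 */
        if (error)
            goto unlock;
        error = shmem_add_to_page_cache(folio, inode->i_mapping,
                                        index, NULL, gfp);      /* step 5 */
        if (error)
            goto unlock;
        error = shmem_inode_acct_blocks(inode, folio_nr_pages(folio));
        if (error) {                     /* steps 6 or 7 failed late */
            filemap_remove_folio(folio); /* safe: still locked, !uptodate */
            goto unlock;
        }
        return folio;
    unlock:
        folio_unlock(folio);
        folio_put(folio);
        return ERR_PTR(error);
    }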
Signed-off-by: Hugh Dickins
---
 mm/shmem.c | 235 +++++++++++++++++++++++++++--------------------------
 1 file changed, 121 insertions(+), 114 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0a7f7b567b80..4f4ab26bc58a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -789,13 +789,11 @@ static int shmem_add_to_page_cache(struct folio *folio,
 		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;
-		if (folio_test_pmd_mappable(folio)) {
-			count_vm_event(THP_FILE_ALLOC);
+		if (folio_test_pmd_mappable(folio))
 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
-		}
-		mapping->nrpages += nr;
 		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
 		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
+		mapping->nrpages += nr;
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -1612,25 +1610,17 @@ static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
 	struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
-	struct address_space *mapping = info->vfs_inode.i_mapping;
-	pgoff_t hindex;
 	struct folio *folio;
 
-	hindex = round_down(index, HPAGE_PMD_NR);
-	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
-			XA_PRESENT))
-		return NULL;
-
-	shmem_pseudo_vma_init(&pvma, info, hindex);
+	shmem_pseudo_vma_init(&pvma, info, index);
 	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
 	shmem_pseudo_vma_destroy(&pvma);
-	if (!folio)
-		count_vm_event(THP_FILE_FALLBACK);
+
 	return folio;
 }
 
 static struct folio *shmem_alloc_folio(gfp_t gfp,
-				       struct shmem_inode_info *info, pgoff_t index)
+		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
 	struct folio *folio;
@@ -1642,36 +1632,101 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
-		pgoff_t index, bool huge)
+static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
+		struct inode *inode, pgoff_t index,
+		struct mm_struct *fault_mm, bool huge)
 {
+	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct folio *folio;
-	int nr;
-	int err;
+	long pages;
+	int error;
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		huge = false;
-	nr = huge ? HPAGE_PMD_NR : 1;
 
-	err = shmem_inode_acct_blocks(inode, nr);
-	if (err)
-		goto failed;
+	if (huge) {
+		pages = HPAGE_PMD_NR;
+		index = round_down(index, HPAGE_PMD_NR);
+
+		/*
+		 * Check for conflict before waiting on a huge allocation.
+		 * Conflict might be that a huge page has just been allocated
+		 * and added to page cache by a racing thread, or that there
+		 * is already at least one small page in the huge extent.
+		 * Be careful to retry when appropriate, but not forever!
+		 * Elsewhere -EEXIST would be the right code, but not here.
+		 */
+		if (xa_find(&mapping->i_pages, &index,
+				index + HPAGE_PMD_NR - 1, XA_PRESENT))
+			return ERR_PTR(-E2BIG);
 
-	if (huge)
 		folio = shmem_alloc_hugefolio(gfp, info, index);
-	else
+		if (!folio)
+			count_vm_event(THP_FILE_FALLBACK);
+	} else {
+		pages = 1;
 		folio = shmem_alloc_folio(gfp, info, index);
-	if (folio) {
-		__folio_set_locked(folio);
-		__folio_set_swapbacked(folio);
-		return folio;
+	}
+	if (!folio)
+		return ERR_PTR(-ENOMEM);
+
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
+
+	gfp &= GFP_RECLAIM_MASK;
+	error = mem_cgroup_charge(folio, fault_mm, gfp);
+	if (error) {
+		if (xa_find(&mapping->i_pages, &index,
+				index + pages - 1, XA_PRESENT)) {
+			error = -EEXIST;
+		} else if (huge) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto unlock;
 	}
 
-	err = -ENOMEM;
-	shmem_inode_unacct_blocks(inode, nr);
-failed:
-	return ERR_PTR(err);
+	error = shmem_add_to_page_cache(folio, mapping, index, NULL, gfp);
+	if (error)
+		goto unlock;
+
+	error = shmem_inode_acct_blocks(inode, pages);
+	if (error) {
+		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+		long freed;
+		/*
+		 * Try to reclaim some space by splitting a few
+		 * large folios beyond i_size on the filesystem.
+		 */
+		shmem_unused_huge_shrink(sbinfo, NULL, 2);
+		/*
+		 * And do a shmem_recalc_inode() to account for freed pages:
+		 * except our folio is there in cache, so not quite balanced.
+		 */
+		spin_lock(&info->lock);
+		freed = pages + info->alloced - info->swapped -
+			READ_ONCE(mapping->nrpages);
+		if (freed > 0)
+			info->alloced -= freed;
+		spin_unlock(&info->lock);
+		if (freed > 0)
+			shmem_inode_unacct_blocks(inode, freed);
+		error = shmem_inode_acct_blocks(inode, pages);
+		if (error) {
+			filemap_remove_folio(folio);
+			goto unlock;
+		}
+	}
+
+	shmem_recalc_inode(inode, pages, 0);
+	folio_add_lru(folio);
+	return folio;
+
+unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return ERR_PTR(error);
 }
 
 /*
@@ -1907,29 +1962,22 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		struct vm_fault *vmf, vm_fault_t *fault_type)
 {
 	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
-	struct address_space *mapping = inode->i_mapping;
-	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct shmem_sb_info *sbinfo;
 	struct mm_struct *fault_mm;
 	struct folio *folio;
-	pgoff_t hindex;
-	gfp_t huge_gfp;
 	int error;
-	int once = 0;
-	int alloced = 0;
+	bool alloced;
 
 	if (index > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
 		return -EFBIG;
 repeat:
 	if (sgp <= SGP_CACHE &&
-	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
+	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode))
 		return -EINVAL;
-	}
 
-	sbinfo = SHMEM_SB(inode->i_sb);
+	alloced = false;
 	fault_mm = vma ? vma->vm_mm : NULL;
 
-	folio = filemap_get_entry(mapping, index);
+	folio = filemap_get_entry(inode->i_mapping, index);
 	if (folio && vma && userfaultfd_minor(vma)) {
 		if (!xa_is_value(folio))
 			folio_put(folio);
@@ -1951,7 +1999,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	folio_lock(folio);
 
 	/* Has the folio been truncated or swapped out? */
-	if (unlikely(folio->mapping != mapping)) {
+	if (unlikely(folio->mapping != inode->i_mapping)) {
 		folio_unlock(folio);
 		folio_put(folio);
 		goto repeat;
@@ -1986,65 +2034,38 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	if (!shmem_is_huge(inode, index, false,
-			   vma ? vma->vm_mm : NULL, vma ? vma->vm_flags : 0))
-		goto alloc_nohuge;
+	if (shmem_is_huge(inode, index, false, fault_mm,
+			  vma ? vma->vm_flags : 0)) {
+		gfp_t huge_gfp;
 
-	huge_gfp = vma_thp_gfp_mask(vma);
-	huge_gfp = limit_gfp_mask(huge_gfp, gfp);
-	folio = shmem_alloc_and_acct_folio(huge_gfp, inode, index, true);
-	if (IS_ERR(folio)) {
-alloc_nohuge:
-		folio = shmem_alloc_and_acct_folio(gfp, inode, index, false);
-	}
-	if (IS_ERR(folio)) {
-		int retry = 5;
-
-		error = PTR_ERR(folio);
-		folio = NULL;
-		if (error != -ENOSPC)
-			goto unlock;
-		/*
-		 * Try to reclaim some space by splitting a large folio
-		 * beyond i_size on the filesystem.
-		 */
-		while (retry--) {
-			int ret;
-
-			ret = shmem_unused_huge_shrink(sbinfo, NULL, 1);
-			if (ret == SHRINK_STOP)
-				break;
-			if (ret)
-				goto alloc_nohuge;
+		huge_gfp = vma_thp_gfp_mask(vma);
+		huge_gfp = limit_gfp_mask(huge_gfp, gfp);
+		folio = shmem_alloc_and_add_folio(huge_gfp,
+				inode, index, fault_mm, true);
+		if (!IS_ERR(folio)) {
+			count_vm_event(THP_FILE_ALLOC);
+			goto alloced;
 		}
+		if (PTR_ERR(folio) == -EEXIST)
+			goto repeat;
+	}
+
+	folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
+	if (IS_ERR(folio)) {
+		error = PTR_ERR(folio);
+		if (error == -EEXIST)
+			goto repeat;
+		folio = NULL;
 		goto unlock;
 	}
 
-	hindex = round_down(index, folio_nr_pages(folio));
-
-	if (sgp == SGP_WRITE)
-		__folio_set_referenced(folio);
-
-	error = mem_cgroup_charge(folio, fault_mm, gfp);
-	if (error) {
-		if (folio_test_pmd_mappable(folio)) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
-		}
-		goto unacct;
-	}
-
-	error = shmem_add_to_page_cache(folio, mapping, hindex, NULL, gfp);
-	if (error)
-		goto unacct;
-
-	folio_add_lru(folio);
-	shmem_recalc_inode(inode, folio_nr_pages(folio), 0);
+alloced:
 	alloced = true;
-
 	if (folio_test_pmd_mappable(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
 					folio_next_index(folio) - 1) {
+		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+		struct shmem_inode_info *info = SHMEM_I(inode);
 		/*
 		 * Part of the large folio is beyond i_size: subject
 		 * to shrink under memory pressure.
@@ -2062,6 +2083,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		spin_unlock(&sbinfo->shrinklist_lock);
 	}
 
+	if (sgp == SGP_WRITE)
+		folio_set_referenced(folio);
 	/*
 	 * Let SGP_FALLOC use the SGP_WRITE optimization on a new folio.
 	 */
@@ -2085,11 +2108,6 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 
 	/* Perhaps the file has been truncated since we checked */
 	if (sgp <= SGP_CACHE &&
 	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
-		if (alloced) {
-			folio_clear_dirty(folio);
-			filemap_remove_folio(folio);
-			shmem_recalc_inode(inode, 0, 0);
-		}
 		error = -EINVAL;
 		goto unlock;
 	}
@@ -2100,25 +2118,14 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	/*
 	 * Error recovery.
 	 */
-unacct:
-	shmem_inode_unacct_blocks(inode, folio_nr_pages(folio));
-
-	if (folio_test_large(folio)) {
-		folio_unlock(folio);
-		folio_put(folio);
-		goto alloc_nohuge;
-	}
 unlock:
+	if (alloced)
+		filemap_remove_folio(folio);
+	shmem_recalc_inode(inode, 0, 0);
 	if (folio) {
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	if (error == -ENOSPC && !once++) {
-		shmem_recalc_inode(inode, 0, 0);
-		goto repeat;
-	}
-	if (error == -EEXIST)
-		goto repeat;
 	return error;
 }
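One consequence worth making explicit: a racing insertion is now reported
to the caller as -EEXIST, and shmem_get_folio_gfp() just goes back to the
page cache lookup, while -E2BIG only means "fall back from huge to small".
Roughly (a condensed, illustrative fragment of the caller-side flow, not
the full function; want_huge stands in for the shmem_is_huge() test):

    repeat:
        folio = filemap_get_entry(inode->i_mapping, index);
        if (folio)
            goto found;                /* use the racer's folio (stub) */

        if (want_huge) {               /* stands in for shmem_is_huge() */
            folio = shmem_alloc_and_add_folio(huge_gfp, inode, index,
                                              fault_mm, true);
            if (!IS_ERR(folio))
                goto alloced;
            if (PTR_ERR(folio) == -EEXIST)
                goto repeat;           /* racer won: find its folio */
            /* -E2BIG etc.: fall back to a small folio */
        }
        folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
        if (IS_ERR(folio) && PTR_ERR(folio) == -EEXIST)
            goto repeat;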
From patchwork Sat Sep 30 03:42:45 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404937
Date: Fri, 29 Sep 2023 20:42:45 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Wong" , Christian Brauner , Carlos Maiolino , Chuck Lever , Jan Kara , Matthew Wilcox , Johannes Weiner , Axel Rasmussen , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 8/8] shmem,percpu_counter: add _limited_add(fbc, limit, amount) In-Reply-To: Message-ID: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: 79205100006 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: io44oagywsqum15ssnghd736natdkwnk X-HE-Tag: 1696045369-122912 X-HE-Meta: U2FsdGVkX193/uiRFYOgGeR49sQ68h4KsZmVF6tiVmaEZbM761JrRNSbs3N2koZeSMUGCg4NQm1wsUq6cU7txhsUAZx2IZW66PoECVdTBaJFJH3zs9CjZL9UGWSQkU0TF8HQlucXahpqAX7rA7Zx9FEtl5Tyoz3w8FqOShy0ld0mixIC7pPMt1QIsfrU86zjqJ5CWgPiSJBbyq3KM7s8KCjyWzjtBKGm2JwAzGNpWoekpIrck88/iV0zqlAGBKDK0vgH2ZBgICGzuL9SVYz2HqIAJzNwSCDewwXUSS+NK8rTe5PwSTqf0Zd4KLW8kppK1dsStxy8SmjlQ2Msm7zDfo7ARNDaP7y9bXQC+fP2WAAhI1cgOMJOtcg35X+1Ituo498bV2UkU8yyhvg/flBEGTS+GYEy+n28Uek8bM4JqCAfIBIZRkTdspELwsHLTkUWx00kYy0jSPzWqOHDrN03RbeuUiE4x1R5cwrInDOqH4WR+nr1iYUxbqkRRYSpQFFQW+cyuWZTqLbrlWP0O1XJBdSzWJDgjPFQkh/a6a6uafJZ+9bEZkn/BkGaUEeWTDNCiZmzQgTuoFff2RXxWaljgqY15Dc71Mg7g5lKzNB3F6Gy8eXOXsuFlsBnnPZFuiUoZ58PUl9DhyYeCCwmQJjaBSO3ZUYj7sxNDPBsXcJXGBwL1WcXi18sEsOHSA8EWbZCyAWJXbUWI2cs0iCu8K9oVGSk3I4XmkkOVkrmjpcRDBqF88bhZRgH1L+Pc2kMS7z69ks+qeN7zd2uGI+AOo75ux10/8Tieqm9XtW1dhAjTAM5IdAnhp2Z6acQeg7XdLLd9MApdh+deSMlyrgfBjn5laMZ/wVn7Tnj4Cuqai/Uj2V28Mh3qyUjg5+i6SSP2atup0/Hdq1jAo7TyAIJD7u+I/uiF7sOOjsXKbSf6mZBvImWn5tgf2bi6oUeg/kLohsWk6UM6U5aPddHvR+pJTI RX2ZVMxT LM8sILLEhNAMWvVo6Z4OTBczHQwW3nQszkaCaxIYtrm9cgnjNWVLpq8DU7Hj7cCdY1h0TiXBrl7fXuWpENeT6jAKjxJOT+nlqcyOpSqvWdEJmflQ+HaETCiiSqzfq+EKn+jAZYc+ygm+/4HOSWEP/x9iPVqgS5HY7S5bc2ZbAGbkvZjLh2KsK3GivcRhd8nl4BUQDyYGc8fT+daYweNurDRevXxGhgXXS8hVfDS84ui3gyuCPQr+rIY14iRN+T3Qi5AvuxVjTSdarfMzFqP+XPvXvVZs1r0BCES8MMM7ftJj3P3KnzwCSmnW8/gA/zbYk628dHO9PP0g6ymZuf4jvkZuhwPAMhYDCuownNeW/oF1vosWCuJ1+6KgOG+/eqDENT3mRuq52iTg3PsSsFDyk0JyBghYcvutoIKvZ0kZX0wLazQZZh6rdvtsbjzXdULT9NQj7Vzuth0sVFhUp4iHQJuloHx6NZuHfmtaXoUbIOUm/TB3oOBfF5IF2ZA9SfWxkX81aNzOyqJ1r4yRGkcrhwS/VWcFpbgYrMi+D X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Percpu counter's compare and add are separate functions: without locking around them (which would defeat their purpose), it has been possible to overflow the intended limit. Imagine all the other CPUs fallocating tmpfs huge pages to the limit, in between this CPU's compare and its add. I have not seen reports of that happening; but tmpfs's recent addition of dquot_alloc_block_nodirty() in between the compare and the add makes it even more likely, and I'd be uncomfortable to leave it unfixed. Introduce percpu_counter_limited_add(fbc, limit, amount) to prevent it. I believe this implementation is correct, and slightly more efficient than the combination of compare and add (taking the lock once rather than twice when nearing full - the last 128MiB of a tmpfs volume on a machine with 128 CPUs and 4KiB pages); but it does beg for a better design - when nearing full, there is no new batching, but the costly percpu counter sum across CPUs still has to be done, while locked. Follow __percpu_counter_sum()'s example, including cpu_dying_mask as well as cpu_online_mask: but shouldn't __percpu_counter_compare() and __percpu_counter_limited_add() then be adding a num_dying_cpus() to num_online_cpus(), when they calculate the maximum which could be held across CPUs? But the times when it matters would be vanishingly rare. 
Signed-off-by: Hugh Dickins
Cc: Tim Chen
Cc: Dave Chinner
Cc: Darrick J. Wong
Reviewed-by: Jan Kara
---
Tim, Dave, Darrick: I didn't want to waste your time on patches 1-7,
which are just internal to shmem, and do not affect this patch (which
applies to v6.6-rc and linux-next as is): but want to run this by you.

 include/linux/percpu_counter.h | 23 +++++++++++++++
 lib/percpu_counter.c           | 53 ++++++++++++++++++++++++++++++++++
 mm/shmem.c                     | 10 +++----
 3 files changed, 81 insertions(+), 5 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index d01351b1526f..8cb7c071bd5c 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -57,6 +57,8 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
 			      s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+bool __percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit,
+				  s64 amount, s32 batch);
 void percpu_counter_sync(struct percpu_counter *fbc);
 
 static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
@@ -69,6 +71,13 @@ static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 	percpu_counter_add_batch(fbc, amount, percpu_counter_batch);
 }
 
+static inline bool
+percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
+{
+	return __percpu_counter_limited_add(fbc, limit, amount,
+					    percpu_counter_batch);
+}
+
 /*
  * With percpu_counter_add_local() and percpu_counter_sub_local(), counts
  * are accumulated in local per cpu counter and not in fbc->count until
@@ -185,6 +194,20 @@ percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 	local_irq_restore(flags);
 }
 
+static inline bool
+percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
+{
+	unsigned long flags;
+	s64 count;
+
+	local_irq_save(flags);
+	count = fbc->count + amount;
+	if (count <= limit)
+		fbc->count = count;
+	local_irq_restore(flags);
+	return count <= limit;
+}
+
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
 static inline void
 percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 9073430dc865..58a3392f471b 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -278,6 +278,59 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 }
 EXPORT_SYMBOL(__percpu_counter_compare);
 
+/*
+ * Compare counter, and add amount if the total is within limit.
+ * Return true if amount was added, false if it would exceed limit.
+ */
+bool __percpu_counter_limited_add(struct percpu_counter *fbc,
+				  s64 limit, s64 amount, s32 batch)
+{
+	s64 count;
+	s64 unknown;
+	unsigned long flags;
+	bool good;
+
+	if (amount > limit)
+		return false;
+
+	local_irq_save(flags);
+	unknown = batch * num_online_cpus();
+	count = __this_cpu_read(*fbc->counters);
+
+	/* Skip taking the lock when safe */
+	if (abs(count + amount) <= batch &&
+	    fbc->count + unknown <= limit) {
+		this_cpu_add(*fbc->counters, amount);
+		local_irq_restore(flags);
+		return true;
+	}
+
+	raw_spin_lock(&fbc->lock);
+	count = fbc->count + amount;
+
+	/* Skip percpu_counter_sum() when safe */
+	if (count + unknown > limit) {
+		s32 *pcount;
+		int cpu;
+
+		for_each_cpu_or(cpu, cpu_online_mask, cpu_dying_mask) {
+			pcount = per_cpu_ptr(fbc->counters, cpu);
+			count += *pcount;
+		}
+	}
+
+	good = count <= limit;
+	if (good) {
+		count = __this_cpu_read(*fbc->counters);
+		fbc->count += count + amount;
+		__this_cpu_sub(*fbc->counters, count);
+	}
+
+	raw_spin_unlock(&fbc->lock);
+	local_irq_restore(flags);
+	return good;
+}
+
 static int __init percpu_counter_startup(void)
 {
 	int ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 4f4ab26bc58a..7cb72c747954 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -217,15 +217,15 @@ static int shmem_inode_acct_blocks(struct inode *inode, long pages)
 	might_sleep();	/* when quotas */
 	if (sbinfo->max_blocks) {
-		if (percpu_counter_compare(&sbinfo->used_blocks,
-					   sbinfo->max_blocks - pages) > 0)
+		if (!percpu_counter_limited_add(&sbinfo->used_blocks,
+						sbinfo->max_blocks, pages))
 			goto unacct;
 		err = dquot_alloc_block_nodirty(inode, pages);
-		if (err)
+		if (err) {
+			percpu_counter_sub(&sbinfo->used_blocks, pages);
 			goto unacct;
-
-		percpu_counter_add(&sbinfo->used_blocks, pages);
+		}
 	} else {
 		err = dquot_alloc_block_nodirty(inode, pages);
 		if (err)

From patchwork Thu Oct 12 04:40:09 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13418318
Date: Wed, 11 Oct 2023 21:40:09 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Wong" , Christian Brauner , Carlos Maiolino , Chuck Lever , Jan Kara , Matthew Wilcox , Johannes Weiner , Axel Rasmussen , Dennis Zhou , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 9/8] percpu_counter: extend _limited_add() to negative amounts In-Reply-To: Message-ID: <8f86083b-c452-95d4-365b-f16a2e4ebcd4@google.com> References: <2451f678-38b3-46c7-82fe-8eaf4d50a3a6@google.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: B1FF114000D X-Rspam-User: X-Stat-Signature: 7cj3zwwxnzuiddafukjfq8hdm9sn8rp6 X-Rspamd-Server: rspam03 X-HE-Tag: 1697085613-612527 X-HE-Meta: U2FsdGVkX19Q8tCU8U1yNUgZyzpbl/NL0zzzutvl8ZW02X/tere5JWoKX7mRdyQH7Whiz8dHY8Vh7Z7htYMnHrQ/RvL0B2d+MSq2Y/dioDdnKbixMaPrg3QNnyqV5Xt3yXC1E3PaoAGIqv8VYtJk1O1B4loHuk5dqcGHKxqAC+14l/60EjhL2bBT351X5wC8jeBHuCYoDSci+6bDedFyOIavYzi7HIc6obS+25oatX2CV74IJvRdEtnohlECJ5ptwZeIqzzGJjtBMYSfXgvB7XZ8I0bLSmklfREByGuvXVsvl5Wjnt4g3NeHViODQN3LAf3NOFswVTbDVF52AgRKl9FxLHsAV4d74Wf0OQ6O6lBQBUKj5RQzwWTqCOkWd1sdCbLmp945o3/SYXgoVo41ZKYP7gU6adJN3tkRPDJKcBTkJQFSaT4TVFXFFvdZhD+kC9+4uKjC5+0HqOefrOFB6mNfqYK+4QogRaPXbAe44/e3mE4WOua6NXyxB0pxcs3As4sX39iNs1a3hvYSOD1LtFxc1Bx7OHV7mVxcAwNUHqaVDPVGcd4XsNwrD2pr9UoJlhmSQ8pzLSysZOBODL/pBvYtrGM3476xBHZqMf1h5hzb9kCtVcpV609Tj3/1HFepD6wf/jetIAzPntWFH9fDmNEHdXCCS8Ssew9c1dwM0UfyPqeYYve/zbpYl8wVV51XgmEEDgWFNoIueVtd/nHFJbXUmAObcJ85o+LvhrTVGGtGtUvPnltN7uVYABEf3w3GxdMfJEXQyRmKLiZsLbf7OhVLkfHlvNN44fecgKHPfM7Ce1Ll/E6g5439RbzETwyyYhxcvFaZXjqH381BNpjr9ESEwrz2gvMQkfkaDt6KTxHqyWnWNLgNsaGOOpcTzNOCpYsUDim9Z+h66Gd9OgIpU+06vktFuWZmv7hKSsdDCJNCGEnc+WOJQU+u48OfaWlUvCmU86Gm92JVN3leDV1 2PEo/T7P OUs/d/USG9r62ZkFnIOWUfyC/egPvqjYaDG0DAwh0X51hp4/fiXGswhmXEwOB0lSX5zmn610ZPRDogZOAb3d29K5ZEIeksEQyH9is+OYuIhIaaIcSXV224JCwSiHh+ytYMkEy0D5+CrxPjCpLXyneujL4K1N7mbGyJXFOe4qw+PuoWx0zzvE2QRhTUBaTKxRrp+z+tZogsffVWSf2ot0A+EcBctP6PEwYk/VcW7+KTNbOd+takvZHCXlV9VtZKFDlrcjolT2meySNYLgLlYoFEuBA9pIHtDpHjafGlkTgZufzTNlEttr3IRjGsdSZoPZPDbOvzVaUQ4rBiARfD5yu/KsaS8fIhV9/Qj7iDkW8ro/s3mrGnobYZEElAMc+JFB4JrLptoWCN9gbBS6aMLhMCsk61WMA7+/2Iw5voOZr8efz2tQ= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Though tmpfs does not need it, percpu_counter_limited_add() can be twice as useful if it works sensibly with negative amounts (subs) - typically decrements towards a limit of 0 or nearby: as suggested by Dave Chinner. And in the course of that reworking, skip the percpu counter sum if it is already obvious that the limit would be passed: as suggested by Tim Chen. Extend the comment above __percpu_counter_limited_add(), defining the behaviour with positive and negative amounts, allowing negative limits, but not bothering about overflow beyond S64_MAX. 
Signed-off-by: Hugh Dickins
---
 include/linux/percpu_counter.h | 11 +++++--
 lib/percpu_counter.c           | 54 +++++++++++++++++++++++++---------
 2 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 8cb7c071bd5c..3a44dd1e33d2 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -198,14 +198,21 @@ static inline bool
 percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
 {
 	unsigned long flags;
+	bool good = false;
 	s64 count;
 
+	if (amount == 0)
+		return true;
+
 	local_irq_save(flags);
 	count = fbc->count + amount;
-	if (count <= limit)
+	if ((amount > 0 && count <= limit) ||
+	    (amount < 0 && count >= limit)) {
 		fbc->count = count;
+		good = true;
+	}
 	local_irq_restore(flags);
-	return count <= limit;
+	return good;
 }
 
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 58a3392f471b..44dd133594d4 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -279,8 +279,16 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 EXPORT_SYMBOL(__percpu_counter_compare);
 
 /*
- * Compare counter, and add amount if the total is within limit.
- * Return true if amount was added, false if it would exceed limit.
+ * Compare counter, and add amount if total is: less than or equal to limit if
+ * amount is positive, or greater than or equal to limit if amount is negative.
+ * Return true if amount is added, or false if total would be beyond the limit.
+ *
+ * Negative limit is allowed, but unusual.
+ * When negative amounts (subs) are given to percpu_counter_limited_add(),
+ * the limit would most naturally be 0 - but other limits are also allowed.
+ *
+ * Overflow beyond S64_MAX is not allowed for: counter, limit and amount
+ * are all assumed to be sane (far from S64_MIN and S64_MAX).
  */
 bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 				  s64 limit, s64 amount, s32 batch)
@@ -288,10 +296,10 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 	s64 count;
 	s64 unknown;
 	unsigned long flags;
-	bool good;
+	bool good = false;
 
-	if (amount > limit)
-		return false;
+	if (amount == 0)
+		return true;
 
 	local_irq_save(flags);
 	unknown = batch * num_online_cpus();
@@ -299,7 +307,8 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 
 	/* Skip taking the lock when safe */
 	if (abs(count + amount) <= batch &&
-	    fbc->count + unknown <= limit) {
+	    ((amount > 0 && fbc->count + unknown <= limit) ||
+	     (amount < 0 && fbc->count - unknown >= limit))) {
 		this_cpu_add(*fbc->counters, amount);
 		local_irq_restore(flags);
 		return true;
@@ -309,7 +318,19 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 	count = fbc->count + amount;
 
 	/* Skip percpu_counter_sum() when safe */
-	if (count + unknown > limit) {
+	if (amount > 0) {
+		if (count - unknown > limit)
+			goto out;
+		if (count + unknown <= limit)
+			good = true;
+	} else {
+		if (count + unknown < limit)
+			goto out;
+		if (count - unknown >= limit)
+			good = true;
+	}
+
+	if (!good) {
 		s32 *pcount;
 		int cpu;
 
@@ -317,15 +338,20 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 			pcount = per_cpu_ptr(fbc->counters, cpu);
 			count += *pcount;
 		}
+		if (amount > 0) {
+			if (count > limit)
+				goto out;
+		} else {
+			if (count < limit)
+				goto out;
+		}
+		good = true;
 	}
 
-	good = count <= limit;
-	if (good) {
-		count = __this_cpu_read(*fbc->counters);
-		fbc->count += count + amount;
-		__this_cpu_sub(*fbc->counters, count);
-	}
-
+	count = __this_cpu_read(*fbc->counters);
+	fbc->count += count + amount;
+	__this_cpu_sub(*fbc->counters, count);
+out:
 	raw_spin_unlock(&fbc->lock);
 	local_irq_restore(flags);
 	return good;