From patchwork Wed Aug 10 11:10:13 2022
X-Patchwork-Submitter: Imran Khan
X-Patchwork-Id: 12940446
From: Imran Khan
To: tj@kernel.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [RESEND PATCH 1/5] kernfs: Use a per-fs rwsem to protect per-fs list
 of kernfs_super_info.
Date: Wed, 10 Aug 2022 21:10:13 +1000
Message-Id: <20220810111017.2267160-2-imran.f.khan@oracle.com>
In-Reply-To: <20220810111017.2267160-1-imran.f.khan@oracle.com>
References: <20220810111017.2267160-1-imran.f.khan@oracle.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Right now the per-fs kernfs_rwsem protects the list of kernfs_super_info
instances for a kernfs_root. Since kernfs_rwsem synchronizes several other
operations across kernfs, and since most of those operations do not touch
kernfs_super_info, we can use a separate per-fs rwsem to synchronize access
to the list of kernfs_super_info. This reduces contention on kernfs_rwsem
and also allows operations that change or access the kernfs_super_info list
to proceed without contending for kernfs_rwsem.
Signed-off-by: Imran Khan
---
 fs/kernfs/dir.c             | 1 +
 fs/kernfs/file.c            | 2 ++
 fs/kernfs/kernfs-internal.h | 1 +
 fs/kernfs/mount.c           | 8 ++++----
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
index 1cc88ba6de90..45e1882bd51f 100644
--- a/fs/kernfs/dir.c
+++ b/fs/kernfs/dir.c
@@ -924,6 +924,7 @@ struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops,
 	idr_init(&root->ino_idr);
 	init_rwsem(&root->kernfs_rwsem);
 	INIT_LIST_HEAD(&root->supers);
+	init_rwsem(&root->supers_rwsem);

 	/*
 	 * On 64bit ino setups, id is ino.  On 32bit, low 32bits are ino.
diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index b3ec34386b43..812165d8ab4f 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -927,6 +927,7 @@ static void kernfs_notify_workfn(struct work_struct *work)

 	/* kick fsnotify */
 	down_write(&root->kernfs_rwsem);
+	down_write(&root->supers_rwsem);

 	list_for_each_entry(info, &kernfs_root(kn)->supers, node) {
 		struct kernfs_node *parent;
 		struct inode *p_inode = NULL;
@@ -962,6 +963,7 @@ static void kernfs_notify_workfn(struct work_struct *work)
 		iput(inode);
 	}

+	up_write(&root->supers_rwsem);
 	up_write(&root->kernfs_rwsem);
 	kernfs_put(kn);
diff --git a/fs/kernfs/kernfs-internal.h b/fs/kernfs/kernfs-internal.h
index 3ae214d02d44..3cd17c100d10 100644
--- a/fs/kernfs/kernfs-internal.h
+++ b/fs/kernfs/kernfs-internal.h
@@ -47,6 +47,7 @@ struct kernfs_root {
 	wait_queue_head_t	deactivate_waitq;
 	struct rw_semaphore	kernfs_rwsem;
+	struct rw_semaphore	supers_rwsem;
 };

 /* +1 to avoid triggering overflow warning when negating it */
diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
index d0859f72d2d6..d2be1c304715 100644
--- a/fs/kernfs/mount.c
+++ b/fs/kernfs/mount.c
@@ -347,9 +347,9 @@ int kernfs_get_tree(struct fs_context *fc)
 		}
 		sb->s_flags |= SB_ACTIVE;

-		down_write(&root->kernfs_rwsem);
+		down_write(&root->supers_rwsem);
 		list_add(&info->node, &info->root->supers);
-		up_write(&root->kernfs_rwsem);
+		up_write(&root->supers_rwsem);
 	}

 	fc->root = dget(sb->s_root);
@@ -376,9 +376,9 @@ void kernfs_kill_sb(struct super_block *sb)
 	struct kernfs_super_info *info = kernfs_info(sb);
 	struct kernfs_root *root = info->root;

-	down_write(&root->kernfs_rwsem);
+	down_write(&root->supers_rwsem);
 	list_del(&info->node);
-	up_write(&root->kernfs_rwsem);
+	up_write(&root->supers_rwsem);

 	/*
 	 * Remove the superblock from fs_supers/s_instances

From patchwork Wed Aug 10 11:10:14 2022
X-Patchwork-Submitter: Imran Khan
X-Patchwork-Id: 12940445
From: Imran Khan
To: tj@kernel.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [RESEND PATCH 2/5] kernfs: Change kernfs_rename_lock into a
 read-write lock.
Date: Wed, 10 Aug 2022 21:10:14 +1000
Message-Id: <20220810111017.2267160-3-imran.f.khan@oracle.com>
In-Reply-To: <20220810111017.2267160-1-imran.f.khan@oracle.com>
References: <20220810111017.2267160-1-imran.f.khan@oracle.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

kernfs_rename_lock protects a node's ->parent, and thus the kernfs
topology, so it can be taken in cases that rely on a stable kernfs
topology. Change it to a read-write lock for better scalability.
Suggested-by: Al Viro
Signed-off-by: Imran Khan
---
 fs/kernfs/dir.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
index 45e1882bd51f..d2a0b4acd073 100644
--- a/fs/kernfs/dir.c
+++ b/fs/kernfs/dir.c
@@ -17,7 +17,7 @@

 #include "kernfs-internal.h"

-static DEFINE_SPINLOCK(kernfs_rename_lock);	/* kn->parent and ->name */
+static DEFINE_RWLOCK(kernfs_rename_lock);	/* kn->parent and ->name */

 /*
  * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to
  * call pr_cont() while holding rename_lock. Because sometimes pr_cont()
@@ -192,9 +192,9 @@ int kernfs_name(struct kernfs_node *kn, char *buf, size_t buflen)
 	unsigned long flags;
 	int ret;

-	spin_lock_irqsave(&kernfs_rename_lock, flags);
+	read_lock_irqsave(&kernfs_rename_lock, flags);
 	ret = kernfs_name_locked(kn, buf, buflen);
-	spin_unlock_irqrestore(&kernfs_rename_lock, flags);
+	read_unlock_irqrestore(&kernfs_rename_lock, flags);
 	return ret;
 }

@@ -220,9 +220,9 @@ int kernfs_path_from_node(struct kernfs_node *to, struct kernfs_node *from,
 	unsigned long flags;
 	int ret;

-	spin_lock_irqsave(&kernfs_rename_lock, flags);
+	read_lock_irqsave(&kernfs_rename_lock, flags);
 	ret = kernfs_path_from_node_locked(to, from, buf, buflen);
-	spin_unlock_irqrestore(&kernfs_rename_lock, flags);
+	read_unlock_irqrestore(&kernfs_rename_lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kernfs_path_from_node);
@@ -288,10 +288,10 @@ struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn)
 	struct kernfs_node *parent;
 	unsigned long flags;

-	spin_lock_irqsave(&kernfs_rename_lock, flags);
+	read_lock_irqsave(&kernfs_rename_lock, flags);
 	parent = kn->parent;
 	kernfs_get(parent);
-	spin_unlock_irqrestore(&kernfs_rename_lock, flags);
+	read_unlock_irqrestore(&kernfs_rename_lock, flags);

 	return parent;
 }
@@ -1650,7 +1650,7 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
 	kernfs_get(new_parent);

 	/* rename_lock protects ->parent and ->name accessors */
-	spin_lock_irq(&kernfs_rename_lock);
+	write_lock_irq(&kernfs_rename_lock);

 	old_parent = kn->parent;
 	kn->parent = new_parent;
@@ -1661,7 +1661,7 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
 		kn->name = new_name;
 	}

-	spin_unlock_irq(&kernfs_rename_lock);
+	write_unlock_irq(&kernfs_rename_lock);

 	kn->hash = kernfs_name_hash(kn->name, kn->ns);
 	kernfs_link_sibling(kn);

From patchwork Wed Aug 10 11:10:15 2022
X-Patchwork-Submitter: Imran Khan
X-Patchwork-Id: 12940444
From: Imran Khan
To: tj@kernel.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [RESEND PATCH 3/5] kernfs: Introduce interface to access per-fs
 rwsem.
Date: Wed, 10 Aug 2022 21:10:15 +1000 Message-Id: <20220810111017.2267160-4-imran.f.khan@oracle.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220810111017.2267160-1-imran.f.khan@oracle.com> References: <20220810111017.2267160-1-imran.f.khan@oracle.com> X-ClientProxiedBy: SY3PR01CA0134.ausprd01.prod.outlook.com (2603:10c6:0:1b::19) To CO1PR10MB4468.namprd10.prod.outlook.com (2603:10b6:303:6c::24) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: f8c05e13-ac6a-47a0-0bab-08da7ac0f270 X-MS-TrafficTypeDiagnostic: BYAPR10MB3431:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: tCUM2p3eqx1P48GKKt0hGRGSEh6H7ATwXpMLIqHCFv2Hdu88osopo/c2dPLufdCarJGlKRotG+qOPx1kOfe/3r+Pk9ZkbE3LNTZNjW1tYI6Cnv76Vi6Jx7jsABhOxbOS2GZ216WiiplT8G+v19zPOkhj5Tf58lxgdYBHWUJm5OlGaX9j5E8hKTYSAxoQxJoRowFidXS/jOgE4j+qO2RMWw8XMKZWAVkHv4becN7gatQFudusjIdCQ8S1AR5oN3qZUu+KuSOnge0/+ZFzI1SzfBVtgr230tu/6xH0ewrBQiFRcrFAnGEACvPXLiRs9OrFCaZ/hIKUe2rJ/CI8yPic6qOS0g9DRbT+0jopINBZBA9knLckUkkb+b1JIpPB7ojPlee0T7TqVTIAcbVngzYxqA/RrTBfCVyFq4lqud3om6nfY2f+3laiCKJkNXKGs/5ySP23DbQBPvto7eSotYLYwT5QexptCB2h7XMM4W8ZgDJOsC7vGCW0PdPRgxXN3HwkvUEcbRH3vQj5p7z6idXk6oimRJWAlTUy3qG5vG/beoK18ZjXI5MsULQ2QnqR72kBHuD3by/Tmj6L2rFI+nL5rDQp0V11eOndDNkj2ph1/JT6M/rdizKDKt14VjsCkpmGKEijLT+GrDwBTETAAnH4X/tGx6SQcNX2+pR+XUsu0CdP/nOUUtyFuAefqbiZUgZL5xS4cOmarSR4jt2lsty2Be6m24CiHG+A2zfTu+D5mJtPcjkaLGBDXF18/RDVQABQGYrTA1knhkpqZur4sHuFyCzzNlDQdBKLti4aP3H4xMY= X-Forefront-Antispam-Report: 
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

The per-fs rwsem is used across kernfs for synchronization purposes.
Having an interface to access it not only avoids code duplication but
also makes it possible to change the underlying locking mechanism
without changing the lock users. For example, the next patch modifies
this interface to use hashed rwsems in place of the per-fs rwsem.
Signed-off-by: Imran Khan --- fs/kernfs/dir.c | 119 ++++++++++++++++++------------------ fs/kernfs/file.c | 5 +- fs/kernfs/inode.c | 26 ++++---- fs/kernfs/kernfs-internal.h | 78 +++++++++++++++++++++++ fs/kernfs/mount.c | 6 +- fs/kernfs/symlink.c | 6 +- 6 files changed, 159 insertions(+), 81 deletions(-) diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c index d2a0b4acd073..73f4ebc1464e 100644 --- a/fs/kernfs/dir.c +++ b/fs/kernfs/dir.c @@ -33,7 +33,7 @@ static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */ static bool kernfs_active(struct kernfs_node *kn) { - lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem); + kernfs_rwsem_assert_held(kn); return atomic_read(&kn->active) >= 0; } @@ -467,12 +467,20 @@ static void kernfs_drain(struct kernfs_node *kn) __releases(&kernfs_root(kn)->kernfs_rwsem) __acquires(&kernfs_root(kn)->kernfs_rwsem) { + struct rw_semaphore *rwsem; struct kernfs_root *root = kernfs_root(kn); - lockdep_assert_held_write(&root->kernfs_rwsem); + /** + * kn has the same root as its ancestor, so it can be used to get + * per-fs rwsem. 
+ */ + rwsem = kernfs_rwsem_ptr(kn); + + kernfs_rwsem_assert_held_write(kn); + WARN_ON_ONCE(kernfs_active(kn)); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); if (kernfs_lockdep(kn)) { rwsem_acquire(&kn->dep_map, 0, 0, _RET_IP_); @@ -491,7 +499,7 @@ static void kernfs_drain(struct kernfs_node *kn) kernfs_drain_open_files(kn); - down_write(&root->kernfs_rwsem); + kernfs_down_write(kn); } /** @@ -726,12 +734,12 @@ struct kernfs_node *kernfs_find_and_get_node_by_id(struct kernfs_root *root, int kernfs_add_one(struct kernfs_node *kn) { struct kernfs_node *parent = kn->parent; - struct kernfs_root *root = kernfs_root(parent); struct kernfs_iattrs *ps_iattr; + struct rw_semaphore *rwsem; bool has_ns; int ret; - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(parent); ret = -EINVAL; has_ns = kernfs_ns_enabled(parent); @@ -762,7 +770,7 @@ int kernfs_add_one(struct kernfs_node *kn) ps_iattr->ia_mtime = ps_iattr->ia_ctime; } - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); /* * Activate the new node unless CREATE_DEACTIVATED is requested. 
@@ -776,7 +784,7 @@ int kernfs_add_one(struct kernfs_node *kn) return 0; out_unlock: - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); return ret; } @@ -797,7 +805,7 @@ static struct kernfs_node *kernfs_find_ns(struct kernfs_node *parent, bool has_ns = kernfs_ns_enabled(parent); unsigned int hash; - lockdep_assert_held(&kernfs_root(parent)->kernfs_rwsem); + kernfs_rwsem_assert_held(parent); if (has_ns != (bool)ns) { WARN(1, KERN_WARNING "kernfs: ns %s in '%s' for '%s'\n", @@ -829,7 +837,7 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, size_t len; char *p, *name; - lockdep_assert_held_read(&kernfs_root(parent)->kernfs_rwsem); + kernfs_rwsem_assert_held_read(parent); spin_lock_irq(&kernfs_pr_cont_lock); @@ -867,12 +875,12 @@ struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent, const char *name, const void *ns) { struct kernfs_node *kn; - struct kernfs_root *root = kernfs_root(parent); + struct rw_semaphore *rwsem; - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); kn = kernfs_find_ns(parent, name, ns); kernfs_get(kn); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return kn; } @@ -892,12 +900,12 @@ struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent, const char *path, const void *ns) { struct kernfs_node *kn; - struct kernfs_root *root = kernfs_root(parent); + struct rw_semaphore *rwsem; - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); kn = kernfs_walk_ns(parent, path, ns); kernfs_get(kn); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return kn; } @@ -1062,7 +1070,7 @@ struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent, static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) { struct kernfs_node *kn; - struct kernfs_root *root; + struct rw_semaphore *rwsem; if (flags & LOOKUP_RCU) return -ECHILD; @@ -1078,13 +1086,12 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) 
parent = kernfs_dentry_node(dentry->d_parent); if (parent) { spin_unlock(&dentry->d_lock); - root = kernfs_root(parent); - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); if (kernfs_dir_changed(parent, dentry)) { - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return 0; } - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); } else spin_unlock(&dentry->d_lock); @@ -1095,8 +1102,7 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) } kn = kernfs_dentry_node(dentry); - root = kernfs_root(kn); - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(kn); /* The kernfs node has been deactivated */ if (!kernfs_active(kn)) @@ -1115,10 +1121,10 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) kernfs_info(dentry->d_sb)->ns != kn->ns) goto out_bad; - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return 1; out_bad: - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return 0; } @@ -1132,12 +1138,11 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir, { struct kernfs_node *parent = dir->i_private; struct kernfs_node *kn; - struct kernfs_root *root; struct inode *inode = NULL; const void *ns = NULL; + struct rw_semaphore *rwsem; - root = kernfs_root(parent); - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); if (kernfs_ns_enabled(parent)) ns = kernfs_info(dir->i_sb)->ns; @@ -1148,7 +1153,7 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir, * create a negative. 
*/ if (!kernfs_active(kn)) { - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return NULL; } inode = kernfs_get_inode(dir->i_sb, kn); @@ -1163,7 +1168,7 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir, */ if (!IS_ERR(inode)) kernfs_set_rev(parent, dentry); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); /* instantiate and hash (possibly negative) dentry */ return d_splice_alias(inode, dentry); @@ -1286,7 +1291,7 @@ static struct kernfs_node *kernfs_next_descendant_post(struct kernfs_node *pos, { struct rb_node *rbn; - lockdep_assert_held_write(&kernfs_root(root)->kernfs_rwsem); + kernfs_rwsem_assert_held_write(root); /* if first iteration, visit leftmost descendant which may be root */ if (!pos) @@ -1321,9 +1326,9 @@ static struct kernfs_node *kernfs_next_descendant_post(struct kernfs_node *pos, void kernfs_activate(struct kernfs_node *kn) { struct kernfs_node *pos; - struct kernfs_root *root = kernfs_root(kn); + struct rw_semaphore *rwsem; - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); pos = NULL; while ((pos = kernfs_next_descendant_post(pos, kn))) { @@ -1337,7 +1342,7 @@ void kernfs_activate(struct kernfs_node *kn) pos->flags |= KERNFS_ACTIVATED; } - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); } static void __kernfs_remove(struct kernfs_node *kn) @@ -1348,7 +1353,7 @@ static void __kernfs_remove(struct kernfs_node *kn) if (!kn) return; - lockdep_assert_held_write(&kernfs_root(kn)->kernfs_rwsem); + kernfs_rwsem_assert_held_write(kn); /* * This is for kernfs_remove_self() which plays with active ref @@ -1417,16 +1422,14 @@ static void __kernfs_remove(struct kernfs_node *kn) */ void kernfs_remove(struct kernfs_node *kn) { - struct kernfs_root *root; + struct rw_semaphore *rwsem; if (!kn) return; - root = kernfs_root(kn); - - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); __kernfs_remove(kn); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); } /** @@ -1512,9 +1515,9 @@ void 
kernfs_unbreak_active_protection(struct kernfs_node *kn) bool kernfs_remove_self(struct kernfs_node *kn) { bool ret; - struct kernfs_root *root = kernfs_root(kn); + struct rw_semaphore *rwsem; - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); kernfs_break_active_protection(kn); /* @@ -1542,9 +1545,9 @@ bool kernfs_remove_self(struct kernfs_node *kn) atomic_read(&kn->active) == KN_DEACTIVATED_BIAS) break; - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); schedule(); - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); } finish_wait(waitq, &wait); WARN_ON_ONCE(!RB_EMPTY_NODE(&kn->rb)); @@ -1557,7 +1560,7 @@ bool kernfs_remove_self(struct kernfs_node *kn) */ kernfs_unbreak_active_protection(kn); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); return ret; } @@ -1574,7 +1577,7 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name, const void *ns) { struct kernfs_node *kn; - struct kernfs_root *root; + struct rw_semaphore *rwsem; if (!parent) { WARN(1, KERN_WARNING "kernfs: can not remove '%s', no directory\n", @@ -1582,14 +1585,14 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name, return -ENOENT; } - root = kernfs_root(parent); - down_write(&root->kernfs_rwsem); + + rwsem = kernfs_down_write(parent); kn = kernfs_find_ns(parent, name, ns); if (kn) __kernfs_remove(kn); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); if (kn) return 0; @@ -1608,16 +1611,15 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, const char *new_name, const void *new_ns) { struct kernfs_node *old_parent; - struct kernfs_root *root; const char *old_name = NULL; + struct rw_semaphore *rwsem; int error; /* can't move or rename root */ if (!kn->parent) return -EINVAL; - root = kernfs_root(kn); - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); error = -ENOENT; if (!kernfs_active(kn) || !kernfs_active(new_parent) || @@ -1671,7 +1673,7 @@ int 
kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, error = 0; out: - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); return error; } @@ -1742,14 +1744,13 @@ static int kernfs_fop_readdir(struct file *file, struct dir_context *ctx) struct dentry *dentry = file->f_path.dentry; struct kernfs_node *parent = kernfs_dentry_node(dentry); struct kernfs_node *pos = file->private_data; - struct kernfs_root *root; const void *ns = NULL; + struct rw_semaphore *rwsem; if (!dir_emit_dots(file, ctx)) return 0; - root = kernfs_root(parent); - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); if (kernfs_ns_enabled(parent)) ns = kernfs_info(dentry->d_sb)->ns; @@ -1766,12 +1767,12 @@ static int kernfs_fop_readdir(struct file *file, struct dir_context *ctx) file->private_data = pos; kernfs_get(pos); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); if (!dir_emit(ctx, name, len, ino, type)) return 0; - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); } - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); file->private_data = NULL; ctx->pos = INT_MAX; return 0; diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c index 812165d8ab4f..669619e01be2 100644 --- a/fs/kernfs/file.c +++ b/fs/kernfs/file.c @@ -911,6 +911,7 @@ static void kernfs_notify_workfn(struct work_struct *work) struct kernfs_node *kn; struct kernfs_super_info *info; struct kernfs_root *root; + struct rw_semaphore *rwsem; repeat: /* pop one off the notify_list */ spin_lock_irq(&kernfs_notify_lock); @@ -925,7 +926,7 @@ static void kernfs_notify_workfn(struct work_struct *work) root = kernfs_root(kn); /* kick fsnotify */ - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); down_write(&root->supers_rwsem); list_for_each_entry(info, &kernfs_root(kn)->supers, node) { @@ -965,7 +966,7 @@ static void kernfs_notify_workfn(struct work_struct *work) } up_write(&root->supers_rwsem); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); 
kernfs_put(kn); goto repeat; } diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c index 3d783d80f5da..efe5ae98abf4 100644 --- a/fs/kernfs/inode.c +++ b/fs/kernfs/inode.c @@ -99,11 +99,11 @@ int __kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr) int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr) { int ret; - struct kernfs_root *root = kernfs_root(kn); + struct rw_semaphore *rwsem; - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); ret = __kernfs_setattr(kn, iattr); - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); return ret; } @@ -112,14 +112,13 @@ int kernfs_iop_setattr(struct user_namespace *mnt_userns, struct dentry *dentry, { struct inode *inode = d_inode(dentry); struct kernfs_node *kn = inode->i_private; - struct kernfs_root *root; + struct rw_semaphore *rwsem; int error; if (!kn) return -EINVAL; - root = kernfs_root(kn); - down_write(&root->kernfs_rwsem); + rwsem = kernfs_down_write(kn); error = setattr_prepare(&init_user_ns, dentry, iattr); if (error) goto out; @@ -132,7 +131,7 @@ int kernfs_iop_setattr(struct user_namespace *mnt_userns, struct dentry *dentry, setattr_copy(&init_user_ns, inode, iattr); out: - up_write(&root->kernfs_rwsem); + kernfs_up_write(rwsem); return error; } @@ -187,14 +186,14 @@ int kernfs_iop_getattr(struct user_namespace *mnt_userns, { struct inode *inode = d_inode(path->dentry); struct kernfs_node *kn = inode->i_private; - struct kernfs_root *root = kernfs_root(kn); + struct rw_semaphore *rwsem; - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(kn); spin_lock(&inode->i_lock); kernfs_refresh_inode(kn, inode); generic_fillattr(&init_user_ns, inode, stat); spin_unlock(&inode->i_lock); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return 0; } @@ -277,22 +276,21 @@ void kernfs_evict_inode(struct inode *inode) int kernfs_iop_permission(struct user_namespace *mnt_userns, struct inode *inode, int mask) { + struct rw_semaphore *rwsem; struct kernfs_node 
*kn; - struct kernfs_root *root; int ret; if (mask & MAY_NOT_BLOCK) return -ECHILD; kn = inode->i_private; - root = kernfs_root(kn); - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(kn); spin_lock(&inode->i_lock); kernfs_refresh_inode(kn, inode); ret = generic_permission(&init_user_ns, inode, mask); spin_unlock(&inode->i_lock); - up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return ret; } diff --git a/fs/kernfs/kernfs-internal.h b/fs/kernfs/kernfs-internal.h index 3cd17c100d10..0babc3dc4f4a 100644 --- a/fs/kernfs/kernfs-internal.h +++ b/fs/kernfs/kernfs-internal.h @@ -169,4 +169,82 @@ extern const struct inode_operations kernfs_symlink_iops; * kernfs locks */ extern struct kernfs_global_locks *kernfs_locks; + +static inline struct rw_semaphore *kernfs_rwsem_ptr(struct kernfs_node *kn) +{ + struct kernfs_root *root = kernfs_root(kn); + + return &root->kernfs_rwsem; +} + +static inline void kernfs_rwsem_assert_held(struct kernfs_node *kn) +{ + lockdep_assert_held(kernfs_rwsem_ptr(kn)); +} + +static inline void kernfs_rwsem_assert_held_write(struct kernfs_node *kn) +{ + lockdep_assert_held_write(kernfs_rwsem_ptr(kn)); +} + +static inline void kernfs_rwsem_assert_held_read(struct kernfs_node *kn) +{ + lockdep_assert_held_read(kernfs_rwsem_ptr(kn)); +} + +/** + * kernfs_down_write() - Acquire kernfs rwsem + * + * @kn: kernfs_node for which rwsem needs to be taken + * + * Return: pointer to acquired rwsem + */ +static inline struct rw_semaphore *kernfs_down_write(struct kernfs_node *kn) +{ + struct rw_semaphore *rwsem = kernfs_rwsem_ptr(kn); + + down_write(rwsem); + + return rwsem; +} + +/** + * kernfs_up_write - Release kernfs rwsem + * + * @rwsem: address of rwsem to release + * + * Return: void + */ +static inline void kernfs_up_write(struct rw_semaphore *rwsem) +{ + up_write(rwsem); +} + +/** + * kernfs_down_read() - Acquire kernfs rwsem + * + * @kn: kernfs_node for which rwsem needs to be taken + * + * Return: pointer to acquired rwsem + */ 
+static inline struct rw_semaphore *kernfs_down_read(struct kernfs_node *kn) +{ + struct rw_semaphore *rwsem = kernfs_rwsem_ptr(kn); + + down_read(rwsem); + + return rwsem; +} + +/** + * kernfs_up_read - Release kernfs rwsem + * + * @rwsem: address of rwsem to release + * + * Return: void + */ +static inline void kernfs_up_read(struct rw_semaphore *rwsem) +{ + up_read(rwsem); +} #endif /* __KERNFS_INTERNAL_H */ diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c index d2be1c304715..3c5334b74f36 100644 --- a/fs/kernfs/mount.c +++ b/fs/kernfs/mount.c @@ -237,9 +237,9 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn, static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *kfc) { struct kernfs_super_info *info = kernfs_info(sb); - struct kernfs_root *kf_root = kfc->root; struct inode *inode; struct dentry *root; + struct rw_semaphore *rwsem; info->sb = sb; /* Userspace would break if executables or devices appear on sysfs */ @@ -257,9 +257,9 @@ static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *k sb->s_shrink.seeks = 0; /* get root inode, initialize and unlock it */ - down_read(&kf_root->kernfs_rwsem); + rwsem = kernfs_down_read(info->root->kn); inode = kernfs_get_inode(sb, info->root->kn); - up_read(&kf_root->kernfs_rwsem); + kernfs_up_read(rwsem); if (!inode) { pr_debug("kernfs: could not get root inode\n"); return -ENOMEM; diff --git a/fs/kernfs/symlink.c b/fs/kernfs/symlink.c index 0ab13824822f..9d4103602554 100644 --- a/fs/kernfs/symlink.c +++ b/fs/kernfs/symlink.c @@ -113,12 +113,12 @@ static int kernfs_getlink(struct inode *inode, char *path) struct kernfs_node *kn = inode->i_private; struct kernfs_node *parent = kn->parent; struct kernfs_node *target = kn->symlink.target_kn; - struct kernfs_root *root = kernfs_root(parent); + struct rw_semaphore *rwsem; int error; - down_read(&root->kernfs_rwsem); + rwsem = kernfs_down_read(parent); error = kernfs_get_target_path(parent, target, path); - 
up_read(&root->kernfs_rwsem); + kernfs_up_read(rwsem); return error; }

From patchwork Wed Aug 10 11:10:16 2022
X-Patchwork-Submitter: Imran Khan
X-Patchwork-Id: 12940449
From: Imran Khan
To: tj@kernel.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [RESEND PATCH 4/5] kernfs: Replace per-fs rwsem with hashed rwsems.
Date: Wed, 10 Aug 2022 21:10:16 +1000
Message-Id: <20220810111017.2267160-5-imran.f.khan@oracle.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220810111017.2267160-1-imran.f.khan@oracle.com>
References: <20220810111017.2267160-1-imran.f.khan@oracle.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

Having a single rwsem to synchronize all operations across a kernfs
based file system (cgroup, sysfs etc.) does not scale well. The
contention around this single rwsem becomes more apparent on large
systems with a few hundred CPUs, where most CPUs have running tasks
that are opening, accessing or closing sysfs files at any point of
time. Using hashed rwsems in place of the per-fs rwsem can
significantly reduce this contention and hence provide better
scalability.
Moreover, as these hashed rwsems are not part of kernfs_node objects, we will not see any significant change in memory utilization of kernfs based file systems like sysfs, cgroupfs etc. Modify the interface introduced in the previous patch to make use of hashed rwsems. Just like the earlier change, use the kernfs_node address as the hashing key. Since we are getting rid of the per-fs lock, in certain cases we may need to acquire the locks corresponding to multiple nodes; in such cases of nested locking, locks are taken in order of their addresses. Introduce helpers that acquire the rwsems corresponding to multiple nodes for such cases. For operations that involve finding the node first and then operating on it (for example, operations involving find_and_get_ns), acquiring the rwsem for the node being searched is not possible. Such operations need to make sure that a concurrent remove does not delete the found node. Introduce a per-fs mutex (kernfs_topo_mutex) that can be used to synchronize these operations against parallel removal/renaming of the involved node. Also replace the usage of kernfs_pr_cont_buf with another global buffer in kernfs_walk_ns. This is because kernfs_pr_cont_buf is protected by a spinlock, but kernfs_walk_ns now needs to acquire the hashed rwsems corresponding to nodes further down the path, which cannot be done under a spinlock. Using another buffer to hold the path in kernfs_walk_ns and protecting it with kernfs_topo_mutex (mentioned earlier) avoids the need for the spinlock and also ensures that there is no topology change. Replacing the global mutex and spinlocks with hashed ones (as done in previous changes) and the global rwsem with hashed rwsems (as done in this change) reduces contention around kernfs and results in better performance numbers.
For example on a system with 384 cores, if I run 200 instances of an application which mostly executes the following loop:

for (int loop = 0; loop < 100; loop++) {
	for (int port_num = 1; port_num < 2; port_num++) {
		for (int gid_index = 0; gid_index < 254; gid_index++) {
			char ret_buf[64], ret_buf_lo[64];
			char gid_file_path[1024];
			int ret_len;
			int ret_fd;
			ssize_t ret_rd;
			unsigned int saved_errno;

			memset(ret_buf, 0, sizeof(ret_buf));
			memset(gid_file_path, 0, sizeof(gid_file_path));
			ret_len = snprintf(gid_file_path, sizeof(gid_file_path),
					   "/sys/class/infiniband/%s/ports/%d/gids/%d",
					   dev_name, port_num, gid_index);
			ret_fd = open(gid_file_path, O_RDONLY | O_CLOEXEC);
			if (ret_fd < 0) {
				printf("Failed to open %s\n", gid_file_path);
				continue;
			}
			/* Read the GID */
			ret_rd = read(ret_fd, ret_buf, 40);
			if (ret_rd == -1) {
				saved_errno = errno;
				printf("Failed to read from file %s, errno: %u\n",
				       gid_file_path, saved_errno);
				close(ret_fd);
				continue;
			}
			close(ret_fd);
		}
	}
}

I can see contention around the above mentioned locks as follows:

- 54.07% 53.60% showgids [kernel.kallsyms] [k] osq_lock
   - 53.60% __libc_start_main
      - 32.29% __GI___libc_open
           entry_SYSCALL_64_after_hwframe
           do_syscall_64
           sys_open
           do_sys_open
           do_filp_open
           path_openat
           vfs_open
           do_dentry_open
           kernfs_fop_open
           mutex_lock
         - __mutex_lock_slowpath
            - 32.23% __mutex_lock.isra.5
                 osq_lock
      - 21.31% __GI___libc_close
           entry_SYSCALL_64_after_hwframe
           do_syscall_64
           exit_to_usermode_loop
           task_work_run
           ____fput
           __fput
           kernfs_fop_release
           kernfs_put_open_node.isra.8
           mutex_lock
         - __mutex_lock_slowpath
            - 21.28% __mutex_lock.isra.5
                 osq_lock
- 10.49% 10.39% showgids [kernel.kallsyms] [k] down_read
     10.39% __libc_start_main
        __GI___libc_open
        entry_SYSCALL_64_after_hwframe
        do_syscall_64
        sys_open
        do_sys_open
        do_filp_open
      - path_openat
         - 9.72% link_path_walk
            - 5.21% inode_permission
               - __inode_permission
                  - 5.21% kernfs_iop_permission
                       down_read
            - 4.08% walk_component
                 lookup_fast
               - d_revalidate.part.24
                  - 4.08% kernfs_dop_revalidate
- 7.48% 7.41% showgids [kernel.kallsyms] [k] up_read
     7.41%
__libc_start_main
        __GI___libc_open
        entry_SYSCALL_64_after_hwframe
        do_syscall_64
        sys_open
        do_sys_open
        do_filp_open
      - path_openat
         - 7.01% link_path_walk
            - 4.12% inode_permission
               - __inode_permission
                  - 4.12% kernfs_iop_permission
                       up_read
            - 2.61% walk_component
                 lookup_fast
               - d_revalidate.part.24
                  - 2.61% kernfs_dop_revalidate

Moreover this run of 200 application instances takes 32-34 seconds to complete.

With the patched kernel and on the same test setup, we no longer see contention around osq_lock (i.e. kernfs_open_file_mutex), and contention around the per-fs kernfs_rwsem has reduced significantly as well. This can be seen in the following perf snippet:

- 1.66% 1.65% showgids [kernel.kallsyms] [k] down_read
     1.65% __libc_start_main
        __GI___libc_open
        entry_SYSCALL_64_after_hwframe
        do_syscall_64
        sys_open
        do_sys_open
        do_filp_open
      - path_openat
         - 1.62% link_path_walk
            - 0.98% inode_permission
               - __inode_permission
                  + 0.98% kernfs_iop_permission
            - 0.52% walk_component
                 lookup_fast
               - d_revalidate.part.24
                  - 0.52% kernfs_dop_revalidate
- 1.12% 1.11% showgids [kernel.kallsyms] [k] up_read
     1.11% __libc_start_main
        __GI___libc_open
        entry_SYSCALL_64_after_hwframe
        do_syscall_64
        sys_open
        do_sys_open
        do_filp_open
      - path_openat
         - 1.11% link_path_walk
            - 0.69% inode_permission
               - __inode_permission
                  - 0.69% kernfs_iop_permission
                       up_read

Moreover the test execution time has reduced from 32-34 seconds to 18-19 seconds.
Signed-off-by: Imran Khan --- fs/kernfs/Makefile | 2 +- fs/kernfs/dir.c | 161 +++++++++++++++++----- fs/kernfs/inode.c | 20 +++ fs/kernfs/kernfs-internal.c | 259 ++++++++++++++++++++++++++++++++++++ fs/kernfs/kernfs-internal.h | 47 ++++++- fs/kernfs/mount.c | 9 ++ fs/kernfs/symlink.c | 13 +- include/linux/kernfs.h | 1 + 8 files changed, 474 insertions(+), 38 deletions(-) create mode 100644 fs/kernfs/kernfs-internal.c diff --git a/fs/kernfs/Makefile b/fs/kernfs/Makefile index 4ca54ff54c98..778da6b118e9 100644 --- a/fs/kernfs/Makefile +++ b/fs/kernfs/Makefile @@ -3,4 +3,4 @@ # Makefile for the kernfs pseudo filesystem # -obj-y := mount.o inode.o dir.o file.o symlink.o +obj-y := mount.o inode.o dir.o file.o symlink.o kernfs-internal.o diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c index 73f4ebc1464e..7d02c3dd2c20 100644 --- a/fs/kernfs/dir.c +++ b/fs/kernfs/dir.c @@ -17,7 +17,7 @@ #include "kernfs-internal.h" -static DEFINE_RWLOCK(kernfs_rename_lock); /* kn->parent and ->name */ +DEFINE_RWLOCK(kernfs_rename_lock); /* kn->parent and ->name */ /* * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to * call pr_cont() while holding rename_lock. Because sometimes pr_cont() @@ -27,13 +27,13 @@ static DEFINE_RWLOCK(kernfs_rename_lock); /* kn->parent and ->name */ */ static DEFINE_SPINLOCK(kernfs_pr_cont_lock); static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by pr_cont_lock */ +static char kernfs_path_buf[PATH_MAX]; /* protected by kernfs_topo_mutex */ static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */ #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb) static bool kernfs_active(struct kernfs_node *kn) { - kernfs_rwsem_assert_held(kn); return atomic_read(&kn->active) >= 0; } @@ -458,14 +458,15 @@ void kernfs_put_active(struct kernfs_node *kn) /** * kernfs_drain - drain kernfs_node * @kn: kernfs_node to drain + * @anc: ancestor of kernfs_node to drain * * Drain existing usages and nuke all existing mmaps of @kn. 
Multiple * removers may invoke this function concurrently on @kn and all will * return after draining is complete. */ -static void kernfs_drain(struct kernfs_node *kn) - __releases(&kernfs_root(kn)->kernfs_rwsem) - __acquires(&kernfs_root(kn)->kernfs_rwsem) +static void kernfs_drain(struct kernfs_node *kn, struct kernfs_node *anc) + __releases(kernfs_rwsem_ptr(anc)) + __acquires(kernfs_rwsem_ptr(anc)) { struct rw_semaphore *rwsem; struct kernfs_root *root = kernfs_root(kn); @@ -476,10 +477,11 @@ static void kernfs_drain(struct kernfs_node *kn) */ rwsem = kernfs_rwsem_ptr(kn); - kernfs_rwsem_assert_held_write(kn); + kernfs_rwsem_assert_held_write(anc); WARN_ON_ONCE(kernfs_active(kn)); + rwsem = kernfs_rwsem_ptr(anc); kernfs_up_write(rwsem); if (kernfs_lockdep(kn)) { @@ -499,7 +501,7 @@ static void kernfs_drain(struct kernfs_node *kn) kernfs_drain_open_files(kn); - kernfs_down_write(kn); + kernfs_down_write(anc); } /** @@ -739,6 +741,11 @@ int kernfs_add_one(struct kernfs_node *kn) bool has_ns; int ret; + /** + * The node being added is not active at this point of time and may + * be activated later depending on CREATE_DEACTIVATED flag. So at + * this point of time just locking the parent is enough.
+ */ rwsem = kernfs_down_write(parent); ret = -EINVAL; @@ -836,28 +843,35 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, { size_t len; char *p, *name; + struct rw_semaphore *rwsem; kernfs_rwsem_assert_held_read(parent); - spin_lock_irq(&kernfs_pr_cont_lock); + lockdep_assert_held(&kernfs_root(parent)->kernfs_topo_mutex); - len = strlcpy(kernfs_pr_cont_buf, path, sizeof(kernfs_pr_cont_buf)); + /* Caller has kernfs_topo_mutex so topology will not change */ + p = kernfs_path_buf; + len = strlcpy(p, path, PATH_MAX); - if (len >= sizeof(kernfs_pr_cont_buf)) { - spin_unlock_irq(&kernfs_pr_cont_lock); + if (len >= PATH_MAX) { + /* kernfs_path_buf is a static buffer; nothing to free */ return NULL; } - p = kernfs_pr_cont_buf; - + rwsem = kernfs_rwsem_ptr(parent); while ((name = strsep(&p, "/")) && parent) { if (*name == '\0') continue; parent = kernfs_find_ns(parent, name, ns); + /* + * Release rwsem for node whose child RB tree has been + * traversed. + */ + kernfs_up_read(rwsem); + if (parent) /* Acquire rwsem before traversing child RB tree */ + rwsem = kernfs_down_read(parent); } - spin_unlock_irq(&kernfs_pr_cont_lock); - return parent; } @@ -876,11 +890,20 @@ struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent, { struct kernfs_node *kn; struct rw_semaphore *rwsem; + struct kernfs_root *root = kernfs_root(parent); + /** + * We don't have address of kernfs_node (that is being searched) + * yet. Acquiring root->kernfs_topo_mutex and releasing it after + * pinning the found kernfs_node, ensures that found kernfs_node + * will not disappear due to a parallel remove operation.
+ */ + mutex_lock(&root->kernfs_topo_mutex); rwsem = kernfs_down_read(parent); kn = kernfs_find_ns(parent, name, ns); kernfs_get(kn); kernfs_up_read(rwsem); + mutex_unlock(&root->kernfs_topo_mutex); return kn; } @@ -901,11 +924,26 @@ struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent, { struct kernfs_node *kn; struct rw_semaphore *rwsem; + struct kernfs_root *root = kernfs_root(parent); + /** + * We don't have address of kernfs_node (that is being searched) + * yet. Acquiring root->kernfs_topo_mutex and releasing it after + * pinning the found kernfs_node, ensures that found kernfs_node + * will not disappear due to a parallel remove operation. + */ + mutex_lock(&root->kernfs_topo_mutex); rwsem = kernfs_down_read(parent); kn = kernfs_walk_ns(parent, path, ns); kernfs_get(kn); - kernfs_up_read(rwsem); + if (kn) + /* Release lock taken under kernfs_walk_ns */ + kernfs_up_read(kernfs_rwsem_ptr(kn)); + else + /* Release parent lock because walk_ns bailed out early */ + kernfs_up_read(rwsem); + + mutex_unlock(&root->kernfs_topo_mutex); return kn; } @@ -930,9 +968,9 @@ struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops, return ERR_PTR(-ENOMEM); idr_init(&root->ino_idr); - init_rwsem(&root->kernfs_rwsem); INIT_LIST_HEAD(&root->supers); init_rwsem(&root->supers_rwsem); + mutex_init(&root->kernfs_topo_mutex); /* * On 64bit ino setups, id is ino. On 32bit, low 32bits are ino. @@ -1102,6 +1140,11 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) } kn = kernfs_dentry_node(dentry); + /** + * For dentry revalidation just acquiring kernfs_node's rwsem for + * reading should be enough. If a competing rename or remove wins + * one of the checks below will fail. 
+ */ rwsem = kernfs_down_read(kn); /* The kernfs node has been deactivated */ @@ -1141,24 +1184,35 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir, struct inode *inode = NULL; const void *ns = NULL; struct rw_semaphore *rwsem; + struct kernfs_root *root = kernfs_root(parent); + /** + * We don't have address of kernfs_node (that is being searched) + * yet. So take root->kernfs_topo_mutex to avoid parallel removal of + * found kernfs_node. + */ + mutex_lock(&root->kernfs_topo_mutex); rwsem = kernfs_down_read(parent); if (kernfs_ns_enabled(parent)) ns = kernfs_info(dir->i_sb)->ns; kn = kernfs_find_ns(parent, dentry->d_name.name, ns); + kernfs_up_read(rwsem); /* attach dentry and inode */ if (kn) { /* Inactive nodes are invisible to the VFS so don't * create a negative. */ + rwsem = kernfs_down_read(kn); if (!kernfs_active(kn)) { kernfs_up_read(rwsem); + mutex_unlock(&root->kernfs_topo_mutex); return NULL; } inode = kernfs_get_inode(dir->i_sb, kn); if (!inode) inode = ERR_PTR(-ENOMEM); + kernfs_up_read(rwsem); } /* * Needed for negative dentry validation. @@ -1166,9 +1220,11 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir, * or transforms from positive dentry in dentry_unlink_inode() * called from vfs_rmdir(). */ + rwsem = kernfs_down_read(parent); if (!IS_ERR(inode)) kernfs_set_rev(parent, dentry); kernfs_up_read(rwsem); + mutex_unlock(&root->kernfs_topo_mutex); /* instantiate and hash (possibly negative) dentry */ return d_splice_alias(inode, dentry); @@ -1348,19 +1404,26 @@ void kernfs_activate(struct kernfs_node *kn) static void __kernfs_remove(struct kernfs_node *kn) { struct kernfs_node *pos; + struct rw_semaphore *rwsem; + struct kernfs_root *root; /* Short-circuit if non-root @kn has already finished removal. */ if (!kn) return; - kernfs_rwsem_assert_held_write(kn); + root = kernfs_root(kn); /* * This is for kernfs_remove_self() which plays with active ref * after removal. 
*/ - if (kn->parent && RB_EMPTY_NODE(&kn->rb)) + mutex_lock(&root->kernfs_topo_mutex); + rwsem = kernfs_down_write(kn); + if (kn->parent && RB_EMPTY_NODE(&kn->rb)) { + kernfs_up_write(rwsem); + mutex_unlock(&root->kernfs_topo_mutex); return; + } pr_debug("kernfs %s: removing\n", kn->name); @@ -1370,8 +1433,11 @@ static void __kernfs_remove(struct kernfs_node *kn) if (kernfs_active(pos)) atomic_add(KN_DEACTIVATED_BIAS, &pos->active); + kernfs_up_write(rwsem); + /* deactivate and unlink the subtree node-by-node */ do { + rwsem = kernfs_down_write(kn); pos = kernfs_leftmost_descendant(kn); /* @@ -1389,10 +1455,25 @@ static void __kernfs_remove(struct kernfs_node *kn) * error paths without worrying about draining. */ if (kn->flags & KERNFS_ACTIVATED) - kernfs_drain(pos); + kernfs_drain(pos, kn); else WARN_ON_ONCE(atomic_read(&kn->active) != KN_DEACTIVATED_BIAS); + kernfs_up_write(rwsem); + + /** + * By now the node and all of its descendants have been deactivated. + * Once a descendant has been drained, acquire its parent's lock + * and unlink it from the parent's children rb tree. + * We drop kn's lock before acquiring pos->parent's lock to avoid + * the deadlock that would occur if pos->parent and kn hash to the + * same lock. Dropping kn's lock is safe because kn is in a + * deactivated state. Further, root->kernfs_topo_mutex ensures that + * we will not have concurrent instances of __kernfs_remove. + */ + if (pos->parent) + rwsem = kernfs_down_write(pos->parent); + /* * kernfs_unlink_sibling() succeeds once per node. Use it * to decide who's responsible for cleanups.
@@ -1410,8 +1491,12 @@ static void __kernfs_remove(struct kernfs_node *kn) kernfs_put(pos); } + if (pos->parent) + kernfs_up_write(rwsem); kernfs_put(pos); } while (pos != kn); + + mutex_unlock(&root->kernfs_topo_mutex); } /** @@ -1422,14 +1507,10 @@ static void __kernfs_remove(struct kernfs_node *kn) */ void kernfs_remove(struct kernfs_node *kn) { - struct rw_semaphore *rwsem; - if (!kn) return; - rwsem = kernfs_down_write(kn); __kernfs_remove(kn); - kernfs_up_write(rwsem); } /** @@ -1531,9 +1612,11 @@ bool kernfs_remove_self(struct kernfs_node *kn) */ if (!(kn->flags & KERNFS_SUICIDAL)) { kn->flags |= KERNFS_SUICIDAL; + kernfs_up_write(rwsem); __kernfs_remove(kn); kn->flags |= KERNFS_SUICIDED; ret = true; + rwsem = kernfs_down_write(kn); } else { wait_queue_head_t *waitq = &kernfs_root(kn)->deactivate_waitq; DEFINE_WAIT(wait); @@ -1588,11 +1671,17 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name, rwsem = kernfs_down_write(parent); + /** + * Since the node being searched will be removed eventually, + * we don't need to take root->kernfs_topo_mutex. + * Even if a parallel remove succeeds, the subsequent __kernfs_remove + * will detect it and bail-out early. 
+ */ kn = kernfs_find_ns(parent, name, ns); - if (kn) - __kernfs_remove(kn); kernfs_up_write(rwsem); + if (kn) + __kernfs_remove(kn); if (kn) return 0; @@ -1612,14 +1701,26 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, { struct kernfs_node *old_parent; const char *old_name = NULL; - struct rw_semaphore *rwsem; + struct kernfs_rwsem_token token; int error; + struct kernfs_root *root = kernfs_root(kn); /* can't move or rename root */ if (!kn->parent) return -EINVAL; - rwsem = kernfs_down_write(kn); + mutex_lock(&root->kernfs_topo_mutex); + old_parent = kn->parent; + kernfs_get(old_parent); + kernfs_down_write_triple_nodes(kn, old_parent, new_parent, &token); + while (old_parent != kn->parent) { + kernfs_put(old_parent); + kernfs_up_write_triple_nodes(kn, old_parent, new_parent, &token); + old_parent = kn->parent; + kernfs_get(old_parent); + kernfs_down_write_triple_nodes(kn, old_parent, new_parent, &token); + } + kernfs_put(old_parent); error = -ENOENT; if (!kernfs_active(kn) || !kernfs_active(new_parent) || @@ -1654,7 +1755,6 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, /* rename_lock protects ->parent and ->name accessors */ write_lock_irq(&kernfs_rename_lock); - old_parent = kn->parent; kn->parent = new_parent; kn->ns = new_ns; @@ -1673,7 +1773,8 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent, error = 0; out: - kernfs_up_write(rwsem); + mutex_unlock(&root->kernfs_topo_mutex); + kernfs_up_write_triple_nodes(kn, new_parent, old_parent, &token); return error; } diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c index efe5ae98abf4..36a40b08b97f 100644 --- a/fs/kernfs/inode.c +++ b/fs/kernfs/inode.c @@ -101,6 +101,12 @@ int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr) int ret; struct rw_semaphore *rwsem; + /** + * Since we are only modifying the inode attribute, we just need + * to lock involved node. 
Operations that add or remove a node + * acquire parent's lock before changing the inode attributes, so + * such operations are also in sync with this interface. + */ rwsem = kernfs_down_write(kn); ret = __kernfs_setattr(kn, iattr); kernfs_up_write(rwsem); @@ -118,6 +124,12 @@ int kernfs_iop_setattr(struct user_namespace *mnt_userns, struct dentry *dentry, if (!kn) return -EINVAL; + /** + * Since we are only modifying the inode attribute, we just need + * to lock involved node. Operations that add or remove a node + * acquire parent's lock before changing the inode attributes, so + * such operations are also in sync with .setattr backend. + */ rwsem = kernfs_down_write(kn); error = setattr_prepare(&init_user_ns, dentry, iattr); if (error) @@ -188,6 +200,10 @@ int kernfs_iop_getattr(struct user_namespace *mnt_userns, struct kernfs_node *kn = inode->i_private; struct rw_semaphore *rwsem; + /** + * As we are only reading ->iattr, acquiring kn's rwsem for + * reading is enough. + */ rwsem = kernfs_down_read(kn); spin_lock(&inode->i_lock); kernfs_refresh_inode(kn, inode); @@ -285,6 +301,10 @@ int kernfs_iop_permission(struct user_namespace *mnt_userns, kn = inode->i_private; + /** + * As we are only reading ->iattr, acquiring kn's rwsem for + * reading is enough. + */ rwsem = kernfs_down_read(kn); spin_lock(&inode->i_lock); kernfs_refresh_inode(kn, inode); diff --git a/fs/kernfs/kernfs-internal.c b/fs/kernfs/kernfs-internal.c new file mode 100644 index 000000000000..80d7d64532fe --- /dev/null +++ b/fs/kernfs/kernfs-internal.c @@ -0,0 +1,259 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * This file provides internal helpers for kernfs.
+ */ + +#include "kernfs-internal.h" + +static void kernfs_swap_rwsems(struct rw_semaphore **array, int i, int j) +{ + struct rw_semaphore *tmp; + + tmp = array[i]; + array[i] = array[j]; + array[j] = tmp; +} + +static void kernfs_sort_rwsems(struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + if (token->count == 2) { + if (array[0] == array[1]) + token->count = 1; + else if (array[0] > array[1]) + kernfs_swap_rwsems(array, 0, 1); + } else { + if (array[0] == array[1] && array[0] == array[2]) + token->count = 1; + else { + if (array[0] > array[1]) + kernfs_swap_rwsems(array, 0, 1); + + if (array[0] > array[2]) + kernfs_swap_rwsems(array, 0, 2); + + if (array[1] > array[2]) + kernfs_swap_rwsems(array, 1, 2); + + if (array[0] == array[1] || array[1] == array[2]) + token->count = 2; + } + } +} + +/** + * kernfs_down_write_double_nodes() - take hashed rwsem for 2 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @token: token to pass unlocking information to caller + * + * Acquire hashed rwsem for 2 nodes. Some operation may need to acquire + * hashed rwsems for 2 nodes (for example for a node and its parent). + * This function can be used in such cases. 
+ * + * Return: void + */ +void kernfs_down_write_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + array[0] = kernfs_rwsem_ptr(kn1); + array[1] = kernfs_rwsem_ptr(kn2); + token->count = 2; + + kernfs_sort_rwsems(token); + + if (token->count == 1) { + /* Both nodes hash to same rwsem */ + down_write_nested(array[0], 0); + } else { + /* Both nodes hash to different rwsems */ + down_write_nested(array[0], 0); + down_write_nested(array[1], 1); + } +} + +/** + * kernfs_up_write_double_nodes - release hashed rwsem for 2 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @token: token to indicate unlocking information + * ->rwsems is a sorted list of rwsem addresses + * ->count contains number of unique locks + * + * Release hashed rwsems for 2 nodes + * + * Return: void + */ +void kernfs_up_write_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + if (token->count == 1) { + /* Both nodes hash to same rwsem */ + up_write(array[0]); + } else { + /* Both nodes hash to different rwsems */ + up_write(array[0]); + up_write(array[1]); + } +} + +/** + * kernfs_down_read_double_nodes() - take hashed rwsem for 2 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @token: token to pass unlocking information to caller + * + * Acquire hashed rwsem for 2 nodes. Some operation may need to acquire + * hashed rwsems for 2 nodes (for example for a node and its parent). + * This function can be used in such cases.
+ * + * Return: void + */ +void kernfs_down_read_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + array[0] = kernfs_rwsem_ptr(kn1); + array[1] = kernfs_rwsem_ptr(kn2); + token->count = 2; + + kernfs_sort_rwsems(token); + + if (token->count == 1) { + /* Both nodes hash to same rwsem */ + down_read_nested(array[0], 0); + } else { + /* Both nodes hash to different rwsems */ + down_read_nested(array[0], 0); + down_read_nested(array[1], 1); + } +} + +/** + * kernfs_up_read_double_nodes - release hashed rwsem for 2 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @token: token to indicate unlocking information + * ->rwsems is a sorted list of rwsem addresses + * ->count contains number of unique locks + * + * Release hashed rwsems for 2 nodes + * + * Return: void + */ +void kernfs_up_read_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + if (token->count == 1) { + /* Both nodes hash to same rwsem */ + up_read(array[0]); + } else { + /* Both nodes hash to different rwsems */ + up_read(array[0]); + up_read(array[1]); + } +} + +/** + * kernfs_down_write_triple_nodes() - take hashed rwsem for 3 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @kn3: third kernfs_node of interest + * @token: token to pass unlocking information to caller + * + * Acquire hashed rwsem for 3 nodes. Some operation may need to acquire + * hashed rwsems for 3 nodes (for example rename operation needs to + * acquire rwsem corresponding to node, its current parent and its future + * parent). This function can be used in such cases.
+ * + * Return: void + */ +void kernfs_down_write_triple_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_node *kn3, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + array[0] = kernfs_rwsem_ptr(kn1); + array[1] = kernfs_rwsem_ptr(kn2); + array[2] = kernfs_rwsem_ptr(kn3); + token->count = 3; + + kernfs_sort_rwsems(token); + + if (token->count == 1) { + /* All 3 nodes hash to same rwsem */ + down_write_nested(array[0], 0); + } else if (token->count == 2) { + /** + * Two nodes hash to same rwsem, and these + * will occupy consecutive places in array after + * sorting. + */ + down_write_nested(array[0], 0); + down_write_nested(array[2], 1); + } else { + /* All 3 nodes hash to different rwsems */ + down_write_nested(array[0], 0); + down_write_nested(array[1], 1); + down_write_nested(array[2], 2); + } +} + +/** + * kernfs_up_write_triple_nodes - release hashed rwsem for 3 nodes + * + * @kn1: first kernfs_node of interest + * @kn2: second kernfs_node of interest + * @kn3: third kernfs_node of interest + * @token: token to indicate unlocking information + * ->rwsems is a sorted list of rwsem addresses + * ->count contains number of unique locks + * + * Release hashed rwsems for 3 nodes + * + * Return: void + */ +void kernfs_up_write_triple_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_node *kn3, + struct kernfs_rwsem_token *token) +{ + struct rw_semaphore **array = &token->rwsems[0]; + + if (token->count == 1) { + /* All 3 nodes hash to same rwsem */ + up_write(array[0]); + } else if (token->count == 2) { + /** + * Two nodes hash to same rwsem, and these + * will occupy consecutive places in array after + * sorting.
+ */ + up_write(array[0]); + up_write(array[2]); + } else { + /* All 3 nodes hash to different rwsems */ + up_write(array[0]); + up_write(array[1]); + up_write(array[2]); + } +} diff --git a/fs/kernfs/kernfs-internal.h b/fs/kernfs/kernfs-internal.h index 0babc3dc4f4a..8dc99875da32 100644 --- a/fs/kernfs/kernfs-internal.h +++ b/fs/kernfs/kernfs-internal.h @@ -19,6 +19,20 @@ #include #include +/** + * Token for nested locking interfaces. + * + * rwsems: array of rwsems to acquire + * count: has 2 uses + * As input argument it specifies size of ->rwsems array + * As return argument it specifies number of unique rwsems + * present in ->rwsems array + */ +struct kernfs_rwsem_token { + struct rw_semaphore *rwsems[3]; + int count; +}; + struct kernfs_iattrs { kuid_t ia_uid; kgid_t ia_gid; @@ -46,8 +60,8 @@ struct kernfs_root { struct list_head supers; wait_queue_head_t deactivate_waitq; - struct rw_semaphore kernfs_rwsem; struct rw_semaphore supers_rwsem; + struct mutex kernfs_topo_mutex; }; /* +1 to avoid triggering overflow warning when negating it */ @@ -169,12 +183,13 @@ extern const struct inode_operations kernfs_symlink_iops; * kernfs locks */ extern struct kernfs_global_locks *kernfs_locks; +extern rwlock_t kernfs_rename_lock; static inline struct rw_semaphore *kernfs_rwsem_ptr(struct kernfs_node *kn) { - struct kernfs_root *root = kernfs_root(kn); + int idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS); - return &root->kernfs_rwsem; + return &kernfs_locks->kernfs_rwsem[idx]; } static inline void kernfs_rwsem_assert_held(struct kernfs_node *kn) @@ -247,4 +262,30 @@ static inline void kernfs_up_read(struct rw_semaphore *rwsem) { up_read(rwsem); } + +void kernfs_down_write_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token); + +void kernfs_up_write_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token); + +void kernfs_down_read_double_nodes(struct kernfs_node *kn1, + struct
kernfs_node *kn2, + struct kernfs_rwsem_token *token); + +void kernfs_up_read_double_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_rwsem_token *token); + +void kernfs_down_write_triple_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_node *kn3, + struct kernfs_rwsem_token *token); + +void kernfs_up_write_triple_nodes(struct kernfs_node *kn1, + struct kernfs_node *kn2, + struct kernfs_node *kn3, + struct kernfs_rwsem_token *token); #endif /* __KERNFS_INTERNAL_H */ diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c index 3c5334b74f36..b9b8cc2c16fd 100644 --- a/fs/kernfs/mount.c +++ b/fs/kernfs/mount.c @@ -388,6 +388,14 @@ void kernfs_kill_sb(struct super_block *sb) kfree(info); } +static void __init kernfs_rwsem_init(void) +{ + int count; + + for (count = 0; count < NR_KERNFS_LOCKS; count++) + init_rwsem(&kernfs_locks->kernfs_rwsem[count]); +} + static void __init kernfs_mutex_init(void) { int count; @@ -402,6 +410,7 @@ static void __init kernfs_lock_init(void) WARN_ON(!kernfs_locks); kernfs_mutex_init(); + kernfs_rwsem_init(); } void __init kernfs_init(void) diff --git a/fs/kernfs/symlink.c b/fs/kernfs/symlink.c index 9d4103602554..d71aa73acec8 100644 --- a/fs/kernfs/symlink.c +++ b/fs/kernfs/symlink.c @@ -113,12 +113,17 @@ static int kernfs_getlink(struct inode *inode, char *path) struct kernfs_node *kn = inode->i_private; struct kernfs_node *parent = kn->parent; struct kernfs_node *target = kn->symlink.target_kn; - struct rw_semaphore *rwsem; int error; - - rwsem = kernfs_down_read(parent); + unsigned long flags; + + /** + * kernfs_get_target_path needs that all nodes in the path don't + * undergo a parent change in the middle of it. Since ->parent + * change happens under kernfs_rename_lock, acquire the same. 
+ */ + read_lock_irqsave(&kernfs_rename_lock, flags); error = kernfs_get_target_path(parent, target, path); - kernfs_up_read(rwsem); + read_unlock_irqrestore(&kernfs_rename_lock, flags); return error; } diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h index 367044d7708c..7d9de9aee102 100644 --- a/include/linux/kernfs.h +++ b/include/linux/kernfs.h @@ -89,6 +89,7 @@ struct kernfs_iattrs; */ struct kernfs_global_locks { struct mutex open_file_mutex[NR_KERNFS_LOCKS]; + struct rw_semaphore kernfs_rwsem[NR_KERNFS_LOCKS]; }; enum kernfs_node_type { From patchwork Wed Aug 10 11:10:17 2022 X-Patchwork-Submitter: Imran Khan X-Patchwork-Id: 12940447
From: Imran Khan
To: tj@kernel.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [RESEND PATCH 5/5] kernfs: Add a document to describe hashed locks used in kernfs.
Date: Wed, 10 Aug 2022 21:10:17 +1000
Message-Id: <20220810111017.2267160-6-imran.f.khan@oracle.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220810111017.2267160-1-imran.f.khan@oracle.com>
References: <20220810111017.2267160-1-imran.f.khan@oracle.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

This document describes usage and proof of the various hashed locks
introduced in this patch set.
Signed-off-by: Imran Khan
---
 .../filesystems/kernfs-hashed-locks.rst       | 214 ++++++++++++++++++
 1 file changed, 214 insertions(+)
 create mode 100644 Documentation/filesystems/kernfs-hashed-locks.rst

diff --git a/Documentation/filesystems/kernfs-hashed-locks.rst b/Documentation/filesystems/kernfs-hashed-locks.rst
new file mode 100644
index 000000000000..a000e6fcf78c
--- /dev/null
+++ b/Documentation/filesystems/kernfs-hashed-locks.rst
@@ -0,0 +1,214 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+===================
+kernfs hashed locks
+===================
+
+kernfs uses the following hashed locks:
+
+1. Hashed mutexes
+2. Hashed rwsems
+
+In certain cases the hashed rwsems need to work in conjunction with a
+per-fs mutex (described further below), so this document describes that
+mutex as well.
+
+A kernfs_global_locks object (defined below) provides the hashed mutexes
+and hashed rwsems::
+
+	struct kernfs_global_locks {
+		struct mutex open_file_mutex[NR_KERNFS_LOCKS];
+		struct rw_semaphore kernfs_rwsem[NR_KERNFS_LOCKS];
+	};
+
+For all hashed locks the address of a kernfs_node object acts as the
+hashing key.
+
+For the remainder of this document a node means a kernfs_node object.
+A node can refer to a file, directory or symlink of a kernfs based file
+system. Also, a node's mutex or rwsem refers to the hashed mutex or
+hashed rwsem corresponding to that node; it does not mean any locking
+construct embedded in the kernfs_node itself.
+
+What is protected by hashed locks
+=================================
+
+(1) There is one kernfs_open_file for each open file, and all
+    kernfs_open_file instances corresponding to a kernfs_node are
+    maintained in a list. The hashed mutexes, i.e.
+    kernfs_global_locks.open_file_mutex[index], protect this list.
+
+(2) The hashed rwsems, i.e. kernfs_global_locks.kernfs_rwsem[index],
+    protect a node's state and synchronize operations that change the
+    state of a node or depend on the state of a node.
+
+(3) The per-fs mutex (mentioned earlier) provides synchronization
+    between lookup and remove operations, and also protects against
+    topology changes.
+
+    While looking for a node we do not yet have the address of the
+    corresponding node, so we cannot acquire the node's rwsem right
+    from the beginning. A parallel remove operation for the same node,
+    on the other hand, can acquire the corresponding rwsem and go ahead
+    with the removal. So it may happen that a search operation finds
+    and returns the node, but before it can be pinned or used, the
+    remove operation that was going on in parallel removes the node and
+    hence makes any future use of it wrong. The per-fs mutex ensures
+    that of competing search and remove operations only one proceeds at
+    a time, and since the object returned by the search is pinned
+    before the per-fs mutex is released, it remains available for
+    subsequent use.
+
+    This per-fs mutex also protects against topology changes during
+    path walks. During a path walk we need to acquire and release the
+    rwsems corresponding to the directories involved, so that these
+    directories do not move and their children RB trees do not change.
+    Since these rwsems cannot be taken under a spinlock,
+    kernfs_rename_lock cannot be used, and the needed protection
+    against topology changes is provided by the per-fs mutex instead.
+
+Lock usage and proof
+====================
+
+(1) Hashed mutexes
+
+    Since the hashed mutexes protect the list of kernfs_open_file
+    instances corresponding to a kernfs_node, the ->open and ->release
+    backends of file_operations need to acquire the hashed mutex
+    corresponding to the kernfs_node. Also, when a kernfs_node is
+    removed, all of its kernfs_open_file instances are drained after
+    deactivating the node, and this drain operation acquires the hashed
+    mutex to traverse the list of kernfs_open_file instances.
+    So addition (via ->open), deletion (via ->release) and traversal
+    (during kernfs_drain) of the kernfs_open_file list all occur in a
+    synchronized manner.
+
+(2) Hashed rwsems
+
+    3.1. A node's rwsem protects its state and needs to be acquired to:
+
+         3.1.a. Remove the node
+         3.1.b. Move the node
+         3.1.c. Traverse or modify a node's children RB tree (for
+                directories), i.e. to add/remove files/subdirectories
+                within/from a directory.
+         3.1.d. Modify or access a node's inode attributes
+
+    3.2. Hashed rwsems are used in the following operations:
+
+         3.2.a. Addition of a new node
+
+                While adding a new kernfs_node under a kernfs
+                directory, kernfs_add_one acquires the directory node's
+                rwsem for writing. Clause 3.1.a ensures that the
+                directory exists throughout the operation. Clause 3.1.c
+                ensures a proper update of the children RB tree (i.e.
+                ->dir.children). Clause 3.1.d ensures a correct
+                modification of the inode attributes to reflect the
+                timestamp of this operation.
+                If the directory gets removed while waiting for the
+                semaphore, the subsequent checks in kernfs_add_one will
+                fail, resulting in an early bail out from
+                kernfs_add_one.
+
+         3.2.b. Removal of a node
+
+                Removal of a node involves recursive removal of all of
+                its descendants as well. The per-fs mutex (i.e.
+                kernfs_rm_mutex) avoids concurrent node removals even
+                if the nodes are different.
+
+                At first the node's rwsem is acquired. Clause 3.1.c
+                avoids parallel modification of the descendant tree,
+                and while holding this rwsem each of the descendants is
+                deactivated.
+
+                Once a descendant has been deactivated and drained, its
+                parent's rwsem is taken. Clause 3.1.c ensures proper
+                unlinking of this descendant from its siblings. Clause
+                3.1.d ensures that the parent's inode attributes are
+                correctly updated to record the timestamp of removal.
+
+         3.2.c. Movement of a node
+
+                Moving or renaming a node (kernfs_rename_ns) acquires
+                the rwsems of the node and of its old and new parents.
+                Clauses 3.1.b and 3.1.c avoid concurrent move
+                operations for the same node.
+                Also, if the old parent of a node changes while waiting
+                for the rwsem, the acquisition of the rwsems for the 3
+                involved nodes is attempted again.
+                It is always ensured that, as far as the old parent is
+                concerned, the rwsem corresponding to the current
+                parent is acquired.
+
+         3.2.d. Reading a directory
+
+                For directory reading, kernfs_fop_readdir acquires the
+                directory node's rwsem for reading. Clause 3.1.c
+                ensures a consistent view of the children RB tree.
+                As far as the directory being read is concerned, if it
+                gets removed while waiting for the semaphore, the for
+                loop that iterates through the children will be
+                ineffective. So for this operation acquiring the
+                directory node's rwsem for reading is enough.
+
+         3.2.e. Dentry revalidation
+
+                A dentry revalidation (kernfs_dop_revalidate) can
+                happen for a negative or for a normal dentry.
+                For negative dentries we just need to check for a
+                parent change, so in this case acquiring the parent
+                kernfs_node's rwsem for reading is enough.
+                For a normal dentry acquiring the node's rwsem for
+                reading is enough (clauses 3.1.a and 3.1.b).
+                If the node gets removed while waiting for the lock,
+                the subsequent checks in kernfs_dop_revalidate will
+                fail and kernfs_dop_revalidate will exit early.
+
+         3.2.f. kernfs_node lookup
+
+                While searching for a node under a given parent
+                (kernfs_find_and_get_ns, kernfs_walk_and_get_ns) the
+                rwsem of the parent node is acquired for reading.
+                Clause 3.1.c ensures a consistent view of the parent's
+                children RB tree. To avoid parallel removal of the
+                found node before it gets pinned, these operations make
+                use of the per-fs mutex (kernfs_rm_mutex) as explained
+                earlier. This per-fs mutex is also taken during
+                kernfs_node removal (__kernfs_remove).
+
+                If the node being searched for gets removed while
+                waiting for the mutex or rwsem, the subsequent
+                kernfs_find_ns or kernfs_walk_ns will fail.
+
+         3.2.g. kernfs_node's inode lookup
+
+                Looking up inode instances via kernfs_iop_lookup
+                involves a node lookup, so the locks acquired are the
+                same as the ones required in 3.2.f.
+                Also, once the node lookup is complete, the parent's
+                rwsem is released and the rwsem of the found node is
+                acquired to get the corresponding inode.
+                Since we are operating under the per-fs
+                kernfs_rm_mutex, the found node will not disappear in
+                the middle.
+
+         3.2.h. Updating or reading inode attributes
+
+                Interfaces that change inode attributes (i.e.
+                kernfs_setattr and kernfs_iop_setattr) acquire the
+                node's rwsem for writing.
+                If the kernfs_node gets removed while waiting for the
+                semaphore, the subsequent __kernfs_setattr will fail.
+                From 3.2.a and 3.2.b we know that updates due to
+                addition or removal of nodes will not happen in
+                parallel. So just locking the kernfs_node in these
+                cases is enough to guarantee a correct modification of
+                the inode attributes.
+                Similarly, the interfaces that read inode attributes
+                (i.e. kernfs_iop_getattr, kernfs_iop_permission) just
+                need to acquire the involved node's rwsem for reading.
+
+         3.2.i. kernfs file event generation
+
+                kernfs_notify pins the involved node before scheduling
+                kernfs_notify_work, and kernfs_notify_workfn acquires
+                the node's rwsem. The clauses in 3.1 ensure a
+                consistent view of the node's state throughout the
+                execution of the work handler.
+
+         3.2.j. mount
+
+                kernfs_fill_super, invoked during a mount operation,
+                acquires the root node's rwsem. During the mount
+                process there cannot be other execution contexts trying
+                to move or delete the node, so just locking the
+                involved node (i.e. the root node) is enough.
+
+         3.2.k. While activating a node
+
+                For a node that started out deactivated, kernfs_activate
+                activates the node. In this case acquiring the node's
+                rwsem is enough. Since the node is not active yet, any
+                parallel removal that wins the race for the rwsem will
+                skip this node and its descendants. Also, user space
+                cannot see a deactivated node, so we do not have any
+                parallel access emanating from there either.
+
+    3.3. For operations that involve locking multiple nodes at the same
+         time the locks are acquired in the order of their addresses.