From patchwork Tue Feb 28 08:06:25 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 1/6] IB/core: Don't register each MAD agent for LSM notifier
Date: Tue, 28 Feb 2023 16:06:25 +0800
Message-ID: <20230228080630.52370-2-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Daniel Jurgens

[ Upstream commit c66f67414c1f88554485bb2a0abf8b5c0d741de7 ]

When creating many MAD agents in a short period of time, receive packet
processing can be delayed long enough to cause timeouts while new agents
are being added to the atomic notifier chain with IRQs disabled.
Notifier chain registration and unregistration is an O(n) operation.
With large numbers of MAD agents being created and destroyed
simultaneously, the CPUs spend too much time with interrupts disabled.

Instead of each MAD agent registering for its own LSM notification,
maintain a list of agents internally and register once; this
registration already existed for handling the PKeys. This list is
write-mostly, so a normal spinlock is used rather than a read/write
lock. All MAD agents must be checked, so a single list is used instead
of breaking them down per device.

Notifier calls are done under rcu_read_lock, so there isn't a risk of
similar packet timeouts while checking the MAD agents' security
settings when notified.
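To make the shape of the change concrete, here is a minimal stand-alone
sketch of the pattern (plain C with pthreads, built in user space; the
names agent_add() and policy_change_callback() and the permission check
are illustrative stand-ins, not the kernel API): one callback is
registered once and walks an internal write-mostly list, instead of
every agent performing its own O(n) notifier-chain registration:

/* Sketch only: models the single-callback + internal-list pattern. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct agent {
        int port_num;
        bool smp_allowed;
        struct agent *next;     /* models the mad_agent_sec_list linkage */
};

static struct agent *agent_list;
static pthread_mutex_t agent_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* One callback for the whole subsystem, registered exactly once. */
static void policy_change_callback(void)
{
        pthread_mutex_lock(&agent_list_lock);
        for (struct agent *ag = agent_list; ag; ag = ag->next)
                /* recheck permission for every agent on the list;
                 * the even/odd test is a stand-in for the LSM query */
                ag->smp_allowed = (ag->port_num % 2 == 0);
        pthread_mutex_unlock(&agent_list_lock);
}

static void agent_add(struct agent *ag)
{
        pthread_mutex_lock(&agent_list_lock);
        ag->next = agent_list;
        agent_list = ag;        /* O(1), vs. O(n) chain registration */
        pthread_mutex_unlock(&agent_list_lock);
}

int main(void)
{
        struct agent a = { .port_num = 2 }, b = { .port_num = 3 };

        agent_add(&a);
        agent_add(&b);
        policy_change_callback();
        printf("a: %d, b: %d\n", a.smp_allowed, b.smp_allowed);
        return 0;
}

Adding an agent is now an O(1) insertion under a private lock, so agent
creation no longer contends on the notifier chain with IRQs disabled.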
Signed-off-by: Daniel Jurgens
Reviewed-by: Parav Pandit
Signed-off-by: Leon Romanovsky
Acked-by: Paul Moore
Signed-off-by: Jason Gunthorpe
Signed-off-by: GUO Zihua
---
 drivers/infiniband/core/core_priv.h |  5 +++
 drivers/infiniband/core/device.c    |  1 +
 drivers/infiniband/core/security.c  | 51 ++++++++++++++++-------------
 include/rdma/ib_mad.h               |  3 +-
 4 files changed, 35 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 77c7005c396c..8fd8a822d9b6 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -227,6 +227,7 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
                                 enum ib_qp_type qp_type);
 void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent);
 int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index);
+void ib_mad_agent_security_change(void);
 #else
 static inline void ib_security_destroy_port_pkey_list(struct ib_device *device)
 {
@@ -292,6 +293,10 @@ static inline int ib_mad_enforce_security(struct ib_mad_agent_private *map,
 {
         return 0;
 }
+
+static inline void ib_mad_agent_security_change(void)
+{
+}
 #endif
 
 struct ib_device *ib_device_get_by_index(u32 ifindex);
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index ffd0f43e2129..9740614a108d 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -417,6 +417,7 @@ static int ib_security_change(struct notifier_block *nb, unsigned long event,
                 return NOTIFY_DONE;
 
         schedule_work(&ib_policy_change_work);
+        ib_mad_agent_security_change();
 
         return NOTIFY_OK;
 }
diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c
index 6df6cc55fd16..166a4ef9dce0 100644
--- a/drivers/infiniband/core/security.c
+++ b/drivers/infiniband/core/security.c
@@ -39,6 +39,10 @@
 #include "core_priv.h"
 #include "mad_priv.h"
 
+static LIST_HEAD(mad_agent_list);
+/* Lock to protect mad_agent_list */
+static DEFINE_SPINLOCK(mad_agent_list_lock);
+
 static struct pkey_index_qp_list *get_pkey_idx_qp_list(struct ib_port_pkey *pp)
 {
         struct pkey_index_qp_list *pkey = NULL;
@@ -669,20 +673,18 @@ static int ib_security_pkey_access(struct ib_device *dev,
         return security_ib_pkey_access(sec, subnet_prefix, pkey);
 }
 
-static int ib_mad_agent_security_change(struct notifier_block *nb,
-                                        unsigned long event,
-                                        void *data)
+void ib_mad_agent_security_change(void)
 {
-        struct ib_mad_agent *ag = container_of(nb, struct ib_mad_agent, lsm_nb);
-
-        if (event != LSM_POLICY_CHANGE)
-                return NOTIFY_DONE;
-
-        ag->smp_allowed = !security_ib_endport_manage_subnet(ag->security,
-                ag->device->name,
-                ag->port_num);
-
-        return NOTIFY_OK;
+        struct ib_mad_agent *ag;
+
+        spin_lock(&mad_agent_list_lock);
+        list_for_each_entry(ag,
+                            &mad_agent_list,
+                            mad_agent_sec_list)
+                WRITE_ONCE(ag->smp_allowed,
+                           !security_ib_endport_manage_subnet(ag->security,
+                                ag->device->name, ag->port_num));
+        spin_unlock(&mad_agent_list_lock);
 }
 
 int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
@@ -693,6 +695,8 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
         if (!rdma_protocol_ib(agent->device, agent->port_num))
                 return 0;
 
+        INIT_LIST_HEAD(&agent->mad_agent_sec_list);
+
         ret = security_ib_alloc_security(&agent->security);
         if (ret)
                 return ret;
@@ -700,22 +704,20 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
         if (qp_type != IB_QPT_SMI)
                 return 0;
 
+        spin_lock(&mad_agent_list_lock);
         ret = security_ib_endport_manage_subnet(agent->security,
                                                 agent->device->name,
                                                 agent->port_num);
         if (ret)
                 goto free_security;
 
-        agent->lsm_nb.notifier_call = ib_mad_agent_security_change;
-        ret = register_lsm_notifier(&agent->lsm_nb);
-        if (ret)
-                goto free_security;
-
-        agent->smp_allowed = true;
-        agent->lsm_nb_reg = true;
+        WRITE_ONCE(agent->smp_allowed, true);
+        list_add(&agent->mad_agent_sec_list, &mad_agent_list);
+        spin_unlock(&mad_agent_list_lock);
+
         return 0;
 
 free_security:
+        spin_unlock(&mad_agent_list_lock);
         security_ib_free_security(agent->security);
         return ret;
 }
@@ -725,8 +727,11 @@ void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
         if (!rdma_protocol_ib(agent->device, agent->port_num))
                 return;
 
-        if (agent->lsm_nb_reg)
-                unregister_lsm_notifier(&agent->lsm_nb);
+        if (agent->qp->qp_type == IB_QPT_SMI) {
+                spin_lock(&mad_agent_list_lock);
+                list_del(&agent->mad_agent_sec_list);
+                spin_unlock(&mad_agent_list_lock);
+        }
 
         security_ib_free_security(agent->security);
 }
@@ -737,7 +742,7 @@ int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
                 return 0;
 
         if (map->agent.qp->qp_type == IB_QPT_SMI) {
-                if (!map->agent.smp_allowed)
+                if (!READ_ONCE(map->agent.smp_allowed))
                         return -EACCES;
                 return 0;
         }
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index f6ba366051c7..69b838afe2cf 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -610,8 +610,7 @@ struct ib_mad_agent {
         u8                      rmpp_version;
         void                    *security;
         bool                    smp_allowed;
-        bool                    lsm_nb_reg;
-        struct notifier_block   lsm_nb;
+        struct list_head        mad_agent_sec_list;
 };
 
 /**

From patchwork Tue Feb 28 08:06:26 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 2/6] LSM: switch to blocking policy update notifiers
Date: Tue, 28 Feb 2023 16:06:26 +0800
Message-ID: <20230228080630.52370-3-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org
From: Janne Karhunen

[ Upstream commit 42df744c4166af6959eda2df1ee5cde744d4a1c3 ]

Atomic policy updaters are not very useful, as they usually cannot
perform the policy updates on their own. Since there seems to be no
strict need for the atomicity, switch to the blocking variant. While
doing so, rename the functions accordingly.
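As a rough illustration of the difference (a user-space sketch with
pthreads; blocking_chain_register() and blocking_chain_call() are
illustrative names, not the kernel notifier API): a blocking chain runs
its callbacks under a sleepable mutex, so an updater is free to block,
whereas an atomic chain runs them in a context where sleeping is
forbidden:

/* Sketch only: a "blocking" chain whose callbacks may sleep. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef int (*notifier_fn)(unsigned long event, void *data);

static pthread_mutex_t chain_lock = PTHREAD_MUTEX_INITIALIZER; /* sleepable */
static notifier_fn chain[8];
static int chain_len;

static void blocking_chain_register(notifier_fn fn)
{
        pthread_mutex_lock(&chain_lock);
        chain[chain_len++] = fn;
        pthread_mutex_unlock(&chain_lock);
}

static void blocking_chain_call(unsigned long event, void *data)
{
        pthread_mutex_lock(&chain_lock);
        for (int i = 0; i < chain_len; i++)
                chain[i](event, data);  /* callback may sleep here */
        pthread_mutex_unlock(&chain_lock);
}

static int policy_updater(unsigned long event, void *data)
{
        (void)data;
        usleep(1000);   /* stand-in for sleeping work a policy update needs */
        printf("policy update for event %lu done\n", event);
        return 0;
}

int main(void)
{
        blocking_chain_register(policy_updater);
        blocking_chain_call(1 /* LSM_POLICY_CHANGE stand-in */, NULL);
        return 0;
}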
Signed-off-by: Janne Karhunen
Acked-by: Paul Moore
Acked-by: James Morris
Signed-off-by: Mimi Zohar
Signed-off-by: GUO Zihua
---
 drivers/infiniband/core/device.c |  4 ++--
 include/linux/security.h         | 12 ++++++------
 security/security.c              | 23 +++++++++++++----------
 security/selinux/hooks.c         |  2 +-
 security/selinux/selinuxfs.c     |  2 +-
 5 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 9740614a108d..77e9c0c3c96b 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1208,7 +1208,7 @@ static int __init ib_core_init(void)
                 goto err_mad;
         }
 
-        ret = register_lsm_notifier(&ibdev_lsm_nb);
+        ret = register_blocking_lsm_notifier(&ibdev_lsm_nb);
         if (ret) {
                 pr_warn("Couldn't register LSM notifier. ret %d\n", ret);
                 goto err_sa;
@@ -1244,7 +1244,7 @@ static void __exit ib_core_cleanup(void)
         roce_gid_mgmt_cleanup();
         nldev_exit();
         rdma_nl_unregister(RDMA_NL_LS);
-        unregister_lsm_notifier(&ibdev_lsm_nb);
+        unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
         ib_sa_cleanup();
         ib_mad_cleanup();
         addr_cleanup();
diff --git a/include/linux/security.h b/include/linux/security.h
index 273877cf47bf..81e795c9c09a 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -191,9 +191,9 @@ struct security_mnt_opts {
         int num_mnt_opts;
 };
 
-int call_lsm_notifier(enum lsm_event event, void *data);
-int register_lsm_notifier(struct notifier_block *nb);
-int unregister_lsm_notifier(struct notifier_block *nb);
+int call_blocking_lsm_notifier(enum lsm_event event, void *data);
+int register_blocking_lsm_notifier(struct notifier_block *nb);
+int unregister_blocking_lsm_notifier(struct notifier_block *nb);
 
 static inline void security_init_mnt_opts(struct security_mnt_opts *opts)
 {
@@ -409,17 +409,17 @@ int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen);
 struct security_mnt_opts {
 };
 
-static inline int call_lsm_notifier(enum lsm_event event, void *data)
+static inline int call_blocking_lsm_notifier(enum lsm_event event, void *data)
 {
         return 0;
 }
 
-static inline int register_lsm_notifier(struct notifier_block *nb)
+static inline int register_blocking_lsm_notifier(struct notifier_block *nb)
 {
         return 0;
 }
 
-static inline int unregister_lsm_notifier(struct notifier_block *nb)
+static inline int unregister_blocking_lsm_notifier(struct notifier_block *nb)
 {
         return 0;
 }
diff --git a/security/security.c b/security/security.c
index fc1410550b79..4d20a15f18b4 100644
--- a/security/security.c
+++ b/security/security.c
@@ -38,7 +38,7 @@
 #define SECURITY_NAME_MAX       10
 
 struct security_hook_heads security_hook_heads __lsm_ro_after_init;
-static ATOMIC_NOTIFIER_HEAD(lsm_notifier_chain);
+static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain);
 
 char *lsm_names;
 /* Boot-time LSM user choice */
@@ -180,23 +180,26 @@ void __init security_add_hooks(struct security_hook_list *hooks, int count,
                 panic("%s - Cannot get early memory.\n", __func__);
 }
 
-int call_lsm_notifier(enum lsm_event event, void *data)
+int call_blocking_lsm_notifier(enum lsm_event event, void *data)
 {
-        return atomic_notifier_call_chain(&lsm_notifier_chain, event, data);
+        return blocking_notifier_call_chain(&blocking_lsm_notifier_chain,
+                                            event, data);
 }
-EXPORT_SYMBOL(call_lsm_notifier);
+EXPORT_SYMBOL(call_blocking_lsm_notifier);
 
-int register_lsm_notifier(struct notifier_block *nb)
+int register_blocking_lsm_notifier(struct notifier_block *nb)
 {
-        return atomic_notifier_chain_register(&lsm_notifier_chain, nb);
+        return blocking_notifier_chain_register(&blocking_lsm_notifier_chain,
+                                                nb);
 }
-EXPORT_SYMBOL(register_lsm_notifier);
+EXPORT_SYMBOL(register_blocking_lsm_notifier);
 
-int unregister_lsm_notifier(struct notifier_block *nb)
+int unregister_blocking_lsm_notifier(struct notifier_block *nb)
 {
-        return atomic_notifier_chain_unregister(&lsm_notifier_chain, nb);
+        return blocking_notifier_chain_unregister(&blocking_lsm_notifier_chain,
+                                                  nb);
 }
-EXPORT_SYMBOL(unregister_lsm_notifier);
+EXPORT_SYMBOL(unregister_blocking_lsm_notifier);
 
 /*
  * Hook list operation macros.
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 41e24df986eb..77f2147709b1 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -199,7 +199,7 @@ static int selinux_lsm_notifier_avc_callback(u32 event)
 {
         if (event == AVC_CALLBACK_RESET) {
                 sel_ib_pkey_flush();
-                call_lsm_notifier(LSM_POLICY_CHANGE, NULL);
+                call_blocking_lsm_notifier(LSM_POLICY_CHANGE, NULL);
         }
 
         return 0;
diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
index 60b3f16bb5c7..4f72d0998580 100644
--- a/security/selinux/selinuxfs.c
+++ b/security/selinux/selinuxfs.c
@@ -180,7 +180,7 @@ static ssize_t sel_write_enforce(struct file *file, const char __user *buf,
                 selnl_notify_setenforce(new_value);
                 selinux_status_update_setenforce(state, new_value);
                 if (!new_value)
-                        call_lsm_notifier(LSM_POLICY_CHANGE, NULL);
+                        call_blocking_lsm_notifier(LSM_POLICY_CHANGE, NULL);
         }
         length = count;
 out:

From patchwork Tue Feb 28 08:06:27 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 3/6] ima: use the lsm policy update notifier
Date: Tue, 28 Feb 2023 16:06:27 +0800
Message-ID: <20230228080630.52370-4-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org
From: Janne Karhunen

[ Upstream commit b169424551930a9325f700f502802f4d515194e5 ]

Don't do lazy policy updates while running the rule matching; run the
updates as they happen.

Depends on commit f242064c5df3 ("LSM: switch to blocking policy update
notifiers")
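The update mechanism this enables is a copy-then-publish replacement of
each affected rule (ima_lsm_update_rule() in the diff below). A minimal
user-space sketch of that idea, using a C11 atomic pointer in place of
list_replace_rcu()/synchronize_rcu() (the names and the generation
field are illustrative, not the IMA API):

/* Sketch only: readers always see either the old or the new rule,
 * never a half-updated one. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct rule {
        int lsm_generation;   /* which LSM policy this was compiled against */
};

static _Atomic(struct rule *) published_rule;

static void update_rule(int new_generation)
{
        struct rule *nrule = malloc(sizeof(*nrule));
        struct rule *old;

        if (!nrule)
                return;
        nrule->lsm_generation = new_generation;

        /* Publish atomically, as list_replace_rcu() does in the kernel. */
        old = atomic_exchange(&published_rule, nrule);

        /* The kernel waits with synchronize_rcu() before freeing; this
         * single-threaded demo can free the old rule immediately. */
        free(old);
}

int main(void)
{
        update_rule(1);
        update_rule(2);
        printf("generation now %d\n",
               atomic_load(&published_rule)->lsm_generation);
        free(atomic_load(&published_rule));
        return 0;
}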
Signed-off-by: Janne Karhunen
Signed-off-by: Mimi Zohar
Signed-off-by: GUO Zihua
---
 security/integrity/ima/ima.h        |   2 +
 security/integrity/ima/ima_main.c   |   8 ++
 security/integrity/ima/ima_policy.c | 118 ++++++++++++++++++++++------
 3 files changed, 105 insertions(+), 23 deletions(-)

diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index e2916b115b93..dc564ed9a790 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -154,6 +154,8 @@ int ima_measurements_show(struct seq_file *m, void *v);
 unsigned long ima_get_binary_runtime_size(void);
 int ima_init_template(void);
 void ima_init_template_list(void);
+int ima_lsm_policy_change(struct notifier_block *nb, unsigned long event,
+                          void *lsm_data);
 
 /*
  * used to protect h_table and sha_table
diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index 2d31921fbda4..55681872c6ce 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -41,6 +41,10 @@ int ima_appraise;
 int ima_hash_algo = HASH_ALGO_SHA1;
 static int hash_setup_done;
 
+static struct notifier_block ima_lsm_policy_notifier = {
+        .notifier_call = ima_lsm_policy_change,
+};
+
 static int __init hash_setup(char *str)
 {
         struct ima_template_desc *template_desc = ima_template_desc_current();
@@ -553,6 +557,10 @@ static int __init init_ima(void)
                 error = ima_init();
         }
 
+        error = register_blocking_lsm_notifier(&ima_lsm_policy_notifier);
+        if (error)
+                pr_warn("Couldn't register LSM notifier, error %d\n", error);
+
         if (!error)
                 ima_update_policy_flag();
diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index b2dadff3626b..5a15524bca4c 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -241,46 +241,124 @@ static int __init default_appraise_policy_setup(char *str)
 }
 __setup("ima_appraise_tcb", default_appraise_policy_setup);
 
-static void ima_free_rule(struct ima_rule_entry *entry)
+static void ima_lsm_free_rule(struct ima_rule_entry *entry)
 {
         int i;
 
+        for (i = 0; i < MAX_LSM_RULES; i++) {
+                security_filter_rule_free(entry->lsm[i].rule);
+                kfree(entry->lsm[i].args_p);
+        }
+}
+
+static void ima_free_rule(struct ima_rule_entry *entry)
+{
         if (!entry)
                 return;
 
         kfree(entry->fsname);
+        ima_lsm_free_rule(entry);
+        kfree(entry);
+}
+
+static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+{
+        struct ima_rule_entry *nentry;
+        int i, result;
+
+        nentry = kmalloc(sizeof(*nentry), GFP_KERNEL);
+        if (!nentry)
+                return NULL;
+
+        /*
+         * Immutable elements are copied over as pointers and data; only
+         * lsm rules can change
+         */
+        memcpy(nentry, entry, sizeof(*nentry));
+        memset(nentry->lsm, 0, FIELD_SIZEOF(struct ima_rule_entry, lsm));
+
         for (i = 0; i < MAX_LSM_RULES; i++) {
-                security_filter_rule_free(entry->lsm[i].rule);
-                kfree(entry->lsm[i].args_p);
+                if (!entry->lsm[i].rule)
+                        continue;
+
+                nentry->lsm[i].type = entry->lsm[i].type;
+                nentry->lsm[i].args_p = kstrdup(entry->lsm[i].args_p,
+                                                GFP_KERNEL);
+                if (!nentry->lsm[i].args_p)
+                        goto out_err;
+
+                result = security_filter_rule_init(nentry->lsm[i].type,
+                                                   Audit_equal,
+                                                   nentry->lsm[i].args_p,
+                                                   &nentry->lsm[i].rule);
+                if (result == -EINVAL)
+                        pr_warn("ima: rule for LSM \'%d\' is undefined\n",
+                                entry->lsm[i].type);
         }
+        return nentry;
+
+out_err:
+        ima_lsm_free_rule(nentry);
+        kfree(nentry);
+        return NULL;
+}
+
+static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+{
+        struct ima_rule_entry *nentry;
+
+        nentry = ima_lsm_copy_rule(entry);
+        if (!nentry)
+                return -ENOMEM;
+
+        list_replace_rcu(&entry->list, &nentry->list);
+        synchronize_rcu();
+        ima_lsm_free_rule(entry);
         kfree(entry);
+
+        return 0;
 }
 
 /*
  * The LSM policy can be reloaded, leaving the IMA LSM based rules referring
  * to the old, stale LSM policy.  Update the IMA LSM based rules to reflect
- * the reloaded LSM policy.  We assume the rules still exist; and BUG_ON() if
- * they don't.
+ * the reloaded LSM policy.
  */
 static void ima_lsm_update_rules(void)
 {
-        struct ima_rule_entry *entry;
-        int result;
-        int i;
+        struct ima_rule_entry *entry, *e;
+        int i, result, needs_update;
 
-        list_for_each_entry(entry, &ima_policy_rules, list) {
+        list_for_each_entry_safe(entry, e, &ima_policy_rules, list) {
+                needs_update = 0;
                 for (i = 0; i < MAX_LSM_RULES; i++) {
-                        if (!entry->lsm[i].rule)
-                                continue;
-                        result = security_filter_rule_init(entry->lsm[i].type,
-                                                           Audit_equal,
-                                                           entry->lsm[i].args_p,
-                                                           &entry->lsm[i].rule);
-                        BUG_ON(!entry->lsm[i].rule);
+                        if (entry->lsm[i].rule) {
+                                needs_update = 1;
+                                break;
+                        }
+                }
+                if (!needs_update)
+                        continue;
+
+                result = ima_lsm_update_rule(entry);
+                if (result) {
+                        pr_err("ima: lsm rule update error %d\n",
+                               result);
+                        return;
                 }
         }
 }
 
+int ima_lsm_policy_change(struct notifier_block *nb, unsigned long event,
+                          void *lsm_data)
+{
+        if (event != LSM_POLICY_CHANGE)
+                return NOTIFY_DONE;
+
+        ima_lsm_update_rules();
+        return NOTIFY_OK;
+}
+
 /**
  * ima_match_rules - determine whether an inode matches the measure rule.
  * @rule: a pointer to a rule
@@ -334,11 +412,10 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
         for (i = 0; i < MAX_LSM_RULES; i++) {
                 int rc = 0;
                 u32 osid;
-                int retried = 0;
 
                 if (!rule->lsm[i].rule)
                         continue;
-retry:
+
                 switch (i) {
                 case LSM_OBJ_USER:
                 case LSM_OBJ_ROLE:
@@ -361,11 +438,6 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
                 default:
                         break;
                 }
-                if ((rc < 0) && (!retried)) {
-                        retried = 1;
-                        ima_lsm_update_rules();
-                        goto retry;
-                }
                 if (!rc)
                         return false;
         }

From patchwork Tue Feb 28 08:06:28 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 4/6] ima: ima/lsm policy rule loading logic bug fixes
Date: Tue, 28 Feb 2023 16:06:28 +0800
Message-ID: <20230228080630.52370-5-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Janne Karhunen

[ Upstream commit 483ec26eed42bf050931d9a5c5f9f0b5f2ad5f3b ]

Keep the IMA policy rules around from the beginning even if they appear
invalid at the time of loading, as they may become active after an LSM
policy load. However, loading a custom IMA policy with unknown LSM
labels is only safe after we have transitioned from the "built-in"
policy rules to a custom IMA policy.

The patch also fixes rule re-use during an LSM policy reload and makes
some prints a bit more human-readable. The resulting matching logic is
sketched after the changelog below.

Changelog:
v4:
- Do not allow the initial policy load to refer to non-existing LSM
  rules.
v3:
- Fix too-wide policy rule matching for non-initialized LSMs
v2:
- Fix log prints
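A stand-alone sketch of that matching rule (plain C; struct lsm_field
and the function names are illustrative stand-ins, not the IMA API): a
field with no label imposes no constraint, while a field whose label is
kept but still unresolved must fail the match rather than silently
match everything:

/* Sketch only: models the args_p/rule decision in ima_match_rules(). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct lsm_field {
        const char *args_p;   /* label text from the IMA policy, if any */
        void *rule;           /* compiled LSM rule, NULL when unresolved */
};

static bool lsm_field_matches(const struct lsm_field *f)
{
        if (!f->rule) {
                if (!f->args_p)
                        return true;   /* no LSM constraint on this field */
                return false;          /* constraint exists but unresolved */
        }
        return true;   /* compiled rule present: the LSM would be queried */
}

int main(void)
{
        struct lsm_field unused = { NULL, NULL };
        struct lsm_field unresolved = { "obj_user=system_u", NULL };

        printf("no constraint:    %d\n", lsm_field_matches(&unused));
        printf("unresolved label: %d\n", lsm_field_matches(&unresolved));
        return 0;
}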
Fixes: b16942455193 ("ima: use the lsm policy update notifier")
Cc: Casey Schaufler
Reported-by: Mimi Zohar
Signed-off-by: Janne Karhunen
Signed-off-by: Konsta Karsisto
Signed-off-by: Mimi Zohar
Signed-off-by: GUO Zihua
---
 security/integrity/ima/ima_policy.c | 46 +++++++++++++++++------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index 5a15524bca4c..5256ff008f11 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -264,7 +264,7 @@ static void ima_free_rule(struct ima_rule_entry *entry)
 static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
 {
         struct ima_rule_entry *nentry;
-        int i, result;
+        int i;
 
         nentry = kmalloc(sizeof(*nentry), GFP_KERNEL);
         if (!nentry)
@@ -278,7 +278,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
         memset(nentry->lsm, 0, FIELD_SIZEOF(struct ima_rule_entry, lsm));
 
         for (i = 0; i < MAX_LSM_RULES; i++) {
-                if (!entry->lsm[i].rule)
+                if (!entry->lsm[i].args_p)
                         continue;
 
                 nentry->lsm[i].type = entry->lsm[i].type;
@@ -287,13 +287,13 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
                 if (!nentry->lsm[i].args_p)
                         goto out_err;
 
-                result = security_filter_rule_init(nentry->lsm[i].type,
-                                                   Audit_equal,
-                                                   nentry->lsm[i].args_p,
-                                                   &nentry->lsm[i].rule);
-                if (result == -EINVAL)
-                        pr_warn("ima: rule for LSM \'%d\' is undefined\n",
-                                entry->lsm[i].type);
+                security_filter_rule_init(nentry->lsm[i].type,
+                                          Audit_equal,
+                                          nentry->lsm[i].args_p,
+                                          &nentry->lsm[i].rule);
+                if (!nentry->lsm[i].rule)
+                        pr_warn("rule for LSM \'%s\' is undefined\n",
+                                (char *)entry->lsm[i].args_p);
         }
         return nentry;
 
@@ -332,7 +332,7 @@ static void ima_lsm_update_rules(void)
         list_for_each_entry_safe(entry, e, &ima_policy_rules, list) {
                 needs_update = 0;
                 for (i = 0; i < MAX_LSM_RULES; i++) {
-                        if (entry->lsm[i].rule) {
+                        if (entry->lsm[i].args_p) {
                                 needs_update = 1;
                                 break;
                         }
@@ -342,8 +342,7 @@ static void ima_lsm_update_rules(void)
 
                 result = ima_lsm_update_rule(entry);
                 if (result) {
-                        pr_err("ima: lsm rule update error %d\n",
-                               result);
+                        pr_err("lsm rule update error %d\n", result);
                         return;
                 }
         }
@@ -360,7 +359,7 @@ int ima_lsm_policy_change(struct notifier_block *nb, unsigned long event,
 }
 
 /**
- * ima_match_rules - determine whether an inode matches the measure rule.
+ * ima_match_rules - determine whether an inode matches the policy rule.
  * @rule: a pointer to a rule
  * @inode: a pointer to an inode
  * @cred: a pointer to a credentials structure for user validation
@@ -413,9 +412,12 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
                 int rc = 0;
                 u32 osid;
 
-                if (!rule->lsm[i].rule)
-                        continue;
-
+                if (!rule->lsm[i].rule) {
+                        if (!rule->lsm[i].args_p)
+                                continue;
+                        else
+                                return false;
+                }
                 switch (i) {
                 case LSM_OBJ_USER:
                 case LSM_OBJ_ROLE:
@@ -733,9 +735,15 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
                                            entry->lsm[lsm_rule].args_p,
                                            &entry->lsm[lsm_rule].rule);
         if (!entry->lsm[lsm_rule].rule) {
-                kfree(entry->lsm[lsm_rule].args_p);
-                entry->lsm[lsm_rule].args_p = NULL;
-                return -EINVAL;
+                pr_warn("rule for LSM \'%s\' is undefined\n",
+                        (char *)entry->lsm[lsm_rule].args_p);
+
+                if (ima_rules == &ima_default_rules) {
+                        kfree(entry->lsm[lsm_rule].args_p);
+                        entry->lsm[lsm_rule].args_p = NULL;
+                        result = -EINVAL;
+                } else
+                        result = 0;
         }
 
         return result;

From patchwork Tue Feb 28 08:06:29 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 5/6] ima: Evaluate error in init_ima()
Date: Tue, 28 Feb 2023 16:06:29 +0800
Message-ID: <20230228080630.52370-6-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Roberto Sassu

[ Upstream commit e144d6b265415ddbdc54b3f17f4f95133effa5a8 ]

Evaluate error in init_ima() before register_blocking_lsm_notifier()
and return if not zero.
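A stand-alone sketch of the ordering bug being fixed (stub functions
and names are illustrative): without the added check, the later
assignment to error from the notifier registration would overwrite and
silently mask a failure from ima_init():

/* Sketch only: the added early return keeps the init failure visible. */
#include <errno.h>
#include <stdio.h>

/* Stand-ins for the two init steps; the first pretends to fail. */
static int ima_init_stub(void) { return -ENOMEM; }
static int register_notifier_stub(void) { return 0; }

static int init_ima_sketch(void)
{
        int error = ima_init_stub();

        if (error)      /* the added check: don't let the next call mask it */
                return error;

        error = register_notifier_stub();
        if (error)
                printf("Couldn't register LSM notifier, error %d\n", error);

        return error;
}

int main(void)
{
        printf("init_ima() -> %d\n", init_ima_sketch());
        return 0;
}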
Cc: stable@vger.kernel.org # 5.3.x
Fixes: b16942455193 ("ima: use the lsm policy update notifier")
Signed-off-by: Roberto Sassu
Reviewed-by: James Morris
Signed-off-by: Mimi Zohar
Signed-off-by: GUO Zihua
---
 security/integrity/ima/ima_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index 55681872c6ce..fcaaf2c2ba4b 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -557,6 +557,9 @@ static int __init init_ima(void)
                 error = ima_init();
         }
 
+        if (error)
+                return error;
+
         error = register_blocking_lsm_notifier(&ima_lsm_policy_notifier);
         if (error)
                 pr_warn("Couldn't register LSM notifier, error %d\n", error);

From patchwork Tue Feb 28 08:06:30 2023
From: GUO Zihua
Subject: [PATCH 4.19 v3 6/6] ima: Handle -ESTALE returned by ima_filter_rule_match()
Date: Tue, 28 Feb 2023 16:06:30 +0800
Message-ID: <20230228080630.52370-7-guozihua@huawei.com>
In-Reply-To: <20230228080630.52370-1-guozihua@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

[ Upstream commit c7423dbdbc9ecef7fff5239d144cad4b9887f4de ]

IMA relies on the blocking LSM policy notifier callback to update its
LSM-based policy rules. When SELinux updates its policies, IMA is
notified and starts updating all its LSM rules one by one. During this
time, ima_filter_rule_match() returns -ESTALE if it is called with an
LSM rule that has not yet been updated. In ima_match_rules(), -ESTALE
is not handled, and the LSM rule is considered a match, causing extra
files to be measured by IMA.

Fix this by re-initializing a temporary rule whenever -ESTALE is
returned by ima_filter_rule_match(). The original rule in the rule list
will be updated by the LSM policy notifier callback.
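A stand-alone sketch of the retry added here (plain C;
filter_rule_match(), rule_copy_fresh() and the generation field are
illustrative stand-ins, not the IMA API): on -ESTALE, match once more
against a privately re-initialized copy and free it afterwards, leaving
the shared rule for the notifier callback to update:

/* Sketch only: retry a stale match once against a fresh private copy. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct rule {
        int generation;   /* LSM policy generation this rule was built for */
};

static int current_generation = 2;   /* pretend a policy reload happened */

/* Stand-in for ima_filter_rule_match(): 1 = match, -ESTALE = stale rule. */
static int filter_rule_match(const struct rule *r)
{
        return r->generation == current_generation ? 1 : -ESTALE;
}

/* Stand-in for ima_lsm_copy_rule(): rebuild against the current policy. */
static struct rule *rule_copy_fresh(const struct rule *r)
{
        struct rule *n = malloc(sizeof(*n));

        if (n) {
                *n = *r;
                n->generation = current_generation;
        }
        return n;
}

static bool match_with_retry(const struct rule *rule)
{
        const struct rule *lsm_rule = rule;
        struct rule *copy = NULL;
        int rc;

retry:
        rc = filter_rule_match(lsm_rule);
        if (rc == -ESTALE && !copy) {
                copy = rule_copy_fresh(rule);   /* retry once, privately */
                if (copy) {
                        lsm_rule = copy;
                        goto retry;
                }
        }
        free(copy);   /* the shared rule itself is left untouched */
        return rc > 0;
}

int main(void)
{
        struct rule stale = { .generation = 1 };

        printf("stale rule matches after retry: %d\n",
               match_with_retry(&stale));
        return 0;
}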
Fixes: b16942455193 ("ima: use the lsm policy update notifier")
Signed-off-by: GUO Zihua
Reviewed-by: Roberto Sassu
Signed-off-by: Mimi Zohar
Signed-off-by: GUO Zihua
---
 security/integrity/ima/ima_policy.c | 40 ++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index 5256ff008f11..e9e15e622cf2 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -374,6 +374,9 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
                             enum ima_hooks func, int mask)
 {
         int i;
+        bool result = false;
+        struct ima_rule_entry *lsm_rule = rule;
+        bool rule_reinitialized = false;
 
         if ((rule->flags & IMA_FUNC) &&
             (rule->func != func && func != POST_SETATTR))
@@ -412,38 +415,57 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
                 int rc = 0;
                 u32 osid;
 
-                if (!rule->lsm[i].rule) {
-                        if (!rule->lsm[i].args_p)
+                if (!lsm_rule->lsm[i].rule) {
+                        if (!lsm_rule->lsm[i].args_p)
                                 continue;
                         else
                                 return false;
                 }
+
+retry:
                 switch (i) {
                 case LSM_OBJ_USER:
                 case LSM_OBJ_ROLE:
                 case LSM_OBJ_TYPE:
                         security_inode_getsecid(inode, &osid);
                         rc = security_filter_rule_match(osid,
-                                                        rule->lsm[i].type,
+                                                        lsm_rule->lsm[i].type,
                                                         Audit_equal,
-                                                        rule->lsm[i].rule,
+                                                        lsm_rule->lsm[i].rule,
                                                         NULL);
                         break;
                 case LSM_SUBJ_USER:
                 case LSM_SUBJ_ROLE:
                 case LSM_SUBJ_TYPE:
                         rc = security_filter_rule_match(secid,
-                                                        rule->lsm[i].type,
+                                                        lsm_rule->lsm[i].type,
                                                         Audit_equal,
-                                                        rule->lsm[i].rule,
+                                                        lsm_rule->lsm[i].rule,
                                                         NULL);
                 default:
                         break;
                 }
-                if (!rc)
-                        return false;
+
+                if (rc == -ESTALE && !rule_reinitialized) {
+                        lsm_rule = ima_lsm_copy_rule(rule);
+                        if (lsm_rule) {
+                                rule_reinitialized = true;
+                                goto retry;
+                        }
+                }
+                if (!rc) {
+                        result = false;
+                        goto out;
+                }
+        }
+        result = true;
+
+out:
+        if (rule_reinitialized) {
+                ima_lsm_free_rule(lsm_rule);
+                kfree(lsm_rule);
         }
-        return true;
+        return result;
 }
 
 /*