From patchwork Fri Feb 23 12:44:30 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13569025
From: Jonathan Cameron
To: Rob Herring, Frank Rowand, Julia Lawall
CC: Peter Zijlstra, Andy Shevchenko, Greg Kroah-Hartman
Subject: [PATCH v2 2/4] of: Introduce for_each_*_child_of_node_scoped() to automate of_node_put() handling
Date: Fri, 23 Feb 2024 12:44:30 +0000
Message-ID: <20240223124432.26443-3-Jonathan.Cameron@huawei.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240223124432.26443-1-Jonathan.Cameron@huawei.com>
References: <20240223124432.26443-1-Jonathan.Cameron@huawei.com>
X-Mailing-List: linux-iio@vger.kernel.org

To avoid issues with out-of-order cleanup, or ambiguity about when the
auto-freed data is first instantiated, do it within the for loop
definition. The disadvantage is that the creation of the
struct device_node *child variable is not immediately obvious where these
macros are used. However, in many cases, if there is another definition of
struct device_node *child, the compiler / static analysers will notify us
that it is unused or uninitialized.
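
As a minimal sketch of the pattern these macros replace (the helper function
and the "reg" check below are hypothetical, purely for illustration): with the
existing non-scoped iterator, every early exit from the loop has to drop the
child reference by hand.

#include <linux/of.h>

/* Hypothetical helper: count children that have a "reg" property. */
static int count_ports(struct device_node *np)
{
	struct device_node *child;
	int count = 0;

	for_each_available_child_of_node(np, child) {
		if (!of_property_present(child, "reg")) {
			/* Easy to forget: the early return must put the node. */
			of_node_put(child);
			return -EINVAL;
		}
		count++;
	}

	return count;
}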
Note that, in the vast majority of cases, the _available_ form should be
used, and as code is converted to these scoped handlers, we should confirm
that any cases that do not check for availability have a good reason not to.

Signed-off-by: Jonathan Cameron
---
 include/linux/of.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/linux/of.h b/include/linux/of.h
index 50e882ee91da..024dda54b9c7 100644
--- a/include/linux/of.h
+++ b/include/linux/of.h
@@ -1430,10 +1430,23 @@ static inline int of_property_read_s32(const struct device_node *np,
 #define for_each_child_of_node(parent, child) \
 	for (child = of_get_next_child(parent, NULL); child != NULL; \
 	     child = of_get_next_child(parent, child))
+
+#define for_each_child_of_node_scoped(parent, child) \
+	for (struct device_node *child __free(device_node) =	\
+	     of_get_next_child(parent, NULL);			\
+	     child != NULL;					\
+	     child = of_get_next_child(parent, child))
+
 #define for_each_available_child_of_node(parent, child) \
 	for (child = of_get_next_available_child(parent, NULL); child != NULL; \
 	     child = of_get_next_available_child(parent, child))
 
+#define for_each_available_child_of_node_scoped(parent, child) \
+	for (struct device_node *child __free(device_node) =	\
+	     of_get_next_available_child(parent, NULL);		\
+	     child != NULL;					\
+	     child = of_get_next_available_child(parent, child))
+
 #define for_each_of_cpu_node(cpu) \
 	for (cpu = of_get_next_cpu_node(NULL); cpu != NULL; \
 	     cpu = of_get_next_cpu_node(cpu))
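
For comparison, a sketch of the same hypothetical helper using the new scoped
form: the macro declares child itself with the __free(device_node) cleanup
attribute, so the reference is dropped automatically on every exit path and no
explicit of_node_put() is needed.

#include <linux/of.h>

/* Hypothetical helper rewritten with the scoped iterator. */
static int count_ports(struct device_node *np)
{
	int count = 0;

	/* 'child' is declared by the macro and put automatically. */
	for_each_available_child_of_node_scoped(np, child) {
		if (!of_property_present(child, "reg"))
			return -EINVAL;
		count++;
	}

	return count;
}

Because the macro declares child, any pre-existing struct device_node *child;
left in the caller becomes unused, and the compiler / static analysers will
flag it, as described above.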