From patchwork Sun Mar 2 14:09:11 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Usyskin
X-Patchwork-Id: 13997809
From: Alexander Usyskin
To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa
Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin
Subject: [PATCH v6 01/11] mtd: core: always create master device
Date: Sun, 2 Mar 2025 16:09:11 +0200
Message-ID: <20250302140921.504304-2-alexander.usyskin@intel.com>
In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com>
References: <20250302140921.504304-1-alexander.usyskin@intel.com>
List-Id: Intel graphics driver community testing & development
Sender:
"Intel-gfx" Create master device without partition when CONFIG_MTD_PARTITIONED_MASTER flag is unset. This streamlines device tree and allows to anchor runtime power management on master device in all cases. Signed-off-by: Alexander Usyskin --- drivers/mtd/mtdcore.c | 141 +++++++++++++++++++++++++++++------------- drivers/mtd/mtdcore.h | 2 +- drivers/mtd/mtdpart.c | 17 ++--- 3 files changed, 110 insertions(+), 50 deletions(-) diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c index 724f917f91ba..d0e7fb027eb6 100644 --- a/drivers/mtd/mtdcore.c +++ b/drivers/mtd/mtdcore.c @@ -68,7 +68,13 @@ static struct class mtd_class = { .pm = MTD_CLS_PM_OPS, }; +static struct class mtd_master_class = { + .name = "mtd_master", + .pm = MTD_CLS_PM_OPS, +}; + static DEFINE_IDR(mtd_idr); +static DEFINE_IDR(mtd_master_idr); /* These are exported solely for the purpose of mtd_blkdevs.c. You should not use them for _anything_ else */ @@ -83,8 +89,9 @@ EXPORT_SYMBOL_GPL(__mtd_next_device); static LIST_HEAD(mtd_notifiers); - +#define MTD_MASTER_DEVS 255 #define MTD_DEVT(index) MKDEV(MTD_CHAR_MAJOR, (index)*2) +static dev_t mtd_master_devt; /* REVISIT once MTD uses the driver model better, whoever allocates * the mtd_info will probably want to use the release() hook... @@ -104,6 +111,17 @@ static void mtd_release(struct device *dev) device_destroy(&mtd_class, index + 1); } +static void mtd_master_release(struct device *dev) +{ + struct mtd_info *mtd = dev_get_drvdata(dev); + + idr_remove(&mtd_master_idr, mtd->index); + of_node_put(mtd_get_of_node(mtd)); + + if (mtd_is_partition(mtd)) + release_mtd_partition(mtd); +} + static void mtd_device_release(struct kref *kref) { struct mtd_info *mtd = container_of(kref, struct mtd_info, refcnt); @@ -367,6 +385,11 @@ static const struct device_type mtd_devtype = { .release = mtd_release, }; +static const struct device_type mtd_master_devtype = { + .name = "mtd_master", + .release = mtd_master_release, +}; + static bool mtd_expert_analysis_mode; #ifdef CONFIG_DEBUG_FS @@ -634,13 +657,13 @@ static void mtd_check_of_node(struct mtd_info *mtd) /** * add_mtd_device - register an MTD device * @mtd: pointer to new MTD device info structure + * @partitioned: create partitioned device * * Add a device to the list of MTD devices present in the system, and * notify each currently active MTD 'user' of its arrival. Returns * zero on success or non-zero on failure. */ - -int add_mtd_device(struct mtd_info *mtd) +int add_mtd_device(struct mtd_info *mtd, bool partitioned) { struct device_node *np = mtd_get_of_node(mtd); struct mtd_info *master = mtd_get_master(mtd); @@ -687,10 +710,17 @@ int add_mtd_device(struct mtd_info *mtd) ofidx = -1; if (np) ofidx = of_alias_get_id(np, "mtd"); - if (ofidx >= 0) - i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); - else - i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); + if (partitioned) { + if (ofidx >= 0) + i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); + else + i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); + } else { + if (ofidx >= 0) + i = idr_alloc(&mtd_master_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); + else + i = idr_alloc(&mtd_master_idr, mtd, 0, 0, GFP_KERNEL); + } if (i < 0) { error = i; goto fail_locked; @@ -738,15 +768,23 @@ int add_mtd_device(struct mtd_info *mtd) /* Caller should have set dev.parent to match the * physical device, if appropriate. 
*/ - mtd->dev.type = &mtd_devtype; - mtd->dev.class = &mtd_class; - mtd->dev.devt = MTD_DEVT(i); - dev_set_name(&mtd->dev, "mtd%d", i); + if (partitioned) { + mtd->dev.type = &mtd_devtype; + mtd->dev.class = &mtd_class; + mtd->dev.devt = MTD_DEVT(i); + dev_set_name(&mtd->dev, "mtd%d", i); + } else { + mtd->dev.type = &mtd_master_devtype; + mtd->dev.class = &mtd_master_class; + mtd->dev.devt = MKDEV(MAJOR(mtd_master_devt), i); + dev_set_name(&mtd->dev, "mtd_master%d", i); + } dev_set_drvdata(&mtd->dev, mtd); mtd_check_of_node(mtd); of_node_get(mtd_get_of_node(mtd)); error = device_register(&mtd->dev); if (error) { + pr_err("mtd: %s device_register fail %d\n", mtd->name, error); put_device(&mtd->dev); goto fail_added; } @@ -758,10 +796,13 @@ int add_mtd_device(struct mtd_info *mtd) mtd_debugfs_populate(mtd); - device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, - "mtd%dro", i); + if (partitioned) { + device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, + "mtd%dro", i); + } - pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name); + pr_debug("mtd: Giving out %spartitioned device %d to %s\n", + partitioned ? "" : "un-", i, mtd->name); /* No need to get a refcount on the module containing the notifier, since we hold the mtd_table_mutex */ list_for_each_entry(not, &mtd_notifiers, list) @@ -769,13 +810,16 @@ int add_mtd_device(struct mtd_info *mtd) mutex_unlock(&mtd_table_mutex); - if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { - if (IS_BUILTIN(CONFIG_MTD)) { - pr_info("mtd: setting mtd%d (%s) as root device\n", mtd->index, mtd->name); - ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); - } else { - pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", - mtd->index, mtd->name); + if (partitioned) { + if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { + if (IS_BUILTIN(CONFIG_MTD)) { + pr_info("mtd: setting mtd%d (%s) as root device\n", + mtd->index, mtd->name); + ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); + } else { + pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", + mtd->index, mtd->name); + } } } @@ -790,7 +834,10 @@ int add_mtd_device(struct mtd_info *mtd) device_unregister(&mtd->dev); fail_added: of_node_put(mtd_get_of_node(mtd)); - idr_remove(&mtd_idr, i); + if (partitioned) + idr_remove(&mtd_idr, i); + else + idr_remove(&mtd_master_idr, i); fail_locked: mutex_unlock(&mtd_table_mutex); return error; @@ -808,12 +855,14 @@ int add_mtd_device(struct mtd_info *mtd) int del_mtd_device(struct mtd_info *mtd) { - int ret; struct mtd_notifier *not; + struct idr *idr; + int ret; mutex_lock(&mtd_table_mutex); - if (idr_find(&mtd_idr, mtd->index) != mtd) { + idr = mtd->dev.class == &mtd_class ? 
&mtd_idr : &mtd_master_idr; + if (idr_find(idr, mtd->index) != mtd) { ret = -ENODEV; goto out_error; } @@ -1061,12 +1110,6 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, if (ret) goto out; - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) { - ret = add_mtd_device(mtd); - if (ret) - goto out; - } - /* Prefer parsed partitions over driver-provided fallback */ ret = parse_mtd_partitions(mtd, types, parser_data); if (ret == -EPROBE_DEFER) @@ -1076,10 +1119,8 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, ret = 0; else if (nr_parts) ret = add_mtd_partitions(mtd, parts, nr_parts); - else if (!device_is_registered(&mtd->dev)) - ret = add_mtd_device(mtd); else - ret = 0; + ret = add_mtd_device(mtd, true); if (ret) goto out; @@ -1099,13 +1140,14 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, register_reboot_notifier(&mtd->reboot_notifier); } + return 0; out: - if (ret) { - nvmem_unregister(mtd->otp_user_nvmem); - nvmem_unregister(mtd->otp_factory_nvmem); - } + nvmem_unregister(mtd->otp_user_nvmem); + nvmem_unregister(mtd->otp_factory_nvmem); - if (ret && device_is_registered(&mtd->dev)) + del_mtd_partitions(mtd); + + if (device_is_registered(&mtd->dev)) del_mtd_device(mtd); return ret; @@ -1261,8 +1303,7 @@ int __get_mtd_device(struct mtd_info *mtd) mtd = mtd->parent; } - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) - kref_get(&master->refcnt); + kref_get(&master->refcnt); return 0; } @@ -1356,8 +1397,7 @@ void __put_mtd_device(struct mtd_info *mtd) mtd = parent; } - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) - kref_put(&master->refcnt, mtd_device_release); + kref_put(&master->refcnt, mtd_device_release); module_put(master->owner); @@ -2524,6 +2564,16 @@ static int __init init_mtd(void) if (ret) goto err_reg; + ret = class_register(&mtd_master_class); + if (ret) + goto err_reg2; + + ret = alloc_chrdev_region(&mtd_master_devt, 0, MTD_MASTER_DEVS, "mtd_master"); + if (ret < 0) { + pr_err("unable to allocate char dev region\n"); + goto err_chrdev; + } + mtd_bdi = mtd_bdi_init("mtd"); if (IS_ERR(mtd_bdi)) { ret = PTR_ERR(mtd_bdi); @@ -2548,6 +2598,10 @@ static int __init init_mtd(void) bdi_unregister(mtd_bdi); bdi_put(mtd_bdi); err_bdi: + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); +err_chrdev: + class_unregister(&mtd_master_class); +err_reg2: class_unregister(&mtd_class); err_reg: pr_err("Error registering mtd class or bdi: %d\n", ret); @@ -2561,9 +2615,12 @@ static void __exit cleanup_mtd(void) if (proc_mtd) remove_proc_entry("mtd", NULL); class_unregister(&mtd_class); + class_unregister(&mtd_master_class); + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); bdi_unregister(mtd_bdi); bdi_put(mtd_bdi); idr_destroy(&mtd_idr); + idr_destroy(&mtd_master_idr); } module_init(init_mtd); diff --git a/drivers/mtd/mtdcore.h b/drivers/mtd/mtdcore.h index b014861a06a6..2258d31c5aa6 100644 --- a/drivers/mtd/mtdcore.h +++ b/drivers/mtd/mtdcore.h @@ -8,7 +8,7 @@ extern struct mutex mtd_table_mutex; extern struct backing_dev_info *mtd_bdi; struct mtd_info *__mtd_next_device(int i); -int __must_check add_mtd_device(struct mtd_info *mtd); +int __must_check add_mtd_device(struct mtd_info *mtd, bool partitioned); int del_mtd_device(struct mtd_info *mtd); int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); int del_mtd_partitions(struct mtd_info *); diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c index 6811a714349d..97505b132313 100644 --- 
a/drivers/mtd/mtdpart.c +++ b/drivers/mtd/mtdpart.c @@ -86,8 +86,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, * parent conditional on that option. Note, this is a way to * distinguish between the parent and its partitions in sysfs. */ - child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ? - &parent->dev : parent->dev.parent; + child->dev.parent = &parent->dev; child->dev.of_node = part->of_node; child->parent = parent; child->part.offset = part->offset; @@ -276,7 +275,7 @@ int mtd_add_partition(struct mtd_info *parent, const char *name, list_add_tail(&child->part.node, &parent->partitions); mutex_unlock(&master->master.partitions_lock); - ret = add_mtd_device(child); + ret = add_mtd_device(child, true); if (ret) goto err_remove_part; @@ -402,6 +401,12 @@ int add_mtd_partitions(struct mtd_info *parent, printk(KERN_NOTICE "Creating %d MTD partitions on \"%s\":\n", nbparts, parent->name); + if (!mtd_is_partition(parent)) { + ret = add_mtd_device(parent, IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)); + if (ret) + return ret; + } + for (i = 0; i < nbparts; i++) { child = allocate_partition(parent, parts + i, i, cur_offset); if (IS_ERR(child)) { @@ -413,7 +418,7 @@ int add_mtd_partitions(struct mtd_info *parent, list_add_tail(&child->part.node, &parent->partitions); mutex_unlock(&master->master.partitions_lock); - ret = add_mtd_device(child); + ret = add_mtd_device(child, true); if (ret) { mutex_lock(&master->master.partitions_lock); list_del(&child->part.node); @@ -590,9 +595,6 @@ static int mtd_part_of_parse(struct mtd_info *master, int ret, err = 0; dev = &master->dev; - /* Use parent device (controller) if the top level MTD is not registered */ - if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master)) - dev = master->dev.parent; np = mtd_get_of_node(master); if (mtd_is_partition(master)) @@ -712,6 +714,7 @@ int parse_mtd_partitions(struct mtd_info *master, const char *const *types, if (ret < 0 && !err) err = ret; } + return err; } From patchwork Sun Mar 2 14:09:12 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997810 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 23C2CC282D1 for ; Sun, 2 Mar 2025 14:20:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9D12410E173; Sun, 2 Mar 2025 14:20:48 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="hT52TguM"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2682110E169; Sun, 2 Mar 2025 14:20:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925247; x=1772461247; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aZ4MqzQuyV1Oyo+hiJgtXsgAEk5JFyPpFPGmhbpkLlc=; b=hT52TguMwQ2WwKO0//JaTGfaCx9eGB5nf27B9Y38lzhRw/xx1aM4YBR+ /XV7K354zyjnUD24kUz5DhlMCZrLaPSoADzO/iyj4+46YUpbbRziykLG9 
dmAOaDXPg/qieQQ0qV10zxnSalitLwKcSje+awjcitjrRxJqSj+stzS8G xLalU3nqKZQ/ihUy6++9t0h1v39ym8Sq//knfiJ/ZdjyDF9iSULvTuAtv qiJ5vU1jDB3vDF9E3YqzQ2SvKVTcQPFeRLU2a2yG3HOiwpiyz7cywkImm jqIkTJNkpVydBIQ8990nnFBI0fMcrkBzrOOy7cUvjighhMZ5UrkTWpKqs g==; X-CSE-ConnectionGUID: qOX0oKtzSyyOzMddFWsZ2A== X-CSE-MsgGUID: Lu49hZzMQdCVzgUsDOyrow== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176389" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176389" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:47 -0800 X-CSE-ConnectionGUID: 80xEmZwjRi+ZRr7/AN+AxA== X-CSE-MsgGUID: /ySbAMtUTXWFYCBTiGQ4dw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737291" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:40 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v6 02/11] mtd: add driver for intel graphics non-volatile memory device Date: Sun, 2 Mar 2025 16:09:12 +0200 Message-ID: <20250302140921.504304-3-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Add auxiliary driver for intel discrete graphics non-volatile memory device. CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- MAINTAINERS | 7 ++ drivers/mtd/devices/Kconfig | 11 +++ drivers/mtd/devices/Makefile | 1 + drivers/mtd/devices/mtd_intel_dg.c | 138 +++++++++++++++++++++++++++++ include/linux/intel_dg_nvm_aux.h | 27 ++++++ 5 files changed, 184 insertions(+) create mode 100644 drivers/mtd/devices/mtd_intel_dg.c create mode 100644 include/linux/intel_dg_nvm_aux.h diff --git a/MAINTAINERS b/MAINTAINERS index cddcb097f7f3..ad29d54cc83f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11549,6 +11549,13 @@ L: linux-kernel@vger.kernel.org S: Supported F: arch/x86/include/asm/intel-family.h +INTEL DISCRETE GRAPHICS NVM MTD DRIVER +M: Alexander Usyskin +L: linux-mtd@lists.infradead.org +S: Supported +F: drivers/mtd/devices/mtd_intel_dg.c +F: include/linux/intel_dg_nvm_aux.h + INTEL DRM DISPLAY FOR XE AND I915 DRIVERS M: Jani Nikula M: Rodrigo Vivi diff --git a/drivers/mtd/devices/Kconfig b/drivers/mtd/devices/Kconfig index ff2f9e55ef28..59be6d3f0d32 100644 --- a/drivers/mtd/devices/Kconfig +++ b/drivers/mtd/devices/Kconfig @@ -183,6 +183,17 @@ config MTD_POWERNV_FLASH platforms from Linux. This device abstracts away the firmware interface for flash access. 
+config MTD_INTEL_DG + tristate "Intel Discrete Graphics non-volatile memory driver" + depends on AUXILIARY_BUS + depends on MTD + help + This provides an MTD device to access Intel Discrete Graphics + non-volatile memory. + + To compile this driver as a module, choose M here: the module + will be called mtd-intel-dg. + comment "Disk-On-Chip Device Drivers" config MTD_DOCG3 diff --git a/drivers/mtd/devices/Makefile b/drivers/mtd/devices/Makefile index d11eb2b8b6f8..9fe4ce9cffde 100644 --- a/drivers/mtd/devices/Makefile +++ b/drivers/mtd/devices/Makefile @@ -18,6 +18,7 @@ obj-$(CONFIG_MTD_SST25L) += sst25l.o obj-$(CONFIG_MTD_BCM47XXSFLASH) += bcm47xxsflash.o obj-$(CONFIG_MTD_ST_SPI_FSM) += st_spi_fsm.o obj-$(CONFIG_MTD_POWERNV_FLASH) += powernv_flash.o +obj-$(CONFIG_MTD_INTEL_DG) += mtd_intel_dg.o CFLAGS_docg3.o += -I$(src) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c new file mode 100644 index 000000000000..963a88cacc6c --- /dev/null +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -0,0 +1,138 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +struct intel_dg_nvm { + struct kref refcnt; + void __iomem *base; + size_t size; + unsigned int nregions; + struct { + const char *name; + u8 id; + u64 offset; + u64 size; + } regions[] __counted_by(nregions); +}; + +static void intel_dg_nvm_release(struct kref *kref) +{ + struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); + int i; + + pr_debug("freeing intel_dg nvm\n"); + for (i = 0; i < nvm->nregions; i++) + kfree(nvm->regions[i].name); + kfree(nvm); +} + +static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *aux_dev_id) +{ + struct intel_dg_nvm_dev *invm = auxiliary_dev_to_intel_dg_nvm_dev(aux_dev); + struct device *device; + struct intel_dg_nvm *nvm; + unsigned int nregions; + unsigned int i, n; + char *name; + int ret; + + device = &aux_dev->dev; + + /* count available regions */ + for (nregions = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { + if (invm->regions[i].name) + nregions++; + } + + if (!nregions) { + dev_err(device, "no regions defined\n"); + return -ENODEV; + } + + nvm = kzalloc(struct_size(nvm, regions, nregions), GFP_KERNEL); + if (!nvm) + return -ENOMEM; + + kref_init(&nvm->refcnt); + + nvm->nregions = nregions; + for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { + if (!invm->regions[i].name) + continue; + + name = kasprintf(GFP_KERNEL, "%s.%s", + dev_name(&aux_dev->dev), invm->regions[i].name); + if (!name) + continue; + nvm->regions[n].name = name; + nvm->regions[n].id = i; + n++; + } + nvm->nregions = n; /* in case where kasprintf fail */ + + nvm->base = devm_ioremap_resource(device, &invm->bar); + if (IS_ERR(nvm->base)) { + dev_err(device, "mmio not mapped\n"); + ret = PTR_ERR(nvm->base); + goto err; + } + + dev_set_drvdata(&aux_dev->dev, nvm); + + return 0; + +err: + kref_put(&nvm->refcnt, intel_dg_nvm_release); + return ret; +} + +static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev) +{ + struct intel_dg_nvm *nvm = dev_get_drvdata(&aux_dev->dev); + + if (!nvm) + return; + + dev_set_drvdata(&aux_dev->dev, NULL); + + kref_put(&nvm->refcnt, intel_dg_nvm_release); +} + +static const struct auxiliary_device_id intel_dg_mtd_id_table[] = { + { + .name = "i915.nvm", + }, + { + .name = "xe.nvm", + }, + { + /* sentinel */ + } +}; +MODULE_DEVICE_TABLE(auxiliary, 
intel_dg_mtd_id_table); + +static struct auxiliary_driver intel_dg_mtd_driver = { + .probe = intel_dg_mtd_probe, + .remove = intel_dg_mtd_remove, + .driver = { + /* auxiliary_driver_register() sets .name to be the modname */ + }, + .id_table = intel_dg_mtd_id_table +}; + +module_auxiliary_driver(intel_dg_mtd_driver); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Intel Corporation"); +MODULE_DESCRIPTION("Intel DGFX MTD driver"); diff --git a/include/linux/intel_dg_nvm_aux.h b/include/linux/intel_dg_nvm_aux.h new file mode 100644 index 000000000000..68df634c994c --- /dev/null +++ b/include/linux/intel_dg_nvm_aux.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. + */ + +#ifndef __INTEL_DG_NVM_AUX_H__ +#define __INTEL_DG_NVM_AUX_H__ + +#include + +#define INTEL_DG_NVM_REGIONS 13 + +struct intel_dg_nvm_region { + const char *name; +}; + +struct intel_dg_nvm_dev { + struct auxiliary_device aux_dev; + bool writable_override; + struct resource bar; + const struct intel_dg_nvm_region *regions; +}; + +#define auxiliary_dev_to_intel_dg_nvm_dev(auxiliary_dev) \ + container_of(auxiliary_dev, struct intel_dg_nvm_dev, aux_dev) + +#endif /* __INTEL_DG_NVM_AUX_H__ */ From patchwork Sun Mar 2 14:09:13 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997820 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C270FC282C6 for ; Sun, 2 Mar 2025 14:20:54 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 41E5510E19E; Sun, 2 Mar 2025 14:20:54 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="S7Q084e0"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 27B8A10E189; Sun, 2 Mar 2025 14:20:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925253; x=1772461253; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a0B7fMkJxt2Wb7bBeMVZwsKjEvvl5x89e9KqNreoDzc=; b=S7Q084e06shqX0UZypg3ArBt/6r9D/nzOpK97ieuIqAXhO73vebV38S0 hsoBGujRXuOa/ko1Vn1TXwZaf0lT+uUnm8OBik1oMHBv6r8lB7xJakkTt oBWnyueOxTMTaqn76PoUvhATDgQ7KexTnB5IJ1wD+ylNEHD0PohoR48aB na8Ect2arfjLk899aIF8AjizN6VTB3xmG3U8nFtBw40GM3Yj7NIp8In8o cCkfSbh27fQibjkXLRv66Av8eLtpRo8BNnfZPpWdaWix6flJOQPvtpn3+ SohhJH8zsFjWzXchTaFTb/T/OEccfY9AbhRGpyhChrnJSo9c/xg8rh71f Q==; X-CSE-ConnectionGUID: VhKY9TLURl+p/qvFs3buLA== X-CSE-MsgGUID: IoH/kwScSUqTvW3WU9rYPQ== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176406" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176406" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:53 -0800 X-CSE-ConnectionGUID: ILVEsQWeQn25tRMdsBkFmg== X-CSE-MsgGUID: 57aHayV8TTW5QzjT45R12g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737313" Received: from 
sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:46 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v6 03/11] mtd: intel-dg: implement region enumeration Date: Sun, 2 Mar 2025 16:09:13 +0200 Message-ID: <20250302140921.504304-4-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" In intel-dg, there is no access to the spi controller, the information is extracted from the descriptor region. CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 198 +++++++++++++++++++++++++++++ 1 file changed, 198 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 963a88cacc6c..ba1c720e717b 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -3,6 +3,8 @@ * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. 
*/ +#include +#include #include #include #include @@ -22,9 +24,199 @@ struct intel_dg_nvm { u8 id; u64 offset; u64 size; + unsigned int is_readable:1; + unsigned int is_writable:1; } regions[] __counted_by(nregions); }; +#define NVM_TRIGGER_REG 0x00000000 +#define NVM_VALSIG_REG 0x00000010 +#define NVM_ADDRESS_REG 0x00000040 +#define NVM_REGION_ID_REG 0x00000044 +/* + * [15:0]-Erase size = 0x0010 4K 0x0080 32K 0x0100 64K + * [23:16]-Reserved + * [31:24]-Erase MEM RegionID + */ +#define NVM_ERASE_REG 0x00000048 +#define NVM_ACCESS_ERROR_REG 0x00000070 +#define NVM_ADDRESS_ERROR_REG 0x00000074 + +/* Flash Valid Signature */ +#define NVM_FLVALSIG 0x0FF0A55A + +#define NVM_MAP_ADDR_MASK GENMASK(7, 0) +#define NVM_MAP_ADDR_SHIFT 0x00000004 + +#define NVM_REGION_ID_DESCRIPTOR 0 +/* Flash Region Base Address */ +#define NVM_FRBA 0x40 +/* Flash Region __n - Flash Descriptor Record */ +#define NVM_FLREG(__n) (NVM_FRBA + ((__n) * 4)) +/* Flash Map 1 Register */ +#define NVM_FLMAP1_REG 0x18 +#define NVM_FLMSTR4_OFFSET 0x00C + +#define NVM_ACCESS_ERROR_PCIE_MASK 0x7 + +#define NVM_FREG_BASE_MASK GENMASK(15, 0) +#define NVM_FREG_ADDR_MASK GENMASK(31, 16) +#define NVM_FREG_ADDR_SHIFT 12 +#define NVM_FREG_MIN_REGION_SIZE 0xFFF + +static inline void idg_nvm_set_region_id(struct intel_dg_nvm *nvm, u8 region) +{ + iowrite32((u32)region, nvm->base + NVM_REGION_ID_REG); +} + +static inline u32 idg_nvm_error(struct intel_dg_nvm *nvm) +{ + void __iomem *base = nvm->base; + + u32 reg = ioread32(base + NVM_ACCESS_ERROR_REG) & NVM_ACCESS_ERROR_PCIE_MASK; + + /* reset error bits */ + if (reg) + iowrite32(reg, base + NVM_ACCESS_ERROR_REG); + + return reg; +} + +static inline u32 idg_nvm_read32(struct intel_dg_nvm *nvm, u32 address) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + return ioread32(base + NVM_TRIGGER_REG); +} + +static int idg_nvm_get_access_map(struct intel_dg_nvm *nvm, u32 *access_map) +{ + u32 flmap1; + u32 fmba; + u32 fmstr4; + u32 fmstr4_addr; + + idg_nvm_set_region_id(nvm, NVM_REGION_ID_DESCRIPTOR); + + flmap1 = idg_nvm_read32(nvm, NVM_FLMAP1_REG); + if (idg_nvm_error(nvm)) + return -EIO; + /* Get Flash Master Baser Address (FMBA) */ + fmba = (FIELD_GET(NVM_MAP_ADDR_MASK, flmap1) << NVM_MAP_ADDR_SHIFT); + fmstr4_addr = fmba + NVM_FLMSTR4_OFFSET; + + fmstr4 = idg_nvm_read32(nvm, fmstr4_addr); + if (idg_nvm_error(nvm)) + return -EIO; + + *access_map = fmstr4; + return 0; +} + +static bool idg_nvm_region_readable(u32 access_map, u8 region) +{ + if (region < 12) + return access_map & BIT(region + 8); /* [19:8] */ + else + return access_map & BIT(region - 12); /* [3:0] */ +} + +static bool idg_nvm_region_writable(u32 access_map, u8 region) +{ + if (region < 12) + return access_map & BIT(region + 20); /* [31:20] */ + else + return access_map & BIT(region - 8); /* [7:4] */ +} + +static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) +{ + u32 is_valid; + + idg_nvm_set_region_id(nvm, NVM_REGION_ID_DESCRIPTOR); + + is_valid = idg_nvm_read32(nvm, NVM_VALSIG_REG); + if (idg_nvm_error(nvm)) + return -EIO; + + if (is_valid != NVM_FLVALSIG) + return -ENODEV; + + return 0; +} + +static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) +{ + int ret; + unsigned int i, n; + u32 access_map = 0; + + /* clean error register, previous errors are ignored */ + idg_nvm_error(nvm); + + ret = idg_nvm_is_valid(nvm); + if (ret) { + dev_err(device, "The MEM is not valid %d\n", ret); + return ret; + } + + if (idg_nvm_get_access_map(nvm, &access_map)) + return -EIO; + + 
for (i = 0, n = 0; i < nvm->nregions; i++) { + u32 address, base, limit, region; + u8 id = nvm->regions[i].id; + + address = NVM_FLREG(id); + region = idg_nvm_read32(nvm, address); + + base = FIELD_GET(NVM_FREG_BASE_MASK, region) << NVM_FREG_ADDR_SHIFT; + limit = (FIELD_GET(NVM_FREG_ADDR_MASK, region) << NVM_FREG_ADDR_SHIFT) | + NVM_FREG_MIN_REGION_SIZE; + + dev_dbg(device, "[%d] %s: region: 0x%08X base: 0x%08x limit: 0x%08x\n", + id, nvm->regions[i].name, region, base, limit); + + if (base >= limit || (i > 0 && limit == 0)) { + dev_dbg(device, "[%d] %s: disabled\n", + id, nvm->regions[i].name); + nvm->regions[i].is_readable = 0; + continue; + } + + if (nvm->size < limit) + nvm->size = limit; + + nvm->regions[i].offset = base; + nvm->regions[i].size = limit - base + 1; + /* No write access to descriptor; mask it out*/ + nvm->regions[i].is_writable = idg_nvm_region_writable(access_map, id); + + nvm->regions[i].is_readable = idg_nvm_region_readable(access_map, id); + dev_dbg(device, "Registered, %s id=%d offset=%lld size=%lld rd=%d wr=%d\n", + nvm->regions[i].name, + nvm->regions[i].id, + nvm->regions[i].offset, + nvm->regions[i].size, + nvm->regions[i].is_readable, + nvm->regions[i].is_writable); + + if (nvm->regions[i].is_readable) + n++; + } + + dev_dbg(device, "Registered %d regions\n", n); + + /* Need to add 1 to the amount of memory + * so it is reported as an even block + */ + nvm->size += 1; + + return n; +} + static void intel_dg_nvm_release(struct kref *kref) { struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); @@ -88,6 +280,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, goto err; } + ret = intel_dg_nvm_init(nvm, device); + if (ret < 0) { + dev_err(device, "cannot initialize nvm %d\n", ret); + goto err; + } + dev_set_drvdata(&aux_dev->dev, nvm); return 0; From patchwork Sun Mar 2 14:09:14 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997821 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0FCC5C282D1 for ; Sun, 2 Mar 2025 14:21:01 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8FF8510E2B4; Sun, 2 Mar 2025 14:21:00 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="mPhUEuoe"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 29D2510E2C5; Sun, 2 Mar 2025 14:20:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925259; x=1772461259; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lVtZQbsxfX/iqDLELTpbrgDvP7eZNLkxpNu2nwCF3LE=; b=mPhUEuoeMIOWgX8PF9t2XudSTRzxS1rSSTT9FXPDr7d8C7eFMncE2AtZ OZ9neIsqXyGAQhw9FXWnL7w7vkgyeTGk8eK8h3J2NFvnqdnOfpCjUIsKs jPzD+1rsEx67c6876bZtM6ShNJ9s+ck8fIfB/v3mnXhQFwbI4tBJxgTpb ef6QKnUl1QhA6pfr9w0RnzZU98OHO0iFa7iOV2WeLXK0V5d8H23nAoeIW A4ZbB29SAJLOxTLz2iCZisnaUzoajAWlmC8/cDPp1oe74yTZqUbx7sZde 
VVxPwObmGlEoLUX9kcas+oa7r/fD/X7SKahE+VuV7fGeqi3xFmxP+4rfa w==; X-CSE-ConnectionGUID: 1NXReUvETre9LiRiVKIVuQ== X-CSE-MsgGUID: rHXFL2OZRm2/e4tDx0iXZg== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176410" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176410" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:59 -0800 X-CSE-ConnectionGUID: RShP9ZzfSV6ZlBGczJQ8ow== X-CSE-MsgGUID: 0B16smd+Qd+5m4HKfdT5jQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737322" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:52 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler , Vitaly Lubart Subject: [PATCH v6 04/11] mtd: intel-dg: implement access functions Date: Sun, 2 Mar 2025 16:09:14 +0200 Message-ID: <20250302140921.504304-5-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Implement read(), erase() and write() functions. 
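For illustration only (not part of this patch): the new helpers funnel every access through a 32-bit/64-bit register window, so a write whose start or end is not 32-bit aligned is handled by reading the containing word, patching the affected bytes and writing the word back. A minimal standalone sketch of that read-modify-write scheme follows; dev_read32()/dev_write32() are hypothetical stand-ins for the NVM_ADDRESS_REG/NVM_TRIGGER_REG register pair used by idg_nvm_read32()/idg_nvm_write32().

/*
 * Illustration only: read-modify-write handling of unaligned head/tail
 * bytes, mirroring the structure of idg_write()/idg_nvm_rewrite_partial().
 * dev_read32()/dev_write32() are hypothetical primitives, not driver API.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

uint32_t dev_read32(uint32_t addr);		/* assumed primitive */
void dev_write32(uint32_t addr, uint32_t val);	/* assumed primitive */

/* Patch 'len' bytes at byte offset 'off' inside the word at 'word_addr'. */
static void rewrite_partial(uint32_t word_addr, size_t off, size_t len,
			    const uint8_t *src)
{
	uint32_t word = dev_read32(word_addr);	/* fetch the containing word */

	memcpy((uint8_t *)&word + off, src, len);
	dev_write32(word_addr, word);		/* write the full word back */
}

static void dev_write(uint32_t to, const uint8_t *buf, size_t len)
{
	uint32_t aligned = to & ~(uint32_t)3;
	size_t head = (4 - (to - aligned)) & 3;	/* bytes up to next word boundary */

	if (head > len)
		head = len;
	if (head) {				/* unaligned head */
		rewrite_partial(aligned, to - aligned, head, buf);
		buf += head;
		to += head;
		len -= head;
	}
	while (len >= 4) {			/* aligned bulk, one word at a time */
		uint32_t word;

		memcpy(&word, buf, 4);
		dev_write32(to, word);
		buf += 4;
		to += 4;
		len -= 4;
	}
	if (len)				/* unaligned tail */
		rewrite_partial(to, 0, len, buf);
}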
CC: Lucas De Marchi CC: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Co-developed-by: Vitaly Lubart Signed-off-by: Vitaly Lubart Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 197 +++++++++++++++++++++++++++++ 1 file changed, 197 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index ba1c720e717b..6f67cf966d05 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -5,13 +5,16 @@ #include #include +#include #include #include #include +#include #include #include #include #include +#include #include struct intel_dg_nvm { @@ -91,6 +94,33 @@ static inline u32 idg_nvm_read32(struct intel_dg_nvm *nvm, u32 address) return ioread32(base + NVM_TRIGGER_REG); } +static inline u64 idg_nvm_read64(struct intel_dg_nvm *nvm, u32 address) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + return readq(base + NVM_TRIGGER_REG); +} + +static void idg_nvm_write32(struct intel_dg_nvm *nvm, u32 address, u32 data) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + iowrite32(data, base + NVM_TRIGGER_REG); +} + +static void idg_nvm_write64(struct intel_dg_nvm *nvm, u32 address, u64 data) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + writeq(data, base + NVM_TRIGGER_REG); +} + static int idg_nvm_get_access_map(struct intel_dg_nvm *nvm, u32 *access_map) { u32 flmap1; @@ -147,6 +177,173 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) return 0; } +__maybe_unused +static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from) +{ + unsigned int i; + + for (i = 0; i < nvm->nregions; i++) { + if ((nvm->regions[i].offset + nvm->regions[i].size - 1) > from && + nvm->regions[i].offset <= from && + nvm->regions[i].size != 0) + break; + } + + return i; +} + +static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to, + loff_t offset, size_t len, const u32 *newdata) +{ + u32 data = idg_nvm_read32(nvm, to); + + if (idg_nvm_error(nvm)) + return -EIO; + + memcpy((u8 *)&data + offset, newdata, len); + + idg_nvm_write32(nvm, to, data); + if (idg_nvm_error(nvm)) + return -EIO; + + return len; +} + +__maybe_unused +static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, + loff_t to, size_t len, const unsigned char *buf) +{ + size_t i; + size_t len8; + size_t len4; + size_t to4; + size_t to_shift; + size_t len_s = len; + ssize_t ret; + + idg_nvm_set_region_id(nvm, region); + + to4 = ALIGN_DOWN(to, sizeof(u32)); + to_shift = min(sizeof(u32) - ((size_t)to - to4), len); + if (to - to4) { + ret = idg_nvm_rewrite_partial(nvm, to4, to - to4, to_shift, (uint32_t *)&buf[0]); + if (ret < 0) + return ret; + + buf += to_shift; + to += to_shift; + len_s -= to_shift; + } + + len8 = ALIGN_DOWN(len_s, sizeof(u64)); + for (i = 0; i < len8; i += sizeof(u64)) { + u64 data; + + memcpy(&data, &buf[i], sizeof(u64)); + idg_nvm_write64(nvm, to + i, data); + if (idg_nvm_error(nvm)) + return -EIO; + } + + len4 = len_s - len8; + if (len4 >= sizeof(u32)) { + u32 data; + + memcpy(&data, &buf[i], sizeof(u32)); + idg_nvm_write32(nvm, to + i, data); + if (idg_nvm_error(nvm)) + return -EIO; + i += sizeof(u32); + len4 -= sizeof(u32); + } + + if (len4 > 0) { + ret = idg_nvm_rewrite_partial(nvm, to + i, 0, len4, (uint32_t *)&buf[i]); + if (ret < 0) + return ret; + } + + return len; +} + +__maybe_unused +static ssize_t idg_read(struct 
intel_dg_nvm *nvm, u8 region, + loff_t from, size_t len, unsigned char *buf) +{ + size_t i; + size_t len8; + size_t len4; + size_t from4; + size_t from_shift; + size_t len_s = len; + + idg_nvm_set_region_id(nvm, region); + + from4 = ALIGN_DOWN(from, sizeof(u32)); + from_shift = min(sizeof(u32) - ((size_t)from - from4), len); + + if (from - from4) { + u32 data = idg_nvm_read32(nvm, from4); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[0], (u8 *)&data + (from - from4), from_shift); + len_s -= from_shift; + buf += from_shift; + from += from_shift; + } + + len8 = ALIGN_DOWN(len_s, sizeof(u64)); + for (i = 0; i < len8; i += sizeof(u64)) { + u64 data = idg_nvm_read64(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + + memcpy(&buf[i], &data, sizeof(data)); + } + + len4 = len_s - len8; + if (len4 >= sizeof(u32)) { + u32 data = idg_nvm_read32(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[i], &data, sizeof(data)); + i += sizeof(u32); + len4 -= sizeof(u32); + } + + if (len4 > 0) { + u32 data = idg_nvm_read32(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[i], &data, len4); + } + + return len; +} + +__maybe_unused +static ssize_t +idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_addr) +{ + u64 i; + const u32 block = 0x10; + void __iomem *base = nvm->base; + + for (i = 0; i < len; i += SZ_4K) { + iowrite32(from + i, base + NVM_ADDRESS_REG); + iowrite32(region << 24 | block, base + NVM_ERASE_REG); + /* Since the writes are via sguint + * we cannot do back to back erases. + */ + msleep(50); + } + return len; +} + static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) { int ret; From patchwork Sun Mar 2 14:09:15 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997822 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EDF16C19F32 for ; Sun, 2 Mar 2025 14:21:06 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9901210E2DB; Sun, 2 Mar 2025 14:21:06 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="FDHjRmuv"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id A903110E2DB; Sun, 2 Mar 2025 14:21:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925266; x=1772461266; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9ukI5iPUv8J84CoV1OEbUeSFBHIZ7pB9cPVw1XNT34Y=; b=FDHjRmuvk/YgUk2r6tRS+vF+HGAsa5JYVyG670jjHVnGir2x+VhRsUs8 CiUwgP4+9pogIoaf83UYcfOIXcV0/LD7vMq2Flt537juwl2RdRdxK88WR JwoLNHg2pbGGTA3aRrC8nsXOSx97d7MGhzMj0zY6jNrkl7gJ5iZde1k4v u3hw2biOna7rVoMudqi1CbRBl1r4x53QPJUuVdXRUdinFZbpAaVvmiTw9 5vA3oWs9cAXMpV4cr98BiFLBMKuSwvL3NziBXvnN3V4rxfT9oldQzOavu vZ236LoXfPep/XCJs8gnNxeOO/A+tkSwna0kOi3vQVtpGsLW8YTjO9lJI Q==; X-CSE-ConnectionGUID: C/THTfyCTfOZM7kuA1eSKQ== X-CSE-MsgGUID: 7j3UqfthTOCPG3J+8vL27w== X-IronPort-AV: 
E=McAfee;i="6700,10204,11361"; a="67176428" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176428" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:05 -0800 X-CSE-ConnectionGUID: e2tJ8PVPQmK+JieO7wqSsg== X-CSE-MsgGUID: 59nvkgX2R8OwGvClApoHug== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737345" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:20:58 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler , Vitaly Lubart Subject: [PATCH v6 05/11] mtd: intel-dg: register with mtd Date: Sun, 2 Mar 2025 16:09:15 +0200 Message-ID: <20250302140921.504304-6-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Register the on-die nvm device with the mtd subsystem. Refcount nvm object on _get and _put mtd callbacks. For erase operation address and size should be 4K aligned. For write operation address and size has to be 4bytes aligned. 
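For illustration only (not part of this patch): a userspace client reaching the registered device through the standard MTD character device has to respect the same constraints, i.e. 4K-aligned erases and 4-byte-aligned writes. A minimal sketch, assuming the region shows up as the hypothetical node /dev/mtd0:

/*
 * Illustration only: erase one 4K block and write 4-byte-aligned data
 * through the standard MTD character device. "/dev/mtd0" is a placeholder
 * for whatever node the region is assigned on a given system.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int update_first_block(const void *data, size_t len)
{
	struct erase_info_user ei = {
		.start  = 0,		/* 4K aligned, as required for erase */
		.length = 4096,
	};
	int fd = open("/dev/mtd0", O_RDWR);

	if (fd < 0)
		return -1;

	/* whole-block erase first, then a 4-byte-aligned write at offset 0 */
	if (ioctl(fd, MEMERASE, &ei) < 0 ||
	    pwrite(fd, data, len, 0) != (ssize_t)len) {
		close(fd);
		return -1;
	}

	close(fd);
	return 0;
}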
CC: Rodrigo Vivi CC: Lucas De Marchi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Co-developed-by: Vitaly Lubart Signed-off-by: Vitaly Lubart Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 230 ++++++++++++++++++++++++++++- 1 file changed, 226 insertions(+), 4 deletions(-) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 6f67cf966d05..4023f2ebc344 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -5,6 +5,7 @@ #include #include +#include #include #include #include @@ -12,6 +13,8 @@ #include #include #include +#include +#include #include #include #include @@ -19,6 +22,8 @@ struct intel_dg_nvm { struct kref refcnt; + struct mtd_info mtd; + struct mutex lock; /* region access lock */ void __iomem *base; size_t size; unsigned int nregions; @@ -177,7 +182,6 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) return 0; } -__maybe_unused static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from) { unsigned int i; @@ -209,7 +213,6 @@ static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to, return len; } -__maybe_unused static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, loff_t to, size_t len, const unsigned char *buf) { @@ -266,7 +269,6 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, return len; } -__maybe_unused static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, loff_t from, size_t len, unsigned char *buf) { @@ -325,7 +327,6 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, return len; } -__maybe_unused static ssize_t idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_addr) { @@ -414,6 +415,147 @@ static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) return n; } +static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) +{ + struct intel_dg_nvm *nvm = mtd->priv; + unsigned int idx; + u8 region; + u64 addr; + ssize_t bytes; + loff_t from; + size_t len; + size_t total_len; + + if (WARN_ON(!nvm)) + return -EINVAL; + + if (!IS_ALIGNED(info->addr, SZ_4K) || !IS_ALIGNED(info->len, SZ_4K)) { + dev_err(&mtd->dev, "unaligned erase %llx %llx\n", + info->addr, info->len); + info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; + return -EINVAL; + } + + total_len = info->len; + addr = info->addr; + + guard(mutex)(&nvm->lock); + + while (total_len > 0) { + if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) { + dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len); + info->fail_addr = addr; + return -ERANGE; + } + + idx = idg_nvm_get_region(nvm, addr); + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; + return -ERANGE; + } + + from = addr - nvm->regions[idx].offset; + region = nvm->regions[idx].id; + len = total_len; + if (len > nvm->regions[idx].size - from) + len = nvm->regions[idx].size - from; + + dev_dbg(&mtd->dev, "erasing region[%d] %s from %llx len %zx\n", + region, nvm->regions[idx].name, from, len); + + bytes = idg_erase(nvm, region, from, len, &info->fail_addr); + if (bytes < 0) { + dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes); + info->fail_addr += nvm->regions[idx].offset; + return bytes; + } + + addr += len; + total_len -= len; + } + + return 0; +} + +static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, + size_t *retlen, u_char *buf) +{ + struct intel_dg_nvm *nvm = mtd->priv; + 
ssize_t ret; + unsigned int idx; + u8 region; + + if (WARN_ON(!nvm)) + return -EINVAL; + + idx = idg_nvm_get_region(nvm, from); + + dev_dbg(&mtd->dev, "reading region[%d] %s from %lld len %zd\n", + nvm->regions[idx].id, nvm->regions[idx].name, from, len); + + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + return -ERANGE; + } + + from -= nvm->regions[idx].offset; + region = nvm->regions[idx].id; + if (len > nvm->regions[idx].size - from) + len = nvm->regions[idx].size - from; + + guard(mutex)(&nvm->lock); + + ret = idg_read(nvm, region, from, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "read failed with %zd\n", ret); + return ret; + } + + *retlen = ret; + + return 0; +} + +static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, + size_t *retlen, const u_char *buf) +{ + struct intel_dg_nvm *nvm = mtd->priv; + ssize_t ret; + unsigned int idx; + u8 region; + + if (WARN_ON(!nvm)) + return -EINVAL; + + idx = idg_nvm_get_region(nvm, to); + + dev_dbg(&mtd->dev, "writing region[%d] %s to %lld len %zd\n", + nvm->regions[idx].id, nvm->regions[idx].name, to, len); + + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + return -ERANGE; + } + + to -= nvm->regions[idx].offset; + region = nvm->regions[idx].id; + if (len > nvm->regions[idx].size - to) + len = nvm->regions[idx].size - to; + + guard(mutex)(&nvm->lock); + + ret = idg_write(nvm, region, to, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "write failed with %zd\n", ret); + return ret; + } + + *retlen = ret; + + return 0; +} + static void intel_dg_nvm_release(struct kref *kref) { struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); @@ -422,9 +564,80 @@ static void intel_dg_nvm_release(struct kref *kref) pr_debug("freeing intel_dg nvm\n"); for (i = 0; i < nvm->nregions; i++) kfree(nvm->regions[i].name); + mutex_destroy(&nvm->lock); kfree(nvm); } +static int intel_dg_mtd_get_device(struct mtd_info *mtd) +{ + struct mtd_info *master = mtd_get_master(mtd); + struct intel_dg_nvm *nvm = master->priv; + + if (WARN_ON(!nvm)) + return -EINVAL; + pr_debug("get mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt)); + kref_get(&nvm->refcnt); + + return 0; +} + +static void intel_dg_mtd_put_device(struct mtd_info *mtd) +{ + struct mtd_info *master = mtd_get_master(mtd); + struct intel_dg_nvm *nvm = master->priv; + + if (WARN_ON(!nvm)) + return; + pr_debug("put mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt)); + kref_put(&nvm->refcnt, intel_dg_nvm_release); +} + +static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device, + unsigned int nparts, bool writable_override) +{ + unsigned int i; + unsigned int n; + struct mtd_partition *parts = NULL; + int ret; + + dev_dbg(device, "registering with mtd\n"); + + nvm->mtd.owner = THIS_MODULE; + nvm->mtd.dev.parent = device; + nvm->mtd.flags = MTD_CAP_NORFLASH | MTD_WRITEABLE; + nvm->mtd.type = MTD_DATAFLASH; + nvm->mtd.priv = nvm; + nvm->mtd._write = intel_dg_mtd_write; + nvm->mtd._read = intel_dg_mtd_read; + nvm->mtd._erase = intel_dg_mtd_erase; + nvm->mtd._get_device = intel_dg_mtd_get_device; + nvm->mtd._put_device = intel_dg_mtd_put_device; + nvm->mtd.writesize = SZ_1; /* 1 byte granularity */ + nvm->mtd.erasesize = SZ_4K; /* 4K bytes granularity */ + nvm->mtd.size = nvm->size; + + parts = kcalloc(nvm->nregions, sizeof(*parts), GFP_KERNEL); + if (!parts) + return -ENOMEM; + + for (i = 0, n = 0; i < nvm->nregions && n < nparts; i++) { + if (!nvm->regions[i].is_readable) + continue; + parts[n].name = 
nvm->regions[i].name; + parts[n].offset = nvm->regions[i].offset; + parts[n].size = nvm->regions[i].size; + if (!nvm->regions[i].is_writable && !writable_override) + parts[n].mask_flags = MTD_WRITEABLE; + n++; + } + + ret = mtd_device_register(&nvm->mtd, parts, n); + + kfree(parts); + + return ret; +} + static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *aux_dev_id) { @@ -454,6 +667,7 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, return -ENOMEM; kref_init(&nvm->refcnt); + mutex_init(&nvm->lock); nvm->nregions = nregions; for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { @@ -483,6 +697,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, goto err; } + ret = intel_dg_nvm_init_mtd(nvm, device, ret, invm->writable_override); + if (ret) { + dev_err(device, "failed init mtd %d\n", ret); + goto err; + } + dev_set_drvdata(&aux_dev->dev, nvm); return 0; @@ -499,6 +719,8 @@ static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev) if (!nvm) return; + mtd_device_unregister(&nvm->mtd); + dev_set_drvdata(&aux_dev->dev, NULL); kref_put(&nvm->refcnt, intel_dg_nvm_release); From patchwork Sun Mar 2 14:09:16 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997823 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 65B65C282D1 for ; Sun, 2 Mar 2025 14:21:12 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0F8B610E2C5; Sun, 2 Mar 2025 14:21:12 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="FSOaviy1"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 69CAA10E2F3; Sun, 2 Mar 2025 14:21:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925272; x=1772461272; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=02tjDKQNWeXHFLJTCMQ3g4lvV4tJ61jI02p0lATS/lk=; b=FSOaviy1sBdKjR0PBNP2epnpSE0+UXsTK6pwbi0UlHpCnq37PU3R/Hf4 mj99QUqe30wHVdlOnR0I16naCjttuqiZABpo1XZH3oOLbtjB2m24R/NH2 /Rhe+gqlxCHHMXmGd4+6BuGRpcVqkhKZBPl87Bf5xB/QiZfy/cj587l9M tpl8aIBMuDOa7C9CZz5O7HaOdUuINQAb4oNTzdKP+stYXHwuilMXKcrHs rCtKtTZe3k2KT4P9tl9LL9Wn/yo97de2Wbxlrun2acEEMnji59DpkuiJR tP1z84mxUfay2zHgWWFM/3QkxCjzP/yuFuHa2R3qi6LOOyNyp5fMjPY7S g==; X-CSE-ConnectionGUID: X1II9qhYRWCnpVXpylp9Gw== X-CSE-MsgGUID: wS50KrW2QXK9QuC49yXojQ== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176444" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176444" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:11 -0800 X-CSE-ConnectionGUID: Nr5kF5eDQUKNGsO7hr0BIw== X-CSE-MsgGUID: Y8Pxrq6mRayadUVDiwWYng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737353" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:05 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v6 06/11] mtd: intel-dg: align 64bit read and write Date: Sun, 2 Mar 2025 16:09:16 +0200 Message-ID: <20250302140921.504304-7-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" GSC NVM controller HW errors on quad access overlapping 1K border. Align 64bit read and write to avoid readq/writeq over 1K border. Acked-by: Miquel Raynal Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 35 ++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 4023f2ebc344..3535f7b64429 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -238,6 +238,24 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, len_s -= to_shift; } + if (!IS_ALIGNED(to, sizeof(u64)) && + ((to ^ (to + len_s)) & GENMASK(31, 10))) { + /* + * Workaround reads/writes across 1k-aligned addresses + * (start u32 before 1k, end u32 after) + * as this fails on hardware. + */ + u32 data; + + memcpy(&data, &buf[0], sizeof(u32)); + idg_nvm_write32(nvm, to, data); + if (idg_nvm_error(nvm)) + return -EIO; + buf += sizeof(u32); + to += sizeof(u32); + len_s -= sizeof(u32); + } + len8 = ALIGN_DOWN(len_s, sizeof(u64)); for (i = 0; i < len8; i += sizeof(u64)) { u64 data; @@ -295,6 +313,23 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, from += from_shift; } + if (!IS_ALIGNED(from, sizeof(u64)) && + ((from ^ (from + len_s)) & GENMASK(31, 10))) { + /* + * Workaround reads/writes across 1k-aligned addresses + * (start u32 before 1k, end u32 after) + * as this fails on hardware. 
+ */ + u32 data = idg_nvm_read32(nvm, from); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[0], &data, sizeof(data)); + len_s -= sizeof(u32); + buf += sizeof(u32); + from += sizeof(u32); + } + len8 = ALIGN_DOWN(len_s, sizeof(u64)); for (i = 0; i < len8; i += sizeof(u64)) { u64 data = idg_nvm_read64(nvm, from + i); From patchwork Sun Mar 2 14:09:17 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997824 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 21C84C282D1 for ; Sun, 2 Mar 2025 14:21:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A7BA010E18A; Sun, 2 Mar 2025 14:21:18 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="j8ZAgxnS"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 406B910E188; Sun, 2 Mar 2025 14:21:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925277; x=1772461277; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nfNlQaQAx8Vv722WAjFHHGbTA8WrwldfoILRmHYeMfU=; b=j8ZAgxnSL51Ya8H2pC1B/BF8Wcmo5a8vcMCb0lDzT2jYFf+TenwhUaFZ ORHgae0oFxUVzapEcT2636+DZA1y0DcSMVtuThQjNFm8FrRb7wyTMfYR4 vxv8W6w4xNPFkEqHOyXdCItFo3huvWz/tM8E70vV9Wrb/1Bqc0CDIBFOx U58CtdJMCzUoILM9rXGH7/Ei5pGg4Q4is/a2HO8YuYtTJIOy2plrcGBoT FIFNPWyv/WoSny1jkQr0RM6pNq708oKdxQVLGpXCoRqOmt0Y1TFc5ioFE 0dO49iSk7+8I+azZ0H7U5mo5LYYXOOdzV6el2plj61eztWcBLlr8I/ESV A==; X-CSE-ConnectionGUID: kuAMQ3WkTbqRu1cHtFboZg== X-CSE-MsgGUID: 4B1wqG1NQYy8XCxIfpohYQ== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176457" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176457" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:17 -0800 X-CSE-ConnectionGUID: 3PPKpdLWR8GUdTboF3M6qA== X-CSE-MsgGUID: NHMmD/O/SwqFC7qgfiBPXQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737360" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:11 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v6 07/11] mtd: intel-dg: wake card on operations Date: Sun, 2 Mar 2025 16:09:17 +0200 Message-ID: <20250302140921.504304-8-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: 
<20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable runtime PM in mtd driver to notify graphics driver that whole card should be kept awake while nvm operations are performed through this driver. CC: Lucas De Marchi Acked-by: Karthik Poosa Acked-by: Miquel Raynal Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 79 +++++++++++++++++++++++++----- 1 file changed, 67 insertions(+), 12 deletions(-) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 3535f7b64429..9f4bb15a03b8 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -15,11 +15,14 @@ #include #include #include +#include #include #include #include #include +#define INTEL_DG_NVM_RPM_TIMEOUT 500 + struct intel_dg_nvm { struct kref refcnt; struct mtd_info mtd; @@ -460,6 +463,7 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) loff_t from; size_t len; size_t total_len; + int ret = 0; if (WARN_ON(!nvm)) return -EINVAL; @@ -474,20 +478,28 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) total_len = info->len; addr = info->addr; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + dev_err(&mtd->dev, "rpm: get failed %d\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); while (total_len > 0) { if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) { dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len); info->fail_addr = addr; - return -ERANGE; + ret = -ERANGE; + goto out; } idx = idg_nvm_get_region(nvm, addr); if (idx >= nvm->nregions) { dev_err(&mtd->dev, "out of range"); info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; - return -ERANGE; + ret = -ERANGE; + goto out; } from = addr - nvm->regions[idx].offset; @@ -503,14 +515,18 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) if (bytes < 0) { dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes); info->fail_addr += nvm->regions[idx].offset; - return bytes; + ret = bytes; + goto out; } addr += len; total_len -= len; } - return 0; +out: + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, @@ -539,17 +555,25 @@ static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, if (len > nvm->regions[idx].size - from) len = nvm->regions[idx].size - from; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + dev_err(&mtd->dev, "rpm: get failed %zd\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); ret = idg_read(nvm, region, from, len, buf); if (ret < 0) { dev_dbg(&mtd->dev, "read failed with %zd\n", ret); - return ret; + } else { + *retlen = ret; + ret = 0; } - *retlen = ret; - - return 0; + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, @@ -578,17 +602,25 @@ static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, if (len > nvm->regions[idx].size - to) len = nvm->regions[idx].size - to; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + dev_err(&mtd->dev, "rpm: get 
failed %zd\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); ret = idg_write(nvm, region, to, len, buf); if (ret < 0) { dev_dbg(&mtd->dev, "write failed with %zd\n", ret); - return ret; + } else { + *retlen = ret; + ret = 0; } - *retlen = ret; - - return 0; + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static void intel_dg_nvm_release(struct kref *kref) @@ -670,6 +702,15 @@ static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device kfree(parts); + if (ret) + goto out; + + devm_pm_runtime_enable(&nvm->mtd.dev); + + pm_runtime_set_autosuspend_delay(&nvm->mtd.dev, INTEL_DG_NVM_RPM_TIMEOUT); + pm_runtime_use_autosuspend(&nvm->mtd.dev); + +out: return ret; } @@ -719,6 +760,17 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, } nvm->nregions = n; /* in case where kasprintf fail */ + devm_pm_runtime_enable(device); + + pm_runtime_set_autosuspend_delay(device, INTEL_DG_NVM_RPM_TIMEOUT); + pm_runtime_use_autosuspend(device); + + ret = pm_runtime_resume_and_get(device); + if (ret < 0) { + dev_err(device, "rpm: get failed %d\n", ret); + goto err_norpm; + } + nvm->base = devm_ioremap_resource(device, &invm->bar); if (IS_ERR(nvm->base)) { dev_err(device, "mmio not mapped\n"); @@ -740,9 +792,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, dev_set_drvdata(&aux_dev->dev, nvm); + pm_runtime_put(device); return 0; err: + pm_runtime_put(device); +err_norpm: kref_put(&nvm->refcnt, intel_dg_nvm_release); return ret; } From patchwork Sun Mar 2 14:09:18 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997825 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 63607C19F32 for ; Sun, 2 Mar 2025 14:21:25 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id F28A710E315; Sun, 2 Mar 2025 14:21:24 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="Q6QTCNWi"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6E68A10E2C4; Sun, 2 Mar 2025 14:21:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925284; x=1772461284; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=460Kgdhhs3j1z1IIlEYwJRsLr4/cTHTmkLN0S9U8c6s=; b=Q6QTCNWibHCyzGCb7SvZskv04cUd6lNezZ87EwYdtBzIcRz2YC84Sl0w 3yP62zsT/iLu5cY1dBfgcLoUhpQGzsVMcpY8/QmC3kxSBa10EDzUFphNg meMBPsfTOHvpMyayKm2UfDF6o3WUoA6fqkcjTFJtduAQUKN9Ir0vZyMiu Zz2WmNj7O9uumUxmlnQzgX00Ulm5CMi9U+Owv6Mm9hHC4irUuMXgc6hUL E5YQpRQgqJQQBYSm/4WDlHj9J8Uf5qL3SIVNJlxWtW+n/yg7Gv+5QRO96 uWGAn3JFMmTPC13nFnywbLz8vTxpLtCpwyslCm/OCJbsvKIBLBDypuclF Q==; X-CSE-ConnectionGUID: +QYkfrSlS/WY/UVNDZG0Ug== X-CSE-MsgGUID: WqZIBKZ2RemTX9Eed/sdTg== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176477" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176477" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by 
fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:23 -0800 X-CSE-ConnectionGUID: WK/h8tEjQ++SjXQzU1JvIg== X-CSE-MsgGUID: 0lFSJL4rSamcf2lewYnPmQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737370" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:16 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v6 08/11] drm/i915/nvm: add nvm device for discrete graphics Date: Sun, 2 Mar 2025 16:09:18 +0200 Message-ID: <20250302140921.504304-9-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable access to internal non-volatile memory on DGFX devices via a child device. The nvm child device is exposed via auxiliary bus. CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/i915/Makefile | 4 ++ drivers/gpu/drm/i915/i915_driver.c | 6 ++ drivers/gpu/drm/i915/i915_drv.h | 3 + drivers/gpu/drm/i915/i915_reg.h | 1 + drivers/gpu/drm/i915/intel_nvm.c | 92 ++++++++++++++++++++++++++++++ drivers/gpu/drm/i915/intel_nvm.h | 15 +++++ 6 files changed, 121 insertions(+) create mode 100644 drivers/gpu/drm/i915/intel_nvm.c create mode 100644 drivers/gpu/drm/i915/intel_nvm.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index ed05b131ed3a..58e37d9e4fc6 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -213,6 +213,10 @@ i915-y += \ i915-y += \ gt/intel_gsc.o +# graphics nvm device (DGFX) support +i915-y += \ + intel_nvm.o + # graphics hardware monitoring (HWMON) support i915-$(CONFIG_HWMON) += \ i915_hwmon.o diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index 1dfd6269b355..a9e63aca8263 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -81,6 +81,8 @@ #include "soc/intel_dram.h" #include "soc/intel_gmch.h" +#include "intel_nvm.h" + #include "i915_debugfs.h" #include "i915_driver.h" #include "i915_drm_client.h" @@ -644,6 +646,8 @@ static void i915_driver_register(struct drm_i915_private *dev_priv) /* Depends on sysfs having been initialized */ i915_perf_register(dev_priv); + intel_nvm_init(dev_priv); + for_each_gt(gt, dev_priv, i) intel_gt_driver_register(gt); @@ -684,6 +688,8 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv) i915_hwmon_unregister(dev_priv); + intel_nvm_fini(dev_priv); + i915_perf_unregister(dev_priv); i915_pmu_unregister(dev_priv); diff --git 
a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index ffc346379cc2..d3e257a9538c 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -67,6 +67,7 @@ struct drm_i915_clock_gating_funcs; struct vlv_s0ix_state; struct intel_pxp; +struct intel_dg_nvm_dev; #define GEM_QUIRK_PIN_SWIZZLED_PAGES BIT(0) @@ -314,6 +315,8 @@ struct drm_i915_private { struct i915_perf perf; + struct intel_dg_nvm_dev *nvm; + struct i915_hwmon *hwmon; struct intel_gt *gt[I915_MAX_GT]; diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index b31b26e9a685..b28c688f4701 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -321,6 +321,7 @@ #define DG2_GSC_HECI2_BASE 0x00374000 #define MTL_GSC_HECI1_BASE 0x00116000 #define MTL_GSC_HECI2_BASE 0x00117000 +#define GEN12_GUNIT_NVM_BASE 0x00102040 #define HECI_H_CSR(base) _MMIO((base) + 0x4) #define HECI_H_CSR_IE REG_BIT(0) diff --git a/drivers/gpu/drm/i915/intel_nvm.c b/drivers/gpu/drm/i915/intel_nvm.c new file mode 100644 index 000000000000..75d3ebe669ff --- /dev/null +++ b/drivers/gpu/drm/i915/intel_nvm.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright(c) 2019-2024, Intel Corporation. All rights reserved. + */ + +#include +#include +#include "i915_reg.h" +#include "i915_drv.h" +#include "intel_nvm.h" + +#define GEN12_GUNIT_NVM_SIZE 0x80 + +static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { + [0] = { .name = "DESCRIPTOR", }, + [2] = { .name = "GSC", }, + [11] = { .name = "OptionROM", }, + [12] = { .name = "DAM", }, +}; + +static void i915_nvm_release_dev(struct device *dev) +{ +} + +void intel_nvm_init(struct drm_i915_private *i915) +{ + struct pci_dev *pdev = to_pci_dev(i915->drm.dev); + struct intel_dg_nvm_dev *nvm; + struct auxiliary_device *aux_dev; + int ret; + + /* Only the DGFX devices have internal NVM */ + if (!IS_DGFX(i915)) + return; + + /* Nvm pointer should be NULL here */ + if (WARN_ON(i915->nvm)) + return; + + i915->nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); + if (!i915->nvm) + return; + + nvm = i915->nvm; + + nvm->writeable_override = true; + nvm->bar.parent = &pdev->resource[0]; + nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; + nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; + nvm->bar.flags = IORESOURCE_MEM; + nvm->bar.desc = IORES_DESC_NONE; + nvm->regions = regions; + + aux_dev = &nvm->aux_dev; + + aux_dev->name = "nvm"; + aux_dev->id = (pci_domain_nr(pdev->bus) << 16) | + PCI_DEVID(pdev->bus->number, pdev->devfn); + aux_dev->dev.parent = &pdev->dev; + aux_dev->dev.release = i915_nvm_release_dev; + + ret = auxiliary_device_init(aux_dev); + if (ret) { + drm_err(&i915->drm, "i915-nvm aux init failed %d\n", ret); + return; + } + + ret = auxiliary_device_add(aux_dev); + if (ret) { + drm_err(&i915->drm, "i915-nvm aux add failed %d\n", ret); + auxiliary_device_uninit(aux_dev); + return; + } +} + +void intel_nvm_fini(struct drm_i915_private *i915) +{ + struct intel_dg_nvm_dev *nvm = i915->nvm; + + /* Only the DGFX devices have internal NVM */ + if (!IS_DGFX(i915)) + return; + + /* Nvm pointer should not be NULL here */ + if (WARN_ON(!nvm)) + return; + + auxiliary_device_delete(&nvm->aux_dev); + auxiliary_device_uninit(&nvm->aux_dev); + kfree(nvm); + i915->nvm = NULL; +} diff --git a/drivers/gpu/drm/i915/intel_nvm.h b/drivers/gpu/drm/i915/intel_nvm.h new file mode 100644 index 000000000000..7bc3d1114a3f --- /dev/null +++ b/drivers/gpu/drm/i915/intel_nvm.h @@ -0,0 +1,15 @@ +/* 
SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2024 Intel Corporation. All rights reserved. + */ + +#ifndef __INTEL_NVM_H__ +#define __INTEL_NVM_H__ + +struct drm_i915_private; + +void intel_nvm_init(struct drm_i915_private *i915); + +void intel_nvm_fini(struct drm_i915_private *i915); + +#endif /* __INTEL_NVM_H__ */ From patchwork Sun Mar 2 14:09:19 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997826 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 83AD0C282D1 for ; Sun, 2 Mar 2025 14:21:30 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2BF4610E0F1; Sun, 2 Mar 2025 14:21:30 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="hRcu519B"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3432910E0F1; Sun, 2 Mar 2025 14:21:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925289; x=1772461289; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jp8JwSIuJk4GFCR3LmfzYuHpnYGRU8UeJvhnzagKb4c=; b=hRcu519Bn7xsRycPftnsookR/NrEtDk6JvzVwZf4Rx8ZyO1liPFTB/d9 nILSBNuGJMYmXIlsbF62qjcHnhKj+SPfWaQHvvWogtJeePzM/D5q+sSWc cXYzbRJU4aDBHkDMk3pMMXwJD6QkGwdwbVTSUhSWzScSGuG55fwbqyzCC 47yJaG6aELTYCN5agAlgsMWsshuf8vCnglojFw+FL1sXgQyX94X3PmU+J lMg+MMxx69nYsLlrqBOVbag7PHKondOWDcvMceWtvjhGRH0DaSwd+FxxK 5Fdu2YLYwuPm40x43+BF0HpMOb3NrCpOsmK1xMkHhFi3mDepgZKeBaL90 w==; X-CSE-ConnectionGUID: 1HgrfxSES1u1OP8J92EU/Q== X-CSE-MsgGUID: c0pmd9+8Sq6Wb5hCT1xuQw== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176487" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176487" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:29 -0800 X-CSE-ConnectionGUID: U/0gjIcNRh6IsfLC6qiStw== X-CSE-MsgGUID: iN6Z/y15Q3G1vXcrATEdQQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737383" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:22 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v6 09/11] drm/i915/nvm: add support for access mode Date: Sun, 2 Mar 2025 16:09:19 +0200 Message-ID: <20250302140921.504304-10-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: 
<20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Check NVM access mode from GSC FW status registers and overwrite access status read from SPI descriptor, if needed. Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/i915/intel_nvm.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/intel_nvm.c b/drivers/gpu/drm/i915/intel_nvm.c index 75d3ebe669ff..dd3999c934a7 100644 --- a/drivers/gpu/drm/i915/intel_nvm.c +++ b/drivers/gpu/drm/i915/intel_nvm.c @@ -10,6 +10,7 @@ #include "intel_nvm.h" #define GEN12_GUNIT_NVM_SIZE 0x80 +#define HECI_FW_STATUS_2_NVM_ACCESS_MODE BIT(3) static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { [0] = { .name = "DESCRIPTOR", }, @@ -22,6 +23,28 @@ static void i915_nvm_release_dev(struct device *dev) { } +static bool i915_nvm_writable_override(struct drm_i915_private *i915) +{ + resource_size_t base; + bool writable_override; + + if (IS_DG1(i915)) { + base = DG1_GSC_HECI2_BASE; + } else if (IS_DG2(i915)) { + base = DG2_GSC_HECI2_BASE; + } else { + drm_err(&i915->drm, "Unknown platform\n"); + return true; + } + + writable_override = + !(intel_uncore_read(&i915->uncore, HECI_FWSTS(base, 2)) & + HECI_FW_STATUS_2_NVM_ACCESS_MODE); + if (writable_override) + drm_info(&i915->drm, "NVM access overridden by jumper\n"); + return writable_override; +} + void intel_nvm_init(struct drm_i915_private *i915) { struct pci_dev *pdev = to_pci_dev(i915->drm.dev); @@ -43,7 +66,7 @@ void intel_nvm_init(struct drm_i915_private *i915) nvm = i915->nvm; - nvm->writeable_override = true; + nvm->writable_override = i915_nvm_writable_override(i915); nvm->bar.parent = &pdev->resource[0]; nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; From patchwork Sun Mar 2 14:09:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997827 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 87D86C282D1 for ; Sun, 2 Mar 2025 14:21:37 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2DF9B10E31B; Sun, 2 Mar 2025 14:21:37 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="LJc4G1P1"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id B8D0310E327; Sun, 2 Mar 2025 14:21:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925296; x=1772461296; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7euSt/kjCuMQkkaQNF2BgbWiqhiqKrlEFQcwMV0MGGc=; 
b=LJc4G1P1F93AGqRdYES/JHo7169+WKJYGVkNf5YLXmvwE8+zDzO9ZBl4 o92rRjwYdCeLc8sTqjE+w5vSVA1pUrhgE3f8nz0nV2fJj657dVXjadA9l M+356DR7GylJgVVe/hlsNWS877TreTesQx1lBmXL8jmM25MdEvZHH6dyz GSe21QJ8eMHx8Ow9/lAU81GWAZPXAsWzBKKV7bXJTX4R6r2aF/tVazmBP 9vcCnp7y4OJsr/Y4g/AwvSeU9dDMoQ69EkdjvCcW255GwE7uG53DH0oq8 VW1xTCaMF37ZUbl6h27yJ4GVegFIKPdKczBbB8FzBmIiOIlmHZNcib8Np A==; X-CSE-ConnectionGUID: QSv46hrzSECLsnbraept9Q== X-CSE-MsgGUID: yVx+qJSISsiqfLlZlw9IzQ== X-IronPort-AV: E=McAfee;i="6700,10204,11361"; a="67176503" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176503" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:35 -0800 X-CSE-ConnectionGUID: S8ZAWG9sSaWJzRgTqVo4gg== X-CSE-MsgGUID: awtbU0AjTqi71z6l/uWS8g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737392" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:28 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v6 10/11] drm/xe/nvm: add on-die non-volatile memory device Date: Sun, 2 Mar 2025 16:09:20 +0200 Message-ID: <20250302140921.504304-11-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable access to internal non-volatile memory on DGFX with GSC/CSC devices via a child device. The nvm child device is exposed via auxiliary bus. 
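For reference, a consumer binds to this child device through the auxiliary
bus by matching on the registering module's name plus the device name set
here, i.e. "xe.nvm" (and "i915.nvm" for the i915 counterpart). Below is a
minimal, illustrative sketch of such a consumer; the real consumer is the
mtd/intel-dg driver added earlier in this series, and the match strings and
the container_of() conversion are assumptions drawn from how the auxiliary
bus composes device names, not copied from that driver:

  #include <linux/auxiliary_bus.h>
  #include <linux/container_of.h>
  #include <linux/module.h>

  /*
   * struct intel_dg_nvm_dev (with its embedded aux_dev, bar and regions
   * members) comes from the aux interface header added earlier in this
   * series and is assumed to be available to the consumer.
   */

  static int nvm_consumer_probe(struct auxiliary_device *aux_dev,
                                const struct auxiliary_device_id *aux_dev_id)
  {
          struct intel_dg_nvm_dev *invm =
                  container_of(aux_dev, struct intel_dg_nvm_dev, aux_dev);

          /*
           * invm->bar describes the GUNIT NVM window to ioremap,
           * invm->regions the region ids/names to expose.
           */
          return 0;
  }

  static void nvm_consumer_remove(struct auxiliary_device *aux_dev)
  {
  }

  static const struct auxiliary_device_id nvm_consumer_id_table[] = {
          { .name = "xe.nvm" },   /* "<parent module>.<aux_dev name>" */
          { .name = "i915.nvm" },
          { /* sentinel */ }
  };
  MODULE_DEVICE_TABLE(auxiliary, nvm_consumer_id_table);

  static struct auxiliary_driver nvm_consumer_driver = {
          .probe = nvm_consumer_probe,
          .remove = nvm_consumer_remove,
          .id_table = nvm_consumer_id_table,
  };
  module_auxiliary_driver(nvm_consumer_driver);

  MODULE_LICENSE("GPL");
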
Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/xe/Makefile | 1 + drivers/gpu/drm/xe/xe_device.c | 5 ++ drivers/gpu/drm/xe/xe_device_types.h | 6 ++ drivers/gpu/drm/xe/xe_nvm.c | 101 +++++++++++++++++++++++++++ drivers/gpu/drm/xe/xe_nvm.h | 15 ++++ drivers/gpu/drm/xe/xe_pci.c | 6 ++ 6 files changed, 134 insertions(+) create mode 100644 drivers/gpu/drm/xe/xe_nvm.c create mode 100644 drivers/gpu/drm/xe/xe_nvm.h diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile index 856b14fe1c4d..3d1f1192ad4a 100644 --- a/drivers/gpu/drm/xe/Makefile +++ b/drivers/gpu/drm/xe/Makefile @@ -80,6 +80,7 @@ xe-y += xe_bb.o \ xe_mmio.o \ xe_mocs.o \ xe_module.o \ + xe_nvm.o \ xe_oa.o \ xe_observation.o \ xe_pat.o \ diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c index 9454b51f7ad8..76cb988afa1a 100644 --- a/drivers/gpu/drm/xe/xe_device.c +++ b/drivers/gpu/drm/xe/xe_device.c @@ -52,6 +52,7 @@ #include "xe_pmu.h" #include "xe_pxp.h" #include "xe_query.h" +#include "xe_nvm.h" #include "xe_sriov.h" #include "xe_tile.h" #include "xe_ttm_stolen_mgr.h" @@ -854,6 +855,8 @@ int xe_device_probe(struct xe_device *xe) return err; } + xe_nvm_init(xe); + err = xe_heci_gsc_init(xe); if (err) return err; @@ -907,6 +910,8 @@ void xe_device_remove(struct xe_device *xe) { xe_display_unregister(xe); + xe_nvm_fini(xe); + drm_dev_unplug(&xe->drm); } diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h index 833c29fed3a3..5caa9daeee6b 100644 --- a/drivers/gpu/drm/xe/xe_device_types.h +++ b/drivers/gpu/drm/xe/xe_device_types.h @@ -35,6 +35,7 @@ #include "intel_display_device.h" #endif +struct intel_dg_nvm_dev; struct xe_ggtt; struct xe_pat_ops; struct xe_pxp; @@ -302,6 +303,8 @@ struct xe_device { u8 has_device_atomics_on_smem:1; /** @info.has_flat_ccs: Whether flat CCS metadata is used */ u8 has_flat_ccs:1; + /** @info.has_gsc_nvm: Device has gsc non-volatile memory */ + u8 has_gsc_nvm:1; /** @info.has_heci_cscfi: device has heci cscfi */ u8 has_heci_cscfi:1; /** @info.has_heci_gscfi: device has heci gscfi */ @@ -508,6 +511,9 @@ struct xe_device { /** @heci_gsc: graphics security controller */ struct xe_heci_gsc heci_gsc; + /** @nvm: discrete graphics non-volatile memory */ + struct intel_dg_nvm_dev *nvm; + /** @oa: oa observation subsystem */ struct xe_oa oa; diff --git a/drivers/gpu/drm/xe/xe_nvm.c b/drivers/gpu/drm/xe/xe_nvm.c new file mode 100644 index 000000000000..26de7d4472c8 --- /dev/null +++ b/drivers/gpu/drm/xe/xe_nvm.c @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. 
+ */ + +#include +#include + +#include "xe_device_types.h" +#include "xe_nvm.h" +#include "xe_sriov.h" + +#define GEN12_GUNIT_NVM_BASE 0x00102040 +#define GEN12_GUNIT_NVM_SIZE 0x80 +#define HECI_FW_STATUS_2_NVM_ACCESS_MODE BIT(3) + +static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { + [0] = { .name = "DESCRIPTOR", }, + [2] = { .name = "GSC", }, + [11] = { .name = "OptionROM", }, + [12] = { .name = "DAM", }, +}; + +static void xe_nvm_release_dev(struct device *dev) +{ +} + +void xe_nvm_init(struct xe_device *xe) +{ + struct pci_dev *pdev = to_pci_dev(xe->drm.dev); + struct intel_dg_nvm_dev *nvm; + struct auxiliary_device *aux_dev; + int ret; + + if (!xe->info.has_gsc_nvm) + return; + + /* No access to internal NVM from VFs */ + if (IS_SRIOV_VF(xe)) + return; + + /* Nvm pointer should be NULL here */ + if (WARN_ON(xe->nvm)) + return; + + xe->nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); + if (!xe->nvm) + return; + + nvm = xe->nvm; + + nvm->writeable_override = false; + nvm->bar.parent = &pdev->resource[0]; + nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; + nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; + nvm->bar.flags = IORESOURCE_MEM; + nvm->bar.desc = IORES_DESC_NONE; + nvm->regions = regions; + + aux_dev = &nvm->aux_dev; + + aux_dev->name = "nvm"; + aux_dev->id = (pci_domain_nr(pdev->bus) << 16) | + PCI_DEVID(pdev->bus->number, pdev->devfn); + aux_dev->dev.parent = &pdev->dev; + aux_dev->dev.release = xe_nvm_release_dev; + + ret = auxiliary_device_init(aux_dev); + if (ret) { + drm_err(&xe->drm, "xe-nvm aux init failed %d\n", ret); + return; + } + + ret = auxiliary_device_add(aux_dev); + if (ret) { + drm_err(&xe->drm, "xe-nvm aux add failed %d\n", ret); + auxiliary_device_uninit(aux_dev); + return; + } +} + +void xe_nvm_fini(struct xe_device *xe) +{ + struct intel_dg_nvm_dev *nvm = xe->nvm; + + if (!xe->info.has_gsc_nvm) + return; + + /* No access to internal NVM from VFs */ + if (IS_SRIOV_VF(xe)) + return; + + /* Nvm pointer should not be NULL here */ + if (WARN_ON(!nvm)) + return; + + auxiliary_device_delete(&nvm->aux_dev); + auxiliary_device_uninit(&nvm->aux_dev); + kfree(nvm); + xe->nvm = NULL; +} diff --git a/drivers/gpu/drm/xe/xe_nvm.h b/drivers/gpu/drm/xe/xe_nvm.h new file mode 100644 index 000000000000..5487764c180f --- /dev/null +++ b/drivers/gpu/drm/xe/xe_nvm.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2025 Intel Corporation. All rights reserved. 
+ */ + +#ifndef __XE_NVM_H__ +#define __XE_NVM_H__ + +struct xe_device; + +void xe_nvm_init(struct xe_device *xe); + +void xe_nvm_fini(struct xe_device *xe); + +#endif diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c index 8b6658b214be..51a729d1ef98 100644 --- a/drivers/gpu/drm/xe/xe_pci.c +++ b/drivers/gpu/drm/xe/xe_pci.c @@ -62,6 +62,7 @@ struct xe_device_desc { u8 is_dgfx:1; u8 has_display:1; + u8 has_gsc_nvm:1; u8 has_heci_gscfi:1; u8 has_heci_cscfi:1; u8 has_llc:1; @@ -285,6 +286,7 @@ static const struct xe_device_desc dg1_desc = { PLATFORM(DG1), .dma_mask_size = 39, .has_display = true, + .has_gsc_nvm = 1, .has_heci_gscfi = 1, .require_force_probe = true, }; @@ -296,6 +298,7 @@ static const u16 dg2_g12_ids[] = { INTEL_DG2_G12_IDS(NOP), 0 }; #define DG2_FEATURES \ DGFX_FEATURES, \ PLATFORM(DG2), \ + .has_gsc_nvm = 1, \ .has_heci_gscfi = 1, \ .subplatforms = (const struct xe_subplatform_desc[]) { \ { XE_SUBPLATFORM_DG2_G10, "G10", dg2_g10_ids }, \ @@ -330,6 +333,7 @@ static const __maybe_unused struct xe_device_desc pvc_desc = { PLATFORM(PVC), .dma_mask_size = 52, .has_display = false, + .has_gsc_nvm = 1, .has_heci_gscfi = 1, .max_remote_tiles = 1, .require_force_probe = true, @@ -356,6 +360,7 @@ static const struct xe_device_desc bmg_desc = { PLATFORM(BATTLEMAGE), .dma_mask_size = 46, .has_display = true, + .has_gsc_nvm = 1, .has_heci_cscfi = 1, }; @@ -631,6 +636,7 @@ static int xe_info_init_early(struct xe_device *xe, xe->info.dma_mask_size = desc->dma_mask_size; xe->info.is_dgfx = desc->is_dgfx; + xe->info.has_gsc_nvm = desc->has_gsc_nvm; xe->info.has_heci_gscfi = desc->has_heci_gscfi; xe->info.has_heci_cscfi = desc->has_heci_cscfi; xe->info.has_llc = desc->has_llc; From patchwork Sun Mar 2 14:09:21 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 13997828 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 76586C19F32 for ; Sun, 2 Mar 2025 14:21:44 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0522210E330; Sun, 2 Mar 2025 14:21:44 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="kILuXNbV"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by gabe.freedesktop.org (Postfix) with ESMTPS id D263810E320; Sun, 2 Mar 2025 14:21:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1740925302; x=1772461302; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=rjKY1l/SEQEYfQRD5GQW4b/ZP8WwEI3BeQdcMqQ/5zY=; b=kILuXNbVbAMi9EXdzbaD1SHyiNu688MmHA+t94CcjDJdult81xY2bzGa cRyq0kGotRjWC5oUVB6SSQ3NKnQxoYL874JyZlxfSrEgZau1g5MpbdWTU ryuZaLQ1VlJveuPIaZJVcigMT/lctaGThE7Pt5pgxzSChX35X5UjooSxo EpbXj7b5fu1xR3ANpmF13I2+qgpVm9fgO7kqapAU+H400KywbM3lcZ1N3 VoYS5MgE0uBhpIoD1TMCjMwtolULHghKhbtF6OHmts+bs7w3/rrgoCDjp GoJjVbw19QbNyDaSoXEz3puPPZHv7YZyDuBxU86Hl5i6FnteupW1w4hco w==; X-CSE-ConnectionGUID: P/D8/t1aThqL5XZ1N4FWKg== X-CSE-MsgGUID: VW2AZPMwRw+YKz/Ka7vq0g== X-IronPort-AV: 
E=McAfee;i="6700,10204,11361"; a="67176510" X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="67176510" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:42 -0800 X-CSE-ConnectionGUID: F0lrrH2LR3aSZOIBrkpqvQ== X-CSE-MsgGUID: MrjAYm3+RlKBPEaAyAr+3g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,327,1732608000"; d="scan'208";a="122737475" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Mar 2025 06:21:35 -0800 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v6 11/11] drm/xe/nvm: add support for access mode Date: Sun, 2 Mar 2025 16:09:21 +0200 Message-ID: <20250302140921.504304-12-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250302140921.504304-1-alexander.usyskin@intel.com> References: <20250302140921.504304-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Check NVM access mode from GSC FW status registers and overwrite access status read from SPI descriptor, if needed. 
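The override feeds the partition setup done by the mtd/intel-dg driver
earlier in this series: a region that the SPI descriptor marks read-only is
still registered writable when the firmware reports that access-mode
enforcement is off. A small, illustrative helper (not taken verbatim from
the driver) showing how the two inputs combine into the mtd partition mask
flags:

  #include <linux/mtd/mtd.h>
  #include <linux/types.h>

  /*
   * Illustrative only: MTD_WRITEABLE placed in a partition's mask_flags is
   * stripped from the partition flags, i.e. the partition becomes
   * read-only. With the override active the descriptor-derived protection
   * is ignored and the region stays writable.
   */
  static inline u32 nvm_partition_mask_flags(bool region_is_writable,
                                             bool writable_override)
  {
          if (!region_is_writable && !writable_override)
                  return MTD_WRITEABLE;
          return 0;
  }
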
Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/xe/regs/xe_gsc_regs.h | 4 +++ drivers/gpu/drm/xe/xe_heci_gsc.c | 5 +--- drivers/gpu/drm/xe/xe_nvm.c | 37 ++++++++++++++++++++++++++- 3 files changed, 41 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/xe/regs/xe_gsc_regs.h b/drivers/gpu/drm/xe/regs/xe_gsc_regs.h index 7702364b65f1..9b66cc972a63 100644 --- a/drivers/gpu/drm/xe/regs/xe_gsc_regs.h +++ b/drivers/gpu/drm/xe/regs/xe_gsc_regs.h @@ -16,6 +16,10 @@ #define MTL_GSC_HECI1_BASE 0x00116000 #define MTL_GSC_HECI2_BASE 0x00117000 +#define DG1_GSC_HECI2_BASE 0x00259000 +#define PVC_GSC_HECI2_BASE 0x00285000 +#define DG2_GSC_HECI2_BASE 0x00374000 + #define HECI_H_CSR(base) XE_REG((base) + 0x4) #define HECI_H_CSR_IE REG_BIT(0) #define HECI_H_CSR_IS REG_BIT(1) diff --git a/drivers/gpu/drm/xe/xe_heci_gsc.c b/drivers/gpu/drm/xe/xe_heci_gsc.c index 27d11e06a82b..6d7b62724126 100644 --- a/drivers/gpu/drm/xe/xe_heci_gsc.c +++ b/drivers/gpu/drm/xe/xe_heci_gsc.c @@ -11,15 +11,12 @@ #include "xe_device_types.h" #include "xe_drv.h" #include "xe_heci_gsc.h" +#include "regs/xe_gsc_regs.h" #include "xe_platform_types.h" #include "xe_survivability_mode.h" #define GSC_BAR_LENGTH 0x00000FFC -#define DG1_GSC_HECI2_BASE 0x259000 -#define PVC_GSC_HECI2_BASE 0x285000 -#define DG2_GSC_HECI2_BASE 0x374000 - static void heci_gsc_irq_mask(struct irq_data *d) { /* generic irq handling */ diff --git a/drivers/gpu/drm/xe/xe_nvm.c b/drivers/gpu/drm/xe/xe_nvm.c index 26de7d4472c8..8aec20bc629a 100644 --- a/drivers/gpu/drm/xe/xe_nvm.c +++ b/drivers/gpu/drm/xe/xe_nvm.c @@ -6,8 +6,11 @@ #include #include +#include "xe_device.h" #include "xe_device_types.h" +#include "xe_mmio.h" #include "xe_nvm.h" +#include "regs/xe_gsc_regs.h" #include "xe_sriov.h" #define GEN12_GUNIT_NVM_BASE 0x00102040 @@ -25,6 +28,38 @@ static void xe_nvm_release_dev(struct device *dev) { } +static bool xe_nvm_writable_override(struct xe_device *xe) +{ + struct xe_gt *gt = xe_root_mmio_gt(xe); + resource_size_t base; + bool writable_override; + + switch (xe->info.platform) { + case XE_BATTLEMAGE: + base = DG2_GSC_HECI2_BASE; + break; + case XE_PVC: + base = PVC_GSC_HECI2_BASE; + break; + case XE_DG2: + base = DG2_GSC_HECI2_BASE; + break; + case XE_DG1: + base = DG1_GSC_HECI2_BASE; + break; + default: + drm_err(&xe->drm, "Unknown platform\n"); + return true; + } + + writable_override = + !(xe_mmio_read32(>->mmio, HECI_FWSTS2(base)) & + HECI_FW_STATUS_2_NVM_ACCESS_MODE); + if (writable_override) + drm_info(&xe->drm, "NVM access overridden by jumper\n"); + return writable_override; +} + void xe_nvm_init(struct xe_device *xe) { struct pci_dev *pdev = to_pci_dev(xe->drm.dev); @@ -49,7 +84,7 @@ void xe_nvm_init(struct xe_device *xe) nvm = xe->nvm; - nvm->writeable_override = false; + nvm->writable_override = xe_nvm_writable_override(xe); nvm->bar.parent = &pdev->resource[0]; nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1;