From patchwork Mon Mar 25 08:32:16 2019
X-Patchwork-Submitter: Chunming Zhou
X-Patchwork-Id: 10868201
From: Chunming Zhou
Subject: [PATCH 1/9] dma-buf: add new dma_fence_chain container v7
Date: Mon, 25 Mar 2019 16:32:16 +0800
Message-ID: <20190325083224.2983-1-david1.zhou@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Christian König

From: Christian König

Lockless container implementation similar to a dma_fence_array, but with
only two elements per node and automatic garbage collection.

v2: properly document dma_fence_chain_for_each, add
    dma_fence_chain_find_seqno, drop prev reference during garbage
    collection if it's not a chain fence.
v3: use head and iterator for dma_fence_chain_for_each
v4: fix reference count in dma_fence_chain_enable_signaling
v5: fix iteration when walking each chain node
v6: add __rcu for member 'prev' of struct chain node
v7: fix rcu warnings from kernel robot

Signed-off-by: Christian König
Cc: Lionel Landwerlin
---
 drivers/dma-buf/Makefile          |   3 +-
 drivers/dma-buf/dma-fence-chain.c | 241 ++++++++++++++++++++++++++++++
 include/linux/dma-fence-chain.h   |  81 ++++++++++
 3 files changed, 324 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/dma-fence-chain.c
 create mode 100644 include/linux/dma-fence-chain.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 0913a6ccab5a..1f006e083eb9 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
-obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
+	 reservation.o seqno-fence.o
 obj-$(CONFIG_SYNC_FILE) += sync_file.o
 obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF) += udmabuf.o
diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
new file mode 100644
index 000000000000..c729f98a7bd3
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -0,0 +1,241 @@
+/*
+ * fence-chain: chain fences together in a timeline
+ *
+ * Copyright (C) 2018 Advanced Micro Devices, Inc.
+ * Authors:
+ *	Christian König
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/dma-fence-chain.h>
+
+static bool dma_fence_chain_enable_signaling(struct dma_fence *fence);
+
+/**
+ * dma_fence_chain_get_prev - use RCU to get a reference to the previous fence
+ * @chain: chain node to get the previous node from
+ *
+ * Use dma_fence_get_rcu_safe to get a reference to the previous fence of the
+ * chain node.
+ */
+static struct dma_fence *dma_fence_chain_get_prev(struct dma_fence_chain *chain)
+{
+	struct dma_fence *prev;
+
+	rcu_read_lock();
+	prev = dma_fence_get_rcu_safe(&chain->prev);
+	rcu_read_unlock();
+	return prev;
+}
+
+/**
+ * dma_fence_chain_walk - chain walking function
+ * @fence: current chain node
+ *
+ * Walk the chain to the next node. Returns the next fence or NULL if we are at
+ * the end of the chain. Garbage collects chain nodes which are already
+ * signaled.
+ */
+struct dma_fence *dma_fence_chain_walk(struct dma_fence *fence)
+{
+	struct dma_fence_chain *chain, *prev_chain;
+	struct dma_fence *prev, *replacement, *tmp;
+
+	chain = to_dma_fence_chain(fence);
+	if (!chain) {
+		dma_fence_put(fence);
+		return NULL;
+	}
+
+	while ((prev = dma_fence_chain_get_prev(chain))) {
+
+		prev_chain = to_dma_fence_chain(prev);
+		if (prev_chain) {
+			if (!dma_fence_is_signaled(prev_chain->fence))
+				break;
+
+			replacement = dma_fence_chain_get_prev(prev_chain);
+		} else {
+			if (!dma_fence_is_signaled(prev))
+				break;
+
+			replacement = NULL;
+		}
+
+		tmp = cmpxchg((void **)&chain->prev, (void *)prev, (void *)replacement);
+		if (tmp == prev)
+			dma_fence_put(tmp);
+		else
+			dma_fence_put(replacement);
+		dma_fence_put(prev);
+	}
+
+	dma_fence_put(fence);
+	return prev;
+}
+EXPORT_SYMBOL(dma_fence_chain_walk);
+
+/**
+ * dma_fence_chain_find_seqno - find fence chain node by seqno
+ * @pfence: pointer to the chain node where to start
+ * @seqno: the sequence number to search for
+ *
+ * Advance the fence pointer to the chain node which will signal this sequence
+ * number. If no sequence number is provided then this is a no-op.
+ *
+ * Returns -EINVAL if the fence is not a chain node or the sequence number has
+ * not yet advanced far enough.
+ */
+int dma_fence_chain_find_seqno(struct dma_fence **pfence, uint64_t seqno)
+{
+	struct dma_fence_chain *chain;
+
+	if (!seqno)
+		return 0;
+
+	chain = to_dma_fence_chain(*pfence);
+	if (!chain || chain->base.seqno < seqno)
+		return -EINVAL;
+
+	dma_fence_chain_for_each(*pfence, &chain->base) {
+		if ((*pfence)->context != chain->base.context ||
+		    to_dma_fence_chain(*pfence)->prev_seqno < seqno)
+			break;
+	}
+	dma_fence_put(&chain->base);
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_fence_chain_find_seqno);
+
+static const char *dma_fence_chain_get_driver_name(struct dma_fence *fence)
+{
+	return "dma_fence_chain";
+}
+
+static const char *dma_fence_chain_get_timeline_name(struct dma_fence *fence)
+{
+	return "unbound";
+}
+
+static void dma_fence_chain_irq_work(struct irq_work *work)
+{
+	struct dma_fence_chain *chain;
+
+	chain = container_of(work, typeof(*chain), work);
+
+	/* Try to rearm the callback */
+	if (!dma_fence_chain_enable_signaling(&chain->base))
+		/* Ok, we are done. No more unsignaled fences left */
+		dma_fence_signal(&chain->base);
+	dma_fence_put(&chain->base);
+}
+
+static void dma_fence_chain_cb(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct dma_fence_chain *chain;
+
+	chain = container_of(cb, typeof(*chain), cb);
+	irq_work_queue(&chain->work);
+	dma_fence_put(f);
+}
+
+static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)
+{
+	struct dma_fence_chain *head = to_dma_fence_chain(fence);
+
+	dma_fence_get(&head->base);
+	dma_fence_chain_for_each(fence, &head->base) {
+		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
+		struct dma_fence *f = chain ? chain->fence : fence;
+
+		dma_fence_get(f);
+		if (!dma_fence_add_callback(f, &head->cb, dma_fence_chain_cb)) {
+			dma_fence_put(fence);
+			return true;
+		}
+		dma_fence_put(f);
+	}
+	dma_fence_put(&head->base);
+	return false;
+}
+
+static bool dma_fence_chain_signaled(struct dma_fence *fence)
+{
+	dma_fence_chain_for_each(fence, fence) {
+		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
+		struct dma_fence *f = chain ? chain->fence : fence;
+
+		if (!dma_fence_is_signaled(f)) {
+			dma_fence_put(fence);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static void dma_fence_chain_release(struct dma_fence *fence)
+{
+	struct dma_fence_chain *chain = to_dma_fence_chain(fence);
+
+	dma_fence_put(rcu_dereference_protected(chain->prev, true));
+	dma_fence_put(chain->fence);
+	dma_fence_free(fence);
+}
+
+const struct dma_fence_ops dma_fence_chain_ops = {
+	.get_driver_name = dma_fence_chain_get_driver_name,
+	.get_timeline_name = dma_fence_chain_get_timeline_name,
+	.enable_signaling = dma_fence_chain_enable_signaling,
+	.signaled = dma_fence_chain_signaled,
+	.release = dma_fence_chain_release,
+};
+EXPORT_SYMBOL(dma_fence_chain_ops);
+
+/**
+ * dma_fence_chain_init - initialize a fence chain
+ * @chain: the chain node to initialize
+ * @prev: the previous fence
+ * @fence: the current fence
+ *
+ * Initialize a new chain node and either start a new chain or add the node to
+ * the existing chain of the previous fence.
+ */
+void dma_fence_chain_init(struct dma_fence_chain *chain,
+			  struct dma_fence *prev,
+			  struct dma_fence *fence,
+			  uint64_t seqno)
+{
+	struct dma_fence_chain *prev_chain = to_dma_fence_chain(prev);
+	uint64_t context;
+
+	spin_lock_init(&chain->lock);
+	rcu_assign_pointer(chain->prev, prev);
+	chain->fence = fence;
+	chain->prev_seqno = 0;
+	init_irq_work(&chain->work, dma_fence_chain_irq_work);
+
+	/* Try to reuse the context of the previous chain node. */
+	if (prev_chain && __dma_fence_is_later(seqno, prev->seqno)) {
+		context = prev->context;
+		chain->prev_seqno = prev->seqno;
+	} else {
+		context = dma_fence_context_alloc(1);
+		/* Make sure that we always have a valid sequence number. */
+		if (prev_chain)
+			seqno = max(prev->seqno, seqno);
+	}
+
+	dma_fence_init(&chain->base, &dma_fence_chain_ops,
+		       &chain->lock, context, seqno);
+}
+EXPORT_SYMBOL(dma_fence_chain_init);
diff --git a/include/linux/dma-fence-chain.h b/include/linux/dma-fence-chain.h
new file mode 100644
index 000000000000..934a442db8ac
--- /dev/null
+++ b/include/linux/dma-fence-chain.h
@@ -0,0 +1,81 @@
+/*
+ * fence-chain: chain fences together in a timeline
+ *
+ * Copyright (C) 2018 Advanced Micro Devices, Inc.
+ * Authors:
+ *	Christian König
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef __LINUX_DMA_FENCE_CHAIN_H
+#define __LINUX_DMA_FENCE_CHAIN_H
+
+#include <linux/dma-fence.h>
+#include <linux/irq_work.h>
+
+/**
+ * struct dma_fence_chain - fence to represent a node of a fence chain
+ * @base: fence base class
+ * @lock: spinlock for fence handling
+ * @prev: previous fence of the chain
+ * @prev_seqno: original previous seqno before garbage collection
+ * @fence: encapsulated fence
+ * @cb: callback structure for signaling
+ * @work: irq work item for signaling
+ */
+struct dma_fence_chain {
+	struct dma_fence base;
+	spinlock_t lock;
+	struct dma_fence __rcu *prev;
+	u64 prev_seqno;
+	struct dma_fence *fence;
+	struct dma_fence_cb cb;
+	struct irq_work work;
+};
+
+extern const struct dma_fence_ops dma_fence_chain_ops;
+
+/**
+ * to_dma_fence_chain - cast a fence to a dma_fence_chain
+ * @fence: fence to cast to a dma_fence_chain
+ *
+ * Returns NULL if the fence is not a dma_fence_chain,
+ * or the dma_fence_chain otherwise.
+ */
+static inline struct dma_fence_chain *
+to_dma_fence_chain(struct dma_fence *fence)
+{
+	if (!fence || fence->ops != &dma_fence_chain_ops)
+		return NULL;
+
+	return container_of(fence, struct dma_fence_chain, base);
+}
+
+/**
+ * dma_fence_chain_for_each - iterate over all fences in chain
+ * @iter: current fence
+ * @head: starting point
+ *
+ * Iterate over all fences in the chain. We keep a reference to the current
+ * fence while inside the loop which must be dropped when breaking out.
+ */
+#define dma_fence_chain_for_each(iter, head)	\
+	for (iter = dma_fence_get(head); iter; \
+	     iter = dma_fence_chain_walk(iter))
+
+struct dma_fence *dma_fence_chain_walk(struct dma_fence *fence);
+int dma_fence_chain_find_seqno(struct dma_fence **pfence, uint64_t seqno);
+void dma_fence_chain_init(struct dma_fence_chain *chain,
+			  struct dma_fence *prev,
+			  struct dma_fence *fence,
+			  uint64_t seqno);
+
+#endif /* __LINUX_DMA_FENCE_CHAIN_H */
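
To make the intended usage concrete, here is a minimal driver-side sketch
built only on the API introduced above. It is illustrative and not part of
the patch: the helpers timeline_add_point() and timeline_test_point() are
hypothetical names, and error handling is kept to the bare minimum.

#include <linux/slab.h>
#include <linux/dma-fence-chain.h>

/*
 * Append @fence to a timeline as point @seqno. dma_fence_chain_init()
 * takes over the caller's references to @prev and @fence; the new chain
 * node becomes the timeline head. Returns NULL on allocation failure.
 */
static struct dma_fence *timeline_add_point(struct dma_fence *prev,
					    struct dma_fence *fence,
					    u64 seqno)
{
	struct dma_fence_chain *chain = kzalloc(sizeof(*chain), GFP_KERNEL);

	if (!chain)
		return NULL;

	dma_fence_chain_init(chain, prev, fence, seqno);
	return &chain->base;
}

/* Test whether every fence up to and including point @seqno has signaled. */
static bool timeline_test_point(struct dma_fence *head, u64 seqno)
{
	struct dma_fence *fence = dma_fence_get(head);
	struct dma_fence *iter;
	bool ret = true;

	/* Advance to the chain node that will signal @seqno. */
	if (dma_fence_chain_find_seqno(&fence, seqno)) {
		/* Not a chain node, or @seqno has not been submitted yet. */
		dma_fence_put(fence);
		return false;
	}

	/* The iterator holds a reference; drop it when breaking out early. */
	dma_fence_chain_for_each(iter, fence) {
		struct dma_fence_chain *chain = to_dma_fence_chain(iter);
		struct dma_fence *f = chain ? chain->fence : iter;

		if (!dma_fence_is_signaled(f)) {
			ret = false;
			dma_fence_put(iter);
			break;
		}
	}

	dma_fence_put(fence);
	return ret;
}

The ownership rules follow the kernel-doc above: the chain node consumes the
references to @prev and @fence passed to dma_fence_chain_init(), and the
reference taken by dma_fence_chain_for_each() must be dropped explicitly
whenever the loop is left before reaching the end of the chain.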