From patchwork Tue May 10 23:56:48 2022
X-Patchwork-Submitter: "T.J. Mercier" <tjmercier@google.com>
X-Patchwork-Id: 12845621
Date: Tue, 10 May 2022 23:56:48 +0000
In-Reply-To: <20220510235653.933868-1-tjmercier@google.com>
Message-Id: <20220510235653.933868-5-tjmercier@google.com>
References: <20220510235653.933868-1-tjmercier@google.com>
X-Mailer: git-send-email 2.36.0.512.ge40c2bad7a-goog
Subject: [PATCH v7 4/6] dmabuf: Add gpu cgroup charge transfer function
From: "T.J. Mercier" <tjmercier@google.com>
Mercier" To: tjmercier@google.com, Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Tejun Heo , Zefan Li , Johannes Weiner X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kernel-team@android.com, tkjos@android.com, Kenny.Ho@amd.com, cgroups@vger.kernel.org, skhan@linuxfoundation.org, cmllamas@google.com, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linaro-mm-sig@lists.linaro.org, jstultz@google.com, kaleshsingh@google.com, hridya@google.com, mkoutny@suse.com, surenb@google.com, linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The dma_buf_transfer_charge function provides a way for processes to transfer charge of a buffer to a different process. This is essential for the cases where a central allocator process does allocations for various subsystems, hands over the fd to the client who requested the memory and drops all references to the allocated memory. Originally-by: Hridya Valsaraju Signed-off-by: T.J. Mercier --- v5 changes Fix commit message which still contained the old name for dma_buf_transfer_charge per Michal Koutný. Modify the dma_buf_transfer_charge API to accept a task_struct instead of a gpucg. This avoids requiring the caller to manage the refcount of the gpucg upon failure and confusing ownership transfer logic. v4 changes Adjust ordering of charge/uncharge during transfer to avoid potentially hitting cgroup limit per Michal Koutný. v3 changes Use more common dual author commit message format per John Stultz. v2 changes Move dma-buf cgroup charge transfer from a dma_buf_op defined by every heap to a single dma-buf function for all heaps per Daniel Vetter and Christian König. --- drivers/dma-buf/dma-buf.c | 57 ++++++++++++++++++++++++++++++++++++++ include/linux/cgroup_gpu.h | 24 ++++++++++++++++ include/linux/dma-buf.h | 6 ++++ kernel/cgroup/gpu.c | 51 ++++++++++++++++++++++++++++++++++ 4 files changed, 138 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index bc89c44bd9b9..f3fb844925e2 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1341,6 +1341,63 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); +/** + * dma_buf_transfer_charge - Change the GPU cgroup to which the provided dma_buf is charged. + * @dmabuf: [in] buffer whose charge will be migrated to a different GPU cgroup + * @target: [in] the task_struct of the destination process for the GPU cgroup charge + * + * Only tasks that belong to the same cgroup the buffer is currently charged to + * may call this function, otherwise it will return -EPERM. + * + * Returns 0 on success, or a negative errno code otherwise. + */ +int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target) +{ + struct gpucg *current_gpucg, *target_gpucg, *to_release; + int ret; + + if (!dmabuf->gpucg || !dmabuf->gpucg_bucket) { + /* This dmabuf is not tracked under GPU cgroup accounting */ + return 0; + } + + current_gpucg = gpucg_get(current); + target_gpucg = gpucg_get(target); + to_release = target_gpucg; + + /* If the source and destination cgroups are the same, don't do anything. 
diff --git a/include/linux/cgroup_gpu.h b/include/linux/cgroup_gpu.h
index cb228a16aa1f..7eb68f1507fb 100644
--- a/include/linux/cgroup_gpu.h
+++ b/include/linux/cgroup_gpu.h
@@ -75,6 +75,22 @@ int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
  */
 void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
 
+/**
+ * gpucg_transfer_charge - Transfer a GPU charge from one cgroup to another.
+ *
+ * @source:	[in]	The GPU cgroup the charge will be transferred from.
+ * @dest:	[in]	The GPU cgroup the charge will be transferred to.
+ * @bucket:	[in]	The GPU cgroup bucket corresponding to the charge.
+ * @size:	[in]	The size of the memory in bytes.
+ *			This size will be rounded up to the nearest page size.
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int gpucg_transfer_charge(struct gpucg *source,
+			  struct gpucg *dest,
+			  struct gpucg_bucket *bucket,
+			  u64 size);
+
 /**
  * gpucg_register_bucket - Registers a bucket for memory accounting using the GPU cgroup controller.
  *
@@ -117,6 +133,14 @@ static inline void gpucg_uncharge(struct gpucg *gpucg,
 					  struct gpucg_bucket *bucket,
 					  u64 size) {}
 
+static inline int gpucg_transfer_charge(struct gpucg *source,
+					struct gpucg *dest,
+					struct gpucg_bucket *bucket,
+					u64 size)
+{
+	return 0;
+}
+
 static inline struct gpucg_bucket *gpucg_register_bucket(const char *name) {}
 #endif /* CONFIG_CGROUP_GPU */
 #endif /* _CGROUP_GPU_H */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 8e7c55c830b3..438ad8577b76 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -650,9 +651,14 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
 				struct gpucg *gpucg,
 				struct gpucg_bucket *gpucg_bucket);
+
+int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target);
 #else /* CONFIG_CGROUP_GPU */
 static inline void dma_buf_exp_info_set_gpucg(struct dma_buf_export_info *exp_info,
 					      struct gpucg *gpucg,
 					      struct gpucg_bucket *gpucg_bucket) {}
+
+static inline int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct task_struct *target)
+{ return 0; }
 #endif /* CONFIG_CGROUP_GPU */
 #endif /* __DMA_BUF_H__ */
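
For context, a rough exporter-side sketch (not part of this patch) of how
the declarations above are meant to fit together. The bucket name and
helper names are hypothetical, and the assumption that
gpucg_register_bucket() reports errors via ERR_PTR is mine;
dma_buf_exp_info_set_gpucg() and gpucg_get() come from earlier patches in
this series, which also define when the charge itself is applied and who
owns the gpucg reference.

#include <linux/cgroup_gpu.h>
#include <linux/dma-buf.h>
#include <linux/err.h>

static struct gpucg_bucket *example_bucket;

/* Register a bucket once, e.g. at heap/driver init (hypothetical). */
static int example_exporter_init(void)
{
	example_bucket = gpucg_register_bucket("example-exporter");
	return PTR_ERR_OR_ZERO(example_bucket);
}

/* Attach the allocating task's gpucg and our bucket to the export info. */
static struct dma_buf *example_export(struct dma_buf_export_info *exp_info)
{
	struct gpucg *gpucg = gpucg_get(current);

	dma_buf_exp_info_set_gpucg(exp_info, gpucg, example_bucket);
	return dma_buf_export(exp_info);
}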
diff --git a/kernel/cgroup/gpu.c b/kernel/cgroup/gpu.c
index ad16ea15d427..038ea873a9d3 100644
--- a/kernel/cgroup/gpu.c
+++ b/kernel/cgroup/gpu.c
@@ -274,6 +274,57 @@ void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
 	css_put(&gpucg->css);
 }
 
+int gpucg_transfer_charge(struct gpucg *source,
+			  struct gpucg *dest,
+			  struct gpucg_bucket *bucket,
+			  u64 size)
+{
+	struct page_counter *counter;
+	u64 nr_pages;
+	struct gpucg_resource_pool *rp_source, *rp_dest;
+	int ret = 0;
+
+	nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	mutex_lock(&gpucg_mutex);
+	rp_source = cg_rpool_find_locked(source, bucket);
+	if (unlikely(!rp_source)) {
+		ret = -ENOENT;
+		goto exit_early;
+	}
+
+	rp_dest = cg_rpool_get_locked(dest, bucket);
+	if (IS_ERR(rp_dest)) {
+		ret = PTR_ERR(rp_dest);
+		goto exit_early;
+	}
+
+	/*
+	 * First uncharge from the pool it's currently charged to. This ordering avoids double
+	 * charging while the transfer is in progress, which could cause us to hit a limit.
+	 * If the try_charge fails for this transfer, we need to be able to reverse this uncharge,
+	 * so we continue to hold the gpucg_mutex here.
+	 */
+	page_counter_uncharge(&rp_source->total, nr_pages);
+	css_put(&source->css);
+
+	/* Now attempt the new charge */
+	if (page_counter_try_charge(&rp_dest->total, nr_pages, &counter)) {
+		css_get(&dest->css);
+	} else {
+		/*
+		 * The new charge failed, so reverse the uncharge from above. This should always
+		 * succeed since charges on source are blocked by gpucg_mutex.
+		 */
+		WARN_ON(!page_counter_try_charge(&rp_source->total, nr_pages, &counter));
+		css_get(&source->css);
+		ret = -ENOMEM;
+	}
+exit_early:
+	mutex_unlock(&gpucg_mutex);
+	return ret;
+}
+
 struct gpucg_bucket *gpucg_register_bucket(const char *name)
 {
 	struct gpucg_bucket *bucket, *b;
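
To make the charge ordering above concrete, here is a standalone,
illustrative sketch (not part of this patch) using the page_counter API
directly. With source and destination pools under a shared parent limit
that is already fully used, charging the destination before uncharging
the source would transiently double count the pages and spuriously fail;
the uncharge-first ordering never exceeds the limit.

#include <linux/page_counter.h>

static void example_transfer_ordering(void)
{
	struct page_counter parent = {}, src = {}, dst = {}, *fail;

	page_counter_init(&parent, NULL);
	page_counter_init(&src, &parent);
	page_counter_init(&dst, &parent);
	page_counter_set_max(&parent, 1);	/* shared limit: one page */
	page_counter_charge(&src, 1);		/* parent usage is now at its limit */

	/* Charge-first would fail here because the parent is already full... */
	WARN_ON(page_counter_try_charge(&dst, 1, &fail));

	/* ...while uncharge-first keeps usage within the limit throughout. */
	page_counter_uncharge(&src, 1);
	WARN_ON(!page_counter_try_charge(&dst, 1, &fail));
}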