From patchwork Thu May 27 12:58:31 2021
From: Claire Chang
Subject: [PATCH v8 01/15] swiotlb: Refactor swiotlb init functions
Date: Thu, 27 May 2021 20:58:31 +0800
Message-Id: <20210527125845.1852284-2-tientzu@chromium.org>

Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.

Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.
Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..d3232fc19385 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
         memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+                                    unsigned long nslabs, bool late_alloc)
 {
+        void *vaddr = phys_to_virt(start);
         unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+        mem->nslabs = nslabs;
+        mem->start = start;
+        mem->end = mem->start + bytes;
+        mem->index = 0;
+        mem->late_alloc = late_alloc;
+        spin_lock_init(&mem->lock);
+        for (i = 0; i < mem->nslabs; i++) {
+                mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+                mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+                mem->slots[i].alloc_size = 0;
+        }
+
+        set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+        memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
         struct io_tlb_mem *mem;
         size_t alloc_size;
 
@@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
         if (!mem)
                 panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
                       __func__, alloc_size, PAGE_SIZE);
-        mem->nslabs = nslabs;
-        mem->start = __pa(tlb);
-        mem->end = mem->start + bytes;
-        mem->index = 0;
-        spin_lock_init(&mem->lock);
-        for (i = 0; i < mem->nslabs; i++) {
-                mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-                mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-                mem->slots[i].alloc_size = 0;
-        }
+
+        swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
         io_tlb_default_mem = mem;
         if (verbose)
@@ -282,7 +295,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-        unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
         struct io_tlb_mem *mem;
 
         if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +309,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
         if (!mem)
                 return -ENOMEM;
 
-        mem->nslabs = nslabs;
-        mem->start = virt_to_phys(tlb);
-        mem->end = mem->start + bytes;
-        mem->index = 0;
-        mem->late_alloc = 1;
-        spin_lock_init(&mem->lock);
-        for (i = 0; i < mem->nslabs; i++) {
-                mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-                mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-                mem->slots[i].alloc_size = 0;
-        }
-
-        set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-        memset(tlb, 0, bytes);
+        swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
         io_tlb_default_mem = mem;
         swiotlb_print_info();
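The point of the refactor is that any future pool setup can call the helper
instead of duplicating the slot initialization. A minimal, hypothetical sketch
of such a reuse follows; the function name swiotlb_new_pool is illustrative and
not part of the series, and the sketch assumes it lives inside
kernel/dma/swiotlb.c, where the static helper and the required headers are
visible. Patch 04 of this series does exactly this for reserved-memory backed
pools.

static struct io_tlb_mem *swiotlb_new_pool(phys_addr_t start,
                                           unsigned long nslabs)
{
        struct io_tlb_mem *mem;

        /* io_tlb_mem ends in a flexible array of slots, hence struct_size(). */
        mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
        if (!mem)
                return NULL;

        /* One call now sets up nslabs, start/end, the free list and
         * decrypts/zeroes the backing memory. */
        swiotlb_init_io_tlb_mem(mem, start, nslabs, false);
        return mem;
}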
From patchwork Thu May 27 12:58:32 2021
From: Claire Chang
Subject: [PATCH v8 02/15] swiotlb: Refactor swiotlb_create_debugfs
Date: Thu, 27 May 2021 20:58:32 +0800
Message-Id: <20210527125845.1852284-3-tientzu@chromium.org>

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools, e.g. restricted DMA pool.

Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3232fc19385..b849b01a446f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,7 @@ enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+static struct dentry *debugfs_dir;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -662,18 +663,30 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name)
 {
-        struct io_tlb_mem *mem = io_tlb_default_mem;
-
         if (!mem)
-                return 0;
-        mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+                return;
+
+        mem->debugfs = debugfs_create_dir(name, debugfs_dir);
         debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
         debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+        struct io_tlb_mem *mem = io_tlb_default_mem;
+
+        if (mem) {
+                swiotlb_create_debugfs(mem, "swiotlb");
+                debugfs_dir = mem->debugfs;
+        } else {
+                debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+        }
+
         return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif

From patchwork Thu May 27 12:58:33 2021
From: Claire Chang
Subject: [PATCH v8 03/15] swiotlb: Add DMA_RESTRICTED_POOL
Date: Thu, 27 May 2021 20:58:33 +0800
Message-Id: <20210527125845.1852284-4-tientzu@chromium.org>

Add a new kconfig symbol, DMA_RESTRICTED_POOL, for restricted DMA pool.

Signed-off-by: Claire Chang
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
         bool
         select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+        bool "DMA Restricted Pool"
+        depends on OF && OF_RESERVED_MEM
+        select SWIOTLB
+        help
+          This enables support for restricted DMA pools which provide a level of
+          DMA memory protection on systems with limited hardware protection
+          capabilities, such as those lacking an IOMMU.
+
+          For more information see
+
+          and .
+          If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit

From patchwork Thu May 27 12:58:34 2021
From: Claire Chang
Subject: [PATCH v8 04/15] swiotlb: Add restricted DMA pool initialization
Date: Thu, 27 May 2021 20:58:34 +0800
Message-Id: <20210527125845.1852284-5-tientzu@chromium.org>

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Signed-off-by: Claire Chang
---
 include/linux/device.h  |  4 +++
 include/linux/swiotlb.h |  3 +-
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 959cb9d2c9ab..e78e1ce0b1b1 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:  Dma pools (if dma'ble device).
  * @dma_mem:    Internal for coherent mem override.
  * @cma_area:   Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:   For arch-specific additions.
  * @of_node:    Associated device tree node.
  * @fwnode:     Associated device node supplied by platform firmware.
@@ -521,6 +522,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
         struct cma *cma_area;           /* contiguous memory area for dma
                                            allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+        struct io_tlb_mem *dma_io_tlb_mem;
 #endif
         /* arch specific additions */
         struct dev_archdata archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..03ad6e3b4056 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *              range check to see if the memory was in fact allocated by this
  *              API.
  * @nslabs:     The number of IO TLB blocks (in groups of 64) between @start and
- *              @end. This is command line adjustable via setup_io_tlb_npages.
+ *              @end. For default swiotlb, this is command line adjustable via
+ *              setup_io_tlb_npages.
  * @used:       The number of used IO TLB block.
  * @list:       The free list describing the number of free entries available
  *              from each index.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b849b01a446f..d99b403144a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include
+#include
+#include
+#include
+#include
+#endif
 
 #include
 #include
@@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void)
 
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+                                    struct device *dev)
+{
+        struct io_tlb_mem *mem = rmem->priv;
+        unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+        if (dev->dma_io_tlb_mem)
+                return 0;
+
+        /*
+         * Since multiple devices can share the same pool, the private data,
+         * io_tlb_mem struct, will be initialized by the first device attached
+         * to it.
+         */
+        if (!mem) {
+                mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+                if (!mem)
+                        return -ENOMEM;
+
+                swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+
+                rmem->priv = mem;
+
+                if (IS_ENABLED(CONFIG_DEBUG_FS))
+                        swiotlb_create_debugfs(mem, rmem->name);
+        }
+
+        dev->dma_io_tlb_mem = mem;
+
+        return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+                                        struct device *dev)
+{
+        if (dev)
+                dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+        .device_init = rmem_swiotlb_device_init,
+        .device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+        unsigned long node = rmem->fdt_node;
+
+        if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+            of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+            of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+            of_get_flat_dt_prop(node, "no-map", NULL))
+                return -EINVAL;
+
+        if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+                pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+                return -EINVAL;
+        }
+
+        rmem->ops = &rmem_swiotlb_ops;
+        pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+                &rmem->base, (unsigned long)rmem->size / SZ_1M);
+        return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
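For context, a minimal and hypothetical sketch of how a consuming driver could
end up with dev->dma_io_tlb_mem set. It assumes the standard reserved-memory
flow: the device's node carries a memory-region phandle to a reserved-memory
node whose compatible string is "restricted-dma-pool" (the compatible that
rmem_swiotlb_setup above registers), and the driver attaches the region at
probe time. The driver and function names are illustrative, not part of this
series, and the exact wiring is handled by later patches in the series.

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        int ret;

        /*
         * Looks up the node's memory-region property and calls the region's
         * reserved_mem_ops->device_init, i.e. rmem_swiotlb_device_init()
         * above, which allocates the shared io_tlb_mem on first use and
         * assigns it to pdev->dev.dma_io_tlb_mem.
         */
        ret = of_reserved_mem_device_init(&pdev->dev);
        if (ret)
                return ret;

        /* DMA mappings for this device can now bounce through the pool. */
        return 0;
}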
From patchwork Thu May 27 12:58:35 2021
From: Claire Chang
Subject: [PATCH v8 05/15] swiotlb: Add a new get_io_tlb_mem getter
Date: Thu, 27 May 2021 20:58:35 +0800
Message-Id: <20210527125845.1852284-6-tientzu@chromium.org>

Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct.
The restricted DMA pool is preferred if available.

The reason it was done this way instead of assigning the active pool to
dev->dma_io_tlb_mem was because directly using dev->dma_io_tlb_mem might
cause memory allocation issues for existing devices. The pool can't
support atomic coherent allocation so swiotlb_alloc needs to distinguish
it from the default swiotlb pool.

Signed-off-by: Claire Chang
---
 include/linux/swiotlb.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 03ad6e3b4056..b469f04cca26 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include
 #include
 #include
 #include
@@ -102,6 +103,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
+static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+        if (dev && dev->dma_io_tlb_mem)
+                return dev->dma_io_tlb_mem;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
+        return io_tlb_default_mem;
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
         struct io_tlb_mem *mem = io_tlb_default_mem;
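To make the intent concrete, here is an illustrative-only sketch of the call
pattern that the next two patches convert the existing helpers to: the caller
passes the device, and get_io_tlb_mem() decides whether a check applies to
that device's restricted pool or to the global one. The function name below is
hypothetical and not part of the series.

static bool example_buffer_in_device_pool(struct device *dev,
                                          phys_addr_t paddr)
{
        /*
         * Resolves to dev's restricted pool if one was bound (patch 04),
         * otherwise to the default swiotlb pool.
         */
        struct io_tlb_mem *mem = get_io_tlb_mem(dev);

        return mem && paddr >= mem->start && paddr < mem->end;
}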
From patchwork Thu May 27 12:58:36 2021
From: Claire Chang
Subject: [PATCH v8 06/15] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Thu, 27 May 2021 20:58:36 +0800
Message-Id: <20210527125845.1852284-7-tientzu@chromium.org>

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for restricted DMA pool.

Signed-off-by: Claire Chang
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  6 +++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7bcdd1205535..a5df35bfd150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
         __iommu_dma_unmap(dev, dma_addr, size);
 
-        if (unlikely(is_swiotlb_buffer(phys)))
+        if (unlikely(is_swiotlb_buffer(dev, phys)))
                 swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
         }
 
         iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-        if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+        if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
                 swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
         return iova;
 }
@@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
         if (!dev_is_dma_coherent(dev))
                 arch_sync_dma_for_cpu(phys, size, dir);
 
-        if (is_swiotlb_buffer(phys))
+        if (is_swiotlb_buffer(dev, phys))
                 swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
                 return;
 
         phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-        if (is_swiotlb_buffer(phys))
+        if (is_swiotlb_buffer(dev, phys))
                 swiotlb_sync_single_for_device(dev, phys, size, dir);
 
         if (!dev_is_dma_coherent(dev))
@@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
                 if (!dev_is_dma_coherent(dev))
                         arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-                if (is_swiotlb_buffer(sg_phys(sg)))
+                if (is_swiotlb_buffer(dev, sg_phys(sg)))
                         swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
                                                     sg->length, dir);
         }
@@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
                 return;
 
         for_each_sg(sgl, sg, nelems, i) {
-                if (is_swiotlb_buffer(sg_phys(sg)))
+                if (is_swiotlb_buffer(dev, sg_phys(sg)))
                         swiotlb_sync_single_for_device(dev, sg_phys(sg),
                                                        sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..0c4fb34f11ab 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
          * in our domain. Therefore _only_ check address within our domain.
          */
         if (pfn_valid(PFN_DOWN(paddr)))
-                return is_swiotlb_buffer(paddr);
+                return is_swiotlb_buffer(dev, paddr);
 
         return 0;
 }
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b469f04cca26..2a6cca07540b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
         return io_tlb_default_mem;
 }
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-        struct io_tlb_mem *mem = io_tlb_default_mem;
+        struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 
         return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -127,7 +127,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
         return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
         for_each_sg(sgl, sg, nents, i) {
                 phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-                if (unlikely(is_swiotlb_buffer(paddr)))
+                if (unlikely(is_swiotlb_buffer(dev, paddr)))
                         swiotlb_sync_single_for_device(dev, paddr, sg->length,
                                                        dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
                 if (!dev_is_dma_coherent(dev))
                         arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-                if (unlikely(is_swiotlb_buffer(paddr)))
+                if (unlikely(is_swiotlb_buffer(dev, paddr)))
                         swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
                                                     dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
         return !dev_is_dma_coherent(dev) ||
-                is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+                is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
         phys_addr_t paddr = dma_to_phys(dev, addr);
 
-        if (unlikely(is_swiotlb_buffer(paddr)))
+        if (unlikely(is_swiotlb_buffer(dev, paddr)))
                 swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
         if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
                 arch_sync_dma_for_cpu_all();
         }
 
-        if (unlikely(is_swiotlb_buffer(paddr)))
+        if (unlikely(is_swiotlb_buffer(dev, paddr)))
                 swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
         if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
         if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                 dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-        if (unlikely(is_swiotlb_buffer(phys)))
+        if (unlikely(is_swiotlb_buffer(dev, phys)))
                 swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
From patchwork Thu May 27 12:58:37 2021
From: Claire Chang
Subject: [PATCH v8 07/15] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Thu, 27 May 2021 20:58:37 +0800
Message-Id: <20210527125845.1852284-8-tientzu@chromium.org>

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for restricted DMA pool.
Signed-off-by: Claire Chang --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +- drivers/gpu/drm/nouveau/nouveau_ttm.c | 2 +- drivers/pci/xen-pcifront.c | 2 +- include/linux/swiotlb.h | 4 ++-- kernel/dma/direct.c | 2 +- kernel/dma/swiotlb.c | 4 ++-- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index ce6b664b10aa..7d48c433446b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB - if (is_swiotlb_active()) { + if (is_swiotlb_active(NULL)) { unsigned int max_segment; max_segment = swiotlb_max_segment(); diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c index 65430912ff72..d0e998b9e2e8 100644 --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c @@ -270,7 +270,7 @@ nouveau_ttm_init(struct nouveau_drm *drm) } #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86) - need_swiotlb = is_swiotlb_active(); + need_swiotlb = is_swiotlb_active(NULL); #endif ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev, diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c index b7a8f3a1921f..6d548ce53ce7 100644 --- a/drivers/pci/xen-pcifront.c +++ b/drivers/pci/xen-pcifront.c @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev) spin_unlock(&pcifront_dev_lock); - if (!err && !is_swiotlb_active()) { + if (!err && !is_swiotlb_active(NULL)) { err = pci_xen_swiotlb_init_late(); if (err) dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n"); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 2a6cca07540b..c530c976d18b 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); -bool is_swiotlb_active(void); +bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long size); #else #define swiotlb_force SWIOTLB_NO_FORCE @@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev) return SIZE_MAX; } -static inline bool is_swiotlb_active(void) +static inline bool is_swiotlb_active(struct device *dev) { return false; } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 84c9feb5474a..7a88c34d0867 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask) size_t dma_direct_max_mapping_size(struct device *dev) { /* If SWIOTLB is active, use its maximum mapping size */ - if (is_swiotlb_active() && + if (is_swiotlb_active(dev) && (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE)) return swiotlb_max_mapping_size(dev); return SIZE_MAX; diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index d99b403144a8..b2b6503ecd88 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -662,9 +662,9 @@ size_t swiotlb_max_mapping_size(struct device *dev) return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE; } -bool is_swiotlb_active(void) +bool is_swiotlb_active(struct device *dev) { - return io_tlb_default_mem != NULL; + return get_io_tlb_mem(dev) != NULL; } 
EXPORT_SYMBOL_GPL(is_swiotlb_active); From patchwork Thu May 27 12:58:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284195 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3485CC4708A for ; Thu, 27 May 2021 13:03:52 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DEC6F610A2 for ; Thu, 27 May 2021 13:03:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DEC6F610A2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133270.248474 (Exim 4.92) (envelope-from ) id 1lmFft-0004G3-Ie; Thu, 27 May 2021 13:03:45 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133270.248474; Thu, 27 May 2021 13:03:45 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFft-0004Es-DF; Thu, 27 May 2021 13:03:45 +0000 Received: by outflank-mailman (input) for mailman id 133270; Thu, 27 May 2021 13:03:44 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFcm-0008B4-83 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:32 +0000 Received: from mail-pl1-x62c.google.com (unknown [2607:f8b0:4864:20::62c]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 6aa84d79-370f-4126-b495-7b1f156bc2f5; Thu, 27 May 2021 13:00:06 +0000 (UTC) Received: by mail-pl1-x62c.google.com with SMTP id h12so2284627plf.11 for ; Thu, 27 May 2021 06:00:06 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id m1sm2027143pgd.78.2021.05.27.05.59.58 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:04 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 6aa84d79-370f-4126-b495-7b1f156bc2f5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=B7RrpH6OgXx/lTZyexKL+IWeAI7yB0yfCFBVtG2zWbw=; b=kcN4x88wnIqA2eddyBkI1s+5ccQseApdZLb8/RSr2+Fa520MxJZZ6NkX9r/ML2uBrj kvMXO+zNUanE1yYV9wWE+HheUK89EpCu8v6POAAGHEkAn5qXZCd1I7/7KAUnWJ2R0GFj AiEWOWj6q+T+/qXKxk0sPdE2cDhBD8s6mX/Vs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
From: Claire Chang
To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski
Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 08/15] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Thu, 27 May 2021 20:58:38 +0800
Message-Id: <20210527125845.1852284-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0

Regardless of the swiotlb setting, the restricted DMA pool is preferred if available. The restricted DMA pool provides a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force doesn't check if swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior with the default swiotlb would be changed by the following patch ("dma-direct: Allocate memory from restricted DMA pool if available").
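
The hunks below rely on the get_io_tlb_mem() helper introduced earlier in the series (not shown in this patch). As a rough sketch of the idea only, it picks the per-device restricted pool when one is attached and falls back to the default pool otherwise:

/*
 * Sketch only; the exact helper comes from an earlier patch in this
 * series. dev->dma_io_tlb_mem is the per-device restricted pool.
 */
static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
#ifdef CONFIG_DMA_RESTRICTED_POOL
	if (dev && dev->dma_io_tlb_mem)
		return dev->dma_io_tlb_mem;
#endif	/* CONFIG_DMA_RESTRICTED_POOL */
	return io_tlb_default_mem;
}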
Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 13 +++++++++++++ kernel/dma/direct.c | 3 ++- kernel/dma/direct.h | 3 ++- kernel/dma/swiotlb.c | 8 ++++---- 4 files changed, 21 insertions(+), 6 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index c530c976d18b..0c5a18d9cf89 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) return mem && paddr >= mem->start && paddr < mem->end; } +static inline bool is_dev_swiotlb_force(struct device *dev) +{ +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (dev->dma_io_tlb_mem) + return true; +#endif /* CONFIG_DMA_RESTRICTED_POOL */ + return false; +} + void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); @@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; } +static inline bool is_dev_swiotlb_force(struct device *dev) +{ + return false; +} static inline void swiotlb_exit(void) { } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 7a88c34d0867..078f7087e466 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev) { /* If SWIOTLB is active, use its maximum mapping size */ if (is_swiotlb_active(dev) && - (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE)) + (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE || + is_dev_swiotlb_force(dev))) return swiotlb_max_mapping_size(dev); return SIZE_MAX; } diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 13e9e7158d94..f94813674e23 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, phys_addr_t phys = page_to_phys(page) + offset; dma_addr_t dma_addr = phys_to_dma(dev, phys); - if (unlikely(swiotlb_force == SWIOTLB_FORCE)) + if (unlikely(swiotlb_force == SWIOTLB_FORCE) || + is_dev_swiotlb_force(dev)) return swiotlb_map(dev, phys, size, dir, attrs); if (unlikely(!dma_capable(dev, dma_addr, size, true))) { diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index b2b6503ecd88..fa7f23fffc81 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -347,7 +347,7 @@ void __init swiotlb_exit(void) static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; phys_addr_t orig_addr = mem->slots[index].orig_addr; size_t alloc_size = mem->slots[index].alloc_size; @@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index) static int find_slots(struct device *dev, phys_addr_t orig_addr, size_t alloc_size) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned long boundary_mask = dma_get_seg_boundary(dev); dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, mem->start) & boundary_mask; @@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned int offset = swiotlb_align_offset(dev, orig_addr); unsigned int i; int index; @@ -557,7 
+557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); unsigned long flags; unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; From patchwork Thu May 27 12:58:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284193 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46386C4707F for ; Thu, 27 May 2021 13:03:45 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F39AB61132 for ; Thu, 27 May 2021 13:03:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F39AB61132 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133264.248459 (Exim 4.92) (envelope-from ) id 1lmFfm-0003rc-Ue; Thu, 27 May 2021 13:03:38 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133264.248459; Thu, 27 May 2021 13:03:38 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFfm-0003rV-RE; Thu, 27 May 2021 13:03:38 +0000 Received: by outflank-mailman (input) for mailman id 133264; Thu, 27 May 2021 13:03:36 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFcr-0008B4-81 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:37 +0000 Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 26ed9e2f-dcab-4dd2-8b89-9bd645ed4578; Thu, 27 May 2021 13:00:14 +0000 (UTC) Received: by mail-pf1-x430.google.com with SMTP id q25so538850pfn.1 for ; Thu, 27 May 2021 06:00:14 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id m84sm1905689pfd.41.2021.05.27.06.00.06 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:13 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 26ed9e2f-dcab-4dd2-8b89-9bd645ed4578 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=; b=h56T1h46NuCHAXLEDOrFtGzPeGkXpqjaVqU2Id3/ScnxpcTFZFB+NgniQLXbdnzuUx QD0APsw9ZjoeJMpeIHj0tuMfdo+wG/iO7GBpb84AqyFHYbVnubqVV+LlaPHYPYnvVgbR tJ7ATJhWs9UvVNPUNH5hquxvKLueKDOrXLp9s= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=; b=FyfcJkYGfYUh9QDTmex4khUPGIakqklvXPKJNSf0WinsowiYBCVwBuk10+4ptQv/Zp RB5y+3A3V7KITbW/s+47ftG+G0qwYWnhV36t1CIDhNj/jtKNIXAZvW/J3tZqs/JbmKnV ziuHYIExvpQtnxqYsId+tLumYrPyjl3VjTistTtx6eXFY7/E5fAHuGkvGz21IFlqPMoi y2RlG+d9ngAofsDsQmeOI0CpELf/6ciiW4pS9Q4I7RLamOHU9wR8KNY+A5Blta5ObHhX YVN9Xze6IBbGK7NFQw0oUuS7vh3Xja05VdidQhkLdekPwrcQhXnUiGrZSt9qlfMZP36o uoPQ== X-Gm-Message-State: AOAM531ePA/yaZCuSLvj+d1MjpBitHbxVUEwsW41AJNZ/UcTFvgpLEgp f8u3zEqQ4Im8WH9lwTSuVUSA9g== X-Google-Smtp-Source: ABdhPJwPKvUueO3/tWMtqAGArtGFSFkKVOYPoppJ9Pk4xAfchfGgkSrJtv/ImJCVwl16f+ZRC2hwVw== X-Received: by 2002:a63:5c01:: with SMTP id q1mr3600625pgb.447.1622120414085; Thu, 27 May 2021 06:00:14 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 09/15] swiotlb: Move alloc_size to find_slots Date: Thu, 27 May 2021 20:58:39 +0800 Message-Id: <20210527125845.1852284-10-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Move the maintenance of alloc_size to find_slots for better code reusability later. 
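
As a concrete example of the bookkeeping this moves into find_slots() (the numbers below are made up and assume the usual 2 KiB IO_TLB_SIZE):

/*
 * Illustrative values only: alloc_size = 6 KiB and IO_TLB_SIZE = 2 KiB,
 * so nslots = 3 and the loop in find_slots() now records
 *
 *	slots[index + 0].alloc_size = 6 KiB - 0 * 2 KiB = 6 KiB
 *	slots[index + 1].alloc_size = 6 KiB - 1 * 2 KiB = 4 KiB
 *	slots[index + 2].alloc_size = 6 KiB - 2 * 2 KiB = 2 KiB
 *
 * i.e. each slot remembers how many bytes of the mapping remain from
 * that slot onward, so code that only knows a tlb_addr (and hence a
 * slot index), such as swiotlb_bounce(), can still bound the size of
 * the buffer it is working on.
 */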
Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index fa7f23fffc81..88b3471ac6a8 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -482,8 +482,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, return -1; found: - for (i = index; i < index + nslots; i++) + for (i = index; i < index + nslots; i++) { mem->slots[i].list = 0; + mem->slots[i].alloc_size = + alloc_size - ((i - index) << IO_TLB_SHIFT); + } for (i = index - 1; io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && mem->slots[i].list; i--) @@ -538,11 +541,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, * This is needed when we sync the memory. Then we sync the buffer if * needed. */ - for (i = 0; i < nr_slots(alloc_size + offset); i++) { + for (i = 0; i < nr_slots(alloc_size + offset); i++) mem->slots[index + i].orig_addr = slot_addr(orig_addr, i); - mem->slots[index + i].alloc_size = - alloc_size - (i << IO_TLB_SHIFT); - } tlb_addr = slot_addr(mem->start, index) + offset; if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) From patchwork Thu May 27 12:58:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284199 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 015EEC4708A for ; Thu, 27 May 2021 13:04:04 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A5C7761132 for ; Thu, 27 May 2021 13:04:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A5C7761132 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133275.248503 (Exim 4.92) (envelope-from ) id 1lmFg5-0005Kp-AA; Thu, 27 May 2021 13:03:57 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133275.248503; Thu, 27 May 2021 13:03:57 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFg5-0005KW-6Q; Thu, 27 May 2021 13:03:57 +0000 Received: by outflank-mailman (input) for mailman id 133275; Thu, 27 May 2021 13:03:55 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFd6-0008B4-8f for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:52 +0000 Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3e7a7143-5b5f-42e2-894b-9d300fe461f9; 
Thu, 27 May 2021 13:00:28 +0000 (UTC) Received: by mail-pj1-x1036.google.com with SMTP id b15-20020a17090a550fb029015dad75163dso372328pji.0 for ; Thu, 27 May 2021 06:00:28 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id 66sm2009117pgj.9.2021.05.27.06.00.15 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:22 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 3e7a7143-5b5f-42e2-894b-9d300fe461f9 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=; b=FaFXCFWJLCD/gigBOGfY4F270yO0Alvb7EMZVKQJ9L8utcEPvn3XtnKI3BEMvmJtEK gYgjai80HZHDiRLWnq4ZnWFUPWMggGA/Wo+QuTb4wDh5MX+6KsOXKwWCEnZzHAaiuA3T 2/Aqi7L4LhIRGkedSPRMYz9qyuKketwP3p78k= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=; b=Hymu/xwylAU+dLhd+O9BiDQMjPpxNveb59vsOiwP56T+5tcD+nKYgZFr5lSbDVfJPf IdTO2YMabxETffgHfuwlNm22391IMroXvxjTWwxo3k77ze4TRdQuEoFrrfK50g3F36CL qOvdDpm6792I7MpvAFqwSppiIjKTZz5wp3MgrZVzpXKBmv6SzANGvSrqC/mvHAf/qkpe m7nJVYm7Z/nrSTC07uYOEjNoupTQlVGk4XXfdn0lUsoNtWPDfD4tA8yNd/ikZBwqnp7f bfLoiFmq4LXW0eRNn+v1nViZ2IwLSvJCNlkAjTY+nPm6XOwSZqiYFM3rbevSQhTV+vuQ KMvg== X-Gm-Message-State: AOAM532vknpBfikTAk0Pz2js6L/vn4mfyr6+IyL2BQKYjg6SLkPO6k8C JF1T7KcKIhvbGjdiEw9q9EFF4A== X-Google-Smtp-Source: ABdhPJyRdfD5aP84oP+O3UZ9D5kP+7R8Jjf69TCVtMPqoqtl6dZXUjZXcnaxnn4OXnc5bmDzFDOgsA== X-Received: by 2002:a17:90a:74f:: with SMTP id s15mr4091622pje.90.1622120423164; Thu, 27 May 2021 06:00:23 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . 
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single Date: Thu, 27 May 2021 20:58:40 +0800 Message-Id: <20210527125845.1852284-11-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Add a new function, release_slots, to make the code reusable for supporting different bounce buffer pools, e.g. restricted DMA pool. Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 35 ++++++++++++++++++++--------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 88b3471ac6a8..c4fc2e444e7a 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, return tlb_addr; } -/* - * tlb_addr is the physical address of the bounce buffer to unmap. - */ -void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, - size_t mapping_size, enum dma_data_direction dir, - unsigned long attrs) +static void release_slots(struct device *dev, phys_addr_t tlb_addr) { - struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned long flags; - unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); + unsigned int offset = swiotlb_align_offset(dev, tlb_addr); int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; int nslots = nr_slots(mem->slots[index].alloc_size + offset); int count, i; - /* - * First, sync the memory before unmapping the entry - */ - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) - swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE); - /* * Return the buffer to the free list by setting the corresponding * entries to indicate the number of contiguous entries available. @@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, spin_unlock_irqrestore(&mem->lock, flags); } +/* + * tlb_addr is the physical address of the bounce buffer to unmap. 
+ */ +void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr, + size_t mapping_size, enum dma_data_direction dir, + unsigned long attrs) +{ + /* + * First, sync the memory before unmapping the entry + */ + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && + (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) + swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE); + + release_slots(dev, tlb_addr); +} + void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir) { From patchwork Thu May 27 13:03:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284197 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46E34C47089 for ; Thu, 27 May 2021 13:03:54 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0BC9861132 for ; Thu, 27 May 2021 13:03:54 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0BC9861132 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133271.248490 (Exim 4.92) (envelope-from ) id 1lmFfu-0004h4-RR; Thu, 27 May 2021 13:03:46 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133271.248490; Thu, 27 May 2021 13:03:46 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFfu-0004fV-N2; Thu, 27 May 2021 13:03:46 +0000 Received: by outflank-mailman (input) for mailman id 133271; Thu, 27 May 2021 13:03:45 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFft-0004HJ-Mn for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:03:45 +0000 Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id a47b63ad-8d60-4159-92cd-eadc6b6b963a; Thu, 27 May 2021 13:03:45 +0000 (UTC) Received: by mail-pf1-x435.google.com with SMTP id 22so509319pfv.11 for ; Thu, 27 May 2021 06:03:44 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id w15sm2015155pjy.1.2021.05.27.06.03.34 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:03:41 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a47b63ad-8d60-4159-92cd-eadc6b6b963a 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:mime-version :content-transfer-encoding; bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=; b=ftmREmcxrJM5ghdWu3uu6RoVOQmn/FmRG0fXdD0sAjbx0bczGkzwvRTurlvaYuIvJW x3lcL0ttraMgqRb6RQUndSyEBiGIY3jtWjTqxgCUrCG/1jwY14bstcfPs5KZ9RhvRj5W T2wJrb10vnObIKglZGitAlnSSqiKkQCQp8AYQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version :content-transfer-encoding; bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=; b=AJ60oWOW8piSmO39i7ejv1tqCzZqTuSBfDixlrQz4BElRNB50M8Wu8U8bF7fkFRuHK duYlI/V9jYR4QkgiR4fZiFw9X9NZ3F5qj0btcvJQE+fKbXGsDNwTevvOK98pBj/QR6m+ I+qFDPTtlfQhbAztebHWcdodhFgTM06dXazBIrO5M6KKE3kgDCtVOc5cQut7sTlgEhyD wYZFvuDr3r2XRvb5NmwqnjC9XnNcgIsQFWtjDNJPn3HI9r0LSP3YQA9vqCw5iv/BiCWH cHxW/Bdllp7NSwCaog6L6AaQPOeSs0ahvPf+g+vVhSV/YAEgTUQp8kt0L8AP8efKch9k lBSQ== X-Gm-Message-State: AOAM531hm508AyoEdvnrqNmUmiIZIFh+pqVNdYhy7M7800rVvTYLdhi+ vHMemR6o1b3T+Osw8I6UClduHw== X-Google-Smtp-Source: ABdhPJwFMAyQyClkgtD2WU3AOT0NSiYpDWGkYsiRLX3TSIJ7iJhjuPVvZAr31+YEfks0HMDDaEOArA== X-Received: by 2002:a05:6a00:216a:b029:2df:3461:4ac3 with SMTP id r10-20020a056a00216ab02902df34614ac3mr3590982pff.80.1622120624345; Thu, 27 May 2021 06:03:44 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 11/15] dma-direct: Add a new wrapper __dma_direct_free_pages() Date: Thu, 27 May 2021 21:03:25 +0800 Message-Id: <20210527130329.1853588-1-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog MIME-Version: 1.0 Add a new wrapper __dma_direct_free_pages() that will be useful later for swiotlb_free(). 
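
The wrapper is a trivial pass-through to dma_free_contiguous() here; its purpose is to give a later patch in this series ("dma-direct: Allocate memory from restricted DMA pool if available") a single place to hand restricted-pool pages back, along the lines of:

static void __dma_direct_free_pages(struct device *dev, struct page *page,
				    size_t size)
{
#ifdef CONFIG_DMA_RESTRICTED_POOL
	/* Pages that came from the device's restricted pool go back there. */
	if (swiotlb_free(dev, page, size))
		return;
#endif
	dma_free_contiguous(dev, page, size);
}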
Signed-off-by: Claire Chang --- kernel/dma/direct.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 078f7087e466..eb4098323bbc 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit); } +static void __dma_direct_free_pages(struct device *dev, struct page *page, + size_t size) +{ + dma_free_contiguous(dev, page, size); +} + static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, gfp_t gfp) { @@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, return NULL; } out_free_pages: - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); return NULL; } @@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size, else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED)) arch_dma_clear_uncached(cpu_addr, size); - dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size); + __dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size); } struct page *dma_direct_alloc_pages(struct device *dev, size_t size, @@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, *dma_handle = phys_to_dma_direct(dev, page_to_phys(page)); return page; out_free_pages: - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); return NULL; } @@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, if (force_dma_unencrypted(dev)) set_memory_encrypted((unsigned long)vaddr, 1 << page_order); - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); } #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ From patchwork Thu May 27 13:03:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284201 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06D98C4708B for ; Thu, 27 May 2021 13:04:06 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B0509610A2 for ; Thu, 27 May 2021 13:04:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B0509610A2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133276.248509 (Exim 4.92) (envelope-from ) id 1lmFg5-0005Pk-Qr; Thu, 27 May 2021 13:03:57 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133276.248509; Thu, 27 May 2021 13:03:57 +0000 Received: from localhost 
([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFg5-0005Nx-H4; Thu, 27 May 2021 13:03:57 +0000 Received: by outflank-mailman (input) for mailman id 133276; Thu, 27 May 2021 13:03:55 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFg3-0005ED-M5 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:03:55 +0000 Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 9d859fc2-d424-4032-9379-b632696215ba; Thu, 27 May 2021 13:03:54 +0000 (UTC) Received: by mail-pf1-x430.google.com with SMTP id y15so501191pfn.13 for ; Thu, 27 May 2021 06:03:54 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id p11sm2096443pjo.19.2021.05.27.06.03.46 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:03:53 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9d859fc2-d424-4032-9379-b632696215ba DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=; b=Fc1gvJ3n84xz3JxOO0N55rsBPx0BRwCnJqQGnPap/zK0pI1U/L+kL/kcQzWLQ84Riq 0j/pqlcmWQYQydyVIRDN+PwShThBXOPgJ9Tt5RavIwKiqavfO6y+wHM9/KEWCh+zM9+b bTIS7XzvK0G3Tb5c5LYB9BTAV5sVenH8dr8s8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=; b=ngM91AlHPnHI/iOLRJuXQwOGKwaMHxcQ0RwhuHPdzSDfIuevYl+BNhaGsLJZ5twjE3 tquxy9Xf7+h1ZU9gTC7X7uM7l/ydwc0KTzVzH71ylA9/9T6Fbd4kgp/C1UHykPGSLaeR 1I7eYeDsszYkj1nXuMp6Df/BOtabYHLZ9ujIbEO+d/qCnyT6ykE+yYaMRKdI9+2iMkj9 bP2VXqVyG2QAVcE1PuQcr2+zhTwOkBRR2CNxlGwqxRpLIShe9FCM91jjjXlICSGXztiB 9/EvxZUcinypythNkzhe8/LgFQ6A8HKBIWJD6DhDVIfmS5LvppXz0VcxADF2AAlpgAwq HoNw== X-Gm-Message-State: AOAM530gLDlR9WPiBEctR6+AfIhJvHNC2Ga7lxPzALqXPa1O6J5vEdfd l/iwwIRaWjbg5HhAWVLLBPKgrw== X-Google-Smtp-Source: ABdhPJwek6kyZnVLp6SkoYltLcwOXMIMY+AUL7OjyvyvB2fw/uVSfHyOeaCWcshB7EYERK97204FCg== X-Received: by 2002:aa7:8588:0:b029:28e:dfa1:e31a with SMTP id w8-20020aa785880000b029028edfa1e31amr3587263pfn.77.1622120633858; Thu, 27 May 2021 06:03:53 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . 
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 12/15] swiotlb: Add restricted DMA alloc/free support. Date: Thu, 27 May 2021 21:03:26 +0800 Message-Id: <20210527130329.1853588-2-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 Add the functions, swiotlb_{alloc,free} to support the memory allocation from restricted DMA pool. Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 4 ++++ kernel/dma/swiotlb.c | 35 +++++++++++++++++++++++++++++++++-- 2 files changed, 37 insertions(+), 2 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 0c5a18d9cf89..e8cf49bd90c5 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long size); +#ifdef CONFIG_DMA_RESTRICTED_POOL +struct page *swiotlb_alloc(struct device *dev, size_t size); +bool swiotlb_free(struct device *dev, struct page *page, size_t size); +#endif /* CONFIG_DMA_RESTRICTED_POOL */ #else #define swiotlb_force SWIOTLB_NO_FORCE static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index c4fc2e444e7a..648bfdde4b0c 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -457,8 +457,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, index = wrap = wrap_index(mem, ALIGN(mem->index, stride)); do { - if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) != - (orig_addr & iotlb_align_mask)) { + if (orig_addr && + (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) != + (orig_addr & iotlb_align_mask)) { index = wrap_index(mem, index + 1); continue; } @@ -704,6 +705,36 @@ late_initcall(swiotlb_create_default_debugfs); #endif #ifdef CONFIG_DMA_RESTRICTED_POOL +struct page *swiotlb_alloc(struct device *dev, size_t size) +{ + struct io_tlb_mem *mem = dev->dma_io_tlb_mem; + phys_addr_t tlb_addr; + int index; + + if (!mem) + return NULL; + + index = find_slots(dev, 0, size); + if (index == -1) + return NULL; + + tlb_addr = slot_addr(mem->start, index); + + return pfn_to_page(PFN_DOWN(tlb_addr)); +} + +bool swiotlb_free(struct device *dev, struct page *page, size_t size) +{ + phys_addr_t tlb_addr = page_to_phys(page); + + if (!is_swiotlb_buffer(dev, tlb_addr)) + return false; + + release_slots(dev, tlb_addr); + + return true; +} + static int rmem_swiotlb_device_init(struct reserved_mem *rmem, struct device *dev) { From patchwork Thu May 27 13:03:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284203 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EAEA9C4708A for ; Thu, 27 May 2021 13:04:34 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8EDAE61132 for ; Thu, 27 May 2021 13:04:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8EDAE61132 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133296.248525 (Exim 4.92) (envelope-from ) id 1lmFgY-0007Is-V7; Thu, 27 May 2021 13:04:26 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133296.248525; Thu, 27 May 2021 13:04:26 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFgY-0007Il-Rc; Thu, 27 May 2021 13:04:26 +0000 Received: by outflank-mailman (input) for mailman id 133296; Thu, 27 May 2021 13:04:25 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFgX-0005ED-NI for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:04:25 +0000 Received: from mail-pg1-x52b.google.com (unknown [2607:f8b0:4864:20::52b]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id fe37d330-38dd-471c-8995-62dc68b67365; Thu, 27 May 2021 13:04:03 +0000 (UTC) Received: by mail-pg1-x52b.google.com with SMTP id e22so3628023pgv.10 for ; Thu, 27 May 2021 06:04:03 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id n1sm1934646pjv.40.2021.05.27.06.03.55 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:04:02 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: fe37d330-38dd-471c-8995-62dc68b67365 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=; b=IyvYtT1AB6E4xwxKSFDYkZ5n98xR0l1LZiUkJW0MW5CgH/i5aGIsnRF/JPxxLV994w 6zBxh1X3gHavrt/AsvfReKnILPfzC/l6kkFOeK3VrBIogUw16ui2Ro+3Hw3QPFy2pWGM +/sNzLuWK+E6zh1p0Gxt9RmiN+wGdEuT6cNDQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=; b=MqJGD+vBDelX55SvvdU4DoBwpTFbwBdYHQfzgHUg83IIGGkGOu9nwD2XxERVEdBReJ 
6x2QZnlBCe9jVf8ZviTBU2ySCQMeIlYlOAYwl2ilefARiMB7yVVEWVsMcoZAeETQooaF H0Bdi3NLR2ZC0kXgnaHyehznLIjL21XOLaYM5bgIcrxuth2nZj4Kukfh4KtfC/zh8oWo U+fFndDTX0JctkIVJwej16N58A1efEw/V/7m+5Dy7bxYgeT9JNNcovIajducGnjWMD5D lkzpBNdvMaVyuqFpjPQAvA2nhgn32+ef+tPcetNGl5TX7m9Tv8iKaUvw85Ml5br+bLvI Kn/w== X-Gm-Message-State: AOAM532H/GlXmAcnQqDTznkZzuViENO86xE+xTg7lt0wvK818jZgvQHa y0h56BHnaPrdemutDFuDRv8ZFw== X-Google-Smtp-Source: ABdhPJxiOdBGPcWtjBuPQLZxflfz/miVWdNfGLJRIpkM261jkyasjOonUIh30AqgExiX3771SeFH4Q== X-Received: by 2002:a63:4d11:: with SMTP id a17mr3584089pgb.266.1622120642890; Thu, 27 May 2021 06:04:02 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 13/15] dma-direct: Allocate memory from restricted DMA pool if available Date: Thu, 27 May 2021 21:03:27 +0800 Message-Id: <20210527130329.1853588-3-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 The restricted DMA pool is preferred if available. The restricted DMA pools provide a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., MPU. Note that since coherent allocation needs remapping, one must set up another device coherent pool by shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic coherent allocation. 
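
As a summary of the hunks below (the real code is in the diff), the allocation order in __dma_direct_alloc_pages() becomes:

/*
 * Resulting flow, summarized:
 *
 *  1. If the device has a restricted DMA pool, try swiotlb_alloc() and
 *     discard the result if it fails dma_coherent_ok().
 *  2. Otherwise, or on failure, fall back to dma_alloc_contiguous()
 *     as before.
 *
 * The added is_dev_swiotlb_force() checks keep such devices away from
 * the atomic pools and the arch-specific alloc/free paths, since their
 * buffers must come from the restricted region.
 */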
Signed-off-by: Claire Chang --- kernel/dma/direct.c | 38 +++++++++++++++++++++++++++++--------- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index eb4098323bbc..0d521f78c7b9 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size) { +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (swiotlb_free(dev, page, size)) + return; +#endif dma_free_contiguous(dev, page, size); } @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, &phys_limit); - page = dma_alloc_contiguous(dev, size, gfp); + +#ifdef CONFIG_DMA_RESTRICTED_POOL + page = swiotlb_alloc(dev, size); + if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { + __dma_direct_free_pages(dev, page, size); + page = NULL; + } +#endif + + if (!page) + page = dma_alloc_contiguous(dev, size, gfp); if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { dma_free_contiguous(dev, page, size); page = NULL; @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, gfp |= __GFP_NOWARN; if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && - !force_dma_unencrypted(dev)) { + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO); if (!page) return NULL; @@ -161,18 +175,23 @@ void *dma_direct_alloc(struct device *dev, size_t size, } if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - !dev_is_dma_coherent(dev)) + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && + !is_dev_swiotlb_force(dev)) return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); /* * Remapping or decrypting memory may block. If either is required and * we can't block, allocate the memory from the atomic pools. + * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must + * set up another device coherent pool by shared-dma-pool and use + * dma_alloc_from_dev_coherent instead. 
*/ if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && !gfpflags_allow_blocking(gfp) && (force_dma_unencrypted(dev) || - (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev)))) + (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && + !dev_is_dma_coherent(dev))) && + !is_dev_swiotlb_force(dev)) return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); /* we always manually zero the memory once we are done */ @@ -253,15 +272,15 @@ void dma_direct_free(struct device *dev, size_t size, unsigned int page_order = get_order(size); if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && - !force_dma_unencrypted(dev)) { + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { /* cpu_addr is a struct page cookie, not a kernel address */ dma_free_contiguous(dev, cpu_addr, size); return; } if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - !dev_is_dma_coherent(dev)) { + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && + !is_dev_swiotlb_force(dev)) { arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); return; } @@ -289,7 +308,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, void *ret; if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && - force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp)) + force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) && + !is_dev_swiotlb_force(dev)) return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); page = __dma_direct_alloc_pages(dev, size, gfp); From patchwork Thu May 27 13:03:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12284205 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FDA7C4707F for ; Thu, 27 May 2021 13:04:43 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 32F73610A2 for ; Thu, 27 May 2021 13:04:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 32F73610A2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.133299.248536 (Exim 4.92) (envelope-from ) id 1lmFgj-0007ia-7r; Thu, 27 May 2021 13:04:37 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 133299.248536; Thu, 27 May 2021 13:04:37 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lmFgj-0007iR-3h; Thu, 27 May 2021 13:04:37 +0000 Received: by outflank-mailman (input) for mailman id 133299; Thu, 27 May 2021 13:04:35 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) 
From patchwork Thu May 27 13:03:28 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12284205
From: Claire Chang
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 14/15] dt-bindings: of: Add restricted DMA pool Date: Thu, 27 May 2021 21:03:28 +0800 Message-Id: <20210527130329.1853588-4-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 Introduce the new compatible string, restricted-dma-pool, for restricted DMA. One can specify the address and length of the restricted DMA memory region by restricted-dma-pool in the reserved-memory node. Signed-off-by: Claire Chang --- .../reserved-memory/reserved-memory.txt | 36 +++++++++++++++++-- 1 file changed, 33 insertions(+), 3 deletions(-) diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt index e8d3096d922c..46804f24df05 100644 --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt @@ -51,6 +51,23 @@ compatible (optional) - standard definition used as a shared pool of DMA buffers for a set of devices. It can be used by an operating system to instantiate the necessary pool management subsystem if necessary. + - restricted-dma-pool: This indicates a region of memory meant to be + used as a pool of restricted DMA buffers for a set of devices. The + memory region would be the only region accessible to those devices. + When using this, the no-map and reusable properties must not be set, + so the operating system can create a virtual mapping that will be used + for synchronization. The main purpose for restricted DMA is to + mitigate the lack of DMA access control on systems without an IOMMU, + which could result in the DMA accessing the system memory at + unexpected times and/or unexpected addresses, possibly leading to data + leakage or corruption. The feature on its own provides a basic level + of protection against the DMA overwriting buffer contents at + unexpected times. However, to protect against general data leakage and + system memory corruption, the system needs to provide way to lock down + the memory access, e.g., MPU. Note that since coherent allocation + needs remapping, one must set up another device coherent pool by + shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic + coherent allocation. 
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for restricted dma pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
        #address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
                        compatible = "acme,multimedia-memory";
                        reg = <0x77000000 0x4000000>;
                };
+
+               restricted_dma_reserved: restricted_dma_reserved {
+                       compatible = "restricted-dma-pool";
+                       reg = <0x50000000 0x4000000>;
+               };
        };
 
        /* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
                memory-region = <&multimedia_reserved>;
                /* ... */
        };
+
+       pcie_device: pcie_device@0,0 {
+               reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+                      0x83010000 0x0 0x00100000 0x0 0x00100000>;
+               memory-region = <&restricted_dma_reserved>;
+               /* ... */
+       };
 };
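To make the note about atomic coherent allocation concrete, a board that needs
both bounce buffering and atomic coherent allocations would typically pair the
restricted-dma-pool region with a shared-dma-pool region and reference both
from the device node. The device-tree sketch below is purely illustrative:
node names, addresses and sizes are invented here, and whether a driver
actually consumes the second region depends on that driver.

reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        /* Bounce buffers and non-atomic coherent allocations. */
        restricted_dma: restricted-dma-pool@50000000 {
                compatible = "restricted-dma-pool";
                reg = <0x50000000 0x4000000>;
        };

        /* Device coherent pool for atomic allocations. */
        coherent_pool: coherent-pool@54000000 {
                compatible = "shared-dma-pool";
                no-map;
                reg = <0x54000000 0x400000>;
        };
};

wifi@10000000 {
        /* ... */
        memory-region = <&restricted_dma>, <&coherent_pool>;
};
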
From patchwork Thu May 27 13:03:29 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12284221
From: Claire Chang
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 15/15] of: Add plumbing for restricted DMA pool Date: Thu, 27 May 2021 21:03:29 +0800 Message-Id: <20210527130329.1853588-5-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 If a device is not behind an IOMMU, we look up the device node and set up the restricted DMA when the restricted-dma-pool is presented. Signed-off-by: Claire Chang --- drivers/of/address.c | 33 +++++++++++++++++++++++++++++++++ drivers/of/device.c | 3 +++ drivers/of/of_private.h | 6 ++++++ 3 files changed, 42 insertions(+) diff --git a/drivers/of/address.c b/drivers/of/address.c index aca94c348bd4..6cc7eaaf7e11 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -1112,6 +1113,38 @@ bool of_dma_is_coherent(struct device_node *np) } EXPORT_SYMBOL_GPL(of_dma_is_coherent); +int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np) +{ + struct device_node *node, *of_node = dev->of_node; + int count, i; + + count = of_property_count_elems_of_size(of_node, "memory-region", + sizeof(u32)); + /* + * If dev->of_node doesn't exist or doesn't contain memory-region, try + * the OF node having DMA configuration. + */ + if (count <= 0) { + of_node = np; + count = of_property_count_elems_of_size( + of_node, "memory-region", sizeof(u32)); + } + + for (i = 0; i < count; i++) { + node = of_parse_phandle(of_node, "memory-region", i); + /* + * There might be multiple memory regions, but only one + * restricted-dma-pool region is allowed. 
+                */
+               if (of_device_is_compatible(node, "restricted-dma-pool") &&
+                   of_device_is_available(node))
+                       return of_reserved_mem_device_init_by_idx(dev, of_node,
+                                                                 i);
+       }
+
+       return 0;
+}
+
 /**
  * of_mmio_is_nonposted - Check if device uses non-posted MMIO
  * @np:        device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..2defdca418ec 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
        arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+       if (!iommu)
+               return of_dma_set_restricted_buffer(dev, np);
+
        return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d717efbd637d..8fde97565d11 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
                const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
                const struct bus_dma_region **map)
 {
        return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+                                              struct device_node *np)
+{
+       return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
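
For completeness, a small hypothetical driver fragment (again not part of the
series; names and sizes are invented) illustrating the end-to-end effect once
of_dma_configure_id() has attached the restricted pool at probe time: streaming
mappings also need no driver changes, and the bounce through the restricted
region is expected to happen inside dma_map_single().

/* Hypothetical example; assumes the device got a restricted-dma-pool above. */
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int example_stream_to_device(struct device *dev, const void *data,
                                    size_t len)
{
        void *buf;
        dma_addr_t dma;

        buf = kmemdup(data, len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        /*
         * For a device restricted to its pool, swiotlb is expected to bounce
         * this mapping, so "dma" points into the reserved region, not at buf.
         */
        dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma)) {
                kfree(buf);
                return -ENOMEM;
        }

        /* ... start the transfer using "dma" ... */

        dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
        kfree(buf);
        return 0;
}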