From patchwork Tue Feb 9 06:21:18 2021
From: Claire Chang
Subject: [PATCH v4 01/14] swiotlb: Remove external access to io_tlb_start
Date: Tue, 9 Feb 2021 14:21:18 +0800
Message-Id: <20210209062131.2300005-2-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Add a new function, get_swiotlb_start(), and remove external access to
io_tlb_start, so we can entirely hide struct swiotlb inside of swiotlb.c
in the following patches.

Signed-off-by: Claire Chang
---
 arch/powerpc/platforms/pseries/svm.c | 4 ++--
 drivers/xen/swiotlb-xen.c            | 4 ++--
 include/linux/swiotlb.h              | 1 +
 kernel/dma/swiotlb.c                 | 5 +++++
 4 files changed, 10 insertions(+), 4 deletions(-)

This can be dropped if Christoph's swiotlb cleanups are landed.
https://lore.kernel.org/linux-iommu/20210207160934.2955931-1-hch@lst.de/T/#m7124f29b6076d462101fcff6433295157621da09

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a9..c10c51d49f3d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,8 +55,8 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;
 
-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
+	if (vstart)
+		memblock_free_early(vstart,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..91f8c68d1a9b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (is_swiotlb_active()) {
+		xen_io_tlb_start = phys_to_virt(get_swiotlb_start());
 		goto end;
 	}
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d9c9fc9ca5d2..83200f3b042a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -81,6 +81,7 @@ void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
+phys_addr_t get_swiotlb_start(void);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7c42df6e6100..e180211f6ad9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -719,6 +719,11 @@ bool is_swiotlb_active(void)
 	return io_tlb_end != 0;
 }
 
+phys_addr_t get_swiotlb_start(void)
+{
+	return io_tlb_start;
+}
+
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
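As a rough sketch of how this accessor pair is meant to be consumed (not
part of the patch; xen_probe_default_swiotlb() is a hypothetical caller,
while is_swiotlb_active() and get_swiotlb_start() are the real interfaces
used in the hunks above):

/* Sketch, not from the patch: probing the default swiotlb through the
 * accessors instead of the io_tlb_start global.
 */
#include <linux/io.h>
#include <linux/swiotlb.h>

static void *xen_probe_default_swiotlb(void)
{
	if (!is_swiotlb_active())	/* no bounce buffer was set up */
		return NULL;
	/* translate the pool's physical start to a kernel virtual address */
	return phys_to_virt(get_swiotlb_start());
}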
From patchwork Tue Feb 9 06:21:19 2021
From: Claire Chang
Subject: [PATCH v4 02/14] swiotlb: Move is_swiotlb_buffer() to swiotlb.c
Date: Tue, 9 Feb 2021 14:21:19 +0800
Message-Id: <20210209062131.2300005-3-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Move is_swiotlb_buffer() to swiotlb.c and make io_tlb_{start,end}
static, so we can entirely hide struct swiotlb inside of swiotlb.c in
the following patches.
Signed-off-by: Claire Chang
---
 include/linux/swiotlb.h | 7 +------
 kernel/dma/swiotlb.c    | 7 ++++++-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 83200f3b042a..041611bf3c2a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -70,13 +70,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
-extern phys_addr_t io_tlb_start, io_tlb_end;
-
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
-{
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
-}
 
+bool is_swiotlb_buffer(phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e180211f6ad9..678490d39e55 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -69,7 +69,7 @@ enum swiotlb_force swiotlb_force;
 * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
 * API.
 */
-phys_addr_t io_tlb_start, io_tlb_end;
+static phys_addr_t io_tlb_start, io_tlb_end;
 
 /*
 * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
@@ -719,6 +719,11 @@ bool is_swiotlb_active(void)
 	return io_tlb_end != 0;
 }
 
+bool is_swiotlb_buffer(phys_addr_t paddr)
+{
+	return paddr >= io_tlb_start && paddr < io_tlb_end;
+}
+
 phys_addr_t get_swiotlb_start(void)
 {
 	return io_tlb_start;
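With is_swiotlb_buffer() out of line, callers need only the declaration.
A hedged sketch of the typical call site (my_sync_for_cpu() is a
hypothetical name, not from this series; swiotlb_tbl_sync_single() and
SYNC_FOR_CPU are the existing swiotlb interfaces):

/* Sketch (assumption): a DMA sync path consulting the out-of-line
 * range check before bouncing data back for the CPU.
 */
#include <linux/swiotlb.h>

static void my_sync_for_cpu(struct device *dev, phys_addr_t paddr,
			    size_t size, enum dma_data_direction dir)
{
	/* bounce-buffered memory must be copied back before the CPU reads it */
	if (is_swiotlb_buffer(paddr))
		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
}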
From patchwork Tue Feb 9 06:21:20 2021
From: Claire Chang
Subject: [PATCH v4 03/14] swiotlb: Add struct swiotlb
Date: Tue, 9 Feb 2021 14:21:20 +0800
Message-Id: <20210209062131.2300005-4-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Add a new struct, swiotlb, as the IO TLB memory pool descriptor, and
move the relevant global variables into it. This will be useful later
when adding support for restricted DMA pools.
Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 327 +++++++++++++++++++++++--------------------
 1 file changed, 172 insertions(+), 155 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 678490d39e55..28b7bfe7a2a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -61,33 +61,43 @@
 * allocate a contiguous 1MB, we're probably in trouble anyway.
 */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
 
 enum swiotlb_force swiotlb_force;
 
 /*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static phys_addr_t io_tlb_start, io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * The number of used IO TLB block
- */
-static unsigned long io_tlb_used;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
+ * struct swiotlb - Software IO TLB Memory Pool Descriptor
+ *
+ * @start:	The start address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @end:	The end address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
+ *		@end. This is command line adjustable via setup_io_tlb_npages.
+ * @used:	The number of used IO TLB block.
+ * @list:	The free list describing the number of free entries available
+ *		from each index.
+ * @index:	The index to start searching in the next round.
+ * @orig_addr:	The original address corresponding to a mapped entry for the
+ *		sync operations.
+ * @lock:	The lock to protect the above data structures in the map and
+ *		unmap calls.
+ * @debugfs:	The dentry to debugfs.
 */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+struct swiotlb {
+	phys_addr_t start;
+	phys_addr_t end;
+	unsigned long nslabs;
+	unsigned long used;
+	unsigned int *list;
+	unsigned int index;
+	phys_addr_t *orig_addr;
+	spinlock_t lock;
+	struct dentry *debugfs;
+};
+static struct swiotlb default_swiotlb;
 
 /*
 * Max segment that we can provide which (if pages are contingous) will
@@ -95,27 +105,17 @@ static unsigned int io_tlb_index;
 */
 static unsigned int max_segment;
 
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
-static phys_addr_t *io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
 static int late_alloc;
 
 static int __init
 setup_io_tlb_npages(char *str)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
+
 	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		swiotlb->nslabs = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -123,7 +123,7 @@ setup_io_tlb_npages(char *str)
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
-		io_tlb_nslabs = 1;
+		swiotlb->nslabs = 1;
 	}
 
 	return 0;
@@ -134,7 +134,7 @@ static bool no_iotlb_memory;
 
 unsigned long swiotlb_nr_tbl(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs;
+	return unlikely(no_iotlb_memory) ? 0 : default_swiotlb.nslabs;
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
@@ -156,13 +156,14 @@ unsigned long swiotlb_size_or_default(void)
 {
 	unsigned long size;
 
-	size = io_tlb_nslabs << IO_TLB_SHIFT;
+	size = default_swiotlb.nslabs << IO_TLB_SHIFT;
 
 	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
 void __init swiotlb_adjust_size(unsigned long new_size)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long size;
 
 	/*
@@ -170,10 +171,10 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 	 * architectures such as those supporting memory encryption to
 	 * adjust/expand SWIOTLB size for their use.
 	 */
-	if (!io_tlb_nslabs) {
+	if (!swiotlb->nslabs) {
 		size = ALIGN(new_size, 1 << IO_TLB_SHIFT);
-		io_tlb_nslabs = size >> IO_TLB_SHIFT;
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = size >> IO_TLB_SHIFT;
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
 		pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 	}
@@ -181,14 +182,15 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 
 void swiotlb_print_info(void)
 {
-	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	if (no_iotlb_memory) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end,
+	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &swiotlb->start, &swiotlb->end,
 	       bytes >> 20);
 }
 
@@ -200,57 +202,61 @@ void swiotlb_print_info(void)
 */
 void __init swiotlb_update_mem_attributes(void)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	void *vaddr;
 	unsigned long bytes;
 
 	if (no_iotlb_memory || late_alloc)
 		return;
 
-	vaddr = phys_to_virt(io_tlb_start);
-	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	vaddr = phys_to_virt(swiotlb->start);
+	bytes = PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
 	memset(vaddr, 0, bytes);
 }
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 	size_t alloc_size;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = __pa(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = __pa(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	/*
 	 * Allocate and initialize the free list array. This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int));
-	io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_list)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(int));
+	swiotlb->list = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->list)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t));
-	io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_addr)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t));
+	swiotlb->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->orig_addr)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
+
 	return 0;
 }
 
@@ -261,26 +267,27 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 void  __init
 swiotlb_init(int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	size_t default_size = IO_TLB_DEFAULT_SIZE;
 	unsigned char *vstart;
 	unsigned long bytes;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	/* Get IO TLB memory from the low pages */
 	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
+	if (vstart && !swiotlb_init_with_tbl(vstart, swiotlb->nslabs, verbose))
 		return;
 
-	if (io_tlb_start) {
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
-		io_tlb_start = 0;
+	if (swiotlb->start) {
+		memblock_free_early(swiotlb->start,
+				    PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
+		swiotlb->start = 0;
 	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
@@ -294,22 +301,23 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long bytes, req_nslabs = io_tlb_nslabs;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes, req_nslabs = swiotlb->nslabs;
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
-	io_tlb_nslabs = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(swiotlb->nslabs << IO_TLB_SHIFT);
+	swiotlb->nslabs = SLABS_PER_PAGE << order;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
@@ -320,15 +328,15 @@ swiotlb_late_init_with_default_size(size_t default_size)
 	}
 
 	if (!vstart) {
-		io_tlb_nslabs = req_nslabs;
+		swiotlb->nslabs = req_nslabs;
 		return -ENOMEM;
 	}
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs = SLABS_PER_PAGE << order;
+		swiotlb->nslabs = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs);
+	rc = swiotlb_late_init_with_tbl(vstart, swiotlb->nslabs);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
@@ -337,22 +345,25 @@ swiotlb_late_init_with_default_size(size_t default_size)
 
 static void swiotlb_cleanup(void)
 {
-	io_tlb_end = 0;
-	io_tlb_start = 0;
-	io_tlb_nslabs = 0;
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	swiotlb->end = 0;
+	swiotlb->start = 0;
+	swiotlb->nslabs = 0;
 	max_segment = 0;
 }
 
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = virt_to_phys(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = virt_to_phys(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
 
@@ -360,39 +371,40 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	/*
 	 * Allocate and initialize the free list array. This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-				get_order(io_tlb_nslabs * sizeof(int)));
-	if (!io_tlb_list)
+	swiotlb->list = (unsigned int *)__get_free_pages(GFP_KERNEL,
+				get_order(swiotlb->nslabs * sizeof(int)));
+	if (!swiotlb->list)
 		goto cleanup3;
 
-	io_tlb_orig_addr = (phys_addr_t *)
+	swiotlb->orig_addr = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs *
+				 get_order(swiotlb->nslabs *
 					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr)
+	if (!swiotlb->orig_addr)
 		goto cleanup4;
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	swiotlb_print_info();
 
 	late_alloc = 1;
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
 
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-							 sizeof(int)));
-	io_tlb_list = NULL;
+	free_pages((unsigned long)swiotlb->list,
+		   get_order(swiotlb->nslabs * sizeof(int)));
+	swiotlb->list = NULL;
 cleanup3:
 	swiotlb_cleanup();
 	return -ENOMEM;
@@ -400,23 +412,25 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	if (!io_tlb_orig_addr)
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	if (!swiotlb->orig_addr)
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_addr,
-			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-								 sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(io_tlb_start),
-			   get_order(io_tlb_nslabs << IO_TLB_SHIFT));
+		free_pages((unsigned long)swiotlb->orig_addr,
+			   get_order(swiotlb->nslabs * sizeof(phys_addr_t)));
+		free_pages((unsigned long)swiotlb->list,
+			   get_order(swiotlb->nslabs * sizeof(int)));
+		free_pages((unsigned long)phys_to_virt(swiotlb->start),
+			   get_order(swiotlb->nslabs << IO_TLB_SHIFT));
 	} else {
-		memblock_free_late(__pa(io_tlb_orig_addr),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_list),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
-		memblock_free_late(io_tlb_start,
-				   PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		memblock_free_late(__pa(swiotlb->orig_addr),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t)));
+		memblock_free_late(__pa(swiotlb->list),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(int)));
+		memblock_free_late(swiotlb->start,
+				   PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
 	}
 	swiotlb_cleanup();
 }
@@ -465,7 +479,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
+	struct swiotlb *swiotlb = &default_swiotlb;
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -516,13 +531,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * Find suitable number of IO TLB entries size that will fit this
 	 * request and allocate a buffer from that IO TLB pool.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 
-	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+	if (unlikely(nslots > swiotlb->nslabs - swiotlb->used))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
+	index = ALIGN(swiotlb->index, stride);
+	if (index >= swiotlb->nslabs)
 		index = 0;
 	wrap = index;
 
@@ -530,7 +545,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs)
+			if (index >= swiotlb->nslabs)
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -541,40 +556,40 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
+		if (swiotlb->list[index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+				swiotlb->list[i] = 0;
+			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
+				swiotlb->list[i] = ++count;
+			tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
-					? (index + nslots) : 0);
+			swiotlb->index = ((index + nslots) < swiotlb->nslabs
+					  ? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs)
+		if (index >= swiotlb->nslabs)
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used;
+	tmp_io_tlb_used = swiotlb->used;
 
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
+			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
 	return (phys_addr_t)DMA_MAPPING_ERROR;
 
 found:
-	io_tlb_used += nslots;
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	swiotlb->used += nslots;
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
 	/*
 	 * Save away the mapping from the original address to the DMA address.
@@ -582,7 +597,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -597,10 +612,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, size_t alloc_size,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -616,36 +632,37 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 * While returning the entries to the free list, we merge the entries
 	 * with slots below and above the pool being returned.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
+			 swiotlb->list[index + nslots] : 0);
 
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[i] = ++count;
-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+			swiotlb->list[i] = ++count;
+			swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 		}
 
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && swiotlb->list[i]; i--)
+			swiotlb->list[i] = ++count;
 
-		io_tlb_used -= nslots;
+		swiotlb->used -= nslots;
 	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	struct swiotlb *swiotlb = &default_swiotlb;
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -713,31 +730,31 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 bool is_swiotlb_active(void)
 {
 	/*
-	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
-	 * address zero, io_tlb_end surely doesn't.
+	 * When SWIOTLB is initialized, even if swiotlb->start points to
+	 * physical address zero, swiotlb->end surely doesn't.
 	 */
-	return io_tlb_end != 0;
+	return default_swiotlb.end != 0;
 }
 
 bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
+	return paddr >= default_swiotlb.start && paddr < default_swiotlb.end;
 }
 
 phys_addr_t get_swiotlb_start(void)
 {
-	return io_tlb_start;
+	return default_swiotlb.start;
 }
 
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
 {
-	struct dentry *root;
+	struct swiotlb *swiotlb = &default_swiotlb;
 
-	root = debugfs_create_dir("swiotlb", NULL);
-	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
-	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
+	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
+	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
 
 	return 0;
 }
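The descriptor makes the slot arithmetic used throughout this diff
explicit. As a sketch (these two helpers are illustrative, not part of
the patch; they simply mirror the expressions already used in
swiotlb_tbl_map_single() and swiotlb_tbl_unmap_single() above):

/* Sketch, not from the patch: the index/address mapping that the pool
 * descriptor centralizes.
 */
static phys_addr_t slot_to_addr(struct swiotlb *swiotlb, unsigned int index)
{
	return swiotlb->start + ((phys_addr_t)index << IO_TLB_SHIFT);
}

static unsigned int addr_to_slot(struct swiotlb *swiotlb, phys_addr_t tlb_addr)
{
	return (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
}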
From patchwork Tue Feb 9 06:21:21 2021
From: Claire Chang
Subject: [PATCH v4 04/14] swiotlb: Refactor swiotlb_late_init_with_tbl
Date: Tue, 9 Feb 2021 14:21:21 +0800
Message-Id: <20210209062131.2300005-5-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Refactor swiotlb_late_init_with_tbl to make the code reusable for
restricted DMA pool initialization.
Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 65 ++++++++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 23 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 28b7bfe7a2a8..dc37951c6924 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -353,20 +353,21 @@ static void swiotlb_cleanup(void)
 	max_segment = 0;
 }
 
-int
-swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+static int swiotlb_init_tlb_pool(struct swiotlb *swiotlb, phys_addr_t start,
+				 size_t size)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-	unsigned long i, bytes;
+	unsigned long i;
+	void *vaddr = phys_to_virt(start);
 
-	bytes = nslabs << IO_TLB_SHIFT;
+	size = ALIGN(size, 1 << IO_TLB_SHIFT);
+	swiotlb->nslabs = size >> IO_TLB_SHIFT;
+	swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
-	swiotlb->nslabs = nslabs;
-	swiotlb->start = virt_to_phys(tlb);
-	swiotlb->end = swiotlb->start + bytes;
+	swiotlb->start = start;
+	swiotlb->end = swiotlb->start + size;
 
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
+	memset(vaddr, 0, size);
 
 	/*
 	 * Allocate and initialize the free list array. This array is used
@@ -390,13 +391,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	swiotlb->index = 0;
-	no_iotlb_memory = false;
-
-	swiotlb_print_info();
-	late_alloc = 1;
-
-	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
 	spin_lock_init(&swiotlb->lock);
 
 	return 0;
@@ -410,6 +405,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	return -ENOMEM;
 }
 
+int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+{
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
+	int ret;
+
+	ret = swiotlb_init_tlb_pool(swiotlb, virt_to_phys(tlb), bytes);
+	if (ret)
+		return ret;
+
+	no_iotlb_memory = false;
+
+	swiotlb_print_info();
+
+	late_alloc = 1;
+
+	swiotlb_set_max_segment(bytes);
+
+	return 0;
+}
+
 void __init swiotlb_exit(void)
 {
 	struct swiotlb *swiotlb = &default_swiotlb;
@@ -747,17 +763,20 @@ phys_addr_t get_swiotlb_start(void)
 }
 
 #ifdef CONFIG_DEBUG_FS
-
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct swiotlb *swiotlb, const char *name,
+				   struct dentry *node)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-
-	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	swiotlb->debugfs = debugfs_create_dir(name, node);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
-	return 0;
 }
-late_initcall(swiotlb_create_debugfs);
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	swiotlb_create_debugfs(&default_swiotlb, "swiotlb", NULL);
+
+	return 0;
+}
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
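The point of the refactor is that swiotlb_init_tlb_pool() no longer
touches default-pool state (no_iotlb_memory, late_alloc, the max
segment), so a second pool can reuse it. A sketch of such reuse inside
swiotlb.c (restricted_pool and restricted_pool_init() are hypothetical
names; the real user arrives in patch 06):

/* Sketch (assumption): a second, non-default pool initialized with the
 * factored-out helper, minus the default pool's global bookkeeping.
 */
static struct swiotlb restricted_pool;

static int restricted_pool_init(phys_addr_t base, size_t size)
{
	return swiotlb_init_tlb_pool(&restricted_pool, base, size);
}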
From patchwork Tue Feb 9 06:21:22 2021
From: Claire Chang
Subject: [PATCH v4 05/14] swiotlb: Add DMA_RESTRICTED_POOL
Date: Tue, 9 Feb 2021 14:21:22 +0800
Message-Id: <20210209062131.2300005-6-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Add a new kconfig symbol, DMA_RESTRICTED_POOL, for restricted DMA pool.

Signed-off-by: Claire Chang
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 479fc145acfc..97ff9f8dd3c8 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -83,6 +83,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  and .
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
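A sketch of what the new symbol gates (dev_uses_restricted_pool() is a
hypothetical helper, not from this series; the dev_swiotlb field it
tests is introduced by the next patch):

/* Sketch (assumption): code conditional on CONFIG_DMA_RESTRICTED_POOL.
 * With the option disabled, the helper compiles away to false.
 */
#include <linux/device.h>

static inline bool dev_uses_restricted_pool(struct device *dev)
{
#ifdef CONFIG_DMA_RESTRICTED_POOL
	return dev->dev_swiotlb != NULL;	/* added in patch 06 */
#else
	return false;
#endif
}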
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=CxLFhhx42ZtNAWWPxxTC2YTMXHtyxeDPJClSNQiYoYc=; b=QmhH21nsdCyP3anOIGhloWuW3HcMHlzz6pUCJ8ElxmyskhU20XjltIddFoivValrAn o68WFMa3cHaPf9XW+VlSZb3MP7JUP+XVy5DB6PgPDzKjtxXH2MVEDpwI6HjuFvdu5ip6 knxPnONCYq8l6nseSHT5kCGjIIcUx5oWOWGYJJo6NZZujMs4XnNi0JPsfzQuBf/75JWQ vrQPJf/fzOwyZowhwCNu/EPUlcupu0iNd6BkrKfVRYhWAXHkIT3FTOz/+pP5eAv6huw5 QXYIQsHMD11VMsDwB5QbvSPlb9+Sr4V6CVFEp9J5spgROotxWnAK042uqQav6jlZPBkI OOrg== X-Gm-Message-State: AOAM532b1R9/QSf3LxfQW98JtlWG4xbYW2jJhP6JsMCvzO0KKwHh5cEp bb3P36sg3aBUB0QoVnadLo3/mA== X-Google-Smtp-Source: ABdhPJwDHlCULV4tTgyY9CKwnf/NbNTRufsz7Pi36U/uwqUNeuQrkhr5EaPoNzogQlT61PzYunqO5A== X-Received: by 2002:a17:90a:3188:: with SMTP id j8mr2559404pjb.53.1612851746343; Mon, 08 Feb 2021 22:22:26 -0800 (PST) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , Claire Chang Subject: [PATCH v4 06/14] swiotlb: Add restricted DMA pool Date: Tue, 9 Feb 2021 14:21:23 +0800 Message-Id: <20210209062131.2300005-7-tientzu@chromium.org> X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org> References: <20210209062131.2300005-1-tientzu@chromium.org> MIME-Version: 1.0 Add the initialization function to create restricted DMA pools from matching reserved-memory nodes. Signed-off-by: Claire Chang --- include/linux/device.h | 4 ++ kernel/dma/swiotlb.c | 94 +++++++++++++++++++++++++++++++++++++++++- 2 files changed, 97 insertions(+), 1 deletion(-) diff --git a/include/linux/device.h b/include/linux/device.h index 7619a84f8ce4..08d440627b93 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -415,6 +415,7 @@ struct dev_links_info { * @dma_pools: Dma pools (if dma'ble device). * @dma_mem: Internal for coherent mem override. * @cma_area: Contiguous memory area for dma allocations + * @dev_swiotlb: Internal for swiotlb override. * @archdata: For arch-specific additions. * @of_node: Associated device tree node. * @fwnode: Associated device node supplied by platform firmware. @@ -517,6 +518,9 @@ struct device { #ifdef CONFIG_DMA_CMA struct cma *cma_area; /* contiguous memory area for dma allocations */ +#endif +#ifdef CONFIG_DMA_RESTRICTED_POOL + struct swiotlb *dev_swiotlb; #endif /* arch specific additions */ struct dev_archdata archdata; diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index dc37951c6924..3a17451c5981 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -39,6 +39,13 @@ #ifdef CONFIG_DEBUG_FS #include #endif +#ifdef CONFIG_DMA_RESTRICTED_POOL +#include +#include +#include +#include +#include +#endif #include #include @@ -75,7 +82,8 @@ enum swiotlb_force swiotlb_force; * range check to see if the memory was in fact allocated by this * API. 
* @nslabs: The number of IO TLB blocks (in groups of 64) between @start and - * @end. This is command line adjustable via setup_io_tlb_npages. + * @end. For default swiotlb, this is command line adjustable via + * setup_io_tlb_npages. * @used: The number of used IO TLB block. * @list: The free list describing the number of free entries available * from each index. @@ -780,3 +788,87 @@ static int __init swiotlb_create_default_debugfs(void) late_initcall(swiotlb_create_default_debugfs); #endif + +#ifdef CONFIG_DMA_RESTRICTED_POOL +static int rmem_swiotlb_device_init(struct reserved_mem *rmem, + struct device *dev) +{ + struct swiotlb *swiotlb = rmem->priv; + int ret; + + if (dev->dev_swiotlb) + return -EBUSY; + + /* Since multiple devices can share the same pool, the private data, + * swiotlb struct, will be initialized by the first device attached + * to it. + */ + if (!swiotlb) { + swiotlb = kzalloc(sizeof(*swiotlb), GFP_KERNEL); + if (!swiotlb) + return -ENOMEM; +#ifdef CONFIG_ARM + unsigned long pfn = PHYS_PFN(reme->base); + + if (!PageHighMem(pfn_to_page(pfn))) { + ret = -EINVAL; + goto cleanup; + } +#endif /* CONFIG_ARM */ + + ret = swiotlb_init_tlb_pool(swiotlb, rmem->base, rmem->size); + if (ret) + goto cleanup; + + rmem->priv = swiotlb; + } + +#ifdef CONFIG_DEBUG_FS + swiotlb_create_debugfs(swiotlb, rmem->name, default_swiotlb.debugfs); +#endif /* CONFIG_DEBUG_FS */ + + dev->dev_swiotlb = swiotlb; + + return 0; + +cleanup: + kfree(swiotlb); + + return ret; +} + +static void rmem_swiotlb_device_release(struct reserved_mem *rmem, + struct device *dev) +{ + if (!dev) + return; + +#ifdef CONFIG_DEBUG_FS + debugfs_remove_recursive(dev->dev_swiotlb->debugfs); +#endif /* CONFIG_DEBUG_FS */ + dev->dev_swiotlb = NULL; +} + +static const struct reserved_mem_ops rmem_swiotlb_ops = { + .device_init = rmem_swiotlb_device_init, + .device_release = rmem_swiotlb_device_release, +}; + +static int __init rmem_swiotlb_setup(struct reserved_mem *rmem) +{ + unsigned long node = rmem->fdt_node; + + if (of_get_flat_dt_prop(node, "reusable", NULL) || + of_get_flat_dt_prop(node, "linux,cma-default", NULL) || + of_get_flat_dt_prop(node, "linux,dma-default", NULL) || + of_get_flat_dt_prop(node, "no-map", NULL)) + return -EINVAL; + + rmem->ops = &rmem_swiotlb_ops; + pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %ld MiB\n", + &rmem->base, (unsigned long)rmem->size / SZ_1M); + return 0; +} + +RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup); +#endif /* CONFIG_DMA_RESTRICTED_POOL */ From patchwork Tue Feb 9 06:21:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 12077199 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06D09C433E6 for ; Tue, 9 Feb 2021 06:22:45 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org 
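For orientation (not part of the patch): a pool declared with compatible
"restricted-dma-pool" only takes effect once a device binds to it through
its memory-region property. A minimal, hypothetical consumer sketch, built
on the existing reserved-memory plumbing:

    #include <linux/of_reserved_mem.h>
    #include <linux/platform_device.h>

    /* Hypothetical driver probe: binding to the reserved-memory node
     * referenced by the device's memory-region property ends up calling
     * rmem_swiotlb_device_init() above through rmem->ops->device_init,
     * which sets dev->dev_swiotlb. */
    static int example_probe(struct platform_device *pdev)
    {
    	int ret;

    	ret = of_reserved_mem_device_init(&pdev->dev);
    	if (ret)
    		return ret;

    	/* From here on, this device's streaming DMA can be bounced
    	 * through its restricted pool. */
    	return 0;
    }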
From patchwork Tue Feb 9 06:21:24 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077199
From: Claire Chang
Subject: [PATCH v4 07/14] swiotlb: Update swiotlb API to gain a struct device argument
Date: Tue, 9 Feb 2021 14:21:24 +0800
Message-Id: <20210209062131.2300005-8-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Introduce the get_swiotlb() getter and update all callers of
is_swiotlb_active(), is_swiotlb_buffer() and get_swiotlb_start()
to gain a struct device argument.

Signed-off-by: Claire Chang
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  4 ++--
 include/linux/swiotlb.h   | 10 +++++-----
 kernel/dma/direct.c       |  8 ++++----
 kernel/dma/direct.h       |  6 +++---
 kernel/dma/swiotlb.c      | 23 +++++++++++++++++------
 6 files changed, 37 insertions(+), 26 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f659395e7959..abdbe14472cc 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -503,7 +503,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size,
 				iova_align(iovad, size), dir, attrs);
 }
@@ -580,7 +580,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys))
+	if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size,
 				aligned_size, dir, attrs);
 
@@ -753,7 +753,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU);
 }
 
@@ -766,7 +766,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
@@ -787,7 +787,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
 						dir, SYNC_FOR_CPU);
 	}
@@ -804,7 +804,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_tbl_sync_single(dev, sg_phys(sg),
 						sg->length, dir,
 						SYNC_FOR_DEVICE);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 91f8c68d1a9b..f424d46756b1 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (is_swiotlb_active()) {
-		xen_io_tlb_start = phys_to_virt(get_swiotlb_start());
+	if (is_swiotlb_active(NULL)) {
+		xen_io_tlb_start = phys_to_virt(get_swiotlb_start(NULL));
 		goto end;
 	}
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 041611bf3c2a..f13a52a97382 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -71,16 +71,16 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
 
-bool is_swiotlb_buffer(phys_addr_t paddr);
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
-phys_addr_t get_swiotlb_start(void);
+bool is_swiotlb_active(struct device *dev);
+phys_addr_t get_swiotlb_start(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
@@ -96,7 +96,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 002268262c9a..30ccbc08e229 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_tbl_sync_single(dev, paddr, sg->length,
 					dir, SYNC_FOR_DEVICE);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_tbl_sync_single(dev, paddr, sg->length, dir,
 					SYNC_FOR_CPU);
 
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+		is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index b98615578737..7b83b1595989 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3a17451c5981..e22e7ae75f1c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -107,6 +107,11 @@ struct swiotlb {
 };
 static struct swiotlb default_swiotlb;
 
+static inline struct swiotlb *get_swiotlb(struct device *dev)
+{
+	return &default_swiotlb;
+}
+
 /*
  * Max segment that we can provide which (if pages are contingous) will
  * not be bounced (unless SWIOTLB_FORCE is set).
@@ -751,23 +756,29 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
 	/*
 	 * When SWIOTLB is initialized, even if swiotlb->start points to
 	 * physical address zero, swiotlb->end surely doesn't.
 	 */
-	return default_swiotlb.end != 0;
+	return swiotlb->end != 0;
 }
 
-bool is_swiotlb_buffer(phys_addr_t paddr)
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	return paddr >= default_swiotlb.start && paddr < default_swiotlb.end;
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
+	return paddr >= swiotlb->start && paddr < swiotlb->end;
 }
 
-phys_addr_t get_swiotlb_start(void)
+phys_addr_t get_swiotlb_start(struct device *dev)
 {
-	return default_swiotlb.start;
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
+	return swiotlb->start;
 }
 
 #ifdef CONFIG_DEBUG_FS
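The shape of the new API at a call site, sketched for illustration (the
helper name is made up; the real callers are shown in the hunks above):

    /* Illustrative helper: with the device-aware signatures, the same
     * code works whether 'paddr' lives in the global pool or, once the
     * later patches land, in a per-device restricted pool. */
    static void example_sync_for_cpu(struct device *dev, phys_addr_t paddr,
    				     size_t size, enum dma_data_direction dir)
    {
    	if (is_swiotlb_buffer(dev, paddr))
    		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
    }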
From patchwork Tue Feb 9 06:21:25 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077201
From: Claire Chang
Subject: [PATCH v4 08/14] swiotlb: Use restricted DMA pool if available
Date: Tue, 9 Feb 2021 14:21:25 +0800
Message-Id: <20210209062131.2300005-9-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Regardless of swiotlb setting, the restricted DMA pool is preferred if
available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Signed-off-by: Claire Chang
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.h     |  2 +-
 kernel/dma/swiotlb.c    | 20 +++++++++++++++++---
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f13a52a97382..76f86c684524 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -71,6 +71,15 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+bool is_swiotlb_force(struct device *dev);
+#else
+static inline bool is_swiotlb_force(struct device *dev)
+{
+	return unlikely(swiotlb_force == SWIOTLB_FORCE);
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
@@ -80,6 +89,10 @@ phys_addr_t get_swiotlb_start(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
+static inline bool is_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 7b83b1595989..b011db1b625d 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e22e7ae75f1c..6fdebde8fb1f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -40,6 +40,7 @@
 #include <linux/debugfs.h>
 #endif
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/device.h>
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_fdt.h>
@@ -109,6 +110,10 @@ static struct swiotlb default_swiotlb;
 
 static inline struct swiotlb *get_swiotlb(struct device *dev)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dev_swiotlb)
+		return dev->dev_swiotlb;
+#endif
 	return &default_swiotlb;
 }
 
@@ -508,7 +513,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
@@ -519,7 +524,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	unsigned long max_slots;
 	unsigned long tmp_io_tlb_used;
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (no_iotlb_memory && !hwdev->dev_swiotlb)
+#else
 	if (no_iotlb_memory)
+#endif
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
@@ -641,7 +650,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
@@ -689,7 +698,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir,
 		enum dma_sync_target target)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
@@ -801,6 +810,11 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+bool is_swiotlb_force(struct device *dev)
+{
+	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
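The net effect on the direct-mapping fast path can be sketched as follows
(simplified; the dma_capable() handling of the real code is elided, and the
function name is illustrative):

    /* Sketch of the decision dma_direct_map_page() now makes: a device
     * that owns a restricted pool always bounces; everyone else bounces
     * only under swiotlb=force or when the address is out of reach. */
    static dma_addr_t example_map(struct device *dev, phys_addr_t phys,
    			          size_t size, enum dma_data_direction dir,
    			          unsigned long attrs)
    {
    	if (is_swiotlb_force(dev))
    		return swiotlb_map(dev, phys, size, dir, attrs);

    	return phys_to_dma(dev, phys);	/* dma_capable() check elided */
    }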
From patchwork Tue Feb 9 06:21:26 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077203
From: Claire Chang
Subject: [PATCH v4 09/14] swiotlb: Refactor swiotlb_tbl_{map,unmap}_single
Date: Tue, 9 Feb 2021 14:21:26 +0800
Message-Id: <20210209062131.2300005-10-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Refactor swiotlb_tbl_{map,unmap}_single to make the code reusable for
dev_swiotlb_{alloc,free}.

Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 116 ++++++++++++++++++++++++++-----------------
 1 file changed, 71 insertions(+), 45 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6fdebde8fb1f..f64cbe6e84cc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -509,14 +509,12 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
-		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+static int swiotlb_tbl_find_free_region(struct device *hwdev,
+					dma_addr_t tbl_dma_addr,
+					size_t alloc_size, unsigned long attrs)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
-	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
 	int i;
 	unsigned long mask;
@@ -531,15 +529,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 #endif
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
-	if (mem_encrypt_active())
-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
-
-	if (mapping_size > alloc_size) {
-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
-			      mapping_size, alloc_size);
-		return (phys_addr_t)DMA_MAPPING_ERROR;
-	}
-
 	mask = dma_get_seg_boundary(hwdev);
 
 	tbl_dma_addr &= mask;
@@ -601,7 +590,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 			swiotlb->list[i] = 0;
 		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
 			swiotlb->list[i] = ++count;
-		tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 		/*
 		 * Update the indices to avoid searching in the next
@@ -624,45 +612,20 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
 			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
-	return (phys_addr_t)DMA_MAPPING_ERROR;
+	return -ENOMEM;
+
 found:
 	swiotlb->used += nslots;
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	for (i = 0; i < nslots; i++)
-		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
-	return tlb_addr;
+	return index;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+static void swiotlb_tbl_release_region(struct device *hwdev, int index,
+				       size_t size)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = swiotlb->orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -694,6 +657,69 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
+	phys_addr_t tlb_addr;
+	unsigned int nslots;
+	int index, i;
+
+	if (mem_encrypt_active())
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+
+	if (mapping_size > alloc_size) {
+		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+			      mapping_size, alloc_size);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	index = swiotlb_tbl_find_free_region(hwdev, tbl_dma_addr, alloc_size,
+					     attrs);
+	if (index < 0)
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+
+	tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	for (i = 0; i < nslots; i++)
+		swiotlb->orig_addr[index + i] = orig_addr + (i << IO_TLB_SHIFT);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+
+	return tlb_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_tbl_release_region(hwdev, index, alloc_size);
+}
+
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir,
 		enum dma_sync_target target)
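Splitting the slot search and slot release out of the map/unmap pair leaves
two primitives that later patches can reuse for plain buffer allocation. A
hypothetical in-file caller, to show how they compose (both helpers are
static to swiotlb.c, so this only makes sense inside that file):

    /* Hypothetical caller inside swiotlb.c: reserve a region without any
     * bounce/sync bookkeeping, much as dev_swiotlb_alloc() in patch 12
     * is expected to do. */
    static phys_addr_t example_reserve(struct device *dev, size_t alloc_size)
    {
    	struct swiotlb *swiotlb = get_swiotlb(dev);
    	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, swiotlb->start);
    	int index;

    	index = swiotlb_tbl_find_free_region(dev, tbl_dma_addr, alloc_size, 0);
    	if (index < 0)
    		return (phys_addr_t)DMA_MAPPING_ERROR;

    	return swiotlb->start + ((phys_addr_t)index << IO_TLB_SHIFT);
    }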
From patchwork Tue Feb 9 06:21:27 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077205
From: Claire Chang
Subject: [PATCH v4 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Tue, 9 Feb 2021 14:21:27 +0800
Message-Id: <20210209062131.2300005-11-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Add a new wrapper __dma_direct_free_pages() that will be useful later
for dev_swiotlb_free().

Signed-off-by: Claire Chang
---
 kernel/dma/direct.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 30ccbc08e229..a76a1a2f24da 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,11 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +242,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +278,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +315,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +334,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
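The wrapper is a pure indirection at this point; patch 12 of this series
gives it a job. Its eventual shape, quoted ahead of time from that patch
for context:

    /* As of patch 12, the restricted pool gets first refusal on the page
     * before the wrapper falls back to the contiguous allocator. */
    static void __dma_direct_free_pages(struct device *dev, struct page *page,
    				        size_t size)
    {
    #ifdef CONFIG_DMA_RESTRICTED_POOL
    	if (dev_swiotlb_free(dev, page, size))
    		return;
    #endif
    	dma_free_contiguous(dev, page, size);
    }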
From patchwork Tue Feb 9 06:21:28 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077207
From: Claire Chang
Subject: [PATCH v4 11/14] swiotlb: Add is_dev_swiotlb_force()
Date: Tue, 9 Feb 2021 14:21:28 +0800
Message-Id: <20210209062131.2300005-12-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Add is_dev_swiotlb_force(), which returns true if the device has a
restricted DMA pool (i.e., dev->dev_swiotlb is set).

Signed-off-by: Claire Chang
---
 include/linux/swiotlb.h | 9 +++++++++
 kernel/dma/swiotlb.c    | 5 +++++
 2 files changed, 14 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 76f86c684524..b9f2a250c8da 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,11 +73,16 @@ extern enum swiotlb_force swiotlb_force;
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
 bool is_swiotlb_force(struct device *dev);
+bool is_dev_swiotlb_force(struct device *dev);
 #else
 static inline bool is_swiotlb_force(struct device *dev)
 {
 	return unlikely(swiotlb_force == SWIOTLB_FORCE);
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
 
 bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
@@ -93,6 +98,10 @@ static inline bool is_swiotlb_force(struct device *dev)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f64cbe6e84cc..fd9c1bd183ac 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -841,6 +841,11 @@ bool is_swiotlb_force(struct device *dev)
 	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
 }
 
+bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dev_swiotlb;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
From patchwork Tue Feb 9 06:21:29 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077209
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , Claire Chang Subject: [PATCH v4 12/14] swiotlb: Add restricted DMA alloc/free support. Date: Tue, 9 Feb 2021 14:21:29 +0800 Message-Id: <20210209062131.2300005-13-tientzu@chromium.org> X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org> References: <20210209062131.2300005-1-tientzu@chromium.org> MIME-Version: 1.0 Add the functions, dev_swiotlb_{alloc,free} to support the memory allocation from restricted DMA pool. Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 2 ++ kernel/dma/direct.c | 30 ++++++++++++++++++++++-------- kernel/dma/swiotlb.c | 34 ++++++++++++++++++++++++++++++++++ 3 files changed, 58 insertions(+), 8 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index b9f2a250c8da..2cd39e102915 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -74,6 +74,8 @@ extern enum swiotlb_force swiotlb_force; #ifdef CONFIG_DMA_RESTRICTED_POOL bool is_swiotlb_force(struct device *dev); bool is_dev_swiotlb_force(struct device *dev); +struct page *dev_swiotlb_alloc(struct device *dev, size_t size, gfp_t gfp); +bool dev_swiotlb_free(struct device *dev, struct page *page, size_t size); #else static inline bool is_swiotlb_force(struct device *dev) { diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index a76a1a2f24da..f9a9321f7559 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include "direct.h" @@ -77,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size) { +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (dev_swiotlb_free(dev, page, size)) + return; +#endif dma_free_contiguous(dev, page, size); } @@ -89,6 +94,12 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, WARN_ON_ONCE(!PAGE_ALIGNED(size)); +#ifdef CONFIG_DMA_RESTRICTED_POOL + page = dev_swiotlb_alloc(dev, size, gfp); + if (page) + return page; +#endif + gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, &phys_limit); page = dma_alloc_contiguous(dev, size, gfp); @@ -147,7 +158,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, gfp |= __GFP_NOWARN; if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && - !force_dma_unencrypted(dev)) { + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO); if (!page) return NULL; @@ -160,8 +171,8 @@ void *dma_direct_alloc(struct device *dev, size_t size, } if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - !dev_is_dma_coherent(dev)) + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && + !is_dev_swiotlb_force(dev)) return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); /* @@ -171,7 +182,9 @@ void *dma_direct_alloc(struct device *dev, size_t size, if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && !gfpflags_allow_blocking(gfp) && (force_dma_unencrypted(dev) || - (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev)))) + (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && + !dev_is_dma_coherent(dev))) && + !is_dev_swiotlb_force(dev)) return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); /* we 
 	/* we always manually zero the memory once we are done */

@@ -252,15 +265,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);

 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}

 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}

@@ -288,7 +301,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;

 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

 	page = __dma_direct_alloc_pages(dev, size, gfp);

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fd9c1bd183ac..8b77fd64199e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -836,6 +836,40 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif

 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *dev_swiotlb_alloc(struct device *dev, size_t size, gfp_t gfp)
+{
+	struct swiotlb *swiotlb;
+	phys_addr_t tlb_addr;
+	int index;
+
+	/* dev_swiotlb_alloc() may only be called from a context that permits sleeping. */
+	if (!dev->dev_swiotlb || !gfpflags_allow_blocking(gfp))
+		return NULL;
+
+	swiotlb = dev->dev_swiotlb;
+	index = swiotlb_tbl_find_free_region(dev, swiotlb->start, size, 0);
+	if (index < 0)
+		return NULL;
+
+	tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool dev_swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	unsigned int index;
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	index = (tlb_addr - dev->dev_swiotlb->start) >> IO_TLB_SHIFT;
+	swiotlb_tbl_release_region(dev, index, size);
+
+	return true;
+}
+
 bool is_swiotlb_force(struct device *dev)
 {
 	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
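[Editor's sketch, not part of the patch.] As a sanity check on the slab-index
arithmetic above, assuming the kernel's 2 KiB SWIOTLB slab size
(IO_TLB_SHIFT == 11) and a made-up pool base, the address/index round trip
works out as follows; this is a standalone userspace program, not kernel code:

	#include <assert.h>

	#define IO_TLB_SHIFT 11	/* 2 KiB slabs, matching the kernel definition */

	/* dev_swiotlb_alloc(): tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT) */
	static unsigned long index_to_addr(unsigned long start, int index)
	{
		return start + ((unsigned long)index << IO_TLB_SHIFT);
	}

	/* dev_swiotlb_free(): index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT */
	static int addr_to_index(unsigned long start, unsigned long tlb_addr)
	{
		return (int)((tlb_addr - start) >> IO_TLB_SHIFT);
	}

	int main(void)
	{
		unsigned long start = 0x50000000UL;	/* hypothetical pool base */

		/* slab 8 sits 8 * 2 KiB = 16 KiB into the pool */
		assert(index_to_addr(start, 8) == 0x50004000UL);
		assert(addr_to_index(start, 0x50004000UL) == 8);
		return 0;
	}

Note that the index returned by swiotlb_tbl_find_free_region() must be kept
in a signed variable, since a negative value signals failure.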
From patchwork Tue Feb 9 06:21:30 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077211
From: Claire Chang
Subject: [PATCH v4 13/14] dt-bindings: of: Add restricted DMA pool
Date: Tue, 9 Feb 2021 14:21:30 +0800
Message-Id: <20210209062131.2300005-14-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region are
specified by a restricted-dma-pool child node of the reserved-memory
node.

Signed-off-by: Claire Chang

---
 .../reserved-memory/reserved-memory.txt       | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..fc9a12c2f679 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,20 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region is the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be
+          set, so the operating system can create a virtual mapping that
+          will be used for synchronization. The main purpose of restricted
+          DMA is to mitigate the lack of DMA access control on systems
+          without an IOMMU, where DMA could otherwise reach system memory
+          at unexpected times and/or unexpected addresses, possibly leading
+          to data leakage or corruption. On its own, the feature provides a
+          basic level of protection against the DMA overwriting buffer
+          contents at unexpected times. However, to protect against general
+          data leakage and system memory corruption, the system needs to
+          provide a way to lock down the memory access, e.g., an MPU.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping

@@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};

	/* ... */

@@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
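[Editor's sketch, not part of the patch.] To make the size encoding concrete:
reg = <0x50000000 0x400000> in the example above reserves a 4 MiB pool based
at physical address 0x50000000. A second, hypothetical consumer (the node
names and addresses below are illustrative, not from the patch) would be
wired up the same way as the pcie_device example:

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		wifi_restricted_dma: wifi_restricted_dma {
			compatible = "restricted-dma-pool";
			reg = <0x60000000 0x800000>;	/* 8 MiB at 0x60000000 */
		};
	};

	wifi: wifi@fa000000 {
		/* all DMA buffers for this device come from the pool above */
		memory-region = <&wifi_restricted_dma>;
		/* ... */
	};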
From patchwork Tue Feb 9 06:21:31 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12077213

From: Claire Chang
Subject: [PATCH v4 14/14] of: Add plumbing for restricted DMA pool
Date: Tue, 9 Feb 2021 14:21:31 +0800
Message-Id: <20210209062131.2300005-15-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>

If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when a restricted-dma-pool memory region is
present.

Signed-off-by: Claire Chang

---
 drivers/of/address.c    | 25 +++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..b6093c9b135d 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/of_reserved_mem.h>
 #include <...>
 #include <...>
 #include <...>
@@ -1094,3 +1095,27 @@ bool of_dma_is_coherent(struct device_node *np)
 	return false;
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
+
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+	}
+
+	return 0;
+}

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1122daa8e273..38c631f1fafa 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -186,6 +186,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,

 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);

+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);

diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..28a2dfa197ba 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif

 #endif /* _LINUX_OF_PRIVATE_H */
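[Editor's sketch, not part of the series.] Taken together with the earlier
patches, the flow is transparent to drivers: once the device node carries a
restricted-dma-pool memory-region, ordinary DMA API calls land in the pool.
A hedged sketch of the driver-side view, where the probe function and device
are hypothetical and only the call chain described in the comment reflects
the series:

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	/* Hypothetical driver: nothing restricted-DMA-specific is needed here. */
	static int example_probe(struct device *dev)
	{
		dma_addr_t dma;
		void *buf;

		/*
		 * Before probe, of_dma_configure_id() saw no IOMMU and called
		 * of_dma_set_restricted_buffer(), which attached the reserved
		 * region and set dev->dev_swiotlb. dma_direct_alloc() then
		 * routes this request through dev_swiotlb_alloc(), so the
		 * buffer is carved out of the device's restricted pool.
		 */
		buf = dma_alloc_coherent(dev, SZ_4K, &dma, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* ... program the device with 'dma' ... */

		dma_free_coherent(dev, SZ_4K, buf, dma);	/* dev_swiotlb_free() path */
		return 0;
	}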