From patchwork Thu Feb 11 14:34:13 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8281191
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com, christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: Thomas.Lendacky@amd.com, brijesh.singh@amd.com, patches@linaro.org, Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com, linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com, iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com, leo.duran@amd.com, suravee.suthikulpanit@amd.com, sherry.hurwitz@amd.com
Subject: [RFC v2 06/15] iommu/arm-smmu: add a reserved binding RB tree
Date: Thu, 11 Feb 2016 14:34:13 +0000
Message-Id: <1455201262-5259-7-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>
References: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>

We will need to track which host physical addresses are mapped to
reserved IOVAs. To that end, we introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA bindings
and is expected to contain very few entries. Those generally correspond
to a single page mapping one MSI frame (GICv2m frame or ITS
GITS_TRANSLATER frame).
Signed-off-by: Eric Auger
---
 drivers/iommu/arm-smmu.c | 65 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 64 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index f42341d..729a4c6 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -349,10 +349,21 @@ struct arm_smmu_domain {
 	struct mutex		init_mutex; /* Protects smmu pointer */
 	struct iommu_domain	domain;
 	struct iova_domain	*reserved_iova_domain;
-	/* protects reserved domain manipulation */
+	/* rb tree indexed by PA, for reserved bindings only */
+	struct rb_root		reserved_binding_list;
+	/* protects reserved domain and rbtree manipulation */
 	struct mutex		reserved_mutex;
 };
 
+struct arm_smmu_reserved_binding {
+	struct kref		kref;
+	struct rb_node		node;
+	struct arm_smmu_domain	*domain;
+	phys_addr_t		addr;
+	dma_addr_t		iova;
+	size_t			size;
+};
+
 static struct iommu_ops arm_smmu_ops;
 
 static DEFINE_SPINLOCK(arm_smmu_devices_lock);
@@ -400,6 +411,57 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 	return dev->of_node;
 }
 
+/* Reserved binding RB-tree manipulation */
+
+static struct arm_smmu_reserved_binding *find_reserved_binding(
+				    struct arm_smmu_domain *d,
+				    phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct arm_smmu_reserved_binding *binding =
+			rb_entry(node, struct arm_smmu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void link_reserved_binding(struct arm_smmu_domain *d,
+				  struct arm_smmu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct arm_smmu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct arm_smmu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void unlink_reserved_binding(struct arm_smmu_domain *d,
+				    struct arm_smmu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
 						struct device_node *dev_node)
 {
@@ -981,6 +1043,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	mutex_init(&smmu_domain->reserved_mutex);
 	spin_lock_init(&smmu_domain->pgtbl_lock);
+	smmu_domain->reserved_binding_list = RB_ROOT;
 
 	return &smmu_domain->domain;
 }
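
[Editor's note] For context, here is a minimal sketch of how a caller inside
arm-smmu.c might combine the helpers above with the embedded kref and the
reserved_mutex. It is not part of this patch; the get_reserved_binding() /
put_reserved_binding() / reserved_binding_release() names are hypothetical
and only illustrate the intended lookup, refcounting and locking pattern
around the RB tree. It assumes the file's existing <linux/slab.h> and
<linux/kref.h> includes.

	/* Illustrative only -- not part of this patch. */
	static void reserved_binding_release(struct kref *kref)
	{
		struct arm_smmu_reserved_binding *b =
			container_of(kref, struct arm_smmu_reserved_binding, kref);

		/* caller must hold b->domain->reserved_mutex */
		unlink_reserved_binding(b->domain, b);
		kfree(b);
	}

	/* Take (or create) a reference on the binding covering [addr, addr + size) */
	static int get_reserved_binding(struct arm_smmu_domain *d, phys_addr_t addr,
					dma_addr_t iova, size_t size)
	{
		struct arm_smmu_reserved_binding *b;

		mutex_lock(&d->reserved_mutex);
		b = find_reserved_binding(d, addr, size);
		if (b) {
			/* the PA range is already bound: just take a reference */
			kref_get(&b->kref);
			goto out;
		}

		b = kzalloc(sizeof(*b), GFP_KERNEL);
		if (!b) {
			mutex_unlock(&d->reserved_mutex);
			return -ENOMEM;
		}

		b->domain = d;
		b->addr = addr;
		b->iova = iova;
		b->size = size;
		kref_init(&b->kref);
		link_reserved_binding(d, b);
	out:
		mutex_unlock(&d->reserved_mutex);
		return 0;
	}

	/* Drop a reference; the binding is unlinked and freed on the last put */
	static void put_reserved_binding(struct arm_smmu_domain *d,
					 phys_addr_t addr, size_t size)
	{
		struct arm_smmu_reserved_binding *b;

		mutex_lock(&d->reserved_mutex);
		b = find_reserved_binding(d, addr, size);
		if (b)
			kref_put(&b->kref, reserved_binding_release);
		mutex_unlock(&d->reserved_mutex);
	}

Because find_reserved_binding() compares whole [addr, addr + size) ranges
rather than exact keys, a second MSI mapping that hits an already bound
doorbell frame only bumps the kref instead of inserting a duplicate node.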