From patchwork Tue Jul 26 03:23:11 2022
X-Patchwork-Submitter: Marek Marczykowski-Górecki
X-Patchwork-Id: 12928597
From: Marek Marczykowski-Górecki
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki, Kevin Tian
Subject: [PATCH v3 06/10] IOMMU/VT-d: wire common device reserved memory API
Date: Tue, 26 Jul 2022 05:23:11 +0200
Message-Id: <0670dc3600aac44532d73fced60b457848ec2ceb.1658804819.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0

Re-use rmrr= parameter handling code to handle common device reserved memory.

Signed-off-by: Marek Marczykowski-Górecki
---
Changes in v3:
- make MAX_USER_RMRR_PAGES applicable only to user-configured RMRR
---
 xen/drivers/passthrough/vtd/dmar.c | 201 +++++++++++++++++-------------
 1 file changed, 119 insertions(+), 82 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 367304c8739c..3df5f6b69719 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -861,111 +861,139 @@ static struct user_rmrr __initdata user_rmrrs[MAX_USER_RMRR];
 
 /* Macro for RMRR inclusive range formatting. */
 #define ERMRRU_FMT "[%lx-%lx]"
-#define ERMRRU_ARG(eru) eru.base_pfn, eru.end_pfn
+#define ERMRRU_ARG base_pfn, end_pfn
+
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    uint32_t *sbdf);
 
 static int __init add_user_rmrr(void)
 {
+    unsigned int i;
+    int ret;
+
+    for ( i = 0; i < nr_rmrr; i++ )
+    {
+        ret = add_one_user_rmrr(user_rmrrs[i].base_pfn,
+                                user_rmrrs[i].end_pfn,
+                                user_rmrrs[i].dev_count,
+                                user_rmrrs[i].sbdf);
+        if ( ret < 0 )
+            return ret;
+    }
+    return 0;
+}
+
+/* Returns 1 on success, 0 when ignoring and < 0 on error. */
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    uint32_t *sbdf)
+{
     struct acpi_rmrr_unit *rmrr, *rmrru;
-    unsigned int idx, seg, i;
-    unsigned long base, end;
+    unsigned int idx, seg;
+    unsigned long base_iter;
     bool overlap;
 
-    for ( i = 0; i < nr_rmrr; i++ )
+    if ( iommu_verbose )
+        printk(XENLOG_DEBUG VTDPREFIX
+               "Adding RMRR for %d device ([0]: %#x) range "ERMRRU_FMT"\n",
+               dev_count, sbdf[0], ERMRRU_ARG);
+
+    if ( base_pfn > end_pfn )
     {
-        base = user_rmrrs[i].base_pfn;
-        end = user_rmrrs[i].end_pfn;
+        printk(XENLOG_ERR VTDPREFIX
+               "Invalid RMRR Range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        return 0;
+    }
 
-        if ( base > end )
+    overlap = false;
+    list_for_each_entry(rmrru, &acpi_rmrr_units, list)
+    {
+        if ( pfn_to_paddr(base_pfn) <= rmrru->end_address &&
+             rmrru->base_address <= pfn_to_paddr(end_pfn) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "Invalid RMRR Range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
+                   ERMRRU_ARG,
+                   paddr_to_pfn(rmrru->base_address),
+                   paddr_to_pfn(rmrru->end_address));
+            overlap = true;
+            break;
         }
+    }
+    /* Don't add overlapping RMRR. */
+    if ( overlap )
+        return 0;
 
-        if ( (end - base) >= MAX_USER_RMRR_PAGES )
+    base_iter = base_pfn;
+    do
+    {
+        if ( !mfn_valid(_mfn(base_iter)) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "RMRR range "ERMRRU_FMT" exceeds "\
-                   __stringify(MAX_USER_RMRR_PAGES)" pages\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
+                   ERMRRU_ARG);
+            break;
         }
+    } while ( base_iter++ < end_pfn );
 
-        overlap = false;
-        list_for_each_entry(rmrru, &acpi_rmrr_units, list)
-        {
-            if ( pfn_to_paddr(base) <= rmrru->end_address &&
-                 rmrru->base_address <= pfn_to_paddr(end) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
-                       ERMRRU_ARG(user_rmrrs[i]),
-                       paddr_to_pfn(rmrru->base_address),
-                       paddr_to_pfn(rmrru->end_address));
-                overlap = true;
-                break;
-            }
-        }
-        /* Don't add overlapping RMRR. */
-        if ( overlap )
-            continue;
+    /* Invalid pfn in range as the loop ended before end_pfn was reached. */
+    if ( base_iter <= end_pfn )
+        return 0;
 
-        do
-        {
-            if ( !mfn_valid(_mfn(base)) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
-                       ERMRRU_ARG(user_rmrrs[i]));
-                break;
-            }
-        } while ( base++ < end );
+    rmrr = xzalloc(struct acpi_rmrr_unit);
+    if ( !rmrr )
+        return -ENOMEM;
 
-        /* Invalid pfn in range as the loop ended before end_pfn was reached. */
-        if ( base <= end )
-            continue;
+    rmrr->scope.devices = xmalloc_array(u16, dev_count);
+    if ( !rmrr->scope.devices )
+    {
+        xfree(rmrr);
+        return -ENOMEM;
+    }
 
-        rmrr = xzalloc(struct acpi_rmrr_unit);
-        if ( !rmrr )
-            return -ENOMEM;
+    seg = 0;
+    for ( idx = 0; idx < dev_count; idx++ )
+    {
+        rmrr->scope.devices[idx] = sbdf[idx];
+        seg |= PCI_SEG(sbdf[idx]);
+    }
+    if ( seg != PCI_SEG(sbdf[0]) )
+    {
+        printk(XENLOG_ERR VTDPREFIX
+               "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        scope_devices_free(&rmrr->scope);
+        xfree(rmrr);
+        return 0;
+    }
 
-        rmrr->scope.devices = xmalloc_array(u16, user_rmrrs[i].dev_count);
-        if ( !rmrr->scope.devices )
-        {
-            xfree(rmrr);
-            return -ENOMEM;
-        }
+    rmrr->segment = seg;
+    rmrr->base_address = pfn_to_paddr(base_pfn);
+    /* Align the end_address to the end of the page */
+    rmrr->end_address = pfn_to_paddr(end_pfn) | ~PAGE_MASK;
+    rmrr->scope.devices_cnt = dev_count;
 
-        seg = 0;
-        for ( idx = 0; idx < user_rmrrs[i].dev_count; idx++ )
-        {
-            rmrr->scope.devices[idx] = user_rmrrs[i].sbdf[idx];
-            seg |= PCI_SEG(user_rmrrs[i].sbdf[idx]);
-        }
-        if ( seg != PCI_SEG(user_rmrrs[i].sbdf[0]) )
-        {
-            printk(XENLOG_ERR VTDPREFIX
-                   "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            scope_devices_free(&rmrr->scope);
-            xfree(rmrr);
-            continue;
-        }
+    if ( register_one_rmrr(rmrr) )
+        printk(XENLOG_ERR VTDPREFIX
+               "Could not register RMMR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
 
-        rmrr->segment = seg;
-        rmrr->base_address = pfn_to_paddr(user_rmrrs[i].base_pfn);
-        /* Align the end_address to the end of the page */
-        rmrr->end_address = pfn_to_paddr(user_rmrrs[i].end_pfn) | ~PAGE_MASK;
-        rmrr->scope.devices_cnt = user_rmrrs[i].dev_count;
+    return 1;
+}
 
-        if ( register_one_rmrr(rmrr) )
-            printk(XENLOG_ERR VTDPREFIX
-                   "Could not register RMMR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-    }
+static int __init cf_check add_one_extra_rmrr(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt)
+{
+    u32 sbdf_array[] = { id };
+    return add_one_user_rmrr(start, start+nr, 1, sbdf_array);
+}
 
-    return 0;
+static int __init add_extra_rmrr(void)
+{
+    return iommu_get_extra_reserved_device_memory(add_one_extra_rmrr, NULL);
 }
 
 #include
@@ -1010,7 +1038,7 @@ int __init acpi_dmar_init(void)
     {
         iommu_init_ops = &intel_iommu_init_ops;
 
-        return add_user_rmrr();
+        return add_user_rmrr() || add_extra_rmrr();
     }
 
     return ret;
@@ -1108,6 +1136,15 @@ static int __init cf_check parse_rmrr_param(const char *str)
         else
             end = start;
 
+        if ( (end - start) >= MAX_USER_RMRR_PAGES )
+        {
+            printk(XENLOG_ERR VTDPREFIX
+                   "RMRR range "ERMRRU_FMT" exceeds "\
+                   __stringify(MAX_USER_RMRR_PAGES)" pages\n",
+                   start, end);
+            return -E2BIG;
+        }
+
         user_rmrrs[nr_rmrr].base_pfn = start;
         user_rmrrs[nr_rmrr].end_pfn = end;