From patchwork Thu Oct 29 13:59:45 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 7519351
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, will.deacon@arm.com
Cc: patches@linaro.org, christoffer.dall@linaro.org,
	suravee.suthikulpanit@amd.com, linux-kernel@vger.kernel.org
Subject: [PATCH] vfio/type1: handle case where IOMMU does not support PAGE_SIZE size
Date: Thu, 29 Oct 2015 13:59:45 +0000
Message-Id: <1446127185-2096-1-git-send-email-eric.auger@linaro.org>
The current vfio_pgsize_bitmap code hides the supported IOMMU page sizes
smaller than PAGE_SIZE. As a result, if the IOMMU does not support the
PAGE_SIZE page size, the alignment check on map/unmap is done against
larger page sizes, if any. This check can fail even though the mapping
could have been done with pages smaller than PAGE_SIZE.

This patch modifies the vfio_pgsize_bitmap implementation so that, in
case the IOMMU supports page sizes smaller than PAGE_HOST, we pretend
PAGE_HOST is supported and hide the sub-PAGE_HOST sizes. That way the
user is able to map/unmap buffers whose size/start address is aligned
with PAGE_HOST. The pinning code uses that granularity, while the IOMMU
driver can use the sub-PAGE_HOST sizes to map the buffer.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

---

This was tested on AMD Seattle with a 64kB page host. The ARM MMU-401
currently exposes 4kB, 2MB and 1GB page support. With a 64kB page host,
the map/unmap check is done against 2MB. Some alignment checks fail, so
VFIO_IOMMU_MAP_DMA fails even though we could have mapped using the 4kB
IOMMU page size.
RFC -> PATCH v1:
- move all modifications in vfio_pgsize_bitmap following Alex'
  suggestion to expose a fake PAGE_HOST support
- restore WARN_ON's
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..cee504a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -403,13 +403,26 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = PAGE_MASK;
+	unsigned long bitmap = ULONG_MAX;
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next)
 		bitmap &= domain->domain->ops->pgsize_bitmap;
 	mutex_unlock(&iommu->lock);
 
+	/*
+	 * In case the IOMMU supports page sizes smaller than PAGE_HOST
+	 * we pretend PAGE_HOST is supported and hide sub-PAGE_HOST sizes.
+	 * That way the user will be able to map/unmap buffers whose size/
+	 * start address is aligned with PAGE_HOST. Pinning code uses that
+	 * granularity while iommu driver can use the sub-PAGE_HOST size
+	 * to map the buffer.
+	 */
+	if (bitmap & ~PAGE_MASK) {
+		bitmap &= PAGE_MASK;
+		bitmap |= PAGE_SIZE;
+	}
+
 	return bitmap;
 }