From patchwork Fri Nov 27 12:04:38 2020
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11935585
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Subject: [RFC PATCH 002/162] drm/i915/selftest: assert we get 2M GTT pages
Date: Fri, 27 Nov 2020 12:04:38 +0000
Message-Id: <20201127120718.454037-3-matthew.auld@intel.com>
In-Reply-To: <20201127120718.454037-1-matthew.auld@intel.com>
References: <20201127120718.454037-1-matthew.auld@intel.com>

For the LMEM case, if we have suitable alignment and 2M physical pages, we
should always get 2M GTT pages within the constraints of the hugepages
selftest. If we don't, then something might be wrong in our construction of
the backing pages.
Signed-off-by: Matthew Auld
---
 .../gpu/drm/i915/gem/selftests/huge_pages.c | 21 +++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 0bf93947d89d..77a13527a7e6 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -368,6 +368,27 @@ static int igt_check_page_sizes(struct i915_vma *vma)
 		err = -EINVAL;
 	}
 
+
+	/*
+	 * The dma-api is like a box of chocolates when it comes to the
+	 * alignment of dma addresses, however for LMEM we have total control
+	 * and so can guarantee alignment, likewise when we allocate our blocks
+	 * they should appear in descending order, and if we know that we align
+	 * to the largest page size for the GTT address, we should be able to
+	 * assert that if we see 2M physical pages then we should also get 2M
+	 * GTT pages. If we don't then something might be wrong in our
+	 * construction of the backing pages.
+	 */
+	if (i915_gem_object_is_lmem(obj) &&
+	    IS_ALIGNED(vma->node.start, SZ_2M) &&
+	    vma->page_sizes.sg & SZ_2M &&
+	    vma->page_sizes.gtt < SZ_2M) {
+		pr_err("gtt pages mismatch for LMEM, expected 2M GTT pages, sg(%u), gtt(%u)\n",
+		       vma->page_sizes.sg, vma->page_sizes.gtt);
+		err = -EINVAL;
+	}
+
+
 	if (obj->mm.page_sizes.gtt) {
 		pr_err("obj->page_sizes.gtt(%u) should never be set\n",
 		       obj->mm.page_sizes.gtt);
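
Not part of the patch: for readers unfamiliar with how i915 tracks page sizes as
bitmasks, the standalone userspace sketch below mirrors the logic of the new
assert. The SZ_2M/SZ_64K/IS_ALIGNED definitions are stand-ins for the kernel's
<linux/sizes.h> and alignment helpers, and struct page_sizes is a simplified
stand-in for vma->page_sizes; it only illustrates why a 2M-aligned GTT offset
plus a 2M entry in the sg page-size mask should imply a 2M (or larger) GTT page
size.

/* Standalone illustration (not kernel code): mirrors the selftest's new check. */
#include <stdio.h>
#include <stdbool.h>

#define SZ_64K (64u * 1024u)                        /* stand-in for <linux/sizes.h> */
#define SZ_2M  (2u * 1024u * 1024u)
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)   /* stand-in for the kernel macro */

struct page_sizes {
	unsigned int sg;   /* bitmask of physical page sizes backing the object */
	unsigned int gtt;  /* page size actually used for the GTT entries */
};

/* Returns true when the combination should be flagged as a mismatch. */
static bool lmem_gtt_pages_mismatch(bool is_lmem, unsigned long long gtt_offset,
				    const struct page_sizes *ps)
{
	return is_lmem &&
	       IS_ALIGNED(gtt_offset, SZ_2M) &&
	       (ps->sg & SZ_2M) &&
	       ps->gtt < SZ_2M;
}

int main(void)
{
	/* 2M-aligned GTT offset, 2M physical pages, but only 64K GTT pages: bad. */
	struct page_sizes bad = { .sg = SZ_2M | SZ_64K, .gtt = SZ_64K };
	/* Same backing pages, but the GTT also used 2M entries: fine. */
	struct page_sizes good = { .sg = SZ_2M | SZ_64K, .gtt = SZ_2M };

	printf("bad case flagged:  %d\n", lmem_gtt_pages_mismatch(true, 4ull * SZ_2M, &bad));
	printf("good case flagged: %d\n", lmem_gtt_pages_mismatch(true, 4ull * SZ_2M, &good));
	return 0;
}

Note that the patch compares with "< SZ_2M" rather than "!= SZ_2M", so it only
complains when the GTT pages are smaller than 2M; a larger GTT page size would
still pass the check.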