From patchwork Wed Dec 7 14:19:13 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 13067218
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Shuicheng Lin, Andrzej Hajda, Nirmoy Das
Date: Wed, 7 Dec 2022 14:19:13 +0000
Message-Id: <20221207141913.210995-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH] drm/i915/migrate: fix corner case in CCS aux copying

In the case of lmem -> lmem transfers, which is currently only possible
with small-bar systems, we need to ensure we copy the CCS aux state
as-is, rather than nuke it. This should fix some nasty visual corruption
sometimes seen on DG2 small-bar systems, when also using DG2_RC_CCS_CC
for the surface.
Fixes: e3afc690188b ("drm/i915/display: consider DG2_RC_CCS_CC when migrating buffers")
Signed-off-by: Matthew Auld
Cc: Ville Syrjälä
Cc: Nirmoy Das
Cc: Andrzej Hajda
Cc: Shuicheng Lin
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 37 +++++++++++++++++++------
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index b405a04135ca..e25de6a8e04c 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -829,14 +829,35 @@ intel_context_migrate_copy(struct intel_context *ce,
 			if (err)
 				goto out_rq;
 
-			/*
-			 * While we can't always restore/manage the CCS state,
-			 * we still need to ensure we don't leak the CCS state
-			 * from the previous user, so make sure we overwrite it
-			 * with something.
-			 */
-			err = emit_copy_ccs(rq, dst_offset, INDIRECT_ACCESS,
-					    dst_offset, DIRECT_ACCESS, len);
+			if (src_is_lmem) {
+				/*
+				 * If the src is already in lmem, then we must
+				 * be doing an lmem -> lmem transfer, and so
+				 * should be safe to directly copy the CCS
+				 * state. In this case we have either
+				 * initialised the CCS aux state when first
+				 * clearing the pages (since it is already
+				 * allocated in lmem), or the user has
+				 * potentially populated it, in which case we
+				 * need to copy the CCS state as-is.
+				 */
+				err = emit_copy_ccs(rq,
+						    dst_offset, INDIRECT_ACCESS,
+						    src_offset, INDIRECT_ACCESS,
+						    len);
+			} else {
+				/*
+				 * While we can't always restore/manage the CCS
+				 * state, we still need to ensure we don't leak
+				 * the CCS state from the previous user, so make
+				 * sure we overwrite it with something.
+				 */
+				err = emit_copy_ccs(rq,
+						    dst_offset, INDIRECT_ACCESS,
+						    dst_offset, DIRECT_ACCESS,
+						    len);
+			}
+
 			if (err)
 				goto out_rq;
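
To restate the decision in the hunk above in isolation: the destination's CCS
is always written via INDIRECT_ACCESS; what changes is where the data comes
from. For lmem -> lmem the source's CCS aux state is read as-is (src_offset,
INDIRECT_ACCESS), otherwise the destination's CCS is simply overwritten from
its own main surface (dst_offset, DIRECT_ACCESS) so stale state from a previous
user can't leak. The standalone sketch below only models that choice; struct
ccs_copy_args, pick_ccs_copy() and the local ccs_access enum are made-up names
for illustration and are not part of the i915 driver, which calls
emit_copy_ccs() directly inside intel_context_migrate_copy().

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's DIRECT/INDIRECT access modes. */
enum ccs_access { DIRECT_ACCESS, INDIRECT_ACCESS };

/* Illustrative only: where the CCS copy should read from. */
struct ccs_copy_args {
	uint32_t src_offset;
	enum ccs_access src_access;
};

static struct ccs_copy_args pick_ccs_copy(bool src_is_lmem,
					  uint32_t dst_offset,
					  uint32_t src_offset)
{
	struct ccs_copy_args args;

	if (src_is_lmem) {
		/* lmem -> lmem: the source CCS aux state is valid, copy it as-is. */
		args.src_offset = src_offset;
		args.src_access = INDIRECT_ACCESS;
	} else {
		/*
		 * smem -> lmem: the CCS state can't be restored, but the
		 * destination's CCS must not leak stale data, so overwrite it
		 * from the destination's own main surface instead.
		 */
		args.src_offset = dst_offset;
		args.src_access = DIRECT_ACCESS;
	}

	return args;
}

int main(void)
{
	/* lmem -> lmem: expect the CCS state to be read from the source. */
	struct ccs_copy_args a = pick_ccs_copy(true, 0x10000, 0x20000);

	printf("read from 0x%" PRIx32 ", indirect=%d\n",
	       a.src_offset, a.src_access == INDIRECT_ACCESS);
	return 0;
}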