From patchwork Fri May 24 08:11:14 2019
From: Thomas Hellström (VMware)
X-Patchwork-Id: 10959273
To: dri-devel@lists.freedesktop.org
Subject: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption
Date: Fri, 24 May 2019 10:11:14 +0200
Message-Id: <20190524081114.53661-1-thomas@shipmail.org>
Cc: Thomas Hellstrom, Christian König

From: Thomas Hellstrom

With SEV encryption, all DMA memory must be marked decrypted (AKA
"shared") for devices to be able to read it. In the future we might want
to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require
additional memory pools. But for now, rely on memory allocated with
dma_alloc_coherent(), which is already decrypted when SEV is enabled.
Set up the page protection accordingly. Drivers must detect whether SEV
is enabled and, if so, switch to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their caching
state.

Cc: Christian König
Signed-off-by: Thomas Hellstrom
---
 drivers/gpu/drm/ttm/ttm_bo_util.c        | 17 +++++++++++++----
 drivers/gpu/drm/ttm/ttm_bo_vm.c          |  6 ++++--
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |  3 +++
 drivers/gpu/drm/vmwgfx/vmwgfx_blit.c     |  6 ++++--
 include/drm/ttm/ttm_bo_driver.h          |  8 +++++---
 include/drm/ttm/ttm_tt.h                 |  1 +
 6 files changed, 30 insertions(+), 11 deletions(-)
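As a rough illustration of the driver-side rule above (not part of the
patch; the foo_* names and callback wiring are hypothetical), a driver's
populate callback might select the coherent DMA pool whenever SEV is
active, since only that pool hands back decrypted pages and, with this
patch, tags them with TTM_PAGE_FLAG_DECRYPTED:

#include <linux/mem_encrypt.h>		/* sev_active() */
#include <drm/ttm/ttm_page_alloc.h>	/* ttm_pool_populate(), ttm_dma_populate() */

struct foo_ttm_tt {
	struct ttm_dma_tt dma_tt;	/* dma_tt.ttm is the embedded ttm_tt */
	struct device *dev;
};

static int foo_ttm_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
{
	struct foo_ttm_tt *foo_tt =
		container_of(ttm, struct foo_ttm_tt, dma_tt.ttm);

	/* Plain page-pool pages stay encrypted under SEV and would be
	 * unreadable by the device, so use the coherent DMA pool instead. */
	if (sev_active())
		return ttm_dma_populate(&foo_tt->dma_tt, foo_tt->dev, ctx);

	return ttm_pool_populate(ttm, ctx);
}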
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 895d77d799e4..1d6643bd0b01 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -419,11 +419,13 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
 		page = i * dir + add;
 		if (old_iomap == NULL) {
 			pgprot_t prot = ttm_io_prot(old_mem->placement,
+						    ttm->page_flags,
 						    PAGE_KERNEL);
 			ret = ttm_copy_ttm_io_page(ttm, new_iomap, page,
 						   prot);
 		} else if (new_iomap == NULL) {
 			pgprot_t prot = ttm_io_prot(new_mem->placement,
+						    ttm->page_flags,
 						    PAGE_KERNEL);
 			ret = ttm_copy_io_ttm_page(ttm, old_iomap, page,
 						   prot);
@@ -526,11 +528,11 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	return 0;
 }
 
-pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
+pgprot_t ttm_io_prot(u32 caching_flags, u32 tt_page_flags, pgprot_t tmp)
 {
 	/* Cached mappings need no adjustment */
 	if (caching_flags & TTM_PL_FLAG_CACHED)
-		return tmp;
+		goto check_encryption;
 
 #if defined(__i386__) || defined(__x86_64__)
 	if (caching_flags & TTM_PL_FLAG_WC)
@@ -548,6 +550,11 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
 #if defined(__sparc__) || defined(__mips__)
 	tmp = pgprot_noncached(tmp);
 #endif
+
+check_encryption:
+	if (tt_page_flags & TTM_PAGE_FLAG_DECRYPTED)
+		tmp = pgprot_decrypted(tmp);
+
 	return tmp;
 }
 EXPORT_SYMBOL(ttm_io_prot);
@@ -594,7 +601,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 	if (ret)
 		return ret;
 
-	if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
+	if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED) &&
+	    !(ttm->page_flags & TTM_PAGE_FLAG_DECRYPTED)) {
 		/*
 		 * We're mapping a single page, and the desired
 		 * page protection is consistent with the bo.
@@ -608,7 +616,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 		 * We need to use vmap to get the desired page protection
 		 * or to make the buffer object look contiguous.
 		 */
-		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
+		prot = ttm_io_prot(mem->placement, ttm->page_flags,
+				   PAGE_KERNEL);
 		map->bo_kmap_type = ttm_bo_map_vmap;
 		map->virtual = vmap(ttm->pages + start_page, num_pages,
 				    0, prot);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 2d9862fcf6fd..e12247edd243 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -245,7 +245,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		goto out_io_unlock;
 	}
 
-	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, prot);
 	if (!bo->mem.bus.is_iomem) {
 		struct ttm_operation_ctx ctx = {
 			.interruptible = false,
@@ -255,13 +254,16 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		};
 
 		ttm = bo->ttm;
+		cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
+						ttm->page_flags, prot);
 		if (ttm_tt_populate(bo->ttm, &ctx)) {
 			ret = VM_FAULT_OOM;
 			goto out_io_unlock;
 		}
 	} else {
 		/* Iomem should not be marked encrypted */
-		cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+		cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
+						TTM_PAGE_FLAG_DECRYPTED, prot);
 	}
 
 	/*
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 98d100fd1599..1a8a09c05805 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -979,6 +979,9 @@ int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev,
 	}
 
 	ttm->state = tt_unbound;
+	if (sev_active())
+		ttm->page_flags |= TTM_PAGE_FLAG_DECRYPTED;
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(ttm_dma_populate);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
index fc6673cde289..11c8cd248530 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
@@ -483,8 +483,10 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
 	d.src_pages = src->ttm->pages;
 	d.dst_num_pages = dst->num_pages;
 	d.src_num_pages = src->num_pages;
-	d.dst_prot = ttm_io_prot(dst->mem.placement, PAGE_KERNEL);
-	d.src_prot = ttm_io_prot(src->mem.placement, PAGE_KERNEL);
+	d.dst_prot = ttm_io_prot(dst->mem.placement, dst->ttm->page_flags,
+				 PAGE_KERNEL);
+	d.src_prot = ttm_io_prot(src->mem.placement, src->ttm->page_flags,
+				 PAGE_KERNEL);
 	d.diff = diff;
 
 	for (j = 0; j < h; ++j) {
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 53fe95be5b32..261cc89c024e 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -889,13 +889,15 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo);
 /**
  * ttm_io_prot
  *
- * @c_state: Caching state.
+ * @caching_flags: The caching flags of the map.
+ * @tt_page_flags: The tt_page_flags of the map, TTM_PAGE_FLAG_*
  * @tmp: Page protection flag for a normal, cached mapping.
  *
  * Utility function that returns the pgprot_t that should be used for
- * setting up a PTE with the caching model indicated by @c_state.
+ * setting up a PTE with the caching model indicated by @caching_flags,
+ * and encryption state indicated by @tt_page_flags.
  */
-pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp);
+pgprot_t ttm_io_prot(u32 caching_flags, u32 tt_page_flags, pgprot_t tmp);
 
 extern const struct ttm_mem_type_manager_func ttm_bo_manager_func;
 
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index c0e928abf592..45cc26355513 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -41,6 +41,7 @@ struct ttm_operation_ctx;
 #define TTM_PAGE_FLAG_DMA32     (1 << 7)
 #define TTM_PAGE_FLAG_SG        (1 << 8)
 #define TTM_PAGE_FLAG_NO_RETRY  (1 << 9)
+#define TTM_PAGE_FLAG_DECRYPTED (1 << 10)
 
 enum ttm_caching_state {
 	tt_uncached,
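For reference, a minimal usage sketch of the reworked helper (not part
of the patch; foo_vmap_tt() is a hypothetical caller mirroring the
ttm_bo_kmap_ttm() path changed above). The returned protection now folds
in both the caching flags and the tt page flags, so pages that carry
TTM_PAGE_FLAG_DECRYPTED get a pgprot_decrypted() mapping and the vmap()
alias matches the decrypted linear-map alias:

#include <linux/vmalloc.h>		/* vmap() */
#include <drm/ttm/ttm_bo_driver.h>	/* ttm_io_prot(), struct ttm_mem_reg */

static void *foo_vmap_tt(struct ttm_tt *ttm, struct ttm_mem_reg *mem,
			 unsigned long start_page, unsigned int num_pages)
{
	/* Honours both the caching state and TTM_PAGE_FLAG_DECRYPTED. */
	pgprot_t prot = ttm_io_prot(mem->placement, ttm->page_flags,
				    PAGE_KERNEL);

	return vmap(ttm->pages + start_page, num_pages, 0, prot);
}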