From patchwork Wed Aug 21 18:31:55 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11107961
From: Pavel Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
	ebiederm@xmission.com, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
	will@kernel.org, linux-arm-kernel@lists.infradead.org,
	marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
	matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org,
	mark.rutland@arm.com
Subject: [PATCH v3 08/17] arm64, trans_pgd: make trans_pgd_map_page generic
Date: Wed, 21 Aug 2019 14:31:55 -0400
Message-Id: <20190821183204.23576-9-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190821183204.23576-1-pasha.tatashin@soleen.com>
References: <20190821183204.23576-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0

Currently, trans_pgd_map_page() makes assumptions that are only valid for
hibernate: it allocates pages with get_safe_page(), assumes that no entries
exist in the page table yet, and populates table levels via init_mm. To make
it generic, let callers supply their own allocator through the new
struct trans_pgd_info, stop assuming that entries are absent, and drop the
use of init_mm.

Also, add a "flags" field to trans_pgd_info; the flags will be used by the
copy functions once they are generalized.
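For illustration, a non-hibernate user of the new interface only needs to
provide an allocator callback and argument. A minimal sketch follows;
kexec_page_alloc() and its use of GFP_KERNEL are hypothetical and not part
of this patch:

static void *kexec_page_alloc(void *arg)
{
	/* trans_alloc() clears the page, so any page source works here. */
	return (void *)get_zeroed_page((gfp_t)(unsigned long)arg);
}

static struct trans_pgd_info kexec_trans_info = {
	.trans_alloc_page	= kexec_page_alloc,
	.trans_alloc_arg	= (void *)GFP_KERNEL,
	.trans_flags		= 0,
};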
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/include/asm/trans_pgd.h | 39 +++++++++++++-
 arch/arm64/kernel/hibernate.c      | 13 ++++-
 arch/arm64/mm/trans_pgd.c          | 82 +++++++++++++++++++++---------
 3 files changed, 107 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index c7b5402b7d87..e3d022b1b526 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -11,10 +11,45 @@
 #include
 #include
 
+/*
+ * trans_alloc_page
+ *	- Allocator that should return exactly one uninitialized page; if this
+ *	  allocator fails, trans_pgd returns -ENOMEM.
+ *
+ * trans_alloc_arg
+ *	- Passed to trans_alloc_page as an argument.
+ *
+ * trans_flags
+ *	- bitmap with flags that control how the page table is filled.
+ *	  TRANS_MKWRITE: during page table copy, make PTE, PMD, and PUD pages
+ *			 writeable by removing the RDONLY flag from the PTE.
+ *	  TRANS_MKVALID: during page table copy, if a PTE is present but not
+ *			 valid, make it valid.
+ *	  TRANS_CHECKPFN: during page table copy, check for every PTE entry
+ *			  that the PFN it points to is valid. Otherwise return
+ *			  -ENXIO.
+ */
+
+#define TRANS_MKWRITE	BIT(0)
+#define TRANS_MKVALID	BIT(1)
+#define TRANS_CHECKPFN	BIT(2)
+
+struct trans_pgd_info {
+	void * (*trans_alloc_page)(void *arg);
+	void *trans_alloc_arg;
+	unsigned long trans_flags;
+};
+
 int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
 			  unsigned long end);
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
-		       pgprot_t pgprot);
+/*
+ * Add map entry to trans_pgd for a base-size page at PTE level.
+ * page:	page to be mapped.
+ * dst_addr:	new VA address for the page.
+ * pgprot:	protection for the page.
+ */
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+		       void *page, unsigned long dst_addr, pgprot_t pgprot);
 
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 6ee81bbaa37f..17426dc8cb54 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -179,6 +179,12 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
+static void *
+hibernate_page_alloc(void *arg)
+{
+	return (void *)get_safe_page((gfp_t)(unsigned long)arg);
+}
+
 /*
  * Copies length bytes, starting at src_start into an new page,
  * perform cache maintentance, then maps it at the specified address low
@@ -195,6 +201,11 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
+	struct trans_pgd_info trans_info = {
+		.trans_alloc_page	= hibernate_page_alloc,
+		.trans_alloc_arg	= (void *)GFP_ATOMIC,
+		.trans_flags		= 0,
+	};
 	void *page = (void *)get_safe_page(GFP_ATOMIC);
 	pgd_t *trans_pgd;
 	int rc;
@@ -209,7 +220,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	if (!trans_pgd)
 		return -ENOMEM;
 
-	rc = trans_pgd_map_page(trans_pgd, page, dst_addr,
+	rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
 				PAGE_KERNEL_EXEC);
 	if (rc)
 		return rc;
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 00b62d8640c2..dbabccd78cc4 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -17,6 +17,16 @@
 #include
 #include
 
+static void *trans_alloc(struct trans_pgd_info *info)
+{
+	void *page = info->trans_alloc_page(info->trans_alloc_arg);
+
+	if (page)
+		clear_page(page);
+
+	return page;
+}
+
 static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 {
 	pte_t pte = READ_ONCE(*src_ptep);
@@ -172,40 +182,64 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
 	return rc;
 }
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
-		       pgprot_t pgprot)
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+		       void *page, unsigned long dst_addr, pgprot_t pgprot)
 {
-	pgd_t *pgdp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-
-	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
-	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp)
+	int pgd_idx = pgd_index(dst_addr);
+	int pud_idx = pud_index(dst_addr);
+	int pmd_idx = pmd_index(dst_addr);
+	int pte_idx = pte_index(dst_addr);
+	pgd_t *pgdp = trans_pgd;
+	pgd_t pgd = READ_ONCE(pgdp[pgd_idx]);
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+
+	if (pgd_none(pgd)) {
+		pud_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, pudp);
+
+		__pgd_populate(&pgdp[pgd_idx], __pa(t), PUD_TYPE_TABLE);
+		pgd = READ_ONCE(pgdp[pgd_idx]);
 	}
 
-	pudp = pud_offset(pgdp, dst_addr);
-	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pmdp)
+	pudp = __va(pgd_page_paddr(pgd));
+	pud = READ_ONCE(pudp[pud_idx]);
+	if (pud_sect(pud)) {
+		return -ENXIO;
+	} else if (pud_none(pud) || pud_sect(pud)) {
+		pmd_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pud_populate(&init_mm, pudp, pmdp);
+
+		__pud_populate(&pudp[pud_idx], __pa(t), PMD_TYPE_TABLE);
+		pud = READ_ONCE(pudp[pud_idx]);
 	}
 
-	pmdp = pmd_offset(pudp, dst_addr);
-	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = (void *)get_safe_page(GFP_ATOMIC);
-		if (!ptep)
+	pmdp = __va(pud_page_paddr(pud));
+	pmd = READ_ONCE(pmdp[pmd_idx]);
+	if (pmd_sect(pmd)) {
+		return -ENXIO;
+	} else if (pmd_none(pmd) || pmd_sect(pmd)) {
+		pte_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pmd_populate_kernel(&init_mm, pmdp, ptep);
+
+		__pmd_populate(&pmdp[pmd_idx], __pa(t), PTE_TYPE_PAGE);
+		pmd = READ_ONCE(pmdp[pmd_idx]);
 	}
 
-	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
+	ptep = __va(pmd_page_paddr(pmd));
+	pte = READ_ONCE(ptep[pte_idx]);
+
+	if (!pte_none(pte))
+		return -ENXIO;
+
+	set_pte(&ptep[pte_idx], pfn_pte(virt_to_pfn(page), pgprot));
 
 	return 0;
 }
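
The trans_flags bits introduced above are not consumed by this patch; they
are intended for the copy functions once those are generalized. Purely as a
sketch of the intent (this is illustrative only, not the code from the
follow-up patches), a generalized _copy_pte() might honor TRANS_MKWRITE
like this:

static void _copy_pte(struct trans_pgd_info *info, pte_t *dst_ptep,
		      pte_t *src_ptep, unsigned long addr)
{
	pte_t pte = READ_ONCE(*src_ptep);

	if (pte_valid(pte)) {
		/*
		 * TRANS_MKWRITE: make the copied mapping writable by
		 * clearing the read-only attribute in the new PTE.
		 */
		if (info->trans_flags & TRANS_MKWRITE)
			pte = pte_mkwrite(pte);
		set_pte(dst_ptep, pte);
	}
}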