From patchwork Wed Aug 21 18:31:55 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11107915
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org,
    mark.rutland@arm.com
Subject: [PATCH v3 08/17] arm64, trans_pgd: make trans_pgd_map_page generic
Date: Wed, 21 Aug 2019 14:31:55 -0400
Message-Id: <20190821183204.23576-9-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190821183204.23576-1-pasha.tatashin@soleen.com>
References: <20190821183204.23576-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0
Currently, trans_pgd_map_page() makes assumptions that are only valid for
hibernate: it uses a fixed allocator, it assumes that no entries exist in
the page table yet, and it relies on init_mm. To make it generic, allow
the caller to supply any allocator, handle page tables that may already
contain entries, and stop using init_mm.

Also, add "flags" to trans_pgd_info; they are going to be used in the
copy functions once those are generalized.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/trans_pgd.h | 39 +++++++++++++-
 arch/arm64/kernel/hibernate.c      | 13 ++++-
 arch/arm64/mm/trans_pgd.c          | 82 +++++++++++++++++++++---------
 3 files changed, 107 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index c7b5402b7d87..e3d022b1b526 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -11,10 +11,45 @@
 #include
 #include
 
+/*
+ * trans_alloc_page
+ *	- Allocator that should return exactly one uninitialized page; if this
+ *	  allocator fails, trans_pgd returns -ENOMEM error.
+ *
+ * trans_alloc_arg
+ *	- Passed to trans_alloc_page as an argument
+ *
+ * trans_flags
+ *	- bitmap with flags that control how the page table is filled.
+ *	  TRANS_MKWRITE: during page table copy, make PTE, PMD, and PUD page
+ *			 writable by removing the RDONLY flag from the PTE.
+ *	  TRANS_MKVALID: during page table copy, if a PTE is present but not
+ *			 valid, make it valid.
+ *	  TRANS_CHECKPFN: during page table copy, for every PTE entry check
+ *			  that the PFN this PTE points to is valid; otherwise
+ *			  return -ENXIO
+ */
+
+#define	TRANS_MKWRITE	BIT(0)
+#define	TRANS_MKVALID	BIT(1)
+#define	TRANS_CHECKPFN	BIT(2)
+
+struct trans_pgd_info {
+	void * (*trans_alloc_page)(void *arg);
+	void *trans_alloc_arg;
+	unsigned long trans_flags;
+};
+
 int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
 			  unsigned long end);
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
-		       pgprot_t pgprot);
+/*
+ * Add a map entry to trans_pgd for a base-size page at PTE level.
+ * page: page to be mapped.
+ * dst_addr: new VA address for the page
+ * pgprot: protection for the page.
+ */
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+		       void *page, unsigned long dst_addr, pgprot_t pgprot);
 
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 6ee81bbaa37f..17426dc8cb54 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -179,6 +179,12 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
+static void *
+hibernate_page_alloc(void *arg)
+{
+	return (void *)get_safe_page((gfp_t)(unsigned long)arg);
+}
+
 /*
  * Copies length bytes, starting at src_start into an new page,
  * perform cache maintentance, then maps it at the specified address low
@@ -195,6 +201,11 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
+	struct trans_pgd_info trans_info = {
+		.trans_alloc_page	= hibernate_page_alloc,
+		.trans_alloc_arg	= (void *)GFP_ATOMIC,
+		.trans_flags		= 0,
+	};
 	void *page = (void *)get_safe_page(GFP_ATOMIC);
 	pgd_t *trans_pgd;
 	int rc;
@@ -209,7 +220,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	if (!trans_pgd)
 		return -ENOMEM;
 
-	rc = trans_pgd_map_page(trans_pgd, page, dst_addr,
+	rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
 				PAGE_KERNEL_EXEC);
 	if (rc)
 		return rc;
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 00b62d8640c2..dbabccd78cc4 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -17,6 +17,16 @@
 #include
 #include
 
+static void *trans_alloc(struct trans_pgd_info *info)
+{
+	void *page = info->trans_alloc_page(info->trans_alloc_arg);
+
+	if (page)
+		clear_page(page);
+
+	return page;
+}
+
 static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 {
 	pte_t pte = READ_ONCE(*src_ptep);
@@ -172,40 +182,64 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
 	return rc;
 }
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
-		       pgprot_t pgprot)
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+		       void *page, unsigned long dst_addr, pgprot_t pgprot)
 {
-	pgd_t *pgdp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-
-	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
-	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp)
+	int pgd_idx = pgd_index(dst_addr);
+	int pud_idx = pud_index(dst_addr);
+	int pmd_idx = pmd_index(dst_addr);
+	int pte_idx = pte_index(dst_addr);
+	pgd_t *pgdp = trans_pgd;
+	pgd_t pgd = READ_ONCE(pgdp[pgd_idx]);
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+
+	if (pgd_none(pgd)) {
+		pud_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, pudp);
+
+		__pgd_populate(&pgdp[pgd_idx], __pa(t), PUD_TYPE_TABLE);
+		pgd = READ_ONCE(pgdp[pgd_idx]);
 	}
 
-	pudp = pud_offset(pgdp, dst_addr);
-	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pmdp)
+	pudp = __va(pgd_page_paddr(pgd));
+	pud = READ_ONCE(pudp[pud_idx]);
+	if (pud_sect(pud)) {
+		return -ENXIO;
+	} else if (pud_none(pud)) {
+		pmd_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pud_populate(&init_mm, pudp, pmdp);
+
+		__pud_populate(&pudp[pud_idx], __pa(t), PMD_TYPE_TABLE);
+		pud = READ_ONCE(pudp[pud_idx]);
 	}
 
-	pmdp = pmd_offset(pudp, dst_addr);
-	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = (void *)get_safe_page(GFP_ATOMIC);
-		if (!ptep)
+	pmdp = __va(pud_page_paddr(pud));
+	pmd = READ_ONCE(pmdp[pmd_idx]);
+	if (pmd_sect(pmd)) {
+		return -ENXIO;
+	} else if (pmd_none(pmd)) {
+		pte_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pmd_populate_kernel(&init_mm, pmdp, ptep);
+
+		__pmd_populate(&pmdp[pmd_idx], __pa(t), PTE_TYPE_PAGE);
+		pmd = READ_ONCE(pmdp[pmd_idx]);
 	}
 
-	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
+	ptep = __va(pmd_page_paddr(pmd));
+	pte = READ_ONCE(ptep[pte_idx]);
+
+	if (!pte_none(pte))
+		return -ENXIO;
+
+	set_pte(&ptep[pte_idx], pfn_pte(virt_to_pfn(page), pgprot));
 
 	return 0;
 }