From patchwork Thu Sep 30 14:31:13 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12528529
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, sashal@kernel.org, ebiederm@xmission.com,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net,
	catalin.marinas@arm.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com,
	vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org,
	mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com,
	tglx@linutronix.de, selindag@gmail.com, kernelfans@gmail.com,
	akpm@linux-foundation.org
Subject: [PATCH v18 15/15] arm64: trans_pgd: remove trans_pgd_map_page()
Date: Thu, 30 Sep 2021 14:31:13 +0000
Message-Id: <20210930143113.1502553-16-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.33.0.800.g4c38ced690-goog
In-Reply-To: <20210930143113.1502553-1-pasha.tatashin@soleen.com>
References: <20210930143113.1502553-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0

The intent of trans_pgd_map_page() was to map a contiguous range of VA
memory to the memory that is getting relocated during kexec. However,
since we now use the linear map instead of a contiguous range, this
function is no longer needed.

Suggested-by: Pingfan Liu
Signed-off-by: Pasha Tatashin
Acked-by: Catalin Marinas
---
 arch/arm64/include/asm/trans_pgd.h |  5 +--
 arch/arm64/mm/trans_pgd.c          | 57 ------------------------------
 2 files changed, 1 insertion(+), 61 deletions(-)
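For illustration, here is a minimal sketch of the usage this change assumes:
once trans_pgd_create_copy() has copied the linear map into the transitional
page table, every page to be relocated is reachable through its linear-map VA,
so no per-page trans_pgd_map_page() call is required. The zeroed_page_alloc()
helper, the GFP flags, and the PAGE_OFFSET/PAGE_END bounds below are
assumptions made for the example, not code taken from this series.

#include <linux/gfp.h>
#include <asm/memory.h>
#include <asm/trans_pgd.h>

/* Assumed allocator callback: returns exactly one zeroed page, or NULL. */
static void *zeroed_page_alloc(void *arg)
{
	return (void *)get_zeroed_page(GFP_KERNEL);
}

static int copy_linear_map(pgd_t **trans_pgd)
{
	/*
	 * Field names follow the trans_alloc_page / trans_alloc_arg
	 * description in asm/trans_pgd.h.
	 */
	struct trans_pgd_info info = {
		.trans_alloc_page	= zeroed_page_alloc,
		.trans_alloc_arg	= NULL,
	};

	/*
	 * Copy the whole linear map [PAGE_OFFSET, PAGE_END); relocated pages
	 * are then addressed through the linear map rather than through
	 * individually created PTE-level mappings.
	 */
	return trans_pgd_create_copy(&info, trans_pgd, PAGE_OFFSET, PAGE_END);
}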
diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index 7b04d32b102c..033d400a4ea4 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -15,7 +15,7 @@
 /*
  * trans_alloc_page
  *	- Allocator that should return exactly one zeroed page, if this
- *	  allocator fails, trans_pgd_create_copy() and trans_pgd_map_page()
+ *	  allocator fails, trans_pgd_create_copy() and trans_pgd_idmap_page()
  *	  return -ENOMEM error.
  *
  * trans_alloc_arg
@@ -30,9 +30,6 @@ struct trans_pgd_info {
 int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd,
 			  unsigned long start, unsigned long end);
 
-int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
-		       void *page, unsigned long dst_addr, pgprot_t pgprot);
-
 int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
 			 unsigned long *t0sz, void *page);
 
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 26bd8f2d95af..d7da8ca40d2e 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -217,63 +217,6 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **dst_pgdp,
 	return rc;
 }
 
-/*
- * Add map entry to trans_pgd for a base-size page at PTE level.
- * info:	contains allocator and its argument
- * trans_pgd:	page table in which new map is added.
- * page:	page to be mapped.
- * dst_addr:	new VA address for the page
- * pgprot:	protection for the page.
- *
- * Returns 0 on success, and -ENOMEM on failure.
- */
-int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
-		       void *page, unsigned long dst_addr, pgprot_t pgprot)
-{
-	pgd_t *pgdp;
-	p4d_t *p4dp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-
-	pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
-	if (pgd_none(READ_ONCE(*pgdp))) {
-		p4dp = trans_alloc(info);
-		if (!pgdp)
-			return -ENOMEM;
-		pgd_populate(NULL, pgdp, p4dp);
-	}
-
-	p4dp = p4d_offset(pgdp, dst_addr);
-	if (p4d_none(READ_ONCE(*p4dp))) {
-		pudp = trans_alloc(info);
-		if (!pudp)
-			return -ENOMEM;
-		p4d_populate(NULL, p4dp, pudp);
-	}
-
-	pudp = pud_offset(p4dp, dst_addr);
-	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = trans_alloc(info);
-		if (!pmdp)
-			return -ENOMEM;
-		pud_populate(NULL, pudp, pmdp);
-	}
-
-	pmdp = pmd_offset(pudp, dst_addr);
-	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = trans_alloc(info);
-		if (!ptep)
-			return -ENOMEM;
-		pmd_populate_kernel(NULL, pmdp, ptep);
-	}
-
-	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(page), pgprot));
-
-	return 0;
-}
-
 /*
  * The page we want to idmap may be outside the range covered by VA_BITS that
  * can be built using the kernel's p?d_populate() helpers. As a one off, for a