From patchwork Thu Mar 20 01:55:36 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023301
Date: Wed, 19 Mar 2025 18:55:36 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
Mime-Version: 1.0
References: <20250320015551.2157511-1-changyuanl@google.com>
X-Mailer: git-send-email 2.49.0.rc1.451.g8f38331e32-goog
Message-ID: <20250320015551.2157511-2-changyuanl@google.com>
Subject: [PATCH v5 01/16] kexec: define functions to map and unmap segments
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org,
    anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com,
    benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, dwmw2@infradead.org,
    ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com,
    corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com,
    pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com,
    peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org,
    robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com,
    rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com,
    usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org,
    kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    steven chen, Tushar Sugandhi, Changyuan Lyu
From: steven chen

Currently, the mechanism to map and unmap segments to the kimage
structure is not available to subsystems outside of kexec. This
functionality is needed when IMA allocates memory segments during the
kexec 'load' operation. Implement functions to map and unmap segments
to the kimage.

Implement kimage_map_segment() to map IMA buffer source pages into the
kimage structure after the kexec 'load'. Given a kimage pointer, an
address, and a size, the function gathers the source pages within the
specified address range, builds an array of page pointers, and maps
them to a contiguous virtual address range. It returns the start of
that range on success, or NULL on failure.

Implement kimage_unmap_segment() to unmap the segment using vunmap().
Signed-off-by: Tushar Sugandhi
Signed-off-by: steven chen
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/kexec.h |  5 ++++
 kernel/kexec_core.c   | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f0e9f8eda7a3..fad04f3bcf1d 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -467,6 +467,8 @@ extern bool kexec_file_dbg_print;
 #define kexec_dprintk(fmt, arg...) \
 	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)
 
+void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size);
+void kimage_unmap_segment(void *buffer);
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
@@ -474,6 +476,9 @@ static inline void __crash_kexec(struct pt_regs *regs) { }
 static inline void crash_kexec(struct pt_regs *regs) { }
 static inline int kexec_should_crash(struct task_struct *p) { return 0; }
 static inline int kexec_crash_loaded(void) { return 0; }
+static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size)
+{ return NULL; }
+static inline void kimage_unmap_segment(void *buffer) { }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0bdc1686154..640d252306ea 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -867,6 +867,60 @@ int kimage_load_segment(struct kimage *image,
 	return result;
 }
 
+void *kimage_map_segment(struct kimage *image,
+			 unsigned long addr, unsigned long size)
+{
+	unsigned long eaddr = addr + size;
+	unsigned long src_page_addr, dest_page_addr;
+	unsigned int npages;
+	struct page **src_pages;
+	int i;
+	kimage_entry_t *ptr, entry;
+	void *vaddr = NULL;
+
+	/*
+	 * Collect the source pages and map them in a contiguous VA range.
+	 */
+	npages = PFN_UP(eaddr) - PFN_DOWN(addr);
+	src_pages = kvmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
+	if (!src_pages) {
+		pr_err("Could not allocate source pages array for destination %lx.\n", addr);
+		return NULL;
+	}
+
+	i = 0;
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION) {
+			dest_page_addr = entry & PAGE_MASK;
+		} else if (entry & IND_SOURCE) {
+			if (dest_page_addr >= addr && dest_page_addr < eaddr) {
+				src_page_addr = entry & PAGE_MASK;
+				src_pages[i++] =
+					virt_to_page(__va(src_page_addr));
+				if (i == npages)
+					break;
+				dest_page_addr += PAGE_SIZE;
+			}
+		}
+	}
+
+	/* Sanity check. */
+	WARN_ON(i < npages);
+
+	vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);
+	kvfree(src_pages);
+
+	if (!vaddr)
+		pr_err("Could not map segment source pages for destination %lx.\n", addr);
+
+	return vaddr;
+}
+
+void kimage_unmap_segment(void *segment_buffer)
+{
+	vunmap(segment_buffer);
+}
+
 struct kexec_load_limit {
 	/* Mutex protects the limit count. */
 	struct mutex mutex;