From patchwork Sun Nov 7 23:57:54 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12607103
Date: Sun, 7 Nov 2021 15:57:54 -0800
Message-Id: <20211107235754.1395488-1-almasrymina@google.com>
X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog
Subject: [PATCH v4] mm: Add PM_HUGE_THP_MAPPING to /proc/pid/pagemap
From: Mina Almasry
Cc: Mina Almasry, David Hildenbrand, Matthew Wilcox, "Paul E.
McKenney", Yu Zhao, Jonathan Corbet, Andrew Morton, Peter Xu,
	Ivan Teterevkov, Florian Schmidt, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org

Add PM_HUGE_THP_MAPPING to allow userspace to detect whether a given
virt address is currently mapped by a transparent huge page or not.

An example use case is a process requesting THPs from the kernel (via
a huge tmpfs mount, for example) for a performance-critical region of
memory. Userspace may want to query whether the kernel is actually
backing this memory with hugepages or not.

The PM_HUGE_THP_MAPPING bit is set if the virt address is mapped at
the PMD level and the underlying page is a transparent huge page.

Tested manually by adding logging into transhuge-stress, and by
allocating THPs and querying the PM_HUGE_THP_MAPPING flag at those
virtual addresses.

Signed-off-by: Mina Almasry
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: David Rientjes <rientjes@google.com>
Cc: Paul E. McKenney
Cc: Yu Zhao
Cc: Jonathan Corbet
Cc: Andrew Morton
Cc: Peter Xu
Cc: Ivan Teterevkov
Cc: Florian Schmidt
Cc: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org

---

Changes in v4:
- Removed unnecessary moving of flags variable declaration

Changes in v3:
- Renamed PM_THP to PM_HUGE_THP_MAPPING
- Fixed checks to set PM_HUGE_THP_MAPPING
- Added PM_HUGE_THP_MAPPING docs

---
 Documentation/admin-guide/mm/pagemap.rst      |  3 ++-
 fs/proc/task_mmu.c                            |  3 +++
 tools/testing/selftests/vm/transhuge-stress.c | 21 +++++++++++++++----
 3 files changed, 22 insertions(+), 5 deletions(-)

--
2.34.0.rc0.344.g81b53c2807-goog

diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
index fdc19fbc10839..8a0f0064ff336 100644
--- a/Documentation/admin-guide/mm/pagemap.rst
+++ b/Documentation/admin-guide/mm/pagemap.rst
@@ -23,7 +23,8 @@ There are four components to pagemap:
     * Bit  56    page exclusively mapped (since 4.2)
     * Bit  57    pte is uffd-wp write-protected (since 5.13) (see
       :ref:`Documentation/admin-guide/mm/userfaultfd.rst `)
-    * Bits 57-60 zero
+    * Bit  58    page is a huge (PMD size) THP mapping
+    * Bits 59-60 zero
     * Bit  61    page is file-page or shared-anon (since 3.5)
     * Bit  62    page swapped
     * Bit  63    page present

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ad667dbc96f5c..6f1403f83b310 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1302,6 +1302,7 @@ struct pagemapread {
 #define PM_SOFT_DIRTY		BIT_ULL(55)
 #define PM_MMAP_EXCLUSIVE	BIT_ULL(56)
 #define PM_UFFD_WP		BIT_ULL(57)
+#define PM_HUGE_THP_MAPPING	BIT_ULL(58)
 #define PM_FILE			BIT_ULL(61)
 #define PM_SWAP			BIT_ULL(62)
 #define PM_PRESENT		BIT_ULL(63)
@@ -1456,6 +1457,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,

 		if (page && page_mapcount(page) == 1)
 			flags |= PM_MMAP_EXCLUSIVE;
+		if (page && is_transparent_hugepage(page))
+			flags |= PM_HUGE_THP_MAPPING;

 		for (; addr != end; addr += PAGE_SIZE) {
			pagemap_entry_t pme = make_pme(frame, flags);

diff --git a/tools/testing/selftests/vm/transhuge-stress.c b/tools/testing/selftests/vm/transhuge-stress.c
index fd7f1b4a96f94..7dce18981fff5 100644
--- a/tools/testing/selftests/vm/transhuge-stress.c
+++ b/tools/testing/selftests/vm/transhuge-stress.c
@@ -16,6 +16,12 @@
 #include
 #include

+/*
+ * We can use /proc/pid/pagemap to detect whether the kernel was able to find
+ * hugepages or not. This can be very noisy, so it is disabled by default.
+ */
+#define NO_DETECT_HUGEPAGES
+
 #define PAGE_SHIFT 12
 #define HPAGE_SHIFT 21

@@ -23,6 +29,7 @@
 #define HPAGE_SIZE (1 << HPAGE_SHIFT)

 #define PAGEMAP_PRESENT(ent)	(((ent) & (1ull << 63)) != 0)
+#define PAGEMAP_THP(ent)	(((ent) & (1ull << 58)) != 0)
 #define PAGEMAP_PFN(ent)	((ent) & ((1ull << 55) - 1))

 int pagemap_fd;
@@ -47,10 +54,16 @@ int64_t allocate_transhuge(void *ptr)
 			(uintptr_t)ptr >> (PAGE_SHIFT - 3)) != sizeof(ent))
 		err(2, "read pagemap");

-	if (PAGEMAP_PRESENT(ent[0]) && PAGEMAP_PRESENT(ent[1]) &&
-	    PAGEMAP_PFN(ent[0]) + 1 == PAGEMAP_PFN(ent[1]) &&
-	    !(PAGEMAP_PFN(ent[0]) & ((1 << (HPAGE_SHIFT - PAGE_SHIFT)) - 1)))
-		return PAGEMAP_PFN(ent[0]);
+	if (PAGEMAP_PRESENT(ent[0]) && PAGEMAP_PRESENT(ent[1])) {
+#ifndef NO_DETECT_HUGEPAGES
+		if (!PAGEMAP_THP(ent[0]))
+			fprintf(stderr, "WARNING: detected non THP page\n");
+#endif
+		if (PAGEMAP_PFN(ent[0]) + 1 == PAGEMAP_PFN(ent[1]) &&
+		    !(PAGEMAP_PFN(ent[0]) &
+		      ((1 << (HPAGE_SHIFT - PAGE_SHIFT)) - 1)))
+			return PAGEMAP_PFN(ent[0]);
+	}

 	return -1;
 }