From patchwork Wed Nov 17 19:48:54 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12625301
Date: Wed, 17 Nov 2021 11:48:54 -0800
Message-Id: <20211117194855.398455-1-almasrymina@google.com>
Subject: [PATCH v6] mm: Add PM_THP_MAPPED to /proc/pid/pagemap
From: Mina Almasry
To: Jonathan Corbet
Cc: Mina Almasry, David Hildenbrand, Matthew Wilcox, "Paul E. McKenney",
    Yu Zhao, Andrew Morton, Peter Xu, Ivan Teterevkov, Florian Schmidt,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-doc@vger.kernel.org

Add PM_THP_MAPPED to allow userspace to detect whether a given virt
address is currently mapped by a transparent huge page or not.

Example use case is a process requesting THPs from the kernel (via a
huge tmpfs mount, for example) for a performance-critical region of
memory. Userspace may want to query whether the kernel is actually
backing this memory with huge pages or not.

The PM_THP_MAPPED bit is set if the virt address is mapped at the PMD
level and the underlying page is a transparent huge page.

A few options were considered:

1. Add /proc/pid/pageflags that exports the same info as
   /proc/kpageflags. This is not appropriate because many kpageflags
   are inappropriate to expose to userspace processes.

2. Simply get this info from the existing /proc/pid/smaps interface.
   There are a couple of issues with that:

   1. /proc/pid/smaps output is human readable and unfriendly to
      parse programmatically.

   2. /proc/pid/smaps is slow. The cost of reading /proc/pid/smaps
      into userspace buffers is about ~800us per call, and this
      doesn't include parsing the output to get the information you
      need. The cost of querying 1 virt address in /proc/pid/pagemap,
      however, is around 5-7us.

Tested manually by adding logging into transhuge-stress, and by
allocating THPs and querying the PM_THP_MAPPED flag at those virtual
addresses.

Signed-off-by: Mina Almasry
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: David Rientjes <rientjes@google.com>
Cc: Paul E. McKenney
Cc: Yu Zhao
Cc: Jonathan Corbet
Cc: Andrew Morton
Cc: Peter Xu
Cc: Ivan Teterevkov
Cc: Florian Schmidt
Cc: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org

---

Changes in v6:
- Renamed to PM_THP_MAPPED
- Removed changes to transhuge-stress

Changes in v5:
- Added justification for this interface in the commit message!
Changes in v4:
- Removed unnecessary moving of flags variable declaration

Changes in v3:
- Renamed PM_THP to PM_HUGE_THP_MAPPING
- Fixed checks to set PM_HUGE_THP_MAPPING
- Added PM_HUGE_THP_MAPPING docs

---
 Documentation/admin-guide/mm/pagemap.rst | 3 ++-
 fs/proc/task_mmu.c                       | 3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

--
2.34.0.rc2.393.gf8c9666880-goog

diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
index fdc19fbc10839..8a0f0064ff336 100644
--- a/Documentation/admin-guide/mm/pagemap.rst
+++ b/Documentation/admin-guide/mm/pagemap.rst
@@ -23,7 +23,8 @@ There are four components to pagemap:
     * Bit  56    page exclusively mapped (since 4.2)
     * Bit  57    pte is uffd-wp write-protected (since 5.13) (see
       :ref:`Documentation/admin-guide/mm/userfaultfd.rst `)
-    * Bits 57-60 zero
+    * Bit  58    page is a huge (PMD size) THP mapping
+    * Bits 59-60 zero
     * Bit  61    page is file-page or shared-anon (since 3.5)
     * Bit  62    page swapped
     * Bit  63    page present

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ad667dbc96f5c..d784a97aa209a 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1302,6 +1302,7 @@ struct pagemapread {
 #define PM_SOFT_DIRTY		BIT_ULL(55)
 #define PM_MMAP_EXCLUSIVE	BIT_ULL(56)
 #define PM_UFFD_WP		BIT_ULL(57)
+#define PM_THP_MAPPED		BIT_ULL(58)
 #define PM_FILE			BIT_ULL(61)
 #define PM_SWAP			BIT_ULL(62)
 #define PM_PRESENT		BIT_ULL(63)
@@ -1456,6 +1457,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		if (page && page_mapcount(page) == 1)
 			flags |= PM_MMAP_EXCLUSIVE;
+		if (page && is_transparent_hugepage(page))
+			flags |= PM_THP_MAPPED;

 		for (; addr != end; addr += PAGE_SIZE) {
 			pagemap_entry_t pme = make_pme(frame, flags);
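
Not part of the patch, but for illustration: a minimal userspace sketch of
how the new bit could be consumed, assuming a kernel with this change
applied. The helper name and the 2MB anonymous mapping below are made up
for the example; whether the region actually ends up THP-backed depends on
alignment and the system's THP settings (MADV_HUGEPAGE, huge tmpfs, ...).

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PM_THP_MAPPED	(1ULL << 58)	/* bit proposed by this patch */

/* Read the pagemap entry for one virtual address and test PM_THP_MAPPED. */
static int vaddr_is_thp_mapped(void *addr)
{
	long page_size = sysconf(_SC_PAGESIZE);
	off_t offset = (uintptr_t)addr / page_size * sizeof(uint64_t);
	uint64_t entry;
	int ret = -1;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	if (pread(fd, &entry, sizeof(entry), offset) == sizeof(entry))
		ret = !!(entry & PM_THP_MAPPED);
	close(fd);
	return ret;
}

int main(void)
{
	size_t len = 2UL * 1024 * 1024;	/* one PMD-sized region on x86_64 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	madvise(buf, len, MADV_HUGEPAGE);	/* ask for THP backing */
	memset(buf, 0, len);			/* fault the memory in */

	printf("THP mapped: %d\n", vaddr_is_thp_mapped(buf));
	munmap(buf, len);
	return 0;
}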