From patchwork Fri Apr 29 13:35:47 2022
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 12832018
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, jgg@nvidia.com, tj@kernel.org, dennis@kernel.org,
    ming.lei@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com,
    zhouchengming@bytedance.com, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [RFC PATCH 13/18] mm: add try_to_free_user_pte() helper
Date: Fri, 29 Apr 2022 21:35:47 +0800
Message-Id: <20220429133552.33768-14-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20220429133552.33768-1-zhengqi.arch@bytedance.com>
References: <20220429133552.33768-1-zhengqi.arch@bytedance.com>

Normally, the percpu_ref of a user PTE page table page is in percpu
mode. This patch adds try_to_free_user_pte(), which switches the
percpu_ref to atomic mode and checks whether the count has dropped to
zero. If it is zero, no one is using the user PTE page table page, so
we can safely reclaim it.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 include/linux/pte_ref.h |  7 +++
 mm/pte_ref.c            | 99 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 104 insertions(+), 2 deletions(-)

diff --git a/include/linux/pte_ref.h b/include/linux/pte_ref.h
index bfe620038699..379c3b45a6ab 100644
--- a/include/linux/pte_ref.h
+++ b/include/linux/pte_ref.h
@@ -16,6 +16,8 @@ void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
 bool pte_tryget(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
 void __pte_put(pgtable_t page);
 void pte_put(pte_t *ptep);
+void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+                          bool switch_back);
 
 #else /* !CONFIG_FREE_USER_PTE */
 
@@ -47,6 +49,11 @@ static inline void pte_put(pte_t *ptep)
 {
 }
 
+static inline void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd,
+                                        unsigned long addr, bool switch_back)
+{
+}
+
 #endif /* CONFIG_FREE_USER_PTE */
 
 #endif /* _LINUX_PTE_REF_H */
diff --git a/mm/pte_ref.c b/mm/pte_ref.c
index 5b382445561e..bf9629272c71 100644
--- a/mm/pte_ref.c
+++ b/mm/pte_ref.c
@@ -8,6 +8,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #ifdef CONFIG_FREE_USER_PTE
 
@@ -44,8 +47,6 @@ void pte_ref_free(pgtable_t pte)
         kfree(ref);
 }
 
-void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr) {}
-
 /*
  * pte_tryget - try to get the pte_ref of the user PTE page table page
  * @mm: pointer the target address space
@@ -102,4 +103,98 @@ void pte_put(pte_t *ptep)
 }
 EXPORT_SYMBOL(pte_put);
 
+#ifdef CONFIG_DEBUG_VM
+void pte_free_debug(pmd_t pmd)
+{
+        pte_t *ptep = (pte_t *)pmd_page_vaddr(pmd);
+        int i = 0;
+
+        for (i = 0; i < PTRS_PER_PTE; i++)
+                BUG_ON(!pte_none(*ptep++));
+}
+#else
+static inline void pte_free_debug(pmd_t pmd)
+{
+}
+#endif
+
+static inline void pte_free_rcu(struct rcu_head *rcu)
+{
+        struct page *page = container_of(rcu, struct page, rcu_head);
+
+        pgtable_pte_page_dtor(page);
+        __free_page(page);
+}
+
+/*
+ * free_user_pte - free the user PTE page table page
+ * @mm: pointer the target address space
+ * @pmd: pointer to a PMD
+ * @addr: start address of the tlb range to be flushed
+ *
+ * Context: The pmd range has been unmapped and TLB purged. And the user PTE
+ * page table page will be freed by rcu handler.
+ */
+void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
+{
+        struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+        spinlock_t *ptl;
+        pmd_t pmdval;
+
+        ptl = pmd_lock(mm, pmd);
+        pmdval = *pmd;
+        if (pmd_none(pmdval) || pmd_leaf(pmdval)) {
+                spin_unlock(ptl);
+                return;
+        }
+        pmd_clear(pmd);
+        flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+        spin_unlock(ptl);
+
+        pte_free_debug(pmdval);
+        mm_dec_nr_ptes(mm);
+        call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
+}
+
+/*
+ * try_to_free_user_pte - try to free the user PTE page table page
+ * @mm: pointer the target address space
+ * @pmd: pointer to a PMD
+ * @addr: virtual address associated with pmd
+ * @switch_back: indicates if switching back to percpu mode is required
+ */
+void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+                          bool switch_back)
+{
+        pgtable_t pte;
+
+        if (&init_mm == mm)
+                return;
+
+        if (!pte_tryget(mm, pmd, addr))
+                return;
+        pte = pmd_pgtable(*pmd);
+        percpu_ref_switch_to_atomic_sync(pte->pte_ref);
+        rcu_read_lock();
+        /*
+         * Here we can safely put the pte_ref because we already hold the rcu
+         * lock, which guarantees that the user PTE page table page will not
+         * be released.
+         */
+        __pte_put(pte);
+        if (percpu_ref_is_zero(pte->pte_ref)) {
+                rcu_read_unlock();
+                free_user_pte(mm, pmd, addr & PMD_MASK);
+                return;
+        }
+        rcu_read_unlock();
+
+        if (switch_back) {
+                if (pte_tryget(mm, pmd, addr)) {
+                        percpu_ref_switch_to_percpu(pte->pte_ref);
+                        __pte_put(pte);
+                }
+        }
+}
+
 #endif /* CONFIG_FREE_USER_PTE */
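
For readers unfamiliar with the percpu_ref dance described in the commit
message, here is a stand-alone sketch of the generic pattern that
try_to_free_user_pte() relies on: collapse the per-CPU counters with
percpu_ref_switch_to_atomic_sync(), examine the total with
percpu_ref_is_zero(), and return to the cheap percpu mode when the object
turns out to still be in use. This sketch is not part of the patch; the
demo_* names are made up for illustration, only the percpu_ref APIs are
real.

#include <linux/percpu-refcount.h>

static void demo_release(struct percpu_ref *ref)
{
        /* Called once the reference count really reaches zero. */
}

static bool demo_try_reclaim(struct percpu_ref *ref)
{
        /*
         * In percpu mode the counter is spread across all CPUs and cannot
         * be read cheaply, so force it into atomic mode first and wait for
         * the switch to complete.
         */
        percpu_ref_switch_to_atomic_sync(ref);

        if (percpu_ref_is_zero(ref))
                return true;    /* no users left, safe to reclaim */

        /* Still referenced: go back to the fast percpu mode. */
        percpu_ref_switch_to_percpu(ref);
        return false;
}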
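
This patch adds no callers of the new helper, so the following is only a
hypothetical illustration of how an unmap path might drive it, walking an
already-emptied range PMD by PMD. The function name and loop structure are
assumptions made for this sketch; only try_to_free_user_pte() and
<linux/pte_ref.h> come from this series.

#include <linux/mm.h>
#include <linux/pte_ref.h>

static void reclaim_pte_tables_example(struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end)
{
        unsigned long addr;

        for (addr = start & PMD_MASK; addr < end; addr += PMD_SIZE) {
                pgd_t *pgd = pgd_offset(mm, addr);
                p4d_t *p4d;
                pud_t *pud;
                pmd_t *pmd;

                if (pgd_none_or_clear_bad(pgd))
                        continue;
                p4d = p4d_offset(pgd, addr);
                if (p4d_none_or_clear_bad(p4d))
                        continue;
                pud = pud_offset(p4d, addr);
                if (pud_none_or_clear_bad(pud))
                        continue;
                pmd = pmd_offset(pud, addr);

                /*
                 * Let the helper decide whether the PTE page backing this
                 * PMD range can be reclaimed; switch_back = true restores
                 * percpu mode if the page is still in use.
                 */
                try_to_free_user_pte(mm, pmd, addr, true);
        }
}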