From patchwork Thu Apr  1 18:17:37 2021
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12179115
Date: Thu,  1 Apr 2021 11:17:37 -0700
In-Reply-To: <20210401181741.168763-1-surenb@google.com>
Message-Id: <20210401181741.168763-2-surenb@google.com>
References: <20210401181741.168763-1-surenb@google.com>
Subject: [PATCH 1/5] mm: reuse only-pte-mapped KSM page in do_wp_page()
From: Suren Baghdasaryan
To: stable@vger.kernel.org
Cc: gregkh@linuxfoundation.org, jannh@google.com, ktkhai@virtuozzo.com,
    torvalds@linux-foundation.org, shli@fb.com, namit@vmware.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Yang Shi, "Kirill A. Shutemov",
    Hugh Dickins, Andrea Arcangeli, Christian Koenig, Claudio Imbrenda,
    Rik van Riel, Huang Ying, Minchan Kim, Andrew Morton

From: Kirill Tkhai

Add an optimization for KSM pages almost in the same way that we have
for ordinary anonymous pages.  If there is a write fault in a page
which is mapped by only one pte and is not in the swap cache, the page
may be reused without copying its content.

[ Note that we do not consider PageSwapCache() pages at least for now,
  since we don't want to complicate __get_ksm_page(), which has a nice
  optimization based on this (for the migration case).  Currently it
  spins on PageSwapCache() pages, waiting for their counters to be
  unfrozen (i.e., for the migration to finish).  But we don't want to
  make it also spin on swap cache pages which we try to reuse, since
  the probability of reusing them is not very high.  So, for now we do
  not consider PageSwapCache() pages at all. ]

So in reuse_ksm_page() we check for 1) PageSwapCache() and
2) page_stable_node(), to skip a page which KSM is currently trying to
link to the stable tree.  Then we do page_ref_freeze() to prohibit KSM
from merging one more page into the page we are reusing.  After that,
nobody else can refer to the page being reused: KSM skips
!PageSwapCache() pages with zero refcount, and the protection against
all other participants is the same as for reused ordinary anon pages:
pte lock, page lock and mmap_sem.

[akpm@linux-foundation.org: replace BUG_ON()s with WARN_ON()s]
Link: http://lkml.kernel.org/r/154471491016.31352.1168978849911555609.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Reviewed-by: Yang Shi
Shutemov" Cc: Hugh Dickins Cc: Andrea Arcangeli Cc: Christian Koenig Cc: Claudio Imbrenda Cc: Rik van Riel Cc: Huang Ying Cc: Minchan Kim Cc: Kirill Tkhai Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/linux/ksm.h | 7 +++++++ mm/ksm.c | 30 ++++++++++++++++++++++++++++-- mm/memory.c | 16 ++++++++++++++-- 3 files changed, 49 insertions(+), 4 deletions(-) diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 44368b19b27e..def48a2d87aa 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -64,6 +64,8 @@ struct page *ksm_might_need_to_copy(struct page *page, void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc); void ksm_migrate_page(struct page *newpage, struct page *oldpage); +bool reuse_ksm_page(struct page *page, + struct vm_area_struct *vma, unsigned long address); #else /* !CONFIG_KSM */ @@ -103,6 +105,11 @@ static inline void rmap_walk_ksm(struct page *page, static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage) { } +static inline bool reuse_ksm_page(struct page *page, + struct vm_area_struct *vma, unsigned long address) +{ + return false; +} #endif /* CONFIG_MMU */ #endif /* !CONFIG_KSM */ diff --git a/mm/ksm.c b/mm/ksm.c index 65d4bf52f543..62419735ee9c 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -695,8 +695,9 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it) * case this node is no longer referenced, and should be freed; * however, it might mean that the page is under page_freeze_refs(). * The __remove_mapping() case is easy, again the node is now stale; - * but if page is swapcache in migrate_page_move_mapping(), it might - * still be our page, in which case it's essential to keep the node. + * the same is in reuse_ksm_page() case; but if page is swapcache + * in migrate_page_move_mapping(), it might still be our page, + * in which case it's essential to keep the node. */ while (!get_page_unless_zero(page)) { /* @@ -2609,6 +2610,31 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) goto again; } +bool reuse_ksm_page(struct page *page, + struct vm_area_struct *vma, + unsigned long address) +{ +#ifdef CONFIG_DEBUG_VM + if (WARN_ON(is_zero_pfn(page_to_pfn(page))) || + WARN_ON(!page_mapped(page)) || + WARN_ON(!PageLocked(page))) { + dump_page(page, "reuse_ksm_page"); + return false; + } +#endif + + if (PageSwapCache(page) || !page_stable_node(page)) + return false; + /* Prohibit parallel get_ksm_page() */ + if (!page_ref_freeze(page, 1)) + return false; + + page_move_anon_rmap(page, vma); + page->index = linear_page_index(vma, address); + page_ref_unfreeze(page, 1); + + return true; +} #ifdef CONFIG_MIGRATION void ksm_migrate_page(struct page *newpage, struct page *oldpage) { diff --git a/mm/memory.c b/mm/memory.c index 21a0bbb9c21f..6920bfb3f89c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2831,8 +2831,11 @@ static int do_wp_page(struct vm_fault *vmf) * Take out anonymous pages first, anonymous shared vmas are * not dirty accountable. 
 	 */
-	if (PageAnon(vmf->page) && !PageKsm(vmf->page)) {
+	if (PageAnon(vmf->page)) {
 		int total_map_swapcount;
+		if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
+				page_count(vmf->page) != 1))
+			goto copy;
 		if (!trylock_page(vmf->page)) {
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2847,6 +2850,15 @@ static int do_wp_page(struct vm_fault *vmf)
 			}
 			put_page(vmf->page);
 		}
+		if (PageKsm(vmf->page)) {
+			bool reused = reuse_ksm_page(vmf->page, vmf->vma,
+					vmf->address);
+			unlock_page(vmf->page);
+			if (!reused)
+				goto copy;
+			wp_page_reuse(vmf);
+			return VM_FAULT_WRITE;
+		}
 		if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
 			if (total_map_swapcount == 1) {
 				/*
@@ -2867,7 +2879,7 @@ static int do_wp_page(struct vm_fault *vmf)
 				      (VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
 	}
-
+copy:
 	/*
 	 * Ok, we need to copy.  Oh, well..
 	 */
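
For readers following the change, the sketch below shows roughly how the
anonymous-page branch of do_wp_page() reads once the hunks above are
applied.  It is a simplified excerpt reconstructed from this patch (the
trylock/lock_page retry path and the ordinary reuse_swap_page() handling
are elided), not a substitute for the full function:

	/* Simplified, post-patch control flow in do_wp_page();
	 * reconstructed from the hunks above for illustration only. */
	if (PageAnon(vmf->page)) {
		int total_map_swapcount;

		/* A KSM page that is in the swap cache or has extra
		 * references cannot be reused in place; copy instead. */
		if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
				page_count(vmf->page) != 1))
			goto copy;

		/* ... lock vmf->page (retry path elided) ... */

		if (PageKsm(vmf->page)) {
			/* Freeze the refcount and take exclusive ownership
			 * of the only-pte-mapped KSM page; on failure fall
			 * back to copying. */
			bool reused = reuse_ksm_page(vmf->page, vmf->vma,
					vmf->address);
			unlock_page(vmf->page);
			if (!reused)
				goto copy;
			wp_page_reuse(vmf);
			return VM_FAULT_WRITE;
		}

		/* ... ordinary anon page reuse via reuse_swap_page() ... */
	}
	/* ... shared/file-backed handling ... */
copy:
	/*
	 * Ok, we need to copy.  Oh, well..
	 */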