From patchwork Wed Jan 23 23:52:53 2019
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10778163
From: Yang Shi
To: ktkhai@virtuozzo.com, hughd@google.com, aarcange@redhat.com, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH] mm: ksm: do not block on page lock when searching stable tree
Date: Thu, 24 Jan 2019 07:52:53 +0800
Message-Id: <1548287573-15084-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

ksmd needs to search the stable tree to look for a suitable KSM page, but
the KSM page might be locked for a while, e.g. during a KSM page rmap walk.

Basically this is not a big deal since commit 2c653d0ee2ae ("ksm: introduce
ksm_max_page_sharing per page deduplication limit"), because
max_page_sharing limits the number of shared KSM pages.  But it still does
not seem worth waiting for the lock: the page can be skipped and merged in
the next scan instead, avoiding a potential stall if its content is still
intact.

Introduce an async mode to get_ksm_page() so that it does not block on the
page lock, like what try_to_merge_one_page() does.  Return -EBUSY if the
trylock fails, since NULL already means no suitable KSM page was found,
which is a valid case.
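
To make the new calling convention concrete before the diff, here is a
minimal caller-side sketch (not part of the patch: demo_search_one() is a
made-up name for illustration, while __get_ksm_page(), trylock_page(),
ERR_PTR() and PTR_ERR() are the names actually used below), assuming
mm/ksm.c context:

static struct page *demo_search_one(struct stable_node *stable_node)
{
	/* async == true: try the page lock instead of sleeping in lock_page() */
	struct page *page = __get_ksm_page(stable_node, true, true);

	if (PTR_ERR(page) == -EBUSY)
		return ERR_PTR(-EBUSY);	/* page locked elsewhere: skip, retry on next scan */
	if (!page)
		return NULL;		/* stable node is stale: no KSM page to return */

	return page;			/* valid KSM page, returned with its page lock held */
}
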
With the default max_page_sharing setting (256), there is almost no
observed difference between the lock and trylock versions.  However, with
LTP's ksm02 test, which sets max_page_sharing to 786432, a reduced ksmd
full scan time can be observed: with the lock version, ksmd may take
10s - 11s to run two full scans; with the trylock version, ksmd may take
8s - 11s to run two full scans.  And the numbers of pages_sharing and
pages_to_scan stay the same.  Basically, this change does no harm.

Cc: Hugh Dickins
Cc: Andrea Arcangeli
Reviewed-by: Kirill Tkhai
Signed-off-by: Yang Shi
---
Hi folks,

This patch was submitted together with "mm: vmscan: skip KSM page in
direct reclaim if priority is low" in the initial submission.  Then Hugh
and Andrea pointed out that commit 2c653d0ee2ae ("ksm: introduce
ksm_max_page_sharing per page deduplication limit") is good enough to
limit the number of shared KSM pages and prevent a soft lockup when
walking the KSM page rmap, and that commit does solve the problem.  So
the series was dropped by Andrew from the -mm tree.

However, I thought the second patch (this one) still sounds useful, so I
did some testing and am resubmitting it.  The first version was reviewed
by Kirill Tkhai, so I keep his Reviewed-by tag since there is no change
to the patch except the commit log.

So, would you please reconsider this patch?

v2: Updated the commit log to reflect some test results and the latest
    discussion

 mm/ksm.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 6c48ad1..f66405c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -668,7 +668,7 @@ static void remove_node_from_stable_tree(struct stable_node *stable_node)
 }
 
 /*
- * get_ksm_page: checks if the page indicated by the stable node
+ * __get_ksm_page: checks if the page indicated by the stable node
  * is still its ksm page, despite having held no reference to it.
  * In which case we can trust the content of the page, and it
  * returns the gotten page; but if the page has now been zapped,
@@ -686,7 +686,8 @@ static void remove_node_from_stable_tree(struct stable_node *stable_node)
  * a page to put something that might look like our key in page->mapping.
  * is on its way to being freed; but it is an anomaly to bear in mind.
  */
-static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
+static struct page *__get_ksm_page(struct stable_node *stable_node,
+				   bool lock_it, bool async)
 {
 	struct page *page;
 	void *expected_mapping;
@@ -729,7 +730,14 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
 	}
 
 	if (lock_it) {
-		lock_page(page);
+		if (async) {
+			if (!trylock_page(page)) {
+				put_page(page);
+				return ERR_PTR(-EBUSY);
+			}
+		} else
+			lock_page(page);
+
 		if (READ_ONCE(page->mapping) != expected_mapping) {
 			unlock_page(page);
 			put_page(page);
@@ -752,6 +760,11 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
 	return NULL;
 }
 
+static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
+{
+	return __get_ksm_page(stable_node, lock_it, false);
+}
+
 /*
  * Removing rmap_item from stable or unstable tree.
  * This function will clean the information from the stable/unstable tree.
@@ -1673,7 +1686,11 @@ static struct page *stable_tree_search(struct page *page)
 		 * It would be more elegant to return stable_node
 		 * than kpage, but that involves more changes.
 		 */
-		tree_page = get_ksm_page(stable_node_dup, true);
+		tree_page = __get_ksm_page(stable_node_dup, true, true);
+
+		if (PTR_ERR(tree_page) == -EBUSY)
+			return ERR_PTR(-EBUSY);
+
 		if (unlikely(!tree_page))
 			/*
 			 * The tree may have been rebalanced,
@@ -2060,6 +2077,10 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
 
 	/* We first start with searching the page inside the stable tree */
 	kpage = stable_tree_search(page);
+
+	if (PTR_ERR(kpage) == -EBUSY)
+		return;
+
 	if (kpage == page && rmap_item->head == stable_node) {
 		put_page(kpage);
 		return;
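
A brief aside on the error-pointer convention relied on in the last two
hunks, for reviewers less familiar with it: PTR_ERR() is just a cast to
long, so the bare PTR_ERR(kpage) == -EBUSY test distinguishes all three
possible return values of stable_tree_search().  A hypothetical helper
(not part of the patch) spelling the same check out more defensively:

/*
 * Illustration only, assuming mm/ksm.c context:
 *  - ERR_PTR(-EBUSY) casts back to -EBUSY, so the test is true;
 *  - NULL casts to 0, so the test is false;
 *  - a real struct page pointer never lies in the error-pointer range,
 *    so the test is false as well.
 */
static inline bool stable_search_was_busy(struct page *kpage)
{
	return IS_ERR(kpage) && PTR_ERR(kpage) == -EBUSY;
}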