From patchwork Fri Jun 23 07:12:52 2017
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 9805817
From: "Huang, Ying"
To: Andrew Morton
Cc: Rik van Riel, linux-nvdimm@lists.01.org, Huang Ying, Hugh Dickins,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
    Minchan Kim, Shaohua Li
Subject: [PATCH -mm -v2 01/12] mm, THP, swap: Support to clear swap cache flag for THP swapped out
Date: Fri, 23 Jun 2017 15:12:52 +0800
Message-Id: <20170623071303.13469-2-ying.huang@intel.com>
In-Reply-To: <20170623071303.13469-1-ying.huang@intel.com>
References: <20170623071303.13469-1-ying.huang@intel.com>

From: Huang Ying

Previously, swapcache_free_cluster() was used only in the error path of
shrink_page_list(), to free the just-allocated swap cluster when splitting
the THP (Transparent Huge Page) fails.  This patch enhances it to also
clear the swap cache flag (SWAP_HAS_CACHE) for a swap cluster that holds
the contents of a swapped-out THP.  This will be used by the upcoming
support for delaying the split of a THP until after it has been swapped
out.  Because swapping a THP back in as a whole is not supported yet, the
swap cluster backing the swapped-out THP will be split after the swap
cache flag has been cleared, so that the swap slots in the cluster can
later be swapped in as normal pages.
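For readers new to this path, the self-contained sketch below models the
decision the patch adds: count the slots in the cluster whose swap_map byte
is exactly SWAP_HAS_CACHE, then either free the whole cluster, only clear
the cache flag, or fall back to per-slot freeing.  It is an illustration,
not kernel code; the cluster size, the stub helpers, and the absence of any
locking are simplifications made for this sketch (SWAP_HAS_CACHE == 0x40
does match <linux/swap.h>).

/*
 * Illustration only -- not kernel code.  A userspace model of the new
 * logic in swapcache_free_cluster().
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SWAPFILE_CLUSTER 512	/* HPAGE_PMD_NR: slots backing one 2MB THP on x86-64 */
#define SWAP_HAS_CACHE   0x40	/* slot is referenced by the swap cache */

/* Model of one cluster's slice of si->swap_map: one usage byte per slot. */
static unsigned char map[SWAPFILE_CLUSTER];

static void model_swapcache_free_cluster(void)
{
	unsigned int i, free_entries = 0;

	for (i = 0; i < SWAPFILE_CLUSTER; i++) {
		/* Every slot must still hold its swap-cache reference. */
		assert(map[i] & SWAP_HAS_CACHE);
		if (map[i] == SWAP_HAS_CACHE)	/* cache reference only */
			free_entries++;
	}

	if (free_entries == SWAPFILE_CLUSTER) {
		/* No other user of any slot: release the cluster as a unit
		 * (the kernel memset()s the map and calls swap_free_cluster()). */
		memset(map, 0, sizeof(map));
		puts("whole cluster freed");
	} else if (!free_entries) {
		/* Every slot is still mapped somewhere: just drop the cache flag. */
		for (i = 0; i < SWAPFILE_CLUSTER; i++)
			map[i] &= ~SWAP_HAS_CACHE;
		puts("cache flag cleared, all slots still in use");
	} else {
		/* Mixed: drop the cache reference slot by slot and free the
		 * slots nobody else uses (the kernel does this through
		 * __swap_entry_free() and free_swap_slot()). */
		unsigned int freed = 0;

		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
			map[i] &= ~SWAP_HAS_CACHE;
			if (!map[i])
				freed++;
		}
		printf("%u of %u slots freed individually\n", freed, SWAPFILE_CLUSTER);
	}
}

int main(void)
{
	unsigned int i;

	/* Example: half of the slots are still referenced from page tables. */
	memset(map, SWAP_HAS_CACHE, sizeof(map));
	for (i = 0; i < SWAPFILE_CLUSTER / 2; i++)
		map[i] |= 1;		/* one map count besides the cache reference */

	model_swapcache_free_cluster();	/* prints "256 of 512 slots freed individually" */
	return 0;
}

In the real function the whole-cluster path re-takes si->lock and the
cluster lock before releasing the cluster, while the mixed path lets
__swap_entry_free()/free_swap_slot() handle each entry, as can be seen in
the diff below.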
Signed-off-by: "Huang, Ying"
Cc: Johannes Weiner
Cc: Minchan Kim
Cc: Hugh Dickins
Cc: Shaohua Li
Cc: Rik van Riel
---
 mm/swapfile.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6ba4aab2db0b..c32e9b23d642 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1168,22 +1168,40 @@ static void swapcache_free_cluster(swp_entry_t entry)
 	struct swap_cluster_info *ci;
 	struct swap_info_struct *si;
 	unsigned char *map;
-	unsigned int i;
+	unsigned int i, free_entries = 0;
+	unsigned char val;
 
-	si = swap_info_get(entry);
+	si = _swap_info_get(entry);
 	if (!si)
 		return;
 
 	ci = lock_cluster(si, offset);
 	map = si->swap_map + offset;
 	for (i = 0; i < SWAPFILE_CLUSTER; i++) {
-		VM_BUG_ON(map[i] != SWAP_HAS_CACHE);
-		map[i] = 0;
+		val = map[i];
+		VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+		if (val == SWAP_HAS_CACHE)
+			free_entries++;
+	}
+	if (!free_entries) {
+		for (i = 0; i < SWAPFILE_CLUSTER; i++)
+			map[i] &= ~SWAP_HAS_CACHE;
 	}
 	unlock_cluster(ci);
-	mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
-	swap_free_cluster(si, idx);
-	spin_unlock(&si->lock);
+	if (free_entries == SWAPFILE_CLUSTER) {
+		spin_lock(&si->lock);
+		ci = lock_cluster(si, offset);
+		memset(map, 0, SWAPFILE_CLUSTER);
+		unlock_cluster(ci);
+		mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
+		swap_free_cluster(si, idx);
+		spin_unlock(&si->lock);
+	} else if (free_entries) {
+		for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+			if (!__swap_entry_free(si, entry, SWAP_HAS_CACHE))
+				free_swap_slot(entry);
+		}
+	}
 }
 #else
 static inline void swapcache_free_cluster(swp_entry_t entry)