From patchwork Thu Apr 16 18:01:32 2020
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 11493615
Date: Thu, 16 Apr 2020 20:01:32 +0200
From: Andrea Righi
To: Andrew Morton
Cc: Huang Ying, Minchan Kim, Anchal Agarwal, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3] mm: swap: properly update readahead statistics in unuse_pte_range()
Message-ID: <20200416180132.GB3352@xps-13>

In unuse_pte_range() we blindly swap-in pages without checking if the
swap entry is already present in the swap cache. By doing this, the
hit/miss ratio used by the swap readahead heuristic is not properly
updated, and this leads to non-optimal performance during swapoff.
Tracing the distribution of the readahead size returned by the swap
readahead heuristic during swapoff shows that a small readahead size is
used most of the time, as if we had only misses (this happens both with
cluster and vma readahead), for example:

 r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
 COUNT      EVENT
 36948      $retval = 8
 44151      $retval = 4
 49290      $retval = 1
 527771     $retval = 2

Checking if the swap entry is present in the swap cache, instead, allows
to properly update the readahead statistics and the heuristic behaves in
a better way during swapoff, selecting a bigger readahead size:

 r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
 COUNT      EVENT
 1618       $retval = 1
 4960       $retval = 2
 41315      $retval = 4
 103521     $retval = 8

In terms of swapoff performance the result is the following:

Testing environment
===================

 - Host:
   CPU: 1.8GHz Intel Core i7-8565U (quad-core, 8MB cache)
   HDD: PC401 NVMe SK hynix 512GB
   MEM: 16GB

 - Guest (kvm):
   8GB of RAM
   virtio block driver
   16GB swap file on ext4 (/swapfile)

Test case
=========

 - allocate 85% of memory
 - `systemctl hibernate` to force all the pages to be swapped-out to
   the swap file
 - resume the system
 - measure the time that swapoff takes to complete:
   # /usr/bin/time swapoff /swapfile

Result (swapoff time)
=====================

                    5.6 vanilla   5.6 w/ this patch
                    -----------   -----------------
 cluster-readahead       22.09s              12.19s
 vma-readahead           18.20s              15.33s

Signed-off-by: "Huang, Ying"
Signed-off-by: Andrea Righi
Reviewed-by: "Huang, Ying"
---
Changes in v3:
 - properly update swap readahead statistics instead of forcing a
   fixed-size readahead

 mm/swapfile.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..f8bf926c9c8f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1937,10 +1937,14 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		pte_unmap(pte);
 		swap_map = &si->swap_map[offset];
 
-		vmf.vma = vma;
-		vmf.address = addr;
-		vmf.pmd = pmd;
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf);
+		page = lookup_swap_cache(entry, vma, addr);
+		if (!page) {
+			vmf.vma = vma;
+			vmf.address = addr;
+			vmf.pmd = pmd;
+			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+						&vmf);
+		}
 		if (!page) {
 			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
 				goto try_next;
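As an aside for anyone reproducing the measurements: the `r::swapin_nr_pages(...)` probe specifier above matches the syntax of bcc's argdist tool, so a run along the following lines presumably collects the same histograms (requires root and a kernel with the swapin_nr_pages symbol; the tool name varies by distro, e.g. argdist-bpfcc on Ubuntu -- this invocation is an assumption, not taken from the patch itself):

 # argdist-bpfcc -C 'r::swapin_nr_pages(unsigned long offset):unsigned long:$retval'

Run it in one terminal while swapoff executes in another, then interrupt it to print the COUNT/EVENT table.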