From patchwork Mon Jun 1 16:06:35 2020
X-Patchwork-Submitter: Andrea Righi <andrea.righi@canonical.com>
X-Patchwork-Id: 11582353
From: Andrea Righi
To: "Rafael J. Wysocki", Pavel Machek
Cc: Len Brown, Andrew Morton, linux-pm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/2] mm: swap: allow partial swapoff with try_to_unuse()
Date: Mon, 1 Jun 2020 18:06:35 +0200
Message-Id: <20200601160636.148346-2-andrea.righi@canonical.com>
In-Reply-To: <20200601160636.148346-1-andrea.righi@canonical.com>
References: <20200601160636.148346-1-andrea.righi@canonical.com>

Allow try_to_unuse() to unuse an arbitrary number of pages even when
frontswap is not used.

To preserve the default behavior, introduce a new function,
try_to_unuse_wait(), with an extra 'wait' parameter: if 'wait' is false,
return as soon as "pages_to_unuse" pages are unused; if 'wait' is true,
ignore "pages_to_unuse" and wait until all the pages are unused. In
either case, a "pages_to_unuse" value of 0 means "all pages".

This is required by the PM / hibernation opportunistic memory reclaim
feature.
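For illustration, a rough sketch of how the two entry points are
intended to be used (the callers, swap type, and page count below are
made-up examples, not part of this patch):

  #include <linux/swapfile.h>

  /* Illustrative only: unuse at most 1024 pages of the given swap
   * device and return without waiting for the remaining ones
   * (frontswap not used, wait == false).
   */
  static int example_partial_unuse(unsigned int type)
  {
  	return try_to_unuse_wait(type, false, false, 1024);
  }

  /* Equivalent to the old behavior (e.g., the swapoff path): wait
   * until all pages are unused; pages_to_unuse == 0 means "all pages".
   */
  static int example_full_unuse(unsigned int type)
  {
  	return try_to_unuse(type, false, 0);
  }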
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
 include/linux/swapfile.h |  7 +++++++
 mm/swapfile.c            | 15 +++++++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
index e06febf62978..ac4d0ccd1f7b 100644
--- a/include/linux/swapfile.h
+++ b/include/linux/swapfile.h
@@ -9,6 +9,13 @@
 extern spinlock_t swap_lock;
 extern struct plist_head swap_active_head;
 extern struct swap_info_struct *swap_info[];
+extern int try_to_unuse_wait(unsigned int type, bool frontswap, bool wait,
+			     unsigned long pages_to_unuse);
+static inline int
+try_to_unuse(unsigned int type, bool frontswap, unsigned long pages_to_unuse)
+{
+	return try_to_unuse_wait(type, frontswap, true, pages_to_unuse);
+}
 extern int try_to_unuse(unsigned int, bool, unsigned long);
 extern unsigned long generic_max_swapfile_size(void);
 extern unsigned long max_swapfile_size(void);

diff --git a/mm/swapfile.c b/mm/swapfile.c
index f8bf926c9c8f..651471ccf133 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2121,10 +2121,13 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 }
 
 /*
- * If the boolean frontswap is true, only unuse pages_to_unuse pages;
- * pages_to_unuse==0 means all pages; ignored if frontswap is false
+ * Unuse pages_to_unuse pages; pages_to_unuse==0 means all pages.
+ *
+ * If "wait" is false, stop as soon as "pages_to_unuse" pages are unused;
+ * if "wait" is true, "pages_to_unuse" is ignored and we wait until all
+ * the pages are unused.
  */
-int try_to_unuse(unsigned int type, bool frontswap,
+int try_to_unuse_wait(unsigned int type, bool frontswap, bool wait,
 		  unsigned long pages_to_unuse)
 {
 	struct mm_struct *prev_mm;
@@ -2138,10 +2141,6 @@ int try_to_unuse(unsigned int type, bool frontswap,
 
 	if (!READ_ONCE(si->inuse_pages))
 		return 0;
-
-	if (!frontswap)
-		pages_to_unuse = 0;
-
 retry:
 	retval = shmem_unuse(type, frontswap, &pages_to_unuse);
 	if (retval)
@@ -2223,7 +2222,7 @@ int try_to_unuse(unsigned int type, bool frontswap,
 	 * been preempted after get_swap_page(), temporarily hiding that swap.
 	 * It's easy and robust (though cpu-intensive) just to keep retrying.
 	 */
-	if (READ_ONCE(si->inuse_pages)) {
+	if (wait && READ_ONCE(si->inuse_pages)) {
 		if (!signal_pending(current))
 			goto retry;
 		retval = -EINTR;

From patchwork Mon Jun 1 16:06:36 2020
X-Patchwork-Submitter: Andrea Righi <andrea.righi@canonical.com>
X-Patchwork-Id: 11582361
From: Andrea Righi
To: "Rafael J. Wysocki", Pavel Machek
Cc: Len Brown, Andrew Morton, linux-pm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 2/2] PM: hibernate: introduce opportunistic memory reclaim
Date: Mon, 1 Jun 2020 18:06:36 +0200
Message-Id: <20200601160636.148346-3-andrea.righi@canonical.com>
In-Reply-To: <20200601160636.148346-1-andrea.righi@canonical.com>
References: <20200601160636.148346-1-andrea.righi@canonical.com>

== Overview ==

When a system is going to be hibernated, the kernel needs to allocate
and dump the content of the entire memory to the resume device (swap)
by creating a "hibernation image". To make sure this image fits in the
available free memory, the kernel can induce an artificial memory
pressure condition that frees up pages (i.e., drop clean page cache
pages, write back dirty page cache pages, swap out anonymous memory,
etc.).

How hard the kernel pushes to free up memory is determined by
/sys/power/image_size: a smaller size causes more memory to be dropped,
cutting down the amount of I/O required to write the hibernation image;
a larger image size, instead, generates more I/O, but the system will
likely be less sluggish at resume, because more caches will be
restored, reducing the paging time.

The I/O generated to free up memory, write the hibernation image to
disk and load it back to memory is the main bottleneck of hibernation
[1].

== Proposed solution ==

"Opportunistic memory reclaim" aims to provide an interface for
user-space to control this artificial memory pressure, as sketched in
the example below.
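As a minimal user-space sketch of the intended flow (illustrative only:
the sysfs path is the one introduced by this patch, while the 30-second
timeout and the "0" value, meaning "reclaim as many pages as possible",
are made-up example values):

  #include <stdio.h>
  #include <unistd.h>
  #include <signal.h>
  #include <fcntl.h>

  static void on_alarm(int sig)
  {
  	(void)sig;	/* just interrupt the blocked write() below */
  }

  int main(void)
  {
  	struct sigaction sa = { .sa_handler = on_alarm };
  	int fd;

  	fd = open("/sys/power/mm_reclaim/run", O_WRONLY);
  	if (fd < 0) {
  		perror("open");
  		return 1;
  	}
  	/* No SA_RESTART: the pending signal makes the kernel stop
  	 * reclaiming and the write() return early.
  	 */
  	sigaction(SIGALRM, &sa, NULL);
  	alarm(30);
  	if (write(fd, "0", 1) < 0)
  		perror("write");
  	close(fd);
  	return 0;
  }

The same pattern applies to /sys/power/mm_reclaim/release after resume.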
With this feature user-space can trigger the memory reclaim before the
actual hibernation is started (e.g., when the system has been idle for
a certain amount of time). This makes it possible to consistently speed
up hibernation (in terms of time to hibernate) by reducing the size of
the hibernation image in advance.

== Interface ==

To accomplish this goal, the following new files are provided in sysfs:

 - /sys/power/mm_reclaim/run
 - /sys/power/mm_reclaim/release

The former can be used to start memory reclaim by writing a number
representing the desired amount of pages to be reclaimed (with "0" the
kernel will try to reclaim as many pages as possible). The latter can
be used in the same way to force the kernel to pull a certain number of
swapped-out pages back to memory (again, "0" means "as many pages as
possible"); this can be useful immediately after resume to speed up the
paging time and get the system back to full speed faster.

Memory reclaim and release can be interrupted by sending a signal to
the process that is writing to /sys/power/mm_reclaim/{run,release}
(e.g., to set a timeout for the particular operation).

== Testing ==

Environment:
 - VM (kvm):
   - 8GB of RAM
   - disk speed: 100 MB/s
   - 8GB swap file on ext4 (/swapfile)

Use case:
 - allocate 85% of memory, wait for 60s almost in idle, then hibernate
   and resume (measuring the time)

Result (average of 10 runs):

                                  5.7-vanilla   5.7-mm_reclaim
                                  -----------   --------------
  [hibernate] image_size=default       51.56s            4.19s
  [resume]    image_size=default       26.34s            5.01s
  [hibernate] image_size=0             73.22s            5.36s
  [resume]    image_size=0              5.32s            5.26s

NOTE #1: in the 5.7-mm_reclaim case a user-space daemon detects when
the system is idle and triggers the opportunistic memory reclaim via
/sys/power/mm_reclaim/run.

NOTE #2: in the 5.7-mm_reclaim case, after the system is resumed, a
user-space process can (optionally) use /sys/power/mm_reclaim/release
to pre-load back to memory all (or some) of the swapped-out pages in
order to have a more responsive system.

== Conclusion ==

Opportunistic memory reclaim can provide a significant benefit to those
systems where being able to hibernate quickly is important. The typical
use case is with "spot" cloud instances: low-priority instances that
can be stopped at any time (with advance notice) to prioritize other,
more privileged, instances [2]. Being able to quickly stop low-priority
instances that are idle for the majority of the time can be critical to
provide a better quality of service in the overall cloud
infrastructure.

== See also ==

 [1] https://lwn.net/Articles/821158/
 [2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
 Documentation/ABI/testing/sysfs-power | 38 +++++++++++
 include/linux/swapfile.h              |  1 +
 kernel/power/hibernate.c              | 94 ++++++++++++++++++++++++++-
 mm/swapfile.c                         | 30 +++++++++
 4 files changed, 162 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index 5e6ead29124c..b33db9816a8c 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -192,6 +192,44 @@ Description:
 		Reading from this file will display the current value, which is
 		set to 1 MB by default.
 
+What:		/sys/power/mm_reclaim/
+Date:		May 2020
+Contact:	Andrea Righi <andrea.righi@canonical.com>
+Description:
+		The /sys/power/mm_reclaim directory contains all the
+		opportunistic memory reclaim files.
+
+What:		/sys/power/mm_reclaim/run
+Date:		May 2020
+Contact:	Andrea Righi <andrea.righi@canonical.com>
+Description:
+		The /sys/power/mm_reclaim/run file allows user space to trigger
+		opportunistic memory reclaim. When a string representing a
+		non-negative number is written to this file, it is taken as the
+		number of pages to be reclaimed (0 is a special value that
+		means "as many pages as possible").
+
+		When opportunistic memory reclaim is started the system will be
+		put into an artificial memory pressure condition and memory
+		will be reclaimed by dropping clean page cache pages, swapping
+		out anonymous pages, etc.
+
+		NOTE: it is possible to interrupt the memory reclaim by
+		sending a signal to the writer of this file.
+
+What:		/sys/power/mm_reclaim/release
+Date:		May 2020
+Contact:	Andrea Righi <andrea.righi@canonical.com>
+Description:
+		Force swapped-out pages to be loaded back to memory. When a
+		string representing a non-negative number is written to this
+		file, it is taken as the number of pages to be pulled back to
+		memory from the swap device(s) (0 is a special value that
+		means "as many pages as possible").
+
+		NOTE: it is possible to interrupt the memory release by
+		sending a signal to the writer of this file.
+
 What:		/sys/power/autosleep
 Date:		April 2012
 Contact:	Rafael J. Wysocki

diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
index ac4d0ccd1f7b..6f4144099958 100644
--- a/include/linux/swapfile.h
+++ b/include/linux/swapfile.h
@@ -9,6 +9,7 @@
 extern spinlock_t swap_lock;
 extern struct plist_head swap_active_head;
 extern struct swap_info_struct *swap_info[];
+extern void swap_unuse(unsigned long pages);
 extern int try_to_unuse_wait(unsigned int type, bool frontswap, bool wait,
 			     unsigned long pages_to_unuse);
 static inline int

diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 30bd28d1d418..caa06eb5a09f 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/swapfile.h>
 #include <...>
 #include "power.h"
@@ -1150,6 +1151,92 @@ static ssize_t reserved_size_store(struct kobject *kobj,
 
 power_attr(reserved_size);
 
+/*
+ * Try to reclaim some memory in the system, stop when one of the following
+ * conditions occurs:
+ *  - at least "nr_pages" have been reclaimed
+ *  - no more pages can be reclaimed
+ *  - current task explicitly interrupted by a signal (e.g., user space
+ *    timeout)
+ *
+ * @nr_pages - number of pages to be reclaimed (0 means "as many pages as
+ * possible").
+ */
+static void do_mm_reclaim(unsigned long nr_pages)
+{
+	while (nr_pages > 0) {
+		unsigned long reclaimed;
+
+		if (signal_pending(current))
+			break;
+		reclaimed = shrink_all_memory(nr_pages);
+		if (!reclaimed)
+			break;
+		nr_pages -= min_t(unsigned long, reclaimed, nr_pages);
+	}
+}
+
+static ssize_t run_show(struct kobject *kobj,
+			struct kobj_attribute *attr, char *buf)
+{
+	return -EINVAL;
+}
+
+static ssize_t run_store(struct kobject *kobj,
+			 struct kobj_attribute *attr,
+			 const char *buf, size_t n)
+{
+	unsigned long nr_pages;
+	int ret;
+
+	ret = kstrtoul(buf, 0, &nr_pages);
+	if (ret)
+		return ret;
+	if (!nr_pages)
+		nr_pages = ULONG_MAX;
+	do_mm_reclaim(nr_pages);
+
+	return n;
+}
+
+power_attr(run);
+
+static ssize_t release_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	return -EINVAL;
+}
+
+static ssize_t release_store(struct kobject *kobj,
+			     struct kobj_attribute *attr,
+			     const char *buf, size_t n)
+{
+	unsigned long nr_pages;
+	int ret;
+
+	ret = kstrtoul(buf, 0, &nr_pages);
+	if (ret)
+		return ret;
+	if (!nr_pages)
+		nr_pages = ULONG_MAX;
+	swap_unuse(nr_pages);
+
+	return n;
+}
+
+power_attr(release);
+
+static struct attribute *mm_reclaim_attrs[] = {
+	&run_attr.attr,
+	&release_attr.attr,
+	NULL,
+};
+
+static struct attribute_group mm_reclaim_attr_group = {
+	.name = "mm_reclaim",
+	.attrs = mm_reclaim_attrs,
+};
+
 static struct attribute * g[] = {
 	&disk_attr.attr,
 	&resume_offset_attr.attr,
@@ -1164,10 +1251,15 @@ static const struct attribute_group attr_group = {
 	.attrs = g,
 };
 
+static const struct attribute_group *attr_groups[] = {
+	&attr_group,
+	&mm_reclaim_attr_group,
+	NULL,
+};
+
 static int __init pm_disk_init(void)
 {
-	return sysfs_create_group(power_kobj, &attr_group);
+	return sysfs_create_groups(power_kobj, attr_groups);
 }
 core_initcall(pm_disk_init);

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 651471ccf133..7391f122ad73 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1749,6 +1749,36 @@ int free_swap_and_cache(swp_entry_t entry)
 }
 
 #ifdef CONFIG_HIBERNATION
+/*
+ * Force pages to be pulled back to memory from all swap devices.
+ *
+ * @pages - number of pages to be pulled back from all swap devices
+ * (0 = all pages from any swap device).
+ */
+void swap_unuse(unsigned long pages)
+{
+	int type;
+
+	spin_lock(&swap_lock);
+	for (type = 0; type < nr_swapfiles; type++) {
+		struct swap_info_struct *sis = swap_info[type];
+		struct block_device *bdev;
+
+		if (!(sis->flags & SWP_WRITEOK))
+			continue;
+		bdev = bdgrab(sis->bdev);
+		if (!bdev)
+			continue;
+		spin_unlock(&swap_lock);
+
+		try_to_unuse_wait(type, false, false, pages);
+
+		bdput(sis->bdev);
+		spin_lock(&swap_lock);
+	}
+	spin_unlock(&swap_lock);
+}
+
 /*
  * Find the swap type that corresponds to given device (if any).
  *