From: James Morse
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon, Sudeep Holla, Geoff Levand, Catalin Marinas,
	Lorenzo Pieralisi, Mark Rutland, AKASHI Takahiro, Marc Zyngier,
	"Rafael J. Wysocki", Pavel Machek, linux-pm@vger.kernel.org,
	James Morse
Subject: [PATCH v5 13/15] PM / Hibernate: Call flush_icache_range() on pages restored in-place
Date: Tue, 16 Feb 2016 15:49:25 +0000
Message-Id: <1455637767-31561-14-git-send-email-james.morse@arm.com>
X-Mailer: git-send-email 2.6.2
In-Reply-To: <1455637767-31561-1-git-send-email-james.morse@arm.com>
References: <1455637767-31561-1-git-send-email-james.morse@arm.com>

Some architectures require code written to memory as if it were data to
be 'cleaned' from any data caches before the processor can fetch it as
new instructions.

During resume from hibernate, the snapshot code copies some pages
directly, meaning these architectures do not get a chance to perform
their cache maintenance.

Modify the read and decompress code to call flush_icache_range() on all
pages that are restored, so that the restored in-place pages are
guaranteed to be executable on these architectures.

Signed-off-by: James Morse
Acked-by: Pavel Machek
---
 kernel/power/swap.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
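[ Not part of the patch: a user-space illustration of the problem the
  commit message describes. call_copied_code() is a hypothetical helper,
  and __builtin___clear_cache() is assumed available as the GCC/Clang
  user-space counterpart to the kernel's flush_icache_range(). ]

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*entry_fn)(void);

/* Copy 'len' bytes of machine code into a fresh mapping and run it. */
static int call_copied_code(const void *code, size_t len)
{
	int ret;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return -1;

	/* The instructions are written through the data cache... */
	memcpy(buf, code, len);

	/*
	 * ...so on architectures with non-coherent instruction caches
	 * (e.g. arm64) they must be made visible to instruction fetch
	 * before branching to them, otherwise the CPU may execute
	 * whatever stale bytes its I-cache already holds.
	 */
	__builtin___clear_cache(buf, buf + len);

	/* Object-to-function pointer cast: non-ISO, but POSIX-common. */
	ret = ((entry_fn)buf)();
	munmap(buf, len);
	return ret;
}

[ On x86 the builtin roughly compiles away, as instruction fetch snoops
  the data side; on arm64 it expands to the same clean/invalidate
  sequence that flush_icache_range() performs in the hunks below. ]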
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 12cd989dadf6..a30645d2e93f 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -37,6 +37,14 @@
 #define HIBERNATE_SIG	"S1SUSPEND"
 
 /*
+ * When reading an {un,}compressed image, we may restore pages in place,
+ * in which case some architectures need these pages cleaning before they
+ * can be executed. We don't know which pages these may be, so clean the lot.
+ */
+bool clean_pages_on_read = false;
+bool clean_pages_on_decompress = false;
+
+/*
  * The swap map is a data structure used for keeping track of each page
  * written to a swap partition.  It consists of many swap_map_page
  * structures that contain each an array of MAP_PAGE_ENTRIES swap entries.
@@ -241,6 +249,9 @@ static void hib_end_io(struct bio *bio)
 
 	if (bio_data_dir(bio) == WRITE)
 		put_page(page);
+	else if (clean_pages_on_read)
+		flush_icache_range((unsigned long)page_address(page),
+				   (unsigned long)page_address(page) + PAGE_SIZE);
 
 	if (bio->bi_error && !hb->error)
 		hb->error = bio->bi_error;
@@ -1049,6 +1060,7 @@ static int load_image(struct swap_map_handle *handle,
 
 	hib_init_batch(&hb);
 
+	clean_pages_on_read = true;
 	printk(KERN_INFO "PM: Loading image data pages (%u pages)...\n",
 		nr_to_read);
 	m = nr_to_read / 10;
@@ -1124,6 +1136,10 @@ static int lzo_decompress_threadfn(void *data)
 		d->unc_len = LZO_UNC_SIZE;
 		d->ret = lzo1x_decompress_safe(d->cmp + LZO_HEADER, d->cmp_len,
 		                               d->unc, &d->unc_len);
+		if (clean_pages_on_decompress)
+			flush_icache_range((unsigned long)d->unc,
+					   (unsigned long)d->unc + d->unc_len);
+
 		atomic_set(&d->stop, 1);
 		wake_up(&d->done);
 	}
@@ -1189,6 +1205,8 @@ static int load_image_lzo(struct swap_map_handle *handle,
 	}
 	memset(crc, 0, offsetof(struct crc_data, go));
 
+	clean_pages_on_decompress = true;
+
 	/*
	 * Start the decompression threads.
	 */
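[ For reference, a loose C rendition of the maintenance an architecture
  such as arm64 performs inside flush_icache_range(). The real
  implementation is assembly and reads the cache line size from CTR_EL0;
  the fixed 64-byte line and the macro names here are illustrative
  assumptions, and the inline asm builds only for arm64 targets. ]

#define CACHE_LINE	64UL

#define dc_cvau(addr)	asm volatile("dc cvau, %0" : : "r" (addr) : "memory")
#define ic_ivau(addr)	asm volatile("ic ivau, %0" : : "r" (addr) : "memory")
#define dsb_ish()	asm volatile("dsb ish" : : : "memory")
#define isb()		asm volatile("isb" : : : "memory")

static void flush_icache_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long addr;

	/* Clean each D-cache line to the point of unification... */
	for (addr = start & ~(CACHE_LINE - 1); addr < end; addr += CACHE_LINE)
		dc_cvau(addr);
	dsb_ish();	/* ...and wait for the cleaning to complete. */

	/* Invalidate the corresponding I-cache lines. */
	for (addr = start & ~(CACHE_LINE - 1); addr < end; addr += CACHE_LINE)
		ic_ivau(addr);
	dsb_ish();
	isb();		/* Discard any instructions already fetched. */
}

[ This is why the patch only needs to hand ranges to flush_icache_range():
  each architecture already knows how to make freshly written bytes
  fetchable, so hib_end_io() and lzo_decompress_threadfn() just report
  which bytes changed. ]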