From patchwork Mon Nov 19 10:16:12 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 10688405
From: David Hildenbrand
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    devel@linuxdriverproject.org, linux-fsdevel@vger.kernel.org,
    linux-pm@vger.kernel.org, xen-devel@lists.xenproject.org,
    kexec-ml, pv-drivers@vmware.com, David Hildenbrand,
    Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Andrew Morton, Matthew Wilcox, Michal Hocko,
    "Michael S. Tsirkin"
Subject: [PATCH v1 4/8] xen/balloon: mark inflated pages PG_offline
Date: Mon, 19 Nov 2018 11:16:12 +0100
Message-Id: <20181119101616.8901-5-david@redhat.com>
In-Reply-To: <20181119101616.8901-1-david@redhat.com>
References: <20181119101616.8901-1-david@redhat.com>

Mark inflated and never onlined pages PG_offline, to tell the world that
the content is stale and should not be dumped.

Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: "Michael S. Tsirkin"
Signed-off-by: David Hildenbrand
---
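Note (not part of the commit message): the intended usage is symmetric, a
page carries PG_offline for as long as it sits inflated in the balloon and
loses the flag the moment it is handed back to the allocator, so anything
walking struct pages (e.g. a dump tool) can test PageOffline() and skip the
stale content. A minimal sketch of such a consumer follows; it is only an
illustration of the idea, not code from this series. should_dump_page() is
a hypothetical helper name invented here, while pfn_valid(), pfn_to_page()
and PageOffline() (the latter introduced earlier in this series) are
existing helpers.

#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Hypothetical dump-side check (illustration only, not part of this
 * patch): skip pfns whose backing page is ballooned out and therefore
 * holds stale content.
 */
static bool should_dump_page(unsigned long pfn)
{
	struct page *page;

	if (!pfn_valid(pfn))
		return false;		/* no struct page, nothing to dump */

	page = pfn_to_page(pfn);
	if (PageOffline(page))
		return false;		/* inflated balloon page, content is stale */

	return true;
}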
Tsirkin" Signed-off-by: David Hildenbrand --- drivers/xen/balloon.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index 12148289debd..14dd6b814db3 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -425,6 +425,7 @@ static int xen_bring_pgs_online(struct page *pg, unsigned int order) for (i = 0; i < size; i++) { p = pfn_to_page(start_pfn + i); __online_page_set_limits(p); + __SetPageOffline(p); __balloon_append(p); } mutex_unlock(&balloon_mutex); @@ -493,6 +494,7 @@ static enum bp_state increase_reservation(unsigned long nr_pages) xenmem_reservation_va_mapping_update(1, &page, &frame_list[i]); /* Relinquish the page back to the allocator. */ + __ClearPageOffline(page); free_reserved_page(page); } @@ -519,6 +521,7 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) state = BP_EAGAIN; break; } + __SetPageOffline(page); adjust_managed_page_count(page, -1); xenmem_reservation_scrub_page(page); list_add(&page->lru, &pages);