From patchwork Mon Mar 2 13:49:37 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11415623
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, virtio-dev@lists.oasis-open.org,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    Michal Hocko, Andrew Morton, "Michael S. Tsirkin", David Hildenbrand,
    Jason Wang, Oscar Salvador, Igor Mammedov, Dave Young, Dan Williams,
    Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka
Subject: [PATCH v1 07/11] virtio-mem: Allow to offline partially unplugged memory blocks
Date: Mon, 2 Mar 2020 14:49:37 +0100
Message-Id: <20200302134941.315212-8-david@redhat.com>
In-Reply-To: <20200302134941.315212-1-david@redhat.com>
References: <20200302134941.315212-1-david@redhat.com>

Dropping the reference count of PageOffline() pages allows the offlining
code to skip them. However, we also have to convert PG_reserved to another
flag - let's use PG_dirty - so that has_unmovable_pages() handles them
properly: PG_reserved pages get detected as unmovable right away. We need
the flag to tell whether we are onlining pages for the first time, or
whether we allocated them via alloc_contig_range().
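For context, the offlining code (adjusted earlier in this series) can then
treat such pages as offlineable. Roughly, the relevant check in
has_unmovable_pages() looks like the following sketch (simplified, not the
verbatim mm code):

	/*
	 * Sketch: a PageOffline() page whose driver already dropped its
	 * reference (refcount == 0) is no longer considered unmovable
	 * when we are checking for the purpose of memory offlining.
	 */
	if ((flags & MEMORY_OFFLINE) && PageOffline(page) &&
	    !page_count(page))
		continue;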
Properly take care of the offlining code also modifying the stats, and add
special handling for the case where the driver gets unloaded.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Igor Mammedov
Cc: Dave Young
Cc: Andrew Morton
Cc: Dan Williams
Cc: Pavel Tatashin
Cc: Stefan Hajnoczi
Cc: Vlastimil Babka
Signed-off-by: David Hildenbrand
---
 drivers/virtio/virtio_mem.c | 64 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 63 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 5b26d57be551..2916f8b970fa 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -570,6 +570,53 @@ static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id,
 	virtio_mem_retry(vm);
 }
 
+static void virtio_mem_notify_going_offline(struct virtio_mem *vm,
+					    unsigned long mb_id)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long pfn;
+	int sb_id, i;
+
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			continue;
+		/*
+		 * Drop our reference to the pages so the memory can get
+		 * offlined and add the unplugged pages to the managed
+		 * page counters (so offlining code can correctly subtract
+		 * them again).
+		 */
+		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			       sb_id * vm->subblock_size);
+		adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
+		for (i = 0; i < nr_pages; i++)
+			page_ref_dec(pfn_to_page(pfn + i));
+	}
+}
+
+static void virtio_mem_notify_cancel_offline(struct virtio_mem *vm,
+					     unsigned long mb_id)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long pfn;
+	int sb_id, i;
+
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			continue;
+		/*
+		 * Get the reference we dropped when going offline and
+		 * subtract the unplugged pages from the managed page
+		 * counters.
+		 */
+		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			       sb_id * vm->subblock_size);
+		adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
+		for (i = 0; i < nr_pages; i++)
+			page_ref_inc(pfn_to_page(pfn + i));
+	}
+}
+
 /*
  * This callback will either be called synchronously from add_memory() or
  * asynchronously (e.g., triggered via user space). We have to be careful
@@ -616,6 +663,7 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 			break;
 		}
 		vm->hotplug_active = true;
+		virtio_mem_notify_going_offline(vm, mb_id);
 		break;
 	case MEM_GOING_ONLINE:
 		mutex_lock(&vm->hotplug_mutex);
@@ -640,6 +688,12 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 		mutex_unlock(&vm->hotplug_mutex);
 		break;
 	case MEM_CANCEL_OFFLINE:
+		if (!vm->hotplug_active)
+			break;
+		virtio_mem_notify_cancel_offline(vm, mb_id);
+		vm->hotplug_active = false;
+		mutex_unlock(&vm->hotplug_mutex);
+		break;
 	case MEM_CANCEL_ONLINE:
 		if (!vm->hotplug_active)
 			break;
@@ -666,8 +720,11 @@ static void virtio_mem_set_fake_offline(unsigned long pfn,
 		struct page *page = pfn_to_page(pfn);
 
 		__SetPageOffline(page);
-		if (!onlined)
+		if (!onlined) {
 			SetPageDirty(page);
+			/* FIXME: remove after cleanups */
+			ClearPageReserved(page);
+		}
 	}
 }
 
@@ -1717,6 +1774,11 @@ static void virtio_mem_remove(struct virtio_device *vdev)
 		rc = virtio_mem_mb_remove(vm, mb_id);
 		BUG_ON(rc);
 	}
+	/*
+	 * After we unregistered our callbacks, user space can no longer
+	 * offline partially plugged online memory blocks. No need to worry
+	 * about them.
+	 */
 
 	/* unregister callbacks */
 	unregister_virtio_mem_device(vm);
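To see why the MEM_GOING_OFFLINE/MEM_CANCEL_OFFLINE pair leaves the page
reference counts and the managed-page counter balanced, here is a
stand-alone userspace sketch of the two callbacks above. The arrays and
helpers (page_ref, managed_pages, plugged) are hypothetical stand-ins for
the kernel's struct page refcounts, the zone's managed page counter, and
the plugged-subblock bitmap; this models the bookkeeping only, it is not
kernel code:

/* Stand-alone model of the notifier balance; hypothetical types. */
#include <assert.h>
#include <stdio.h>

#define NR_PAGES 8

static int page_ref[NR_PAGES]; /* models struct page refcounts */
static long managed_pages;     /* models the zone's managed counter */
static int plugged[NR_PAGES];  /* 1 = plugged, 0 = fake-offline */

/* Models virtio_mem_notify_going_offline(): drop refs, add to managed. */
static void going_offline(void)
{
	for (int i = 0; i < NR_PAGES; i++) {
		if (plugged[i])
			continue;
		managed_pages += 1; /* adjust_managed_page_count(page, 1) */
		page_ref[i]--;      /* page_ref_dec(page) */
	}
}

/* Models virtio_mem_notify_cancel_offline(): re-take refs, subtract. */
static void cancel_offline(void)
{
	for (int i = 0; i < NR_PAGES; i++) {
		if (plugged[i])
			continue;
		managed_pages -= 1; /* adjust_managed_page_count(page, -1) */
		page_ref[i]++;      /* page_ref_inc(page) */
	}
}

int main(void)
{
	/* Pages 0..3 plugged, 4..7 fake-offline; driver holds one ref each. */
	for (int i = 0; i < NR_PAGES; i++) {
		plugged[i] = i < 4;
		page_ref[i] = 1;
	}

	going_offline();  /* MEM_GOING_OFFLINE */
	cancel_offline(); /* MEM_CANCEL_OFFLINE: offlining aborted */

	/* After a cancelled offline, both counters are back where they started. */
	for (int i = 0; i < NR_PAGES; i++)
		assert(page_ref[i] == 1);
	assert(managed_pages == 0);
	printf("balanced: managed_pages=%ld\n", managed_pages);
	return 0;
}

On the success path (MEM_OFFLINE) the references are never re-taken; per
the comment in virtio_mem_notify_going_offline() above, the unplugged pages
are temporarily added to the managed counters precisely so that the generic
offlining code can subtract the whole block again without going negative.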