From patchwork Wed May 23 15:11:44 2018
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 10421747
From: David Hildenbrand <david@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, David Hildenbrand, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, kasan-dev@googlegroups.com
Subject: [PATCH v1 03/10] kasan: prepare for online/offline of different start/size
Date: Wed, 23 May 2018 17:11:44 +0200
Message-Id: <20180523151151.6730-4-david@redhat.com>
In-Reply-To: <20180523151151.6730-1-david@redhat.com>
References: <20180523151151.6730-1-david@redhat.com>

The memory notifier has an important restriction right now: it only
works if offline_pages() is called with the same parameters as
online_pages(). To overcome this restriction, let's handle it per
section.
We could do it at a smaller granularity, but then we get more vm_area
overhead and cannot cleanly check for actually online parts. A section
is marked online as soon as at least one page is online. Similarly, a
section is marked offline as soon as all pages are offline. So handling
it on a per-section basis allows us to be more flexible. We assume here
that a section is not split between boot and hotplug memory.

Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Dmitry Vyukov
Cc: kasan-dev@googlegroups.com
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/kasan/kasan.c | 107 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 69 insertions(+), 38 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index a8b85706e2d6..901601a562a9 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -827,62 +827,93 @@ static bool shadow_mapped(unsigned long addr)
 	return !pte_none(*pte);
 }
 
-static int __meminit kasan_mem_notifier(struct notifier_block *nb,
-			unsigned long action, void *data)
+static void kasan_offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
-	struct memory_notify *mem_data = data;
-	unsigned long nr_shadow_pages, start_kaddr, shadow_start;
-	unsigned long shadow_end, shadow_size;
+	unsigned long start = SECTION_ALIGN_DOWN(start_pfn);
+	unsigned long end = SECTION_ALIGN_UP(start_pfn + nr_pages);
+	unsigned long pfn;
 
-	nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT;
-	start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn);
-	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr);
-	shadow_size = nr_shadow_pages << PAGE_SHIFT;
-	shadow_end = shadow_start + shadow_size;
+	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
+		void *addr, *shadow_start;
+		struct vm_struct *vm;
 
-	if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) ||
-	    WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)))
-		return NOTIFY_BAD;
+		/* still online? nothing to do then */
+		if (online_section_nr(pfn_to_section_nr(pfn)))
+			continue;
 
-	switch (action) {
-	case MEM_GOING_ONLINE: {
-		void *ret;
+		addr = pfn_to_kaddr(pfn);
+		shadow_start = kasan_mem_to_shadow(addr);
+
+		/*
+		 * Only hot-added memory has a vm_area. Freeing shadow mapped
+		 * during boot would be tricky, so we'll just have to keep it.
+		 */
+		vm = find_vm_area(shadow_start);
+		if (vm)
+			vfree(shadow_start);
+	}
+}
+
+static int kasan_online_pages(unsigned long start_pfn, unsigned long nr_pages)
+{
+	unsigned long start = SECTION_ALIGN_DOWN(start_pfn);
+	unsigned long end = SECTION_ALIGN_UP(start_pfn + nr_pages);
+	unsigned long pfn;
+
+	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
+		unsigned long shadow_start, shadow_size;
+		void *addr, *ret;
+
+		/* already online? nothing to do then */
+		if (online_section_nr(pfn_to_section_nr(pfn)))
+			continue;
+
+		addr = pfn_to_kaddr(pfn);
+		shadow_size = (PAGES_PER_SECTION << PAGE_SHIFT) >>
+			      KASAN_SHADOW_SCALE_SHIFT;
+		shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
 
 		/*
 		 * If shadow is mapped already than it must have been mapped
-		 * during the boot. This could happen if we onlining previously
+		 * during boot. This could happen if we're onlining previously
 		 * offlined memory.
 		 */
 		if (shadow_mapped(shadow_start))
-			return NOTIFY_OK;
+			continue;
 
 		ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
-					shadow_end, GFP_KERNEL,
-					PAGE_KERNEL, VM_NO_GUARD,
-					pfn_to_nid(mem_data->start_pfn),
-					__builtin_return_address(0));
+					shadow_start + shadow_size,
+					GFP_KERNEL, PAGE_KERNEL, VM_NO_GUARD,
+					pfn_to_nid(pfn),
+					__builtin_return_address(0));
 		if (!ret)
-			return NOTIFY_BAD;
-
+			goto out_free;
 		kmemleak_ignore(ret);
-		return NOTIFY_OK;
 	}
-	case MEM_CANCEL_ONLINE:
-	case MEM_OFFLINE: {
-		struct vm_struct *vm;
+	return 0;
+out_free:
+	kasan_offline_pages(start_pfn, nr_pages);
+	return -ENOMEM;
+}
 
-		/*
-		 * Only hot-added memory have vm_area. Freeing shadow
-		 * mapped during boot would be tricky, so we'll just
-		 * have to keep it.
-		 */
-		vm = find_vm_area((void *)shadow_start);
-		if (vm)
-			vfree((void *)shadow_start);
-	}
+static int __meminit kasan_mem_notifier(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct memory_notify *mem_data = data;
+	int ret = 0;
+
+	switch (action) {
+	case MEM_GOING_ONLINE:
+		ret = kasan_online_pages(mem_data->start_pfn,
+					 mem_data->nr_pages);
+		break;
+	case MEM_CANCEL_ONLINE:
+	case MEM_OFFLINE:
+		kasan_offline_pages(mem_data->start_pfn, mem_data->nr_pages);
+		break;
 	}
 
-	return NOTIFY_OK;
+	return notifier_from_errno(ret);
 }
 
 static int __init kasan_memhotplug_init(void)
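
For readers skimming the diff: after this change the notifier only
dispatches per action, and both helpers walk the affected range one
section at a time, skipping sections that are already in the desired
state. A condensed sketch of that per-section walk (illustration only,
not part of the patch; it reuses the existing kernel helpers
SECTION_ALIGN_DOWN/UP, PAGES_PER_SECTION, pfn_to_section_nr and
online_section_nr):

	/* illustration: section-granular walk shared by both helpers */
	for (pfn = SECTION_ALIGN_DOWN(start_pfn);
	     pfn < SECTION_ALIGN_UP(start_pfn + nr_pages);
	     pfn += PAGES_PER_SECTION) {
		/* skip sections already online (online path) or still online (offline path) */
		if (online_section_nr(pfn_to_section_nr(pfn)))
			continue;
		/* ... map or unmap the shadow for this one section ... */
	}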