From patchwork Thu Dec  5 14:04:05 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11274829
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, aryabinin@virtuozzo.com,
    glider@google.com, linux-kernel@vger.kernel.org, dvyukov@google.com
Cc: daniel@iogearbox.net, cai@lca.pw, Daniel Axtens
Subject: [PATCH 1/3] mm: add apply_to_existing_pages helper
Date: Fri,  6 Dec 2019 01:04:05 +1100
Message-Id: <20191205140407.1874-1-dja@axtens.net>

apply_to_page_range takes an address range, and if any parts of it
are not covered by the existing page table hierarchy, it allocates
memory to fill them in.

In some use cases, this is not what we want - we want to be able to
operate exclusively on PTEs that are already in the tables.

Add apply_to_existing_pages for this. Adjust the walker functions
for apply_to_page_range to take 'create', which switches them between
the old and new modes.

This will be used in KASAN vmalloc.

Signed-off-by: Daniel Axtens
Reviewed-by: Andrey Ryabinin

---
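A minimal usage sketch for reviewers (illustration only; the callback
and wrapper names are hypothetical, not code from this series):

        #include <linux/mm.h>

        /* pte_fn_t callback: count each PTE the walk visits. */
        static int count_pte(pte_t *pte, unsigned long addr, void *data)
        {
                unsigned long *count = data;

                (*count)++;
                return 0;       /* a non-zero return aborts the walk */
        }

        /* Count PTEs already present in a range of kernel memory. */
        static unsigned long count_mapped_ptes(unsigned long addr,
                                               unsigned long size)
        {
                unsigned long count = 0;

                /*
                 * Unlike apply_to_page_range, this skips holes in the
                 * page table hierarchy instead of allocating entries
                 * to fill them, so it never sleeps for memory.
                 */
                apply_to_existing_pages(&init_mm, addr, size,
                                        count_pte, &count);
                return count;
        }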
 include/linux/mm.h |   3 ++
 mm/memory.c        | 131 +++++++++++++++++++++++++++++++++------------
 2 files changed, 99 insertions(+), 35 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c97ea3b694e6..f4dba827d76e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2621,6 +2621,9 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
                               unsigned long size, pte_fn_t fn, void *data);
+extern int apply_to_existing_pages(struct mm_struct *mm, unsigned long address,
+                                  unsigned long size, pte_fn_t fn,
+                                  void *data);
 
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
diff --git a/mm/memory.c b/mm/memory.c
index 606da187d1de..e508ba7e0a19 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2021,26 +2021,34 @@ EXPORT_SYMBOL(vm_iomap_memory);
 
 static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
                              unsigned long addr, unsigned long end,
-                             pte_fn_t fn, void *data)
+                             pte_fn_t fn, void *data, bool create)
 {
        pte_t *pte;
-       int err;
+       int err = 0;
        spinlock_t *uninitialized_var(ptl);
 
-       pte = (mm == &init_mm) ?
-               pte_alloc_kernel(pmd, addr) :
-               pte_alloc_map_lock(mm, pmd, addr, &ptl);
-       if (!pte)
-               return -ENOMEM;
+       if (create) {
+               pte = (mm == &init_mm) ?
+                       pte_alloc_kernel(pmd, addr) :
+                       pte_alloc_map_lock(mm, pmd, addr, &ptl);
+               if (!pte)
+                       return -ENOMEM;
+       } else {
+               pte = (mm == &init_mm) ?
+                       pte_offset_kernel(pmd, addr) :
+                       pte_offset_map_lock(mm, pmd, addr, &ptl);
+       }
 
        BUG_ON(pmd_huge(*pmd));
 
        arch_enter_lazy_mmu_mode();
 
        do {
-               err = fn(pte++, addr, data);
-               if (err)
-                       break;
+               if (create || !pte_none(*pte)) {
+                       err = fn(pte++, addr, data);
+                       if (err)
+                               break;
+               }
        } while (addr += PAGE_SIZE, addr != end);
 
        arch_leave_lazy_mmu_mode();
@@ -2052,62 +2060,83 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
                              unsigned long addr, unsigned long end,
-                             pte_fn_t fn, void *data)
+                             pte_fn_t fn, void *data, bool create)
 {
        pmd_t *pmd;
        unsigned long next;
-       int err;
+       int err = 0;
 
        BUG_ON(pud_huge(*pud));
 
-       pmd = pmd_alloc(mm, pud, addr);
-       if (!pmd)
-               return -ENOMEM;
+       if (create) {
+               pmd = pmd_alloc(mm, pud, addr);
+               if (!pmd)
+                       return -ENOMEM;
+       } else {
+               pmd = pmd_offset(pud, addr);
+       }
        do {
                next = pmd_addr_end(addr, end);
-               err = apply_to_pte_range(mm, pmd, addr, next, fn, data);
-               if (err)
-                       break;
+               if (create || !pmd_none_or_clear_bad(pmd)) {
+                       err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
+                                                create);
+                       if (err)
+                               break;
+               }
        } while (pmd++, addr = next, addr != end);
        return err;
 }
 
 static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
                              unsigned long addr, unsigned long end,
-                             pte_fn_t fn, void *data)
+                             pte_fn_t fn, void *data, bool create)
 {
        pud_t *pud;
        unsigned long next;
-       int err;
+       int err = 0;
 
-       pud = pud_alloc(mm, p4d, addr);
-       if (!pud)
-               return -ENOMEM;
+       if (create) {
+               pud = pud_alloc(mm, p4d, addr);
+               if (!pud)
+                       return -ENOMEM;
+       } else {
+               pud = pud_offset(p4d, addr);
+       }
        do {
                next = pud_addr_end(addr, end);
-               err = apply_to_pmd_range(mm, pud, addr, next, fn, data);
-               if (err)
-                       break;
+               if (create || !pud_none_or_clear_bad(pud)) {
+                       err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
+                                                create);
+                       if (err)
+                               break;
+               }
        } while (pud++, addr = next, addr != end);
        return err;
 }
 
 static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
                              unsigned long addr, unsigned long end,
-                             pte_fn_t fn, void *data)
+                             pte_fn_t fn, void *data, bool create)
 {
        p4d_t *p4d;
        unsigned long next;
-       int err;
+       int err = 0;
 
-       p4d = p4d_alloc(mm, pgd, addr);
-       if (!p4d)
-               return -ENOMEM;
+       if (create) {
+               p4d = p4d_alloc(mm, pgd, addr);
+               if (!p4d)
+                       return -ENOMEM;
+       } else {
+               p4d = p4d_offset(pgd, addr);
+       }
        do {
                next = p4d_addr_end(addr, end);
-               err = apply_to_pud_range(mm, p4d, addr, next, fn, data);
-               if (err)
-                       break;
+               if (create || !p4d_none_or_clear_bad(p4d)) {
+                       err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
+                                                create);
+                       if (err)
+                               break;
+               }
        } while (p4d++, addr = next, addr != end);
        return err;
 }
@@ -2130,7 +2159,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
        pgd = pgd_offset(mm, addr);
        do {
                next = pgd_addr_end(addr, end);
-               err = apply_to_p4d_range(mm, pgd, addr, next, fn, data);
+               err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, true);
                if (err)
                        break;
        } while (pgd++, addr = next, addr != end);
@@ -2139,6 +2168,38 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+/*
+ * Scan a region of virtual memory, calling a provided function on
+ * each leaf page table where it exists.
+ *
+ * Unlike apply_to_page_range, this does _not_ fill in page tables
+ * where they are absent.
+ */
+int apply_to_existing_pages(struct mm_struct *mm, unsigned long addr,
+                           unsigned long size, pte_fn_t fn, void *data)
+{
+       pgd_t *pgd;
+       unsigned long next;
+       unsigned long end = addr + size;
+       int err = 0;
+
+       if (WARN_ON(addr >= end))
+               return -EINVAL;
+
+       pgd = pgd_offset(mm, addr);
+       do {
+               next = pgd_addr_end(addr, end);
+               if (pgd_none_or_clear_bad(pgd))
+                       continue;
+               err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, false);
+               if (err)
+                       break;
+       } while (pgd++, addr = next, addr != end);
+
+       return err;
+}
+EXPORT_SYMBOL_GPL(apply_to_existing_pages);
+
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
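A note for callback authors: with create == false, every level of the
walk above is guarded, e.g. at the PTE level:

        if (create || !pte_none(*pte)) {
                err = fn(pte++, addr, data);
                ...
        }

so a callback passed to apply_to_existing_pages only ever sees
populated entries and does not need its own pte_none() check.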
From patchwork Thu Dec  5 14:04:06 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11274837
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, aryabinin@virtuozzo.com,
    glider@google.com, linux-kernel@vger.kernel.org, dvyukov@google.com
Cc: daniel@iogearbox.net, cai@lca.pw, Daniel Axtens
Subject: [PATCH 2/3] kasan: use apply_to_existing_pages for releasing vmalloc
 shadow
Date: Fri,  6 Dec 2019 01:04:06 +1100
Message-Id: <20191205140407.1874-2-dja@axtens.net>
In-Reply-To: <20191205140407.1874-1-dja@axtens.net>
References: <20191205140407.1874-1-dja@axtens.net>

kasan_release_vmalloc uses apply_to_page_range to release vmalloc
shadow. Unfortunately, apply_to_page_range can allocate memory to
fill in page table entries, which is not what we want.

Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
apply_to_page_range does allocate memory, we get a sleep in atomic bug:

BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:

Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x199/0x216 lib/dump_stack.c:118
 ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
 __might_sleep+0x95/0x190 kernel/sched/core.c:6753
 prepare_alloc_pages mm/page_alloc.c:4681 [inline]
 __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
 alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
 alloc_pages include/linux/gfp.h:532 [inline]
 __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
 __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
 pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
 __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
 apply_to_pte_range mm/memory.c:2031 [inline]
 apply_to_pmd_range mm/memory.c:2068 [inline]
 apply_to_pud_range mm/memory.c:2088 [inline]
 apply_to_p4d_range mm/memory.c:2108 [inline]
 apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
 kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
 __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
 try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
 free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
 free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
 remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
 vm_remove_mappings mm/vmalloc.c:2236 [inline]
 __vunmap+0x223/0xa20 mm/vmalloc.c:2299
 __vfree+0x3f/0xd0 mm/vmalloc.c:2356
 __vmalloc_area_node mm/vmalloc.c:2507 [inline]
 __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
 __vmalloc_node mm/vmalloc.c:2607 [inline]
 __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
 vzalloc+0x6f/0x80 mm/vmalloc.c:2666
 alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
 alloc_pg_vec net/packet/af_packet.c:4258 [inline]
 packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
 packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
 __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
 __do_sys_setsockopt net/socket.c:2133 [inline]
 __se_sys_setsockopt net/socket.c:2130 [inline]
 __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
 do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Switch to using the apply_to_existing_pages helper instead, which
won't allocate memory.
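In outline, the invalid pattern looks like this (a sketch for
illustration; the real call chain runs through __purge_vmap_area_lazy
as shown in the trace above):

        spin_lock(&free_vmap_area_lock);   /* atomic context from here */
        apply_to_page_range(&init_mm, start, size,
                            kasan_depopulate_vmalloc_pte, NULL);
        /*
         * If the walk hits a missing PTE page it calls
         * pte_alloc_kernel(), which allocates with GFP_KERNEL and may
         * therefore sleep - invalid under a spinlock.
         */
        spin_unlock(&free_vmap_area_lock);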
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Reported-by: Dmitry Vyukov
Cc: Andrey Ryabinin
Signed-off-by: Daniel Axtens
Reviewed-by: Andrey Ryabinin

---

Andrew, if you want to take this, it replaces
"kasan: Don't allocate page tables in kasan_release_vmalloc()"

---
 mm/kasan/common.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index e04e73603dfc..26fd0c13dd28 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -957,6 +957,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 {
        void *shadow_start, *shadow_end;
        unsigned long region_start, region_end;
+       unsigned long size;
 
        region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
        region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
@@ -979,9 +980,10 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
        shadow_end = kasan_mem_to_shadow((void *)region_end);
 
        if (shadow_end > shadow_start) {
-               apply_to_page_range(&init_mm, (unsigned long)shadow_start,
-                                   (unsigned long)(shadow_end - shadow_start),
-                                   kasan_depopulate_vmalloc_pte, NULL);
+               size = shadow_end - shadow_start;
+               apply_to_existing_pages(&init_mm, (unsigned long)shadow_start,
+                                       size, kasan_depopulate_vmalloc_pte,
+                                       NULL);
                flush_tlb_kernel_range((unsigned long)shadow_start,
                                       (unsigned long)shadow_end);
        }
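For reference, the alignment in kasan_release_vmalloc follows from the
generic KASAN shadow mapping; a sketch of the arithmetic, assuming
4 KiB pages:

        /*
         * shadow(addr) = (addr >> KASAN_SHADOW_SCALE_SHIFT)
         *                + KASAN_SHADOW_OFFSET
         *
         * One shadow byte covers KASAN_SHADOW_SCALE_SIZE (8) bytes of
         * memory, so one 4 KiB shadow page covers PAGE_SIZE * 8 =
         * 32 KiB of vmalloc space. A shadow page can only be freed
         * once the whole 32 KiB region it covers is free, which is why
         * region_start is rounded up and region_end rounded down to
         * PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE.
         */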
From patchwork Thu Dec  5 14:04:07 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11274831
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, aryabinin@virtuozzo.com,
    glider@google.com, linux-kernel@vger.kernel.org, dvyukov@google.com
Cc: daniel@iogearbox.net, cai@lca.pw, Daniel Axtens,
    syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com,
    syzbot+59b7daa4315e07a994f1@syzkaller.appspotmail.com
Subject: [PATCH 3/3] kasan: don't assume percpu shadow allocations will
 succeed
Date: Fri,  6 Dec 2019 01:04:07 +1100
Message-Id: <20191205140407.1874-3-dja@axtens.net>
In-Reply-To: <20191205140407.1874-1-dja@axtens.net>
References: <20191205140407.1874-1-dja@axtens.net>

syzkaller and the fault injector showed that I was wrong to assume
that we could ignore percpu shadow allocation failures.

Handle failures properly. Merge all the allocated areas back into the
free list and release the shadow, then clean up and return NULL. The
shadow is released unconditionally, which relies upon the fact that
the release function is able to tolerate pages not being present.

Also clean up shadows in the recovery path - currently they are not
released, which leaks a bit of memory.

Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Reported-by: syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com
Reported-by: syzbot+59b7daa4315e07a994f1@syzkaller.appspotmail.com
Cc: Dmitry Vyukov
Cc: Andrey Ryabinin
Signed-off-by: Daniel Axtens
Reviewed-by: Andrey Ryabinin

---
 mm/vmalloc.c | 48 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 37af94b6cf30..fa5688093a88 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3291,7 +3291,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
        struct vmap_area **vas, *va;
        struct vm_struct **vms;
        int area, area2, last_area, term_area;
-       unsigned long base, start, size, end, last_end;
+       unsigned long base, start, size, end, last_end, orig_start, orig_end;
        bool purged = false;
        enum fit_type type;
 
@@ -3421,6 +3421,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
        spin_unlock(&free_vmap_area_lock);
 
+       /* populate the kasan shadow space */
+       for (area = 0; area < nr_vms; area++) {
+               if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
+                       goto err_free_shadow;
+
+               kasan_unpoison_vmalloc((void *)vas[area]->va_start,
+                                      sizes[area]);
+       }
+
        /* insert all vm's */
        spin_lock(&vmap_area_lock);
        for (area = 0; area < nr_vms; area++) {
@@ -3431,13 +3440,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
        }
        spin_unlock(&vmap_area_lock);
 
-       /* populate the shadow space outside of the lock */
-       for (area = 0; area < nr_vms; area++) {
-               /* assume success here */
-               kasan_populate_vmalloc(vas[area]->va_start, sizes[area]);
-               kasan_unpoison_vmalloc((void *)vms[area]->addr, sizes[area]);
-       }
-
        kfree(vas);
        return vms;
 
@@ -3449,8 +3451,12 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
         * and when pcpu_get_vm_areas() is success.
         */
        while (area--) {
-               merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
-                                      &free_vmap_area_list);
+               orig_start = vas[area]->va_start;
+               orig_end = vas[area]->va_end;
+               va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+                                           &free_vmap_area_list);
+               kasan_release_vmalloc(orig_start, orig_end,
+                                     va->va_start, va->va_end);
                vas[area] = NULL;
        }
 
@@ -3485,6 +3491,28 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
        kfree(vas);
        kfree(vms);
        return NULL;
+
+err_free_shadow:
+       spin_lock(&free_vmap_area_lock);
+       /*
+        * We release all the vmalloc shadows, even the ones for regions that
+        * hadn't been successfully added. This relies on kasan_release_vmalloc
+        * being able to tolerate this case.
+        */
+       for (area = 0; area < nr_vms; area++) {
+               orig_start = vas[area]->va_start;
+               orig_end = vas[area]->va_end;
+               va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+                                           &free_vmap_area_list);
+               kasan_release_vmalloc(orig_start, orig_end,
+                                     va->va_start, va->va_end);
+               vas[area] = NULL;
+               kfree(vms[area]);
+       }
+       spin_unlock(&free_vmap_area_lock);
+       kfree(vas);
+       kfree(vms);
+       return NULL;
 }
 
 /**
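A closing note on why orig_start/orig_end are captured before merging
(sketch, not part of the patch): merge_or_add_vmap_area() may coalesce
the area being freed with neighbouring free areas and returns the
merged result, so the boundaries of the just-freed range have to be
saved first:

        /*
         *   merged free region:  [va->va_start ............ va->va_end)
         *   freed by this call:        [orig_start .. orig_end)
         *
         * kasan_release_vmalloc(orig_start, orig_end,
         *                       va->va_start, va->va_end)
         * releases shadow only for pages that lie entirely within the
         * enclosing free region; shadow that also covers still-live
         * memory is left in place.
         */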