From patchwork Mon Nov 26 23:25:01 2018
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 10699337
Date: Mon, 26 Nov 2018 15:25:01 -0800 (PST)
From: Hugh Dickins
To: Andrew Morton
Cc: "Kirill A. Shutemov", "Kirill A. Shutemov", Matthew Wilcox,
    linux-mm@kvack.org
Subject: [PATCH 05/10] mm/khugepaged: fix crashes due to misaccounted holes

Huge tmpfs testing on a shortish file mapped into a pmd-rounded extent
hit shmem_evict_inode()'s WARN_ON(inode->i_blocks) followed by
clear_inode()'s BUG_ON(inode->i_data.nrpages) when the file was later
closed and unlinked.

khugepaged's collapse_shmem() was forgetting to update mapping->nrpages
on the rollback path, after it had added but then needs to undo some
holes.

There is indeed an irritating asymmetry between shmem_charge(), whose
callers want it to increment nrpages after successfully accounting
blocks, and shmem_uncharge(), when __delete_from_page_cache() already
decremented nrpages itself: oh well, just add a comment on that to them
both.

And shmem_recalc_inode() is supposed to be called when the accounting
is expected to be in balance (so it can deduce from imbalance that
reclaim discarded some pages): so change shmem_charge() to update
nrpages earlier (though it's rare for the difference to matter at all).

Fixes: 800d8c63b2e98 ("shmem: add huge pages support")
Fixes: f3f0e1d2150b2 ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: stable@vger.kernel.org # 4.8+
Acked-by: Kirill A. Shutemov
---
 mm/khugepaged.c | 5 ++++-
 mm/shmem.c      | 6 +++++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2070c316f06e..65e82f665c7c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1506,9 +1506,12 @@ static void collapse_shmem(struct mm_struct *mm,
 		khugepaged_pages_collapsed++;
 	} else {
 		struct page *page;
+
 		/* Something went wrong: roll back page cache changes */
-		shmem_uncharge(mapping->host, nr_none);
 		xas_lock_irq(&xas);
+		mapping->nrpages -= nr_none;
+		shmem_uncharge(mapping->host, nr_none);
+
 		xas_set(&xas, start);
 		xas_for_each(&xas, page, end - 1) {
 			page = list_first_entry_or_null(&pagelist,
diff --git a/mm/shmem.c b/mm/shmem.c
index ea26d7a0342d..e6558e49b42a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -297,12 +297,14 @@ bool shmem_charge(struct inode *inode, long pages)
 	if (!shmem_inode_acct_block(inode, pages))
 		return false;
 
+	/* nrpages adjustment first, then shmem_recalc_inode() when balanced */
+	inode->i_mapping->nrpages += pages;
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced += pages;
 	inode->i_blocks += pages * BLOCKS_PER_PAGE;
 	shmem_recalc_inode(inode);
 	spin_unlock_irqrestore(&info->lock, flags);
-	inode->i_mapping->nrpages += pages;
 
 	return true;
 }
@@ -312,6 +314,8 @@ void shmem_uncharge(struct inode *inode, long pages)
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long flags;
 
+	/* nrpages adjustment done by __delete_from_page_cache() or caller */
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced -= pages;
 	inode->i_blocks -= pages * BLOCKS_PER_PAGE;
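
As an aside, to make the ordering argument above concrete: below is a toy
userspace model of the accounting, a simplified sketch rather than the
kernel's actual implementation. The counter and function names are borrowed
from the fields above for readability, and the toy starts from a perfectly
balanced inode so the skew shows up on every call (whereas in the kernel the
difference is rare, as the changelog notes). It illustrates why the
deduce-reclaim-from-imbalance step only works once nrpages is up to date.

/*
 * Toy userspace model of the accounting described above.  Names are
 * borrowed for readability only; this is a simplified sketch, not the
 * kernel implementation.
 */
#include <stdio.h>

struct toy_shmem_inode {
        long alloced;   /* pages this inode has accounted for */
        long swapped;   /* pages currently out on swap */
        long nrpages;   /* pages the page cache currently holds */
};

/*
 * Deduce how many pages reclaim freed behind our back.  Only meaningful
 * when the counters are otherwise in balance.
 */
static void toy_recalc_inode(struct toy_shmem_inode *inode)
{
        long freed = inode->alloced - inode->swapped - inode->nrpages;

        if (freed > 0)
                inode->alloced -= freed;
}

/* Old ordering: recalc misreads the transient imbalance as reclaim. */
static void toy_charge_recalc_first(struct toy_shmem_inode *inode, long pages)
{
        inode->alloced += pages;
        toy_recalc_inode(inode);        /* "frees" the pages just charged */
        inode->nrpages += pages;
}

/* Fixed ordering: bring nrpages up to date before recalculating. */
static void toy_charge_nrpages_first(struct toy_shmem_inode *inode, long pages)
{
        inode->nrpages += pages;
        inode->alloced += pages;
        toy_recalc_inode(inode);        /* balanced, nothing to deduce */
}

int main(void)
{
        struct toy_shmem_inode a = { 0 }, b = { 0 };

        toy_charge_recalc_first(&a, 512);
        toy_charge_nrpages_first(&b, 512);

        /* prints: old order alloced=0, new order alloced=512 */
        printf("old order: alloced=%ld nrpages=%ld\n", a.alloced, a.nrpages);
        printf("new order: alloced=%ld nrpages=%ld\n", b.alloced, b.nrpages);
        return 0;
}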