From patchwork Thu Dec 13 05:15:10 2018
From: Peter Xu
To: linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Andrea Arcangeli, Andrew Morton,
    "Kirill A. Shutemov", Matthew Wilcox, Michal Hocko, Dave Jiang,
    "Aneesh Kumar K.V", Souptick Joarder, Konstantin Khlebnikov,
    Zi Yan, linux-mm@kvack.org
Subject: [PATCH v3] mm: thp: fix flags for pmd migration when split
Date: Thu, 13 Dec 2018 13:15:10 +0800
Message-Id: <20181213051510.20306-1-peterx@redhat.com>

When splitting a huge migrating PMD, we transfer all the existing PMD
bits and apply them again onto the small PTEs. However, we fetch those
bits unconditionally via pmd_soft_dirty(), pmd_write() and pmd_young(),
even though they make no sense at all when the PMD is a migration
entry. Fix them up. While at it, drop the ifdef, which is no longer
needed.

Note that if my understanding of the problem is correct, then without
this patch there is a chance of losing some of the dirty bits in the
migrating PMD pages (on x86_64 we fetch bit 11, which is part of the
swap offset, instead of bit 2), and that could potentially corrupt the
memory of a userspace program which depends on the dirty bit.

CC: Andrea Arcangeli
CC: Andrew Morton
CC: "Kirill A. Shutemov"
CC: Matthew Wilcox
CC: Michal Hocko
CC: Dave Jiang
CC: "Aneesh Kumar K.V"
CC: Souptick Joarder
CC: Konstantin Khlebnikov
CC: Zi Yan
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Peter Xu
Reviewed-by: Konstantin Khlebnikov
Reviewed-by: William Kucharski
Acked-by: Kirill A. Shutemov
---
v2:
- fix it up for young/write/dirty bits too [Konstantin]
v3:
- fetch write correctly for migration entry; drop macro [Konstantin]
---
 mm/huge_memory.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2d19e4fe854..aebade83cec9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2145,23 +2145,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	old_pmd = pmdp_invalidate(vma, haddr, pmd);
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	pmd_migration = is_pmd_migration_entry(old_pmd);
-	if (pmd_migration) {
+	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_to_page(swp_offset(entry));
-	} else
-#endif
+		write = is_write_migration_entry(entry);
+		young = false;
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+	} else {
 		page = pmd_page(old_pmd);
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.