From patchwork Mon Nov 15 13:49:51 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12619543
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: David Hildenbrand, peterx@redhat.com, Andrea Arcangeli, Yang Shi,
    Vlastimil Babka, Hugh Dickins, Andrew Morton, Alistair Popple,
    "Kirill A. Shutemov"
Subject: [PATCH RFC v2 2/2] mm: Rework swap handling of zap_pte_range
Date: Mon, 15 Nov 2021 21:49:51 +0800
Message-Id: <20211115134951.85286-3-peterx@redhat.com>
In-Reply-To: <20211115134951.85286-1-peterx@redhat.com>
References: <20211115134951.85286-1-peterx@redhat.com>
MIME-Version: 1.0

Clean the code up by merging the device private/exclusive swap entry
handling with the rest, then merge the pte clear operation too.

"struct page *page" is declared in multiple places in the function;
move the declaration upward.

free_swap_and_cache() is only useful for the !non_swap_entry() case, so
put it inside that condition.

No functional change intended.
Signed-off-by: Peter Xu
---
 mm/memory.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e454f3c6aeb9..e5d59a6b6479 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1326,6 +1326,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = *pte;
+		struct page *page;
+
 		if (pte_none(ptent))
 			continue;
@@ -1333,8 +1335,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;
 
 		if (pte_present(ptent)) {
-			struct page *page;
-
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
@@ -1368,32 +1368,23 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry) ||
 		    is_device_exclusive_entry(entry)) {
-			struct page *page = pfn_swap_entry_to_page(entry);
-
+			page = pfn_swap_entry_to_page(entry);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
-			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 			rss[mm_counter(page)]--;
-
 			if (is_device_private_entry(entry))
 				page_remove_rmap(page, false);
-
 			put_page(page);
-			continue;
-		}
-
-		if (!non_swap_entry(entry))
-			rss[MM_SWAPENTS]--;
-		else if (is_migration_entry(entry)) {
-			struct page *page;
-
+		} else if (is_migration_entry(entry)) {
 			page = pfn_swap_entry_to_page(entry);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
 			rss[mm_counter(page)]--;
+		} else if (!non_swap_entry(entry)) {
+			rss[MM_SWAPENTS]--;
+			if (unlikely(!free_swap_and_cache(entry)))
+				print_bad_pte(vma, addr, ptent, NULL);
 		}
-		if (unlikely(!free_swap_and_cache(entry)))
-			print_bad_pte(vma, addr, ptent, NULL);
 		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 	} while (pte++, addr += PAGE_SIZE, addr != end);