From patchwork Wed Jan 19 17:50:30 2022
From: Matthew Wilcox <willy@infradead.org>
Date: Wed, 19 Jan 2022 17:50:30 +0000
To: Hugh Dickins
Cc: linux-mm@kvack.org, Dave Hansen
Subject: [RFC] Simplify users of vma_address_end()

Hi Hugh,

What do you think to this simplification?  Dave dislikes the ?: usage
in this context, and I thought we could usefully put the condition
inside the (inline) function.

I could also go for:

	if (!PageCompound(page))
		return address + PAGE_SIZE;
	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */

in case anyone starts to create compound KSM pages.
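For completeness, a sketch of how the whole function would read with
that alternative ordering (assuming the tail of vma_address_end()
stays as it is today, clamping the result to vma->vm_end):

	static inline unsigned long vma_address_end(struct page *page,
			unsigned long address, struct vm_area_struct *vma)
	{
		pgoff_t pgoff;

		/* Small pages (including KSM pages) map exactly one page. */
		if (!PageCompound(page))
			return address + PAGE_SIZE;
		/* Fires if anyone starts creating compound KSM pages. */
		VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
		pgoff = page_to_pgoff(page) + compound_nr(page);
		address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
		/* Check for address beyond vma (or wrapped through 0?) */
		if (address < vma->vm_start || address > vma->vm_end)
			address = vma->vm_end;
		return address;
	}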
diff --git a/mm/internal.h b/mm/internal.h
index c774075b3893..7cd33ee4df32 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -462,13 +462,13 @@ vma_address(struct page *page, struct vm_area_struct *vma)
  * Assumes that vma_address() already returned a good starting address.
  * If page is a compound head, the entire compound page is considered.
  */
-static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+static inline unsigned long vma_address_end(struct page *page,
+		unsigned long address, struct vm_area_struct *vma)
 {
 	pgoff_t pgoff;
-	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	if (PageKsm(page) || !PageCompound(page))
+		return address + PAGE_SIZE;
 	pgoff = page_to_pgoff(page) + compound_nr(page);
 	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	/* Check for address beyond vma (or wrapped through 0?) */
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..fcd7b9ccfb1e 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -181,15 +181,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			return true;
 	}
 
-	/*
-	 * Seek to next pte only makes sense for THP.
-	 * But more important than that optimization, is to filter out
-	 * any PageKsm page: whose page->index misleads vma_address()
-	 * and vma_address_end() to disaster.
-	 */
-	end = PageTransCompound(page) ?
-		vma_address_end(page, pvmw->vma) :
-		pvmw->address + PAGE_SIZE;
+	end = vma_address_end(page, pvmw->address, pvmw->vma);
 	if (pvmw->pte)
 		goto next_pte;
 restart:
diff --git a/mm/rmap.c b/mm/rmap.c
index a531b64d53fa..5d5dc2a60a26 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -946,7 +946,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				vma_address_end(page, vma));
+				vma_address_end(page, address, vma));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1453,8 +1453,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(page, address, vma);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {
@@ -1757,8 +1756,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(page, address, vma);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {