From patchwork Thu Aug 25 16:46:57 2022
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 12955085
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
    Andrew Morton, Mel Gorman, Jason Gunthorpe, John Hubbard,
    "Matthew Wilcox (Oracle)", Andrea Arcangeli, Hugh Dickins, Peter Xu
Subject: [PATCH v1 1/3] mm/gup: replace FOLL_NUMA by gup_can_follow_protnone()
Date: Thu, 25 Aug 2022 18:46:57 +0200
Message-Id: <20220825164659.89824-2-david@redhat.com>
In-Reply-To: <20220825164659.89824-1-david@redhat.com>
References: <20220825164659.89824-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
No need for a special flag that is not even properly documented to be
internal-only. Let's just factor this check out and get rid of this flag.
The separate function has the nice benefit that we can centralize comments.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 16 +++++++++++++++-
 mm/gup.c           | 12 ++----------
 mm/huge_memory.c   |  2 +-
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 982f2607180b..8b85765d7a98 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2881,7 +2881,6 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 				 * and return without waiting upon it */
 #define FOLL_NOFAULT	0x80	/* do not fault in pages */
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
-#define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 #define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
 #define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
 #define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
@@ -2997,6 +2996,21 @@ static inline bool gup_must_unshare(unsigned int flags, struct page *page)
 	return !PageAnonExclusive(page);
 }
 
+/*
+ * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
+ * a (NUMA hinting) fault is required.
+ */
+static inline bool gup_can_follow_protnone(unsigned int flags)
+{
+	/*
+	 * FOLL_FORCE has to be able to make progress even if the VMA is
+	 * inaccessible. Further, FOLL_FORCE access usually does not represent
+	 * application behaviour and we should avoid triggering NUMA hinting
+	 * faults.
+	 */
+	return flags & FOLL_FORCE;
+}
+
 typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
diff --git a/mm/gup.c b/mm/gup.c
index 5abdaf487460..a1355dbd848e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -554,7 +554,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		migration_entry_wait(mm, pmd, address);
 		goto retry;
 	}
-	if ((flags & FOLL_NUMA) && pte_protnone(pte))
+	if (pte_protnone(pte) && !gup_can_follow_protnone(flags))
 		goto no_page;
 
 	page = vm_normal_page(vma, address, pte);
@@ -707,7 +707,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
-	if ((flags & FOLL_NUMA) && pmd_protnone(pmdval))
+	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(flags))
 		return no_page_table(vma, flags);
 
 retry_locked:
@@ -1153,14 +1153,6 @@ static long __get_user_pages(struct mm_struct *mm,
 
 	VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
 
-	/*
-	 * If FOLL_FORCE is set then do not force a full fault as the hinting
-	 * fault information is unrelated to the reference behaviour of a task
-	 * using the address space
-	 */
-	if (!(gup_flags & FOLL_FORCE))
-		gup_flags |= FOLL_NUMA;
-
 	do {
 		struct page *page;
 		unsigned int foll_flags = gup_flags;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e9414ee57c5b..482c1826e723 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1449,7 +1449,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		return ERR_PTR(-EFAULT);
 
 	/* Full NUMA hinting faults to serialise migration in fault paths */
-	if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
+	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(flags))
 		return NULL;
 
 	if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
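
As a sanity check on the behaviour being preserved for the __get_user_pages()
path (illustration only, not part of the patch to apply): the old code set
FOLL_NUMA unless FOLL_FORCE was given, so a protnone page was skipped exactly
when FOLL_FORCE was absent, which is what !gup_can_follow_protnone(flags) now
expresses directly at the check sites. The standalone userspace sketch below
assumes the FOLL_FORCE/FOLL_NUMA values from include/linux/mm.h and uses
simplified stand-in helpers, not the kernel's actual GUP code:

/*
 * Illustration only -- a standalone userspace program, not kernel code.
 * Compares the old __get_user_pages() behaviour (set FOLL_NUMA unless
 * FOLL_FORCE, then test the flag) with the new helper-based check, and
 * shows both skip a protnone page in the same cases.
 */
#include <stdbool.h>
#include <stdio.h>

#define FOLL_FORCE	0x10	/* assumed value, see include/linux/mm.h */
#define FOLL_NUMA	0x200	/* the flag this patch removes */

/* Same body as the helper added to include/linux/mm.h above. */
static bool gup_can_follow_protnone(unsigned int flags)
{
	return flags & FOLL_FORCE;
}

/* Old logic: would GUP skip a pte_protnone() page for these gup_flags? */
static bool old_skips_protnone(unsigned int gup_flags)
{
	if (!(gup_flags & FOLL_FORCE))
		gup_flags |= FOLL_NUMA;
	return gup_flags & FOLL_NUMA;	/* "(flags & FOLL_NUMA) && pte_protnone(pte)" */
}

/* New logic: no flag needed, the check site asks the helper directly. */
static bool new_skips_protnone(unsigned int gup_flags)
{
	return !gup_can_follow_protnone(gup_flags);
}

int main(void)
{
	const unsigned int cases[] = { 0, FOLL_FORCE };

	for (int i = 0; i < 2; i++)
		printf("gup_flags=%#x  old skips protnone: %d  new skips protnone: %d\n",
		       cases[i], old_skips_protnone(cases[i]),
		       new_skips_protnone(cases[i]));
	return 0;
}

Running it prints matching old/new columns for both flag settings, mirroring
the intent of the new comment: only FOLL_FORCE callers may follow PROT_NONE
(NUMA hinting) mappings without taking a hinting fault.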