From patchwork Fri Nov 22 23:53:23 2019
X-Patchwork-Submitter: Andreas Gruenbacher <agruenba@redhat.com>
X-Patchwork-Id: 11258677
From: Andreas Gruenbacher <agruenba@redhat.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Steven Whitehouse <swhiteho@redhat.com>,
    Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
    "Kirill A. Shutemov" <kirill@shutemov.name>,
    linux-mm@kvack.org,
    Andrew Morton <akpm@linux-foundation.org>,
    linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org,
    Alexander Viro <viro@zeniv.linux.org.uk>,
    Johannes Weiner <hannes@cmpxchg.org>,
    cluster-devel@redhat.com,
    Ronnie Sahlberg <lsahlber@redhat.com>,
    Steve French <sfrench@samba.org>,
    Bob Peterson <rpeterso@redhat.com>,
    Andreas Gruenbacher <agruenba@redhat.com>
Subject: [RFC PATCH 2/3] fs: Add FAULT_FLAG_CACHED flag for filemap_fault
Date: Sat, 23 Nov 2019 00:53:23 +0100
Message-Id: <20191122235324.17245-3-agruenba@redhat.com>
In-Reply-To: <20191122235324.17245-1-agruenba@redhat.com>
References: <20191122235324.17245-1-agruenba@redhat.com>

Add a FAULT_FLAG_CACHED flag which indicates to filemap_fault that it
should only look at the page cache, without triggering filesystem I/O
for the actual request or for readahead.  When filesystem I/O would be
triggered, VM_FAULT_RETRY should be returned instead.

This allows the caller to tentatively satisfy a minor page fault out of
the page cache, and to retry the operation after taking the necessary
steps when that isn't possible.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 include/linux/mm.h |  4 +++-
 mm/filemap.c       | 43 ++++++++++++++++++++++++++++++-------------
 2 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a2adf95b3f9c..b3317e4b2607 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -392,6 +392,7 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
 #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
 #define FAULT_FLAG_INSTRUCTION	0x100	/* The fault was during an instruction fetch */
+#define FAULT_FLAG_CACHED	0x200	/* Only look at the page cache */
 
 #define FAULT_FLAG_TRACE \
 	{ FAULT_FLAG_WRITE,		"WRITE" }, \
@@ -402,7 +403,8 @@ extern pgprot_t protection_map[16];
 	{ FAULT_FLAG_TRIED,		"TRIED" }, \
 	{ FAULT_FLAG_USER,		"USER" }, \
 	{ FAULT_FLAG_REMOTE,		"REMOTE" }, \
-	{ FAULT_FLAG_INSTRUCTION,	"INSTRUCTION" }
+	{ FAULT_FLAG_INSTRUCTION,	"INSTRUCTION" }, \
+	{ FAULT_FLAG_CACHED,		"CACHED" }
 
 /*
  * vm_fault is filled by the the pagefault handler and passed to the vma's
diff --git a/mm/filemap.c b/mm/filemap.c
index 024ff0b5fcb6..2297fad3b03a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2383,7 +2383,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	 * the mmap_sem still held. That's how FAULT_FLAG_RETRY_NOWAIT
 	 * is supposed to work. We have way too many special cases..
 	 */
-	if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+	if (vmf->flags & (FAULT_FLAG_RETRY_NOWAIT | FAULT_FLAG_CACHED))
 		return 0;
 
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
@@ -2460,26 +2460,28 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
  * so we want to possibly extend the readahead further. We return the file that
  * was pinned if we have to drop the mmap_sem in order to do IO.
  */
-static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
-					    struct page *page)
+static vm_fault_t do_async_mmap_readahead(struct vm_fault *vmf,
+					  struct page *page,
+					  struct file **fpin)
 {
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
 	struct address_space *mapping = file->f_mapping;
-	struct file *fpin = NULL;
 	pgoff_t offset = vmf->pgoff;
 
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ)
-		return fpin;
+		return 0;
 	if (ra->mmap_miss > 0)
 		ra->mmap_miss--;
 	if (PageReadahead(page)) {
-		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
+		if (vmf->flags & FAULT_FLAG_CACHED)
+			return VM_FAULT_RETRY;
+		*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 		page_cache_async_readahead(mapping, ra, file,
 					   page, offset, ra->ra_pages);
 	}
-	return fpin;
+	return 0;
 }
 
 /**
@@ -2495,8 +2497,11 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
  *
  * vma->vm_mm->mmap_sem must be held on entry.
  *
- * If our return value has VM_FAULT_RETRY set, it's because the mmap_sem
- * may be dropped before doing I/O or by lock_page_maybe_drop_mmap().
+ * This function may drop the mmap_sem before doing I/O or waiting for a page
+ * lock; this is indicated by the VM_FAULT_RETRY flag in our return value.
+ * Setting FAULT_FLAG_CACHED or FAULT_FLAG_RETRY_NOWAIT in vmf->flags will
+ * prevent dropping the mmap_sem; in that case, VM_FAULT_RETRY indicates that
+ * the mmap_sem would have been dropped.
  *
  * If our return value does not have VM_FAULT_RETRY set, the mmap_sem
  * has not been released.
@@ -2518,9 +2523,15 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	struct page *page;
 	vm_fault_t ret = 0;
 
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
-		return VM_FAULT_SIGBUS;
+	/*
+	 * FAULT_FLAG_CACHED indicates that the inode size is only guaranteed
+	 * to be valid when the page we are looking for is in the page cache.
+	 */
+	if (!(vmf->flags & FAULT_FLAG_CACHED)) {
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		if (unlikely(offset >= max_off))
+			return VM_FAULT_SIGBUS;
+	}
 
 	/*
 	 * Do we have something in the page cache already?
@@ -2531,8 +2542,14 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		 * We found the page, so try async readahead before
 		 * waiting for the lock.
 		 */
-		fpin = do_async_mmap_readahead(vmf, page);
+		ret = do_async_mmap_readahead(vmf, page, &fpin);
+		if (ret) {
+			put_page(page);
+			return ret;
+		}
 	} else if (!page) {
+		if (vmf->flags & FAULT_FLAG_CACHED)
+			goto out_retry;
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);