From patchwork Sat May 23 01:50:38 2020
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 11566477
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHSET v2 RFC 0/11] Add support for async buffered reads
Date: Fri, 22 May 2020 19:50:38 -0600
Message-Id: <20200523015049.14808-1-axboe@kernel.dk>
X-Mailer: git-send-email 2.26.2

We technically support this already through io_uring, but it's implemented
with a thread backend to support cases where we would block. This isn't
ideal.

After a few prep patches, the core of this patchset is adding support for
async callbacks on page unlock. With this primitive, we can simply retry
the IO operation. With io_uring, this works a lot like poll based retry
for files that support it. If a page is currently locked and needed,
-EIOCBQUEUED is returned with a callback armed. The caller's callback is
responsible for restarting the operation.

With this callback primitive, we can add support for
generic_file_buffered_read(), which is what most file systems end up using
for buffered reads. XFS/ext4/btrfs/bdev are wired up, and it's probably
trivial to add more. Files signal support for this by setting
FMODE_BUF_RASYNC, similar to what we do for FMODE_NOWAIT. Open to
suggestions here on whether this is the preferred method or not.

In terms of results, I wrote a small test app that randomly reads 4G of
data in 4K chunks from a file hosted by ext4. The app uses a queue depth
of 32 (a rough sketch of this kind of reader is included after the numbers
below).

preadv for comparison:
        real    1m13.821s
        user    0m0.558s
        sys     0m11.125s
        CPU     ~13%

Mainline:
        real    0m12.054s
        user    0m0.111s
        sys     0m5.659s
        CPU     ~32% + ~50% == ~82%

This patchset:
        real    0m9.283s
        user    0m0.147s
        sys     0m4.619s
        CPU     ~52%

The CPU numbers are just a rough estimate. For the mainline io_uring run,
this includes the app itself and all the threads doing IO on its behalf
(32% for the app, ~1.6% per worker and 32 of them). The context switch
rate is much lower with this patchset, since we only have the one task
performing IO.
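For reference, a reader along those lines can be put together with liburing
roughly as follows. This is only a sketch of the kind of test described
above, not the actual test app; the file name, lack of cqe->res checking,
and per-request malloc are simplifications.

/*
 * Sketch of a QD=32 random 4K reader over a 4G file using io_uring.
 * Illustrative only, not the test app used for the numbers above.
 * Build with: gcc -O2 -o randread randread.c -luring
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <liburing.h>

#define QD              32
#define BS              4096ULL
#define FILE_SIZE       (4ULL * 1024 * 1024 * 1024)
#define NR_BLOCKS       (FILE_SIZE / BS)

int main(int argc, char **argv)
{
        struct io_uring ring;
        unsigned long long submitted = 0, completed = 0;
        unsigned int inflight = 0;
        int fd;

        fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (io_uring_queue_init(QD, &ring, 0) < 0) {
                fprintf(stderr, "io_uring_queue_init failed\n");
                return 1;
        }

        while (completed < NR_BLOCKS) {
                struct io_uring_cqe *cqe;

                /* keep up to QD reads in flight at random 4K offsets */
                while (inflight < QD && submitted < NR_BLOCKS) {
                        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                        void *buf;

                        if (!sqe)
                                break;
                        buf = malloc(BS);
                        io_uring_prep_read(sqe, fd, buf, BS,
                                           (rand() % NR_BLOCKS) * BS);
                        io_uring_sqe_set_data(sqe, buf);
                        inflight++;
                        submitted++;
                }
                io_uring_submit(&ring);

                /* reap one completion, then top the queue back up */
                if (io_uring_wait_cqe(&ring, &cqe) < 0)
                        break;
                free(io_uring_cqe_get_data(cqe));
                io_uring_cqe_seen(&ring, cqe);
                inflight--;
                completed++;
        }

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
}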
The goal here is efficiency. Async thread offload adds latency, and it
also adds noticeable overhead on items such as adding pages to the page
cache. By allowing proper async buffered read support, we don't have X
threads hammering on the same inode's page cache; we have just the single
app actually doing IO.

Series can also be found here:

https://git.kernel.dk/cgit/linux-block/log/?h=async-buffered.2

or pull from:

git://git.kernel.dk/linux-block async-buffered.2

 fs/block_dev.c            |   2 +-
 fs/btrfs/file.c           |   2 +-
 fs/ext4/file.c            |   2 +-
 fs/io_uring.c             | 102 ++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_file.c         |   2 +-
 include/linux/blk_types.h |   3 +-
 include/linux/fs.h        |   5 ++
 include/linux/pagemap.h   |  39 +++++++++++++++
 mm/filemap.c              |  83 +++++++++++++++++++++++++------
 9 files changed, 219 insertions(+), 21 deletions(-)

Changes since v1:
- Fix an issue with inline page locking
- Fix a potential race with __wait_on_page_locked_async()
- Fix a hang related to not setting page_match, thus missing a wakeup
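As a footnote on the per-filesystem wiring mentioned earlier (the one-line
changes to fs/xfs/xfs_file.c, fs/ext4/file.c, fs/btrfs/file.c and
fs/block_dev.c in the diffstat): advertising support is just a matter of
setting FMODE_BUF_RASYNC when the file is opened, next to where
FMODE_NOWAIT is set today. A minimal sketch, with a made-up "myfs" rather
than the actual hunks from the series:

#include <linux/fs.h>

/*
 * Sketch only: a filesystem opting in to async buffered reads by
 * flagging the struct file at open time, mirroring FMODE_NOWAIT.
 * myfs_file_open() is a placeholder, not code from this series.
 */
static int myfs_file_open(struct inode *inode, struct file *file)
{
        file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC;
        return generic_file_open(inode, file);
}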