From patchwork Thu Jun 18 14:43:55 2020
X-Patchwork-Submitter: Jens Axboe <axboe@kernel.dk>
X-Patchwork-Id: 11612353
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, akpm@linux-foundation.org,
    Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 15/15] io_uring: support true async buffered reads, if file provides it
Date: Thu, 18 Jun 2020 08:43:55 -0600
Message-Id: <20200618144355.17324-16-axboe@kernel.dk>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200618144355.17324-1-axboe@kernel.dk>
References: <20200618144355.17324-1-axboe@kernel.dk>

If the file is flagged with FMODE_BUF_RASYNC, then we don't have to punt
the buffered read to an io-wq worker. Instead we can rely on page
unlocking callbacks to support retry based async IO. This is a lot more
efficient than doing async thread offload.

The retry is done similarly to how we handle poll based retry. From the
unlock callback, we simply queue the retry to a task_work based handler.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
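For illustration only (not part of the patch): a minimal userspace
reader, assuming liburing is installed (build with -luring), that
submits a buffered read through io_uring. Nothing changes for the
application; but with this series applied, on a file whose filesystem
sets FMODE_BUF_RASYNC, a read like this that hits a locked page is
retried from the page unlock callback rather than punted to an io-wq
worker thread.

/* sketch: plain buffered read via io_uring; error handling trimmed */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <liburing.h>

int main(int argc, char **argv)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd, ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* buffered open, deliberately no O_DIRECT */
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}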
 fs/io_uring.c | 145 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 137 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 40413fb9d07b..94282be1c413 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -78,6 +78,7 @@
 #include
 #include
 #include
+#include

 #define CREATE_TRACE_POINTS
 #include
@@ -503,6 +504,8 @@ struct io_async_rw {
 	struct iovec			*iov;
 	ssize_t				nr_segs;
 	ssize_t				size;
+	struct wait_page_queue		wpq;
+	struct callback_head		task_work;
 };

 struct io_async_ctx {
@@ -2750,6 +2753,126 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }

+static void __io_async_buf_error(struct io_kiocb *req, int error)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	spin_lock_irq(&ctx->completion_lock);
+	io_cqring_fill_event(req, error);
+	io_commit_cqring(ctx);
+	spin_unlock_irq(&ctx->completion_lock);
+
+	io_cqring_ev_posted(ctx);
+	req_set_fail_links(req);
+	io_double_put_req(req);
+}
+
+static void io_async_buf_cancel(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	__io_async_buf_error(req, -ECANCELED);
+}
+
+static void io_async_buf_retry(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_ring_ctx *ctx;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	ctx = req->ctx;
+
+	__set_current_state(TASK_RUNNING);
+	if (!io_sq_thread_acquire_mm(ctx, req)) {
+		mutex_lock(&ctx->uring_lock);
+		__io_queue_sqe(req, NULL);
+		mutex_unlock(&ctx->uring_lock);
+	} else {
+		__io_async_buf_error(req, -EFAULT);
+	}
+}
+
+static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+			     int sync, void *arg)
+{
+	struct wait_page_queue *wpq;
+	struct io_kiocb *req = wait->private;
+	struct io_async_rw *rw = &req->io->rw;
+	struct wait_page_key *key = arg;
+	struct task_struct *tsk;
+	int ret;
+
+	wpq = container_of(wait, struct wait_page_queue, wait);
+
+	ret = wake_page_match(wpq, key);
+	if (ret != 1)
+		return ret;
+
+	list_del_init(&wait->entry);
+
+	init_task_work(&rw->task_work, io_async_buf_retry);
+	/* submit ref gets dropped, acquire a new one */
+	refcount_inc(&req->refs);
+	tsk = req->task;
+	ret = task_work_add(tsk, &rw->task_work, true);
+	if (unlikely(ret)) {
+		/* queue just for cancelation */
+		init_task_work(&rw->task_work, io_async_buf_cancel);
+		tsk = io_wq_get_task(req->ctx->io_wq);
+		task_work_add(tsk, &rw->task_work, true);
+	}
+	wake_up_process(tsk);
+	return 1;
+}
+
+static bool io_rw_should_retry(struct io_kiocb *req)
+{
+	struct kiocb *kiocb = &req->rw.kiocb;
+	int ret;
+
+	/* never retry for NOWAIT, we just complete with -EAGAIN */
+	if (req->flags & REQ_F_NOWAIT)
+		return false;
+
+	/* already tried, or we're doing O_DIRECT */
+	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_WAITQ))
+		return false;
+	/*
+	 * just use poll if we can, and don't attempt if the fs doesn't
+	 * support callback based unlocks
+	 */
+	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
+		return false;
+
+	/*
+	 * If request type doesn't require req->io to defer in general,
+	 * we need to allocate it here
+	 */
+	if (!req->io && __io_alloc_async_ctx(req))
+		return false;
+
+	ret = kiocb_wait_page_queue_init(kiocb, &req->io->rw.wpq,
+						io_async_buf_func, req);
+	if (!ret) {
+		io_get_req_task(req);
+		return true;
+	}
+
+	return false;
+}
+
+static int io_iter_do_read(struct io_kiocb *req,
+			   struct iov_iter *iter)
+{
+	if (req->file->f_op->read_iter)
+		return call_read_iter(req->file, &req->rw.kiocb, iter);
+	return loop_rw_iter(READ, req->file, &req->rw.kiocb, iter);
+}
+
 static int io_read(struct io_kiocb *req, bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
@@ -2784,10 +2907,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 		unsigned long nr_segs = iter.nr_segs;
 		ssize_t ret2 = 0;

-		if (req->file->f_op->read_iter)
-			ret2 = call_read_iter(req->file, kiocb, &iter);
-		else
-			ret2 = loop_rw_iter(READ, req->file, kiocb, &iter);
+		ret2 = io_iter_do_read(req, &iter);

 		/* Catch -EAGAIN return for forced non-blocking submission */
 		if (!force_nonblock || (ret2 != -EAGAIN && ret2 != -EIO)) {
@@ -2799,17 +2919,26 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 			ret = io_setup_async_rw(req, io_size, iovec,
 						inline_vecs, &iter);
 			if (ret)
-				goto out_free;
+				goto out;
 			/* any defer here is final, must blocking retry */
 			if (!(req->flags & REQ_F_NOWAIT) &&
 			    !file_can_poll(req->file))
 				req->flags |= REQ_F_MUST_PUNT;
+			/* if we can retry, do so with the callbacks armed */
+			if (io_rw_should_retry(req)) {
+				ret2 = io_iter_do_read(req, &iter);
+				if (ret2 == -EIOCBQUEUED) {
+					goto out;
+				} else if (ret2 != -EAGAIN) {
+					kiocb_done(kiocb, ret2);
+					goto out;
+				}
+			}
+			kiocb->ki_flags &= ~IOCB_WAITQ;
 			return -EAGAIN;
 		}
 	}
-out_free:
-	kfree(iovec);
-	req->flags &= ~REQ_F_NEED_CLEANUP;
+out:
 	return ret;
 }
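For reference, also not part of this patch: the retry path above only
triggers for files whose filesystem has opted in by setting
FMODE_BUF_RASYNC at open time; the flag and the per-filesystem
enablement are in earlier patches of this series. A sketch of what that
opt-in looks like, using a made-up "myfs" purely for illustration:

/* sketch only: hypothetical filesystem opting in to async buffered reads */
static int myfs_file_open(struct inode *inode, struct file *file)
{
	/* normal open-time setup for the filesystem goes here */

	/* buffered reads on this file may retry via page unlock callbacks */
	file->f_mode |= FMODE_BUF_RASYNC;
	return 0;
}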