From patchwork Fri Nov 21 10:08:27 2014
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 5354091
From: Omar Sandoval
To: Alexander Viro, Andrew Morton, Chris Mason, Josef Bacik,
	linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-nfs@vger.kernel.org, Trond Myklebust, Mel Gorman
Cc: Omar Sandoval, Dave Kleikamp, Ming Lei
Subject: [PATCH v2 1/5] direct-io: don't dirty ITER_BVEC pages on read
Date: Fri, 21 Nov 2014 02:08:27 -0800
X-Mailer: git-send-email 2.1.3

Reads that come through the iov_iter infrastructure with kernel (ITER_BVEC)
pages shouldn't have their pages dirtied by the direct I/O code.

This is based on Dave Kleikamp's and Ming Lei's previously posted patches.
Cc: Dave Kleikamp
Cc: Ming Lei
Signed-off-by: Omar Sandoval
Acked-by: Dave Kleikamp
---
 fs/direct-io.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index e181b6b..e542ce4 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -120,6 +120,7 @@ struct dio {
 	spinlock_t bio_lock;		/* protects BIO fields below */
 	int page_errors;		/* errno from get_user_pages() */
 	int is_async;			/* is IO async ? */
+	int should_dirty;		/* should we mark read pages dirty? */
 	bool defer_completion;		/* defer AIO completion to workqueue? */
 	int io_error;			/* IO error in completion path */
 	unsigned long refcount;		/* direct_io_worker() and bios */
@@ -392,7 +393,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
 	dio->refcount++;
 	spin_unlock_irqrestore(&dio->bio_lock, flags);
 
-	if (dio->is_async && dio->rw == READ)
+	if (dio->is_async && dio->rw == READ && dio->should_dirty)
 		bio_set_pages_dirty(bio);
 
 	if (sdio->submit_io)
@@ -463,13 +464,13 @@ static int dio_bio_complete(struct dio *dio, struct bio *bio)
 	if (!uptodate)
 		dio->io_error = -EIO;
 
-	if (dio->is_async && dio->rw == READ) {
+	if (dio->is_async && dio->rw == READ && dio->should_dirty) {
 		bio_check_pages_dirty(bio);	/* transfers ownership */
 	} else {
 		bio_for_each_segment_all(bvec, bio, i) {
 			struct page *page = bvec->bv_page;
 
-			if (dio->rw == READ && !PageCompound(page))
+			if (dio->rw == READ && !PageCompound(page) && dio->should_dirty)
 				set_page_dirty_lock(page);
 			page_cache_release(page);
 		}
@@ -1177,6 +1178,7 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 
 	dio->inode = inode;
 	dio->rw = rw;
+	dio->should_dirty = !(iter->type & ITER_BVEC);
 
 	/*
 	 * For AIO O_(D)SYNC writes we need to defer completions to a workqueue
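
For context, a hedged sketch of the kind of caller this change targets: a
kernel user (e.g. a loop-style driver or swap-over-NFS) reading into pages it
already owns through a bvec-backed iov_iter. The helper below is hypothetical
and not part of this patch; the exact includes, kiocb fields, and the
iov_iter_bvec() direction flags vary between kernel versions. The point is
that an ITER_BVEC iterator now keeps dio->should_dirty at 0, so the direct
I/O read path skips bio_set_pages_dirty()/set_page_dirty_lock() for these
kernel pages.

/*
 * Hypothetical caller sketch (not part of this patch): read into a
 * kernel-owned page through an O_DIRECT file using a bvec-backed
 * iov_iter.  Modeled loosely on the 3.18-era new_sync_read() / loop
 * driver pattern; header and field locations differ in later kernels.
 */
#include <linux/aio.h>		/* init_sync_kiocb() lives here in this era */
#include <linux/bio.h>		/* struct bio_vec */
#include <linux/fs.h>
#include <linux/uio.h>

static ssize_t kernel_page_direct_read(struct file *filp, struct page *page,
				       size_t len, loff_t pos)
{
	struct bio_vec bvec = {
		.bv_page	= page,	/* page owned by the kernel caller */
		.bv_len		= len,
		.bv_offset	= 0,
	};
	struct iov_iter iter;
	struct kiocb kiocb;

	init_sync_kiocb(&kiocb, filp);
	kiocb.ki_pos = pos;
	kiocb.ki_nbytes = len;	/* kiocb field present in this era */

	/*
	 * ITER_BVEC marks the iterator as kernel-page backed; with this
	 * patch, do_blockdev_direct_IO() then sets dio->should_dirty = 0
	 * and leaves these pages undirtied on read completion.
	 */
	iov_iter_bvec(&iter, ITER_BVEC | READ, &bvec, 1, len);

	/* filp must have been opened with O_DIRECT for the dio path to run. */
	return filp->f_op->read_iter(&kiocb, &iter);
}

The decision itself is visible in the last hunk above:
dio->should_dirty = !(iter->type & ITER_BVEC), i.e. only user-backed iovecs
still get the dirty-page handling on read.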