From patchwork Fri May 18 16:48:25 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10411357
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 29/34] xfs: don't look at buffer heads in xfs_add_to_ioend
Date: Fri, 18 May 2018 18:48:25 +0200
Message-Id: <20180518164830.1552-30-hch@lst.de>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180518164830.1552-1-hch@lst.de>
References: <20180518164830.1552-1-hch@lst.de>
Calculate all information for the bio based on the passed-in information,
without requiring a buffer_head structure.

Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_aops.c | 68 ++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index f01c1dd737ec..592b33b35a30 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -44,7 +44,6 @@ struct xfs_writepage_ctx {
 	struct xfs_bmbt_irec	imap;
 	unsigned int		io_type;
 	struct xfs_ioend	*ioend;
-	sector_t		last_block;
 };
 
 void
@@ -545,11 +544,6 @@ xfs_start_page_writeback(
 	unlock_page(page);
 }
 
-static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
-{
-	return bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
-}
-
 /*
  * Submit the bio for an ioend. We are passed an ioend with a bio attached to
  * it, and we submit that bio. The ioend may be used for multiple bio
@@ -604,27 +598,20 @@ xfs_submit_ioend(
 	return 0;
 }
 
-static void
-xfs_init_bio_from_bh(
-	struct bio		*bio,
-	struct buffer_head	*bh)
-{
-	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
-	bio_set_dev(bio, bh->b_bdev);
-}
-
 static struct xfs_ioend *
 xfs_alloc_ioend(
 	struct inode		*inode,
 	unsigned int		type,
 	xfs_off_t		offset,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
 
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
-	xfs_init_bio_from_bh(bio, bh);
+	bio_set_dev(bio, bdev);
+	bio->bi_iter.bi_sector = sector;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
@@ -649,13 +636,14 @@ static void
 xfs_chain_bio(
 	struct xfs_ioend	*ioend,
 	struct writeback_control *wbc,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct bio *new;
 
 	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
-	xfs_init_bio_from_bh(new, bh);
-
+	bio_set_dev(new, bdev);
+	new->bi_iter.bi_sector = sector;
 	bio_chain(ioend->io_bio, new);
 	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
 	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
@@ -665,39 +653,45 @@ xfs_chain_bio(
 }
 
 /*
- * Test to see if we've been building up a completion structure for
- * earlier buffers -- if so, we try to append to this ioend if we
- * can, otherwise we finish off any current ioend and start another.
- * Return the ioend we finished off so that the caller can submit it
- * once it has finished processing the dirty page.
+ * Test to see if we have an existing ioend structure that we could append to
+ * first, otherwise finish off the current ioend and start another.
  */
 STATIC void
 xfs_add_to_ioend(
 	struct inode		*inode,
-	struct buffer_head	*bh,
 	xfs_off_t		offset,
+	struct page		*page,
 	struct xfs_writepage_ctx *wpc,
 	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
+	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_mount	*mp = ip->i_mount;
+	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
+	unsigned		len = i_blocksize(inode);
+	unsigned		poff = offset & (PAGE_SIZE - 1);
+	sector_t		sector;
+
+	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
+		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
+
 	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
-	    bh->b_blocknr != wpc->last_block + 1 ||
+	    sector != bio_end_sector(wpc->ioend->io_bio) ||
 	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset, bh);
+		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset,
+				bdev, sector);
 	}
 
 	/*
-	 * If the buffer doesn't fit into the bio we need to allocate a new
-	 * one. This shouldn't happen more than once for a given buffer.
+	 * If the block doesn't fit into the bio we need to allocate a new
+	 * one. This shouldn't happen more than once for a given block.
 	 */
-	while (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size)
-		xfs_chain_bio(wpc->ioend, wbc, bh);
+	while (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len)
+		xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 
-	wpc->ioend->io_size += bh->b_size;
-	wpc->last_block = bh->b_blocknr;
-	xfs_start_buffer_writeback(bh);
+	wpc->ioend->io_size += len;
 }
 
 STATIC void
@@ -893,7 +887,9 @@ xfs_writepage_map(
 
 		lock_buffer(bh);
 		xfs_map_at_offset(inode, bh, &wpc->imap, file_offset);
-		xfs_add_to_ioend(inode, bh, file_offset, wpc, wbc, &submit_list);
+		xfs_add_to_ioend(inode, file_offset, page, wpc, wbc,
+				&submit_list);
+		xfs_start_buffer_writeback(bh);
 		count++;
 	}
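
Not part of the patch: for anyone who wants to sanity-check the new arithmetic,
below is a small userspace sketch of how xfs_add_to_ioend() now derives the
in-page offset and the disk sector from the block-aligned writeback offset and
the cached extent mapping, instead of reading bh->b_blocknr. fsb_to_bytes()
and fsb_to_daddr() are hypothetical stand-ins for XFS_FSB_TO_B() and
xfs_fsb_to_db(), assuming a fixed 1024-byte filesystem block; the real helpers
consult the xfs_mount geometry.

/*
 * Illustrative userspace sketch only -- not kernel code and not part of the
 * patch.  It mimics the poff/sector computation added to xfs_add_to_ioend()
 * above, with simplified block-size assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096u
#define BLOCKSIZE	1024u	/* assumed i_blocksize(inode) */

/* stand-in for XFS_FSB_TO_B(mp, fsb): fs blocks -> bytes */
static uint64_t fsb_to_bytes(uint64_t fsb)
{
	return fsb * BLOCKSIZE;
}

/* stand-in for xfs_fsb_to_db(ip, fsb): fs blocks -> 512-byte disk sectors */
static uint64_t fsb_to_daddr(uint64_t fsb)
{
	return fsb * (BLOCKSIZE >> 9);
}

int main(void)
{
	uint64_t br_startoff = 64;		/* extent start in the file, in fs blocks */
	uint64_t br_startblock = 1024;		/* extent start on disk, in fs blocks */
	uint64_t offset = 65 * BLOCKSIZE;	/* block-aligned writeback offset, bytes */

	/* offset of this block within its page, passed to bio_add_page() */
	unsigned int poff = offset & (PAGE_SIZE - 1);

	/* starting disk sector, stored in bio->bi_iter.bi_sector */
	uint64_t sector = fsb_to_daddr(br_startblock) +
			((offset - fsb_to_bytes(br_startoff)) >> 9);

	/* prints poff=1024 sector=2050 for the values above */
	printf("poff=%u sector=%llu\n", poff, (unsigned long long)sector);
	return 0;
}

Note that with 4k blocks on 4k pages poff is always zero; the sub-page offset
passed to bio_add_page() only matters when the filesystem block size is
smaller than the page size, as in the 1k-block example above.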