From patchwork Wed Jul 1 08:59:28 2020
X-Patchwork-Id: 11635725
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 01/20] nfblock: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:28 +0200
Message-Id: <20200701085947.3354405-2-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
 arch/m68k/emu/nfblock.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c
index c3a630440512e9..87e8b1700acd28 100644
--- a/arch/m68k/emu/nfblock.c
+++ b/arch/m68k/emu/nfblock.c
@@ -61,7 +61,7 @@ struct nfhd_device {
 static blk_qc_t nfhd_make_request(struct request_queue *queue, struct bio *bio)
 {
-        struct nfhd_device *dev = queue->queuedata;
+        struct nfhd_device *dev = bio->bi_disk->private_data;
         struct bio_vec bvec;
         struct bvec_iter iter;
         int dir, len, shift;
@@ -122,7 +122,6 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 bsize)
         if (dev->queue == NULL)
                 goto free_dev;
 
-        dev->queue->queuedata = dev;
         blk_queue_logical_block_size(dev->queue, bsize);
 
         dev->disk = alloc_disk(16);
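The pattern the following driver patches apply is the same throughout: a bio-based driver's make_request function stops reading its per-device structure from request_queue->queuedata and instead takes it from bio->bi_disk->private_data, which the drivers touched here already set when they allocate their gendisk. As a rough, self-contained sketch of that shape (not code from the series; "mydev" names are hypothetical):

#include <linux/blkdev.h>
#include <linux/genhd.h>

/* Hypothetical per-device structure, for illustration only. */
struct mydev {
        struct gendisk *disk;
        struct request_queue *queue;
};

static blk_qc_t mydev_make_request(struct request_queue *q, struct bio *bio)
{
        /* Old style: only works if probe code also did q->queuedata = dev. */
        /* struct mydev *dev = q->queuedata; */

        /* New style: the gendisk already points back at the device. */
        struct mydev *dev = bio->bi_disk->private_data;

        if (!dev) {
                bio_io_error(bio);
                return BLK_QC_T_NONE;
        }

        /* ... perform the transfer described by bio->bi_iter using dev ... */
        bio_endio(bio);
        return BLK_QC_T_NONE;
}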
From patchwork Wed Jul 1 08:59:29 2020
X-Patchwork-Id: 11635727
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 02/20] simdisk: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:29 +0200
Message-Id: <20200701085947.3354405-3-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 arch/xtensa/platforms/iss/simdisk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
index 49322b66cda931..31b5020077a059 100644
--- a/arch/xtensa/platforms/iss/simdisk.c
+++ b/arch/xtensa/platforms/iss/simdisk.c
@@ -103,7 +103,7 @@ static void simdisk_transfer(struct simdisk *dev, unsigned long sector,
 static blk_qc_t simdisk_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct simdisk *dev = q->queuedata;
+        struct simdisk *dev = bio->bi_disk->private_data;
         struct bio_vec bvec;
         struct bvec_iter iter;
         sector_t sector = bio->bi_iter.bi_sector;
@@ -273,8 +273,6 @@ static int __init simdisk_setup(struct simdisk *dev, int which,
                 goto out_alloc_queue;
         }
 
-        dev->queue->queuedata = dev;
-
         dev->gd = alloc_disk(SIMDISK_MINORS);
         if (dev->gd == NULL) {
                 pr_err("alloc_disk failed\n");
From patchwork Wed Jul 1 08:59:30 2020
X-Patchwork-Id: 11635729
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 03/20] drbd: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:30 +0200
Message-Id: <20200701085947.3354405-4-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_main.c | 1 -
 drivers/block/drbd/drbd_req.c  | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 45fbd526c453bc..26f4e0aa7393b4 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2805,7 +2805,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
         if (!q)
                 goto out_no_q;
         device->rq_queue = q;
-        q->queuedata = device;
 
         disk = alloc_disk(1);
         if (!disk)
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index c80a2f1c3c2a73..3f09b2ab977822 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -1595,7 +1595,7 @@ void do_submit(struct work_struct *ws)
 blk_qc_t drbd_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct drbd_device *device = (struct drbd_device *) q->queuedata;
+        struct drbd_device *device = bio->bi_disk->private_data;
         unsigned long start_jif;
 
         blk_queue_split(q, &bio);
From patchwork Wed Jul 1 08:59:31 2020
X-Patchwork-Id: 11635731
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 04/20] null_blk: stop using ->queuedata for bio mode
Date: Wed, 1 Jul 2020 10:59:31 +0200
Message-Id: <20200701085947.3354405-5-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/null_blk_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index 82259242b9b5c9..93ce0a00b2ae01 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1392,7 +1392,7 @@ static blk_qc_t null_queue_bio(struct request_queue *q, struct bio *bio)
 {
         sector_t sector = bio->bi_iter.bi_sector;
         sector_t nr_sectors = bio_sectors(bio);
-        struct nullb *nullb = q->queuedata;
+        struct nullb *nullb = bio->bi_disk->private_data;
         struct nullb_queue *nq = nullb_to_queue(nullb);
         struct nullb_cmd *cmd;
From patchwork Wed Jul 1 08:59:32 2020
X-Patchwork-Id: 11635733
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 05/20] ps3vram: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:32 +0200
Message-Id: <20200701085947.3354405-6-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/ps3vram.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c
index 821d4d8b1d763e..5a1d1d137c7248 100644
--- a/drivers/block/ps3vram.c
+++ b/drivers/block/ps3vram.c
@@ -587,7 +587,7 @@ static struct bio *ps3vram_do_bio(struct ps3_system_bus_device *dev,
 static blk_qc_t ps3vram_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct ps3_system_bus_device *dev = q->queuedata;
+        struct ps3_system_bus_device *dev = bio->bi_disk->private_data;
         struct ps3vram_priv *priv = ps3_system_bus_get_drvdata(dev);
         int busy;
@@ -745,7 +745,6 @@ static int ps3vram_probe(struct ps3_system_bus_device *dev)
         }
 
         priv->queue = queue;
-        queue->queuedata = dev;
         blk_queue_max_segments(queue, BLK_MAX_SEGMENTS);
         blk_queue_max_segment_size(queue, BLK_MAX_SEGMENT_SIZE);
         blk_queue_max_hw_sectors(queue, BLK_SAFE_MAX_SECTORS);
From patchwork Wed Jul 1 08:59:33 2020
X-Patchwork-Id: 11635735
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 06/20] rsxx: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:33 +0200
Message-Id: <20200701085947.3354405-7-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/rsxx/dev.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c
index 3ba07ab30c84f5..6a4d8d26e32cbd 100644
--- a/drivers/block/rsxx/dev.c
+++ b/drivers/block/rsxx/dev.c
@@ -119,7 +119,7 @@ static void bio_dma_done_cb(struct rsxx_cardinfo *card,
 static blk_qc_t rsxx_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct rsxx_cardinfo *card = q->queuedata;
+        struct rsxx_cardinfo *card = bio->bi_disk->private_data;
         struct rsxx_bio_meta *bio_meta;
         blk_status_t st = BLK_STS_IOERR;
@@ -267,8 +267,6 @@ int rsxx_setup_dev(struct rsxx_cardinfo *card)
                 card->queue->limits.discard_alignment = RSXX_HW_BLK_SIZE;
         }
 
-        card->queue->queuedata = card;
-
         snprintf(card->gendisk->disk_name, sizeof(card->gendisk->disk_name),
                  "rsxx%d", card->disk_id);
         card->gendisk->major = card->major;
@@ -289,7 +287,6 @@ void rsxx_destroy_dev(struct rsxx_cardinfo *card)
         card->gendisk = NULL;
 
         blk_cleanup_queue(card->queue);
-        card->queue->queuedata = NULL;
         unregister_blkdev(card->major, DRIVER_NAME);
 }
From patchwork Wed Jul 1 08:59:34 2020
X-Patchwork-Id: 11635737
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 07/20] umem: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:34 +0200
Message-Id: <20200701085947.3354405-8-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/umem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index 1e2aa5ae27963c..5498f1cf36b3fe 100644
--- a/drivers/block/umem.c
+++ b/drivers/block/umem.c
@@ -521,7 +521,8 @@ static int mm_check_plugged(struct cardinfo *card)
 static blk_qc_t mm_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct cardinfo *card = q->queuedata;
+        struct cardinfo *card = bio->bi_disk->private_data;
+
         pr_debug("mm_make_request %llu %u\n",
                  (unsigned long long)bio->bi_iter.bi_sector,
                  bio->bi_iter.bi_size);
@@ -888,7 +889,6 @@ static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
         card->queue = blk_alloc_queue(mm_make_request, NUMA_NO_NODE);
         if (!card->queue)
                 goto failed_alloc;
-        card->queue->queuedata = card;
 
         tasklet_init(&card->tasklet, process_page, (unsigned long)card);
From patchwork Wed Jul 1 08:59:35 2020
X-Patchwork-Id: 11635739
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 08/20] zram: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:35 +0200
Message-Id: <20200701085947.3354405-9-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
---
 drivers/block/zram/zram_drv.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6e2ad90b17a376..0564e3f384089e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1586,7 +1586,7 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
  */
 static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
 {
-        struct zram *zram = queue->queuedata;
+        struct zram *zram = bio->bi_disk->private_data;
 
         if (!valid_io_request(zram, bio->bi_iter.bi_sector,
                                         bio->bi_iter.bi_size)) {
@@ -1912,7 +1912,6 @@ static int zram_add(void)
         zram->disk->first_minor = device_id;
         zram->disk->fops = &zram_devops;
         zram->disk->queue = queue;
-        zram->disk->queue->queuedata = zram;
         zram->disk->private_data = zram;
         snprintf(zram->disk->disk_name, 16, "zram%d", device_id);
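As the zram hunk above shows, the disk's private_data is assigned right next to where queuedata used to be set, which is what makes dropping queuedata safe. For illustration only, a probe-time sketch of the same idea against the 5.7-era queue allocation API seen in these patches (hypothetical "mydev" names, not code from any of these drivers; disk_name, major/minors and fops setup omitted):

#include <linux/blkdev.h>
#include <linux/genhd.h>

struct mydev {
        struct gendisk *disk;
        struct request_queue *queue;
        sector_t capacity_sectors;
};

/* make_request function as sketched earlier; only the declaration matters here. */
blk_qc_t mydev_make_request(struct request_queue *q, struct bio *bio);

static int mydev_init_disk(struct mydev *dev)
{
        dev->queue = blk_alloc_queue(mydev_make_request, NUMA_NO_NODE);
        if (!dev->queue)
                return -ENOMEM;

        dev->disk = alloc_disk(1);
        if (!dev->disk) {
                blk_cleanup_queue(dev->queue);
                return -ENOMEM;
        }

        dev->disk->queue = dev->queue;
        dev->disk->private_data = dev;  /* what bi_disk->private_data returns */
        /* dev->queue->queuedata = dev; -- no longer needed after this series */
        set_capacity(dev->disk, dev->capacity_sectors);
        add_disk(dev->disk);
        return 0;
}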
From patchwork Wed Jul 1 08:59:36 2020
X-Patchwork-Id: 11635741
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 09/20] bcache: stop setting ->queuedata
Date: Wed, 1 Jul 2020 10:59:36 +0200
Message-Id: <20200701085947.3354405-10-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Nothing in bcache actually uses the ->queuedata field.

Signed-off-by: Christoph Hellwig
Acked-by: Coly Li
---
 drivers/md/bcache/super.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 2014016f9a60d3..21aa168113d30b 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -876,7 +876,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
                 return -ENOMEM;
 
         d->disk->queue = q;
-        q->queuedata = d;
         q->backing_dev_info->congested_data = d;
         q->limits.max_hw_sectors = UINT_MAX;
         q->limits.max_sectors = UINT_MAX;
From patchwork Wed Jul 1 08:59:37 2020
X-Patchwork-Id: 11635743
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 10/20] dm: stop using ->queuedata
Date: Wed, 1 Jul 2020 10:59:37 +0200
Message-Id: <20200701085947.3354405-11-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

Instead of setting up the queuedata as well just use one private data
field.

Signed-off-by: Christoph Hellwig
Acked-by: Mike Snitzer
---
 drivers/md/dm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index e44473fe0f4873..c8d91f271c272e 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1789,7 +1789,7 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio)
 {
-        struct mapped_device *md = q->queuedata;
+        struct mapped_device *md = bio->bi_disk->private_data;
         blk_qc_t ret = BLK_QC_T_NONE;
         int srcu_idx;
         struct dm_table *map;
@@ -1995,7 +1995,6 @@ static struct mapped_device *alloc_dev(int minor)
         md->queue = blk_alloc_queue(dm_make_request, numa_node_id);
         if (!md->queue)
                 goto bad;
-        md->queue->queuedata = md;
 
         md->disk = alloc_disk_node(1, md->numa_node_id);
         if (!md->disk)
From patchwork Wed Jul 1 08:59:38 2020
X-Patchwork-Id: 11635745
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 11/20] fs: remove a weird comment in submit_bh_wbc
Date: Wed, 1 Jul 2020 10:59:38 +0200
Message-Id: <20200701085947.3354405-12-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

All bios can get remapped if submitted to partitions. No need to comment
on that.

Signed-off-by: Christoph Hellwig
---
 fs/buffer.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 64fe82ec65ff1f..2725ebbcfdc246 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3040,12 +3040,7 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
         if (test_set_buffer_req(bh) && (op == REQ_OP_WRITE))
                 clear_buffer_write_io_error(bh);
 
-        /*
-         * from here on down, it's all bio -- do the initial mapping,
-         * submit_bio -> generic_make_request may further map this bio around
-         */
         bio = bio_alloc(GFP_NOIO, 1);
-
         bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
         bio_set_dev(bio, bh->b_bdev);
         bio->bi_write_hint = write_hint;
From patchwork Wed Jul 1 08:59:39 2020
X-Patchwork-Id: 11635747
From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 12/20] block: remove the request_queue argument from blk_queue_split
Date: Wed, 1 Jul 2020 10:59:39 +0200
Message-Id: <20200701085947.3354405-13-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>

The queue can be trivially derived from the bio, so pass one less
argument.

Signed-off-by: Christoph Hellwig
Acked-by: Song Liu
---
 block/blk-merge.c             | 21 ++++++++++-----------
 block/blk-mq.c                |  2 +-
 block/blk.h                   |  3 +--
 drivers/block/drbd/drbd_req.c |  2 +-
 drivers/block/pktcdvd.c       |  2 +-
 drivers/block/ps3vram.c       |  2 +-
 drivers/block/rsxx/dev.c      |  2 +-
 drivers/block/umem.c          |  2 +-
 drivers/lightnvm/pblk-init.c  |  4 ++--
 drivers/md/dm.c               |  2 +-
 drivers/md/md.c               |  2 +-
 drivers/nvme/host/multipath.c |  9 ++++-----
 drivers/s390/block/dcssblk.c  |  2 +-
 drivers/s390/block/xpram.c    |  2 +-
 include/linux/blkdev.h        |  2 +-
 15 files changed, 28 insertions(+), 31 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 9c9fb21584b64e..20fa2290604105 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -283,20 +283,20 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 
 /**
  * __blk_queue_split - split a bio and submit the second half
- * @q:       [in] request queue pointer
  * @bio:     [in, out] bio to be split
  * @nr_segs: [out] number of segments in the first bio
  *
  * Split a bio into two bios, chain the two bios, submit the second half and
  * store a pointer to the first half in *@bio. If the second bio is still too
  * big it will be split by a recursive call to this function. Since this
- * function may allocate a new bio from @q->bio_split, it is the responsibility
- * of the caller to ensure that @q is only released after processing of the
+ * function may allocate a new bio from @bio->bi_disk->queue->bio_split, it is
+ * the responsibility of the caller to ensure that
+ * @bio->bi_disk->queue->bio_split is only released after processing of the
  * split bio has finished.
  */
-void __blk_queue_split(struct request_queue *q, struct bio **bio,
-                unsigned int *nr_segs)
+void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 {
+        struct request_queue *q = (*bio)->bi_disk->queue;
         struct bio *split = NULL;
 
         switch (bio_op(*bio)) {
@@ -345,20 +345,19 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 
 /**
  * blk_queue_split - split a bio and submit the second half
- * @q:   [in] request queue pointer
  * @bio: [in, out] bio to be split
 *
 * Split a bio into two bios, chains the two bios, submit the second half and
 * store a pointer to the first half in *@bio. Since this function may allocate
- * a new bio from @q->bio_split, it is the responsibility of the caller to
- * ensure that @q is only released after processing of the split bio has
- * finished.
+ * a new bio from @bio->bi_disk->queue->bio_split, it is the responsibility of
+ * the caller to ensure that @bio->bi_disk->queue->bio_split is only released
+ * after processing of the split bio has finished.
 */
-void blk_queue_split(struct request_queue *q, struct bio **bio)
+void blk_queue_split(struct bio **bio)
 {
         unsigned int nr_segs;
 
-        __blk_queue_split(q, bio, &nr_segs);
+        __blk_queue_split(bio, &nr_segs);
 }
 EXPORT_SYMBOL(blk_queue_split);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 65e0846fd06519..dbadb7defd618a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2166,7 +2166,7 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
         blk_status_t ret;
 
         blk_queue_bounce(q, &bio);
-        __blk_queue_split(q, &bio, &nr_segs);
+        __blk_queue_split(&bio, &nr_segs);
 
         if (!bio_integrity_prep(bio))
                 goto queue_exit;
diff --git a/block/blk.h b/block/blk.h
index 0184a31fe4dfaf..0114fd92c8a05b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -220,8 +220,7 @@ ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
 ssize_t part_timeout_store(struct device *, struct device_attribute *,
                                 const char *, size_t);
 
-void __blk_queue_split(struct request_queue *q, struct bio **bio,
-                unsigned int *nr_segs);
+void __blk_queue_split(struct bio **bio, unsigned int *nr_segs);
 int ll_back_merge_fn(struct request *req, struct bio *bio,
                 unsigned int nr_segs);
 int ll_front_merge_fn(struct request *req, struct bio *bio,
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 3f09b2ab977822..9368680474223a 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -1598,7 +1598,7 @@ blk_qc_t drbd_make_request(struct request_queue *q, struct bio *bio)
         struct drbd_device *device = bio->bi_disk->private_data;
         unsigned long start_jif;
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         start_jif = jiffies;
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 27a33adc41e487..29b0c62dc86c1f 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2434,7 +2434,7 @@ static blk_qc_t pkt_make_request(struct request_queue *q, struct bio *bio)
         char b[BDEVNAME_SIZE];
         struct bio *split;
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         pd = q->queuedata;
         if (!pd) {
diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c
index 5a1d1d137c7248..76cc584aa76346 100644
--- a/drivers/block/ps3vram.c
+++ b/drivers/block/ps3vram.c
@@ -593,7 +593,7 @@ static blk_qc_t ps3vram_make_request(struct request_queue *q, struct bio *bio)
 
         dev_dbg(&dev->core, "%s\n", __func__);
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         spin_lock_irq(&priv->lock);
         busy = !bio_list_empty(&priv->list);
diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c
index 6a4d8d26e32cbd..1d52bc73dd0f82 100644
--- a/drivers/block/rsxx/dev.c
+++ b/drivers/block/rsxx/dev.c
@@ -123,7 +123,7 @@ static blk_qc_t rsxx_make_request(struct request_queue *q, struct bio *bio)
         struct rsxx_bio_meta *bio_meta;
         blk_status_t st = BLK_STS_IOERR;
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         might_sleep();
diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index 5498f1cf36b3fe..3b89c07f9e9d6e 100644
--- a/drivers/block/umem.c
+++ b/drivers/block/umem.c
@@ -527,7 +527,7 @@ static blk_qc_t mm_make_request(struct request_queue *q, struct bio *bio)
                  (unsigned long long)bio->bi_iter.bi_sector,
                  bio->bi_iter.bi_size);
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         spin_lock_irq(&card->lock);
         *card->biotail = bio;
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 6e677ff62cc969..7a4a1b7a941d2c 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -63,7 +63,7 @@ static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio)
          * constraint. Writes can be of arbitrary size.
          */
         if (bio_data_dir(bio) == READ) {
-                blk_queue_split(q, &bio);
+                blk_queue_split(&bio);
                 pblk_submit_read(pblk, bio);
         } else {
                 /* Prevent deadlock in the case of a modest LUN configuration
@@ -71,7 +71,7 @@ static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio)
                  * leaves at least 256KB available for user I/O.
                  */
                 if (pblk_get_secs(bio) > pblk_rl_max_io(&pblk->rl))
-                        blk_queue_split(q, &bio);
+                        blk_queue_split(&bio);
 
                 pblk_write_to_cache(pblk, bio, PBLK_IOTYPE_USER);
         }
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c8d91f271c272e..5aa7a604f4cbc5 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1776,7 +1776,7 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
          */
         if (current->bio_list) {
                 if (is_abnormal_io(bio))
-                        blk_queue_split(md->queue, &bio);
+                        blk_queue_split(&bio);
                 else
                         dm_queue_split(md, ti, &bio);
         }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index f567f536b529bd..ff20868e5e1b98 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -475,7 +475,7 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
                 return BLK_QC_T_NONE;
         }
 
-        blk_queue_split(q, &bio);
+        blk_queue_split(&bio);
 
         if (mddev == NULL || mddev->pers == NULL) {
                 bio_io_error(bio);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index da78e499947a9f..5a5205ea570a77 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -301,12 +301,11 @@ static blk_qc_t nvme_ns_head_make_request(struct request_queue *q,
         int srcu_idx;
 
         /*
-         * The namespace might be going away and the bio might
-         * be moved to a different queue via blk_steal_bios(),
-         * so we need to use the bio_split pool from the original
-         * queue to allocate the bvecs from.
+         * The namespace might be going away and the bio might be moved to a
+         * different queue via blk_steal_bios(), so we need to use the bio_split
+         * pool from the original queue to allocate the bvecs from.
*/ - blk_queue_split(q, &bio); + blk_queue_split(&bio); srcu_idx = srcu_read_lock(&head->srcu); ns = nvme_find_path(head); diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c index 384edffe5cb4ae..dfe21eb7276021 100644 --- a/drivers/s390/block/dcssblk.c +++ b/drivers/s390/block/dcssblk.c @@ -878,7 +878,7 @@ dcssblk_make_request(struct request_queue *q, struct bio *bio) unsigned long source_addr; unsigned long bytes_done; - blk_queue_split(q, &bio); + blk_queue_split(&bio); bytes_done = 0; dev_info = bio->bi_disk->private_data; diff --git a/drivers/s390/block/xpram.c b/drivers/s390/block/xpram.c index 45a04daec89ed9..5456f0ad5a40a4 100644 --- a/drivers/s390/block/xpram.c +++ b/drivers/s390/block/xpram.c @@ -191,7 +191,7 @@ static blk_qc_t xpram_make_request(struct request_queue *q, struct bio *bio) unsigned long page_addr; unsigned long bytes; - blk_queue_split(q, &bio); + blk_queue_split(&bio); if ((bio->bi_iter.bi_sector & 7) != 0 || (bio->bi_iter.bi_size & 4095) != 0) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 15497782c17656..d002defc178934 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -871,7 +871,7 @@ extern void blk_rq_unprep_clone(struct request *rq); extern blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq); extern int blk_rq_append_bio(struct request *rq, struct bio **bio); -extern void blk_queue_split(struct request_queue *, struct bio **); +extern void blk_queue_split(struct bio **); extern int scsi_verify_blk_ioctl(struct block_device *, unsigned int); extern int scsi_cmd_blk_ioctl(struct block_device *, fmode_t, unsigned int, void __user *); From patchwork Wed Jul 1 08:59:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635749 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5D51A739 for ; Wed, 1 Jul 2020 09:00:21 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3B8772074D for ; Wed, 1 Jul 2020 09:00:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="Z/F5JgSB" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3B8772074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id 95E7F114726CB; Wed, 1 Jul 2020 02:00:19 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 3C67A1145BDE1 for ; Wed, 1 Jul 2020 02:00:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; 
h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=tMUnz3Lcybn8dQofQ1ryTzIcUAQ8+R8Yiszv/2NG4rI=; b=Z/F5JgSBdsAyHAjWTFzaMp6g/q oUzASIPvrJSJhS8H/JfA1QH8G6IT7wmV5wLn36QX/aHDo/hsxYVlgCl6qoTtD6nZ6IvOBUe/JnXQt goBpJk9KmMg/PLs2u8o5kRTqrjTKbPsqjWXOy6JnNqrkKOfwRlEIIUz20TquRejre+i/DVwTPiltW jHYesHi+fT0Dd0T/0ANjA7isBclM9yDdn8EbcZjL2aV1xUSGZFyB4MPPtssG0ekb0Vesqf9t2hN5D 7E5atGWNGy/7VUYWuHLULEFzZmWMZQO1JDyBMSdbTihaE6Swz+QOzO9Y+CnVIeWgK7UHJ4QLdU4Me +zEre2Dw==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbA-0008Ea-Rh; Wed, 01 Jul 2020 09:00:09 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 13/20] block: tidy up a warning in bio_check_ro Date: Wed, 1 Jul 2020 10:59:40 +0200 Message-Id: <20200701085947.3354405-14-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: X3OS5NLYLWCO5JI5GHAGPTDKXXVOJTT3 X-Message-ID-Hash: X3OS5NLYLWCO5JI5GHAGPTDKXXVOJTT3 X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: The "generic_make_request: " prefix has no value, and will soon become stale. 
Signed-off-by: Christoph Hellwig --- block/blk-core.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 76cfd5709f66cd..95dca74534ff73 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -869,8 +869,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part) return false; WARN_ONCE(1, - "generic_make_request: Trying to write " - "to read-only block-device %s (partno %d)\n", + "Trying to write to read-only block-device %s (partno %d)\n", bio_devname(bio, b), part->partno); /* Older lvm-tools actually trigger this */ return false; From patchwork Wed Jul 1 08:59:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635751 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BF5C2739 for ; Wed, 1 Jul 2020 09:00:22 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9D7F22074D for ; Wed, 1 Jul 2020 09:00:22 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="pGlUJrbO" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9D7F22074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id ADCD7114726CF; Wed, 1 Jul 2020 02:00:21 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 383111145BDE1 for ; Wed, 1 Jul 2020 02:00:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=llOojDD3lqJfk9XWeWL9pqlJf45UocCmu2uEUE0dVqA=; b=pGlUJrbOxh0JwXeTwQoJpYf2F/ Wj1CCsNV+D84p9IQoC65j7VZnlNqsrYsLYThDSTFAmKMsEUR9St/lgE0ZmSPIQQCuP7dgvSl2ZuCN qmHjB/58qr1vtBaiVphQaEP9vrD2GonVUZUsFfMDwdCYWZmTuiqkDgguyEU9GJPldWF1NUiFWd5Tm PuuqBh6fEuRMCrVHN8SIXYmkZ8bhrPxFnV6RVSwro1PKig0pS+Kd1OL4Q1S+1+uAY1tH2+UawdfXw RbbQhCUERQ3oGJvZGCbwR4bKQ7UF6Akd/eN6H9X2jHLGH0eeFYrqjDeKzOU7MKgBYWr9GW+QGURtd 0nPrWjmA==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbC-0008Ex-2b; Wed, 01 Jul 2020 09:00:10 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 14/20] block: remove the NULL queue check in generic_make_request_checks Date: Wed, 1 Jul 2020 10:59:41 +0200 Message-Id: <20200701085947.3354405-15-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: 
<20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: HF2TIBBUA6B7P5BPB2O5OQKBDKPGGPOO X-Message-ID-Hash: HF2TIBBUA6B7P5BPB2O5OQKBDKPGGPOO X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: All registered disks must have a valid queue pointer, so don't bother to log a warning for that case. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 95dca74534ff73..37435d0d433564 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -973,22 +973,12 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q, static noinline_for_stack bool generic_make_request_checks(struct bio *bio) { - struct request_queue *q; + struct request_queue *q = bio->bi_disk->queue; int nr_sectors = bio_sectors(bio); blk_status_t status = BLK_STS_IOERR; - char b[BDEVNAME_SIZE]; might_sleep(); - q = bio->bi_disk->queue; - if (unlikely(!q)) { - printk(KERN_ERR - "generic_make_request: Trying to access " - "nonexistent block-device %s (%Lu)\n", - bio_devname(bio, b), (long long)bio->bi_iter.bi_sector); - goto end_io; - } - /* * For a REQ_NOWAIT based request, return -EOPNOTSUPP * if queue is not a request based queue.
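
The check removed above is safe to drop because a bio can only enter generic_make_request() through a registered gendisk, and a driver wires up disk->queue before calling add_disk(). A rough sketch of that setup order, using a made-up driver and the single-argument blk_alloc_queue() this series converges on (illustration only, not code from any patch in the series):

/* hypothetical example driver: shows the setup order only */
static int sketch_probe(struct sketch_device *dev)
{
	dev->queue = blk_alloc_queue(NUMA_NO_NODE);	/* the queue exists first */
	if (!dev->queue)
		return -ENOMEM;

	dev->disk = alloc_disk(1);
	if (!dev->disk)
		return -ENOMEM;				/* error unwinding omitted */

	dev->disk->queue = dev->queue;			/* queue assigned before ... */
	dev->disk->private_data = dev;
	add_disk(dev->disk);				/* ... the disk becomes visible */
	return 0;
}

By the time generic_make_request_checks() sees a bio, bio->bi_disk->queue is therefore non-NULL for any disk registered this way.
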
From patchwork Wed Jul 1 08:59:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635753 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6E18D739 for ; Wed, 1 Jul 2020 09:00:24 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4C0EA2074D for ; Wed, 1 Jul 2020 09:00:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="Xqc61nrv" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4C0EA2074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id C36E2114726D2; Wed, 1 Jul 2020 02:00:22 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 735131145BDE1 for ; Wed, 1 Jul 2020 02:00:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MVPFkWqT27H+EK/gr/8Zgz6Ls+wUfbhG+l5NiS06UDM=; b=Xqc61nrvMsR15YDOdMP6gtz3OE dp+q2eYG614WsXa61aoCzCbPCQ7EVEVjOMGDkZ7NwFsWva1CU/EvlYVng9fHqdWz36bmP6jHTHMtm 9HwoHIhv4HRqd58L1tsbK2MdDGVgvpVUQYejP9ypWrETeNZsm35WLbdn6x/AvJVm+VOSzNnxf8fIp KbxYtgEm6H6dIO6eulryJM42lvmAA4qbqnqGgGosrBmekGRp2FABgjxHfMQ4AZJpDtEP0NdKqhokl jx62VheF1t1OKHUt+hiBH3kaQ5UWqD0m6Z3qmOJFA/qujzu0C+QlUtYPS+cJEhcj1NpJAxoejnICo sgM4n4Xg==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbD-0008FC-9s; Wed, 01 Jul 2020 09:00:11 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 15/20] block: remove the nr_sectors variable in generic_make_request_checks Date: Wed, 1 Jul 2020 10:59:42 +0200 Message-Id: <20200701085947.3354405-16-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. 
See http://www.infradead.org/rpr.html Message-ID-Hash: 7VT4OA6I53YW43A2UBOBQLRDIUPCUBKC X-Message-ID-Hash: 7VT4OA6I53YW43A2UBOBQLRDIUPCUBKC X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: The variable is only used once, so just open code the bio_sector() there. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 37435d0d433564..28f60985dc75cc 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -974,7 +974,6 @@ static noinline_for_stack bool generic_make_request_checks(struct bio *bio) { struct request_queue *q = bio->bi_disk->queue; - int nr_sectors = bio_sectors(bio); blk_status_t status = BLK_STS_IOERR; might_sleep(); @@ -1007,7 +1006,7 @@ generic_make_request_checks(struct bio *bio) if (op_is_flush(bio->bi_opf) && !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) { bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA); - if (!nr_sectors) { + if (!bio_sectors(bio)) { status = BLK_STS_OK; goto end_io; } From patchwork Wed Jul 1 08:59:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635755 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CE18E92A for ; Wed, 1 Jul 2020 09:00:25 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AC6D7206C3 for ; Wed, 1 Jul 2020 09:00:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="DRlcLd3I" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AC6D7206C3 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id D858111460B25; Wed, 1 Jul 2020 02:00:23 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id CC3DB11460B27 for ; Wed, 1 Jul 2020 02:00:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; 
h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=URaf5h3Vltw7cp+VhmMz6d5PfFEfEIzC9ReUXZBzNZE=; b=DRlcLd3IOwNR11GB9y9oWJzbON TdS4j8/1gubwkNHgXnKX66R+etr6LkCNqYdf9zW7bXADLXgk99zUe2k88vBiyUg18XQYoamFpc0zX 74IyQtn/Kdq07ogweLVDugS8auKIVCDBPbXc0HM5QJCCCCCcGOINEhKnVXkWNQzzDl2Nxfq/CbBI2 3i5g4zMGKgSshTKdcURCrSA0SM+wRiWIxr3xltMVXg8own+uJkJYGzGnUet/aC4GXwVWJCpb6acMi YwBiFxQJndaDslyn5JszVgFMExyy6PUtoX/YkIJlQrxQeVvWe/vm9yJFspgUo1eAsgbvtZZh/lPpT NudsvCRg==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbF-0008G7-11; Wed, 01 Jul 2020 09:00:14 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 16/20] block: move ->make_request_fn to struct block_device_operations Date: Wed, 1 Jul 2020 10:59:43 +0200 Message-Id: <20200701085947.3354405-17-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: 65ZHP5MROSE6A4UEM367IYVGWVRBIMF2 X-Message-ID-Hash: 65ZHP5MROSE6A4UEM367IYVGWVRBIMF2 X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: The make_request_fn is a little weird in that it sits directly in struct request_queue instead of an operation vector. Replace it with a block_device_operations method called submit_bio (which describes much better what it does). Also remove the request_queue argument to it, as the queue can be derived pretty trivially from the bio. 
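
To make the converted interface concrete, a bio-based driver now looks roughly like the sketch below (hypothetical names, modeled on the drivers converted in this patch; not itself part of the patch):

/* hypothetical sketch: the general shape of a bio-based driver after this change */
static blk_qc_t sketch_submit_bio(struct bio *bio)
{
	/* no request_queue argument any more: everything is derived from the bio */
	struct sketch_device *dev = bio->bi_disk->private_data;
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter)
		sketch_do_io(dev, &bvec, iter.bi_sector);	/* hypothetical helper */

	bio_endio(bio);
	return BLK_QC_T_NONE;
}

static const struct block_device_operations sketch_ops = {
	.owner		= THIS_MODULE,
	.submit_bio	= sketch_submit_bio,	/* dispatch now goes through the disk's fops */
};

The driver allocates its queue with plain blk_alloc_queue(NUMA_NO_NODE) and points disk->fops at sketch_ops; blk-mq based drivers simply leave .submit_bio unset and keep being dispatched through blk_mq_submit_bio().
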
Signed-off-by: Christoph Hellwig Acked-by: Dan Williams Acked-by: Coly Li --- Documentation/block/biodoc.rst | 2 +- .../block/writeback_cache_control.rst | 2 +- arch/m68k/emu/nfblock.c | 5 +- arch/xtensa/platforms/iss/simdisk.c | 5 +- block/blk-cgroup.c | 2 +- block/blk-core.c | 53 +++++++------------ block/blk-mq.c | 10 ++-- block/blk.h | 2 - drivers/block/brd.c | 5 +- drivers/block/drbd/drbd_int.h | 2 +- drivers/block/drbd/drbd_main.c | 9 ++-- drivers/block/drbd/drbd_req.c | 2 +- drivers/block/null_blk_main.c | 17 ++++-- drivers/block/pktcdvd.c | 11 ++-- drivers/block/ps3vram.c | 15 +++--- drivers/block/rsxx/dev.c | 7 ++- drivers/block/umem.c | 5 +- drivers/block/zram/zram_drv.c | 11 ++-- drivers/lightnvm/core.c | 8 +-- drivers/lightnvm/pblk-init.c | 12 +++-- drivers/md/bcache/request.c | 4 +- drivers/md/bcache/request.h | 4 +- drivers/md/bcache/super.c | 23 +++++--- drivers/md/dm.c | 23 ++++---- drivers/md/md.c | 5 +- drivers/nvdimm/blk.c | 5 +- drivers/nvdimm/btt.c | 5 +- drivers/nvdimm/pmem.c | 5 +- drivers/nvme/host/core.c | 1 + drivers/nvme/host/multipath.c | 5 +- drivers/nvme/host/nvme.h | 1 + drivers/s390/block/dcssblk.c | 9 ++-- drivers/s390/block/xpram.c | 6 +-- include/linux/blk-mq.h | 2 +- include/linux/blkdev.h | 7 +-- include/linux/lightnvm.h | 3 +- 36 files changed, 153 insertions(+), 140 deletions(-) diff --git a/Documentation/block/biodoc.rst b/Documentation/block/biodoc.rst index b964796ec9c780..267384159bf793 100644 --- a/Documentation/block/biodoc.rst +++ b/Documentation/block/biodoc.rst @@ -1036,7 +1036,7 @@ Now the generic block layer performs partition-remapping early and thus provides drivers with a sector number relative to whole device, rather than having to take partition number into account in order to arrive at the true sector number. The routine blk_partition_remap() is invoked by -generic_make_request even before invoking the queue specific make_request_fn, +generic_make_request even before invoking the queue specific ->submit_bio, so the i/o scheduler also gets to operate on whole disk sector numbers. This should typically not require changes to block drivers, it just never gets to invoke its own partition sector offset calculations since all bios diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst index 2c752c57c14c62..b208488d0aae85 100644 --- a/Documentation/block/writeback_cache_control.rst +++ b/Documentation/block/writeback_cache_control.rst @@ -47,7 +47,7 @@ the Forced Unit Access is implemented. The REQ_PREFLUSH and REQ_FUA flags may both be set on a single bio. 
-Implementation details for make_request_fn based block drivers +Implementation details for bio based block drivers -------------------------------------------------------------- These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c index 87e8b1700acd28..92d26c81244134 100644 --- a/arch/m68k/emu/nfblock.c +++ b/arch/m68k/emu/nfblock.c @@ -59,7 +59,7 @@ struct nfhd_device { struct gendisk *disk; }; -static blk_qc_t nfhd_make_request(struct request_queue *queue, struct bio *bio) +static blk_qc_t nfhd_submit_bio(struct bio *bio) { struct nfhd_device *dev = bio->bi_disk->private_data; struct bio_vec bvec; @@ -93,6 +93,7 @@ static int nfhd_getgeo(struct block_device *bdev, struct hd_geometry *geo) static const struct block_device_operations nfhd_ops = { .owner = THIS_MODULE, + .submit_bio = nfhd_submit_bio, .getgeo = nfhd_getgeo, }; @@ -118,7 +119,7 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 bsize) dev->bsize = bsize; dev->bshift = ffs(bsize) - 10; - dev->queue = blk_alloc_queue(nfhd_make_request, NUMA_NO_NODE); + dev->queue = blk_alloc_queue(NUMA_NO_NODE); if (dev->queue == NULL) goto free_dev; diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c index 31b5020077a059..5107140dbb7edc 100644 --- a/arch/xtensa/platforms/iss/simdisk.c +++ b/arch/xtensa/platforms/iss/simdisk.c @@ -101,7 +101,7 @@ static void simdisk_transfer(struct simdisk *dev, unsigned long sector, spin_unlock(&dev->lock); } -static blk_qc_t simdisk_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t simdisk_submit_bio(struct bio *bio) { struct simdisk *dev = bio->bi_disk->private_data; struct bio_vec bvec; @@ -144,6 +144,7 @@ static void simdisk_release(struct gendisk *disk, fmode_t mode) static const struct block_device_operations simdisk_ops = { .owner = THIS_MODULE, + .submit_bio = simdisk_submit_bio, .open = simdisk_open, .release = simdisk_release, }; @@ -267,7 +268,7 @@ static int __init simdisk_setup(struct simdisk *dev, int which, spin_lock_init(&dev->lock); dev->users = 0; - dev->queue = blk_alloc_queue(simdisk_make_request, NUMA_NO_NODE); + dev->queue = blk_alloc_queue(NUMA_NO_NODE); if (dev->queue == NULL) { pr_err("blk_alloc_queue failed\n"); goto out_alloc_queue; diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 1ce94afc03bcfd..bc1df69e371c21 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -1012,7 +1012,7 @@ static int blkcg_css_online(struct cgroup_subsys_state *css) * blkcg_init_queue - initialize blkcg part of request queue * @q: request_queue to initialize * - * Called from __blk_alloc_queue(). Responsible for initializing blkcg + * Called from blk_alloc_queue(). Responsible for initializing blkcg * part of new request_queue @q. * * RETURNS: diff --git a/block/blk-core.c b/block/blk-core.c index 28f60985dc75cc..cb07a726dd7117 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -283,7 +283,7 @@ EXPORT_SYMBOL(blk_dump_rq_flags); * A block device may call blk_sync_queue to ensure that any * such activity is cancelled, thus allowing it to release resources * that the callbacks might use. The caller must already have made sure - * that its ->make_request_fn will not re-add plugging prior to calling + * that its ->submit_bio will not re-add plugging prior to calling * this function. 
* * This function does not cancel any asynchronous activity arising @@ -510,7 +510,7 @@ static void blk_timeout_work(struct work_struct *work) { } -struct request_queue *__blk_alloc_queue(int node_id) +struct request_queue *blk_alloc_queue(int node_id) { struct request_queue *q; int ret; @@ -575,6 +575,7 @@ struct request_queue *__blk_alloc_queue(int node_id) blk_queue_dma_alignment(q, 511); blk_set_default_limits(&q->limits); + q->nr_requests = BLKDEV_MAX_RQ; return q; @@ -592,21 +593,6 @@ struct request_queue *__blk_alloc_queue(int node_id) kmem_cache_free(blk_requestq_cachep, q); return NULL; } - -struct request_queue *blk_alloc_queue(make_request_fn make_request, int node_id) -{ - struct request_queue *q; - - if (WARN_ON_ONCE(!make_request)) - return NULL; - - q = __blk_alloc_queue(node_id); - if (!q) - return NULL; - q->make_request_fn = make_request; - q->nr_requests = BLKDEV_MAX_RQ; - return q; -} EXPORT_SYMBOL(blk_alloc_queue); /** @@ -1088,15 +1074,15 @@ generic_make_request_checks(struct bio *bio) static blk_qc_t do_make_request(struct bio *bio) { - struct request_queue *q = bio->bi_disk->queue; + struct gendisk *disk = bio->bi_disk; blk_qc_t ret = BLK_QC_T_NONE; if (blk_crypto_bio_prep(&bio)) { - if (!q->make_request_fn) - return blk_mq_make_request(q, bio); - ret = q->make_request_fn(q, bio); + if (!disk->fops->submit_bio) + return blk_mq_submit_bio(bio); + ret = disk->fops->submit_bio(bio); } - blk_queue_exit(q); + blk_queue_exit(disk->queue); return ret; } @@ -1113,10 +1099,9 @@ blk_qc_t generic_make_request(struct bio *bio) { /* * bio_list_on_stack[0] contains bios submitted by the current - * make_request_fn. - * bio_list_on_stack[1] contains bios that were submitted before - * the current make_request_fn, but that haven't been processed - * yet. + * ->submit_bio. + * bio_list_on_stack[1] contains bios that were submitted before the + * current ->submit_bio_bio, but that haven't been processed yet. */ struct bio_list bio_list_on_stack[2]; blk_qc_t ret = BLK_QC_T_NONE; @@ -1125,10 +1110,10 @@ blk_qc_t generic_make_request(struct bio *bio) goto out; /* - * We only want one ->make_request_fn to be active at a time, else + * We only want one ->submit_bio to be active at a time, else * stack usage with stacked devices could be a problem. So use * current->bio_list to keep a list of requests submited by a - * make_request_fn function. current->bio_list is also used as a + * ->submit_bio method. current->bio_list is also used as a * flag to say if generic_make_request is currently active in this * task or not. If it is NULL, then no make_request is active. If * it is non-NULL, then a make_request is active, and new requests @@ -1146,12 +1131,12 @@ blk_qc_t generic_make_request(struct bio *bio) * We pretend that we have just taken it off a longer list, so * we assign bio_list to a pointer to the bio_list_on_stack, * thus initialising the bio_list of new bios to be - * added. ->make_request() may indeed add some more bios + * added. ->submit_bio() may indeed add some more bios * through a recursive call to generic_make_request. If it * did, we find a non-NULL value in bio_list and re-enter the loop * from the top. In this case we really did just take the bio * of the top of the list (no pretending) and so remove it from - * bio_list, and call into ->make_request() again. + * bio_list, and call into ->submit_bio() again. 
*/ BUG_ON(bio->bi_next); bio_list_init(&bio_list_on_stack[0]); @@ -1201,9 +1186,9 @@ EXPORT_SYMBOL(generic_make_request); */ blk_qc_t direct_make_request(struct bio *bio) { - struct request_queue *q = bio->bi_disk->queue; + struct gendisk *disk = bio->bi_disk; - if (WARN_ON_ONCE(q->make_request_fn)) { + if (WARN_ON_ONCE(!disk->queue->mq_ops)) { bio_io_error(bio); return BLK_QC_T_NONE; } @@ -1212,10 +1197,10 @@ blk_qc_t direct_make_request(struct bio *bio) if (unlikely(bio_queue_enter(bio))) return BLK_QC_T_NONE; if (!blk_crypto_bio_prep(&bio)) { - blk_queue_exit(q); + blk_queue_exit(disk->queue); return BLK_QC_T_NONE; } - return blk_mq_make_request(q, bio); + return blk_mq_submit_bio(bio); } EXPORT_SYMBOL_GPL(direct_make_request); diff --git a/block/blk-mq.c b/block/blk-mq.c index dbadb7defd618a..948987e9b6ab3e 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2136,8 +2136,7 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq) } /** - * blk_mq_make_request - Create and send a request to block device. - * @q: Request queue pointer. + * blk_mq_submit_bio - Create and send a request to block device. * @bio: Bio pointer. * * Builds up a request structure from @q and @bio and send to the device. The @@ -2151,8 +2150,9 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq) * * Returns: Request queue cookie. */ -blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) +blk_qc_t blk_mq_submit_bio(struct bio *bio) { + struct request_queue *q = bio->bi_disk->queue; const int is_sync = op_is_sync(bio->bi_opf); const int is_flush_fua = op_is_flush(bio->bi_opf); struct blk_mq_alloc_data data = { @@ -2277,7 +2277,7 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) blk_queue_exit(q); return BLK_QC_T_NONE; } -EXPORT_SYMBOL_GPL(blk_mq_make_request); /* only for request based dm */ +EXPORT_SYMBOL_GPL(blk_mq_submit_bio); /* only for request based dm */ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, unsigned int hctx_idx) @@ -3017,7 +3017,7 @@ struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set, { struct request_queue *uninit_q, *q; - uninit_q = __blk_alloc_queue(set->numa_node); + uninit_q = blk_alloc_queue(set->numa_node); if (!uninit_q) return ERR_PTR(-ENOMEM); uninit_q->queuedata = queuedata; diff --git a/block/blk.h b/block/blk.h index 0114fd92c8a05b..9dcf51c94096a7 100644 --- a/block/blk.h +++ b/block/blk.h @@ -419,8 +419,6 @@ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size) #endif } -struct request_queue *__blk_alloc_queue(int node_id); - int bio_add_hw_page(struct request_queue *q, struct bio *bio, struct page *page, unsigned int len, unsigned int offset, unsigned int max_sectors, bool *same_page); diff --git a/drivers/block/brd.c b/drivers/block/brd.c index 2fb25c348d531b..2723a70eb85593 100644 --- a/drivers/block/brd.c +++ b/drivers/block/brd.c @@ -282,7 +282,7 @@ static int brd_do_bvec(struct brd_device *brd, struct page *page, return err; } -static blk_qc_t brd_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t brd_submit_bio(struct bio *bio) { struct brd_device *brd = bio->bi_disk->private_data; struct bio_vec bvec; @@ -330,6 +330,7 @@ static int brd_rw_page(struct block_device *bdev, sector_t sector, static const struct block_device_operations brd_fops = { .owner = THIS_MODULE, + .submit_bio = brd_submit_bio, .rw_page = brd_rw_page, }; @@ -381,7 +382,7 @@ static struct brd_device *brd_alloc(int i) 
spin_lock_init(&brd->brd_lock); INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC); - brd->brd_queue = blk_alloc_queue(brd_make_request, NUMA_NO_NODE); + brd->brd_queue = blk_alloc_queue(NUMA_NO_NODE); if (!brd->brd_queue) goto out_free_dev; diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h index 33d0831c99b613..0327408da79c7a 100644 --- a/drivers/block/drbd/drbd_int.h +++ b/drivers/block/drbd/drbd_int.h @@ -1451,7 +1451,7 @@ extern void conn_free_crypto(struct drbd_connection *connection); /* drbd_req */ extern void do_submit(struct work_struct *ws); extern void __drbd_make_request(struct drbd_device *, struct bio *, unsigned long); -extern blk_qc_t drbd_make_request(struct request_queue *q, struct bio *bio); +extern blk_qc_t drbd_submit_bio(struct bio *bio); extern int drbd_read_remote(struct drbd_device *device, struct drbd_request *req); extern int is_valid_ar_handle(struct drbd_request *, sector_t); diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 26f4e0aa7393b4..2b05de0896e282 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -132,9 +132,10 @@ wait_queue_head_t drbd_pp_wait; DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5); static const struct block_device_operations drbd_ops = { - .owner = THIS_MODULE, - .open = drbd_open, - .release = drbd_release, + .owner = THIS_MODULE, + .submit_bio = drbd_submit_bio, + .open = drbd_open, + .release = drbd_release, }; struct bio *bio_alloc_drbd(gfp_t gfp_mask) @@ -2801,7 +2802,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig drbd_init_set_defaults(device); - q = blk_alloc_queue(drbd_make_request, NUMA_NO_NODE); + q = blk_alloc_queue(NUMA_NO_NODE); if (!q) goto out_no_q; device->rq_queue = q; diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c index 9368680474223a..c7e14c9a6e5f83 100644 --- a/drivers/block/drbd/drbd_req.c +++ b/drivers/block/drbd/drbd_req.c @@ -1593,7 +1593,7 @@ void do_submit(struct work_struct *ws) } } -blk_qc_t drbd_make_request(struct request_queue *q, struct bio *bio) +blk_qc_t drbd_submit_bio(struct bio *bio) { struct drbd_device *device = bio->bi_disk->private_data; unsigned long start_jif; diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 93ce0a00b2ae01..907c6858aec0c3 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -1388,7 +1388,7 @@ static struct nullb_queue *nullb_to_queue(struct nullb *nullb) return &nullb->queues[index]; } -static blk_qc_t null_queue_bio(struct request_queue *q, struct bio *bio) +static blk_qc_t null_submit_bio(struct bio *bio) { sector_t sector = bio->bi_iter.bi_sector; sector_t nr_sectors = bio_sectors(bio); @@ -1575,7 +1575,13 @@ static void null_config_discard(struct nullb *nullb) blk_queue_flag_set(QUEUE_FLAG_DISCARD, nullb->q); } -static const struct block_device_operations null_ops = { +static const struct block_device_operations null_bio_ops = { + .owner = THIS_MODULE, + .submit_bio = null_submit_bio, + .report_zones = null_report_zones, +}; + +static const struct block_device_operations null_rq_ops = { .owner = THIS_MODULE, .report_zones = null_report_zones, }; @@ -1647,7 +1653,10 @@ static int null_gendisk_register(struct nullb *nullb) disk->flags |= GENHD_FL_EXT_DEVT | GENHD_FL_SUPPRESS_PARTITION_INFO; disk->major = null_major; disk->first_minor = nullb->index; - disk->fops = &null_ops; + if (queue_is_mq(nullb->q)) + disk->fops = &null_rq_ops; + else + disk->fops = 
&null_bio_ops; disk->private_data = nullb; disk->queue = nullb->q; strncpy(disk->disk_name, nullb->disk_name, DISK_NAME_LEN); @@ -1792,7 +1801,7 @@ static int null_add_dev(struct nullb_device *dev) goto out_cleanup_tags; } } else if (dev->queue_mode == NULL_Q_BIO) { - nullb->q = blk_alloc_queue(null_queue_bio, dev->home_node); + nullb->q = blk_alloc_queue(dev->home_node); if (!nullb->q) { rv = -ENOMEM; goto out_cleanup_queues; diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c index 29b0c62dc86c1f..5588bd4cd267e8 100644 --- a/drivers/block/pktcdvd.c +++ b/drivers/block/pktcdvd.c @@ -36,7 +36,7 @@ * block device, assembling the pieces to full packets and queuing them to the * packet I/O scheduler. * - * At the top layer there is a custom make_request_fn function that forwards + * At the top layer there is a custom ->submit_bio function that forwards * read requests directly to the iosched queue and puts write requests in the * unaligned write queue. A kernel thread performs the necessary read * gathering to convert the unaligned writes to aligned writes and then feeds @@ -2428,7 +2428,7 @@ static void pkt_make_request_write(struct request_queue *q, struct bio *bio) } } -static blk_qc_t pkt_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t pkt_submit_bio(struct bio *bio) { struct pktcdvd_device *pd; char b[BDEVNAME_SIZE]; @@ -2436,7 +2436,7 @@ static blk_qc_t pkt_make_request(struct request_queue *q, struct bio *bio) blk_queue_split(&bio); - pd = q->queuedata; + pd = bio->bi_disk->queue->queuedata; if (!pd) { pr_err("%s incorrect request queue\n", bio_devname(bio, b)); goto end_io; @@ -2480,7 +2480,7 @@ static blk_qc_t pkt_make_request(struct request_queue *q, struct bio *bio) split = bio; } - pkt_make_request_write(q, split); + pkt_make_request_write(bio->bi_disk->queue, split); } while (split != bio); return BLK_QC_T_NONE; @@ -2685,6 +2685,7 @@ static char *pkt_devnode(struct gendisk *disk, umode_t *mode) static const struct block_device_operations pktcdvd_ops = { .owner = THIS_MODULE, + .submit_bio = pkt_submit_bio, .open = pkt_open, .release = pkt_close, .ioctl = pkt_ioctl, @@ -2749,7 +2750,7 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev) disk->flags = GENHD_FL_REMOVABLE; strcpy(disk->disk_name, pd->name); disk->private_data = pd; - disk->queue = blk_alloc_queue(pkt_make_request, NUMA_NO_NODE); + disk->queue = blk_alloc_queue(NUMA_NO_NODE); if (!disk->queue) goto out_mem2; diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c index 76cc584aa76346..1088798c8dd0c9 100644 --- a/drivers/block/ps3vram.c +++ b/drivers/block/ps3vram.c @@ -90,12 +90,6 @@ struct ps3vram_priv { static int ps3vram_major; - -static const struct block_device_operations ps3vram_fops = { - .owner = THIS_MODULE, -}; - - #define DMA_NOTIFIER_HANDLE_BASE 0x66604200 /* first DMA notifier handle */ #define DMA_NOTIFIER_OFFSET_BASE 0x1000 /* first DMA notifier offset */ #define DMA_NOTIFIER_SIZE 0x40 @@ -585,7 +579,7 @@ static struct bio *ps3vram_do_bio(struct ps3_system_bus_device *dev, return next; } -static blk_qc_t ps3vram_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t ps3vram_submit_bio(struct bio *bio) { struct ps3_system_bus_device *dev = bio->bi_disk->private_data; struct ps3vram_priv *priv = ps3_system_bus_get_drvdata(dev); @@ -610,6 +604,11 @@ static blk_qc_t ps3vram_make_request(struct request_queue *q, struct bio *bio) return BLK_QC_T_NONE; } +static const struct block_device_operations ps3vram_fops = { + .owner = THIS_MODULE, + 
.submit_bio = ps3vram_submit_bio, +}; + static int ps3vram_probe(struct ps3_system_bus_device *dev) { struct ps3vram_priv *priv; @@ -737,7 +736,7 @@ static int ps3vram_probe(struct ps3_system_bus_device *dev) ps3vram_proc_init(dev); - queue = blk_alloc_queue(ps3vram_make_request, NUMA_NO_NODE); + queue = blk_alloc_queue(NUMA_NO_NODE); if (!queue) { dev_err(&dev->core, "blk_alloc_queue failed\n"); error = -ENOMEM; diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c index 1d52bc73dd0f82..edacefff6e355b 100644 --- a/drivers/block/rsxx/dev.c +++ b/drivers/block/rsxx/dev.c @@ -50,6 +50,8 @@ struct rsxx_bio_meta { static struct kmem_cache *bio_meta_pool; +static blk_qc_t rsxx_submit_bio(struct bio *bio); + /*----------------- Block Device Operations -----------------*/ static int rsxx_blkdev_ioctl(struct block_device *bdev, fmode_t mode, @@ -92,6 +94,7 @@ static int rsxx_getgeo(struct block_device *bdev, struct hd_geometry *geo) static const struct block_device_operations rsxx_fops = { .owner = THIS_MODULE, + .submit_bio = rsxx_submit_bio, .getgeo = rsxx_getgeo, .ioctl = rsxx_blkdev_ioctl, }; @@ -117,7 +120,7 @@ static void bio_dma_done_cb(struct rsxx_cardinfo *card, } } -static blk_qc_t rsxx_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t rsxx_submit_bio(struct bio *bio) { struct rsxx_cardinfo *card = bio->bi_disk->private_data; struct rsxx_bio_meta *bio_meta; @@ -233,7 +236,7 @@ int rsxx_setup_dev(struct rsxx_cardinfo *card) return -ENOMEM; } - card->queue = blk_alloc_queue(rsxx_make_request, NUMA_NO_NODE); + card->queue = blk_alloc_queue(NUMA_NO_NODE); if (!card->queue) { dev_err(CARD_TO_DEV(card), "Failed queue alloc\n"); unregister_blkdev(card->major, DRIVER_NAME); diff --git a/drivers/block/umem.c b/drivers/block/umem.c index 3b89c07f9e9d6e..2b95d7b33b9186 100644 --- a/drivers/block/umem.c +++ b/drivers/block/umem.c @@ -519,7 +519,7 @@ static int mm_check_plugged(struct cardinfo *card) return !!blk_check_plugged(mm_unplug, card, sizeof(struct blk_plug_cb)); } -static blk_qc_t mm_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t mm_submit_bio(struct bio *bio) { struct cardinfo *card = bio->bi_disk->private_data; @@ -779,6 +779,7 @@ static int mm_getgeo(struct block_device *bdev, struct hd_geometry *geo) static const struct block_device_operations mm_fops = { .owner = THIS_MODULE, + .submit_bio = mm_submit_bio, .getgeo = mm_getgeo, .revalidate_disk = mm_revalidate, }; @@ -886,7 +887,7 @@ static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) card->biotail = &card->bio; spin_lock_init(&card->lock); - card->queue = blk_alloc_queue(mm_make_request, NUMA_NO_NODE); + card->queue = blk_alloc_queue(NUMA_NO_NODE); if (!card->queue) goto failed_alloc; diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 0564e3f384089e..f9a57f147ee1e6 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -793,9 +793,9 @@ static void zram_sync_read(struct work_struct *work) } /* - * Block layer want one ->make_request_fn to be active at a time - * so if we use chained IO with parent IO in same context, - * it's a deadlock. To avoid, it, it uses worker thread context. + * Block layer want one ->submit_bio to be active at a time, so if we use + * chained IO with parent IO in same context, it's a deadlock. To avoid that, + * use a worker thread context. 
*/ static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec, unsigned long entry, struct bio *bio) @@ -1584,7 +1584,7 @@ static void __zram_make_request(struct zram *zram, struct bio *bio) /* * Handler function for all zram I/O requests. */ -static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio) +static blk_qc_t zram_submit_bio(struct bio *bio) { struct zram *zram = bio->bi_disk->private_data; @@ -1813,6 +1813,7 @@ static int zram_open(struct block_device *bdev, fmode_t mode) static const struct block_device_operations zram_devops = { .open = zram_open, + .submit_bio = zram_submit_bio, .swap_slot_free_notify = zram_slot_free_notify, .rw_page = zram_rw_page, .owner = THIS_MODULE @@ -1891,7 +1892,7 @@ static int zram_add(void) #ifdef CONFIG_ZRAM_WRITEBACK spin_lock_init(&zram->wb_limit_lock); #endif - queue = blk_alloc_queue(zram_make_request, NUMA_NO_NODE); + queue = blk_alloc_queue(NUMA_NO_NODE); if (!queue) { pr_err("Error allocating disk queue for device %d\n", device_id); diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index db38a68abb6c03..fe78bf0fdce545 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -236,10 +236,6 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, return tgt_dev; } -static const struct block_device_operations nvm_fops = { - .owner = THIS_MODULE, -}; - static struct nvm_tgt_type *__nvm_find_target_type(const char *name) { struct nvm_tgt_type *tt; @@ -380,7 +376,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) goto err_dev; } - tqueue = blk_alloc_queue(tt->make_rq, dev->q->node); + tqueue = blk_alloc_queue(dev->q->node); if (!tqueue) { ret = -ENOMEM; goto err_disk; @@ -390,7 +386,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) tdisk->flags = GENHD_FL_EXT_DEVT; tdisk->major = 0; tdisk->first_minor = 0; - tdisk->fops = &nvm_fops; + tdisk->fops = tt->bops; tdisk->queue = tqueue; targetdata = tt->init(tgt_dev, tdisk, create->flags); diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index 7a4a1b7a941d2c..b6246f73895cf8 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -47,9 +47,9 @@ static struct pblk_global_caches pblk_caches = { struct bio_set pblk_bio_set; -static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio) +static blk_qc_t pblk_submit_bio(struct bio *bio) { - struct pblk *pblk = q->queuedata; + struct pblk *pblk = bio->bi_disk->queue->queuedata; if (bio_op(bio) == REQ_OP_DISCARD) { pblk_discard(pblk, bio); @@ -79,6 +79,12 @@ static blk_qc_t pblk_make_rq(struct request_queue *q, struct bio *bio) return BLK_QC_T_NONE; } +static const struct block_device_operations pblk_bops = { + .owner = THIS_MODULE, + .submit_bio = pblk_submit_bio, +}; + + static size_t pblk_trans_map_size(struct pblk *pblk) { int entry_size = 8; @@ -1280,7 +1286,7 @@ static struct nvm_tgt_type tt_pblk = { .name = "pblk", .version = {1, 0, 0}, - .make_rq = pblk_make_rq, + .bops = &pblk_bops, .capacity = pblk_capacity, .init = pblk_init, diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index 7acf024e99f351..fc5702b10074d6 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -1158,7 +1158,7 @@ static void quit_max_writeback_rate(struct cache_set *c, /* Cached devices - read & write stuff */ -blk_qc_t cached_dev_make_request(struct request_queue *q, struct bio *bio) +blk_qc_t cached_dev_submit_bio(struct bio *bio) { struct 
search *s; struct bcache_device *d = bio->bi_disk->private_data; @@ -1291,7 +1291,7 @@ static void flash_dev_nodata(struct closure *cl) continue_at(cl, search_free, NULL); } -blk_qc_t flash_dev_make_request(struct request_queue *q, struct bio *bio) +blk_qc_t flash_dev_submit_bio(struct bio *bio) { struct search *s; struct closure *cl; diff --git a/drivers/md/bcache/request.h b/drivers/md/bcache/request.h index bb005c93dd7218..82b38366a95deb 100644 --- a/drivers/md/bcache/request.h +++ b/drivers/md/bcache/request.h @@ -37,10 +37,10 @@ unsigned int bch_get_congested(const struct cache_set *c); void bch_data_insert(struct closure *cl); void bch_cached_dev_request_init(struct cached_dev *dc); -blk_qc_t cached_dev_make_request(struct request_queue *q, struct bio *bio); +blk_qc_t cached_dev_submit_bio(struct bio *bio); void bch_flash_dev_request_init(struct bcache_device *d); -blk_qc_t flash_dev_make_request(struct request_queue *q, struct bio *bio); +blk_qc_t flash_dev_submit_bio(struct bio *bio); extern struct kmem_cache *bch_search_cache; diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 21aa168113d30b..de13f6e916966d 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -680,7 +680,16 @@ static int ioctl_dev(struct block_device *b, fmode_t mode, return d->ioctl(d, mode, cmd, arg); } -static const struct block_device_operations bcache_ops = { +static const struct block_device_operations bcache_cached_ops = { + .submit_bio = cached_dev_submit_bio, + .open = open_dev, + .release = release_dev, + .ioctl = ioctl_dev, + .owner = THIS_MODULE, +}; + +static const struct block_device_operations bcache_flash_ops = { + .submit_bio = flash_dev_submit_bio, .open = open_dev, .release = release_dev, .ioctl = ioctl_dev, @@ -820,8 +829,8 @@ static void bcache_device_free(struct bcache_device *d) } static int bcache_device_init(struct bcache_device *d, unsigned int block_size, - sector_t sectors, make_request_fn make_request_fn, - struct block_device *cached_bdev) + sector_t sectors, struct block_device *cached_bdev, + const struct block_device_operations *ops) { struct request_queue *q; const size_t max_stripes = min_t(size_t, INT_MAX, @@ -868,10 +877,10 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size, d->disk->major = bcache_major; d->disk->first_minor = idx_to_first_minor(idx); - d->disk->fops = &bcache_ops; + d->disk->fops = ops; d->disk->private_data = d; - q = blk_alloc_queue(make_request_fn, NUMA_NO_NODE); + q = blk_alloc_queue(NUMA_NO_NODE); if (!q) return -ENOMEM; @@ -1355,7 +1364,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size) ret = bcache_device_init(&dc->disk, block_size, dc->bdev->bd_part->nr_sects - dc->sb.data_offset, - cached_dev_make_request, dc->bdev); + dc->bdev, &bcache_cached_ops); if (ret) return ret; @@ -1468,7 +1477,7 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u) kobject_init(&d->kobj, &bch_flash_dev_ktype); if (bcache_device_init(d, block_bytes(c), u->sectors, - flash_dev_make_request, NULL)) + NULL, &bcache_flash_ops)) goto err; bcache_device_attach(d, c, u - c->uuids); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 5aa7a604f4cbc5..5acfaba3700dfc 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1770,7 +1770,7 @@ static blk_qc_t dm_process_bio(struct mapped_device *md, } /* - * If in ->make_request_fn we need to use blk_queue_split(), otherwise + * If in ->queue_bio we need to use blk_queue_split(), otherwise * queue_limits for abnormal 
requests (e.g. discard, writesame, etc) * won't be imposed. */ @@ -1787,7 +1787,7 @@ static blk_qc_t dm_process_bio(struct mapped_device *md, return __split_and_process_bio(md, map, bio); } -static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t dm_submit_bio(struct bio *bio) { struct mapped_device *md = bio->bi_disk->private_data; blk_qc_t ret = BLK_QC_T_NONE; @@ -1798,12 +1798,12 @@ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) /* * We are called with a live reference on q_usage_counter, but * that one will be released as soon as we return. Grab an - * extra one as blk_mq_make_request expects to be able to - * consume a reference (which lives until the request is freed - * in case a request is allocated). + * extra one as blk_mq_submit_bio expects to be able to consume + * a reference (which lives until the request is freed in case a + * request is allocated). */ - percpu_ref_get(&q->q_usage_counter); - return blk_mq_make_request(q, bio); + percpu_ref_get(&bio->bi_disk->queue->q_usage_counter); + return blk_mq_submit_bio(bio); } map = dm_get_live_table(md, &srcu_idx); @@ -1988,11 +1988,11 @@ static struct mapped_device *alloc_dev(int minor) spin_lock_init(&md->uevent_lock); /* - * default to bio-based required ->make_request_fn until DM - * table is loaded and md->type established. If request-based - * table is loaded: blk-mq will override accordingly. + * default to bio-based until DM table is loaded and md->type + * established. If request-based table is loaded: blk-mq will + * override accordingly. */ - md->queue = blk_alloc_queue(dm_make_request, numa_node_id); + md->queue = blk_alloc_queue(numa_node_id); if (!md->queue) goto bad; @@ -3232,6 +3232,7 @@ static const struct pr_ops dm_pr_ops = { }; static const struct block_device_operations dm_blk_dops = { + .submit_bio = dm_submit_bio, .open = dm_blk_open, .release = dm_blk_close, .ioctl = dm_blk_ioctl, diff --git a/drivers/md/md.c b/drivers/md/md.c index ff20868e5e1b98..7b7cb49be35114 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -463,7 +463,7 @@ void md_handle_request(struct mddev *mddev, struct bio *bio) } EXPORT_SYMBOL(md_handle_request); -static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t md_submit_bio(struct bio *bio) { const int rw = bio_data_dir(bio); const int sgrp = op_stat_group(bio_op(bio)); @@ -5641,7 +5641,7 @@ static int md_alloc(dev_t dev, char *name) mddev->hold_active = UNTIL_STOP; error = -ENOMEM; - mddev->queue = blk_alloc_queue(md_make_request, NUMA_NO_NODE); + mddev->queue = blk_alloc_queue(NUMA_NO_NODE); if (!mddev->queue) goto abort; @@ -7823,6 +7823,7 @@ static int md_revalidate(struct gendisk *disk) static const struct block_device_operations md_fops = { .owner = THIS_MODULE, + .submit_bio = md_submit_bio, .open = md_open, .release = md_release, .ioctl = md_ioctl, diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c index 39030a324d7ffe..1f718381a04553 100644 --- a/drivers/nvdimm/blk.c +++ b/drivers/nvdimm/blk.c @@ -162,7 +162,7 @@ static int nsblk_do_bvec(struct nd_namespace_blk *nsblk, return err; } -static blk_qc_t nd_blk_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t nd_blk_submit_bio(struct bio *bio) { struct bio_integrity_payload *bip; struct nd_namespace_blk *nsblk = bio->bi_disk->private_data; @@ -225,6 +225,7 @@ static int nsblk_rw_bytes(struct nd_namespace_common *ndns, static const struct block_device_operations nd_blk_fops = { .owner = THIS_MODULE, + 
.submit_bio = nd_blk_submit_bio, .revalidate_disk = nvdimm_revalidate_disk, }; @@ -250,7 +251,7 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk) internal_nlba = div_u64(nsblk->size, nsblk_internal_lbasize(nsblk)); available_disk_size = internal_nlba * nsblk_sector_size(nsblk); - q = blk_alloc_queue(nd_blk_make_request, NUMA_NO_NODE); + q = blk_alloc_queue(NUMA_NO_NODE); if (!q) return -ENOMEM; if (devm_add_action_or_reset(dev, nd_blk_release_queue, q)) diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c index 48e9d169b6f9ce..412d21d8f64351 100644 --- a/drivers/nvdimm/btt.c +++ b/drivers/nvdimm/btt.c @@ -1439,7 +1439,7 @@ static int btt_do_bvec(struct btt *btt, struct bio_integrity_payload *bip, return ret; } -static blk_qc_t btt_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t btt_submit_bio(struct bio *bio) { struct bio_integrity_payload *bip = bio_integrity(bio); struct btt *btt = bio->bi_disk->private_data; @@ -1512,6 +1512,7 @@ static int btt_getgeo(struct block_device *bd, struct hd_geometry *geo) static const struct block_device_operations btt_fops = { .owner = THIS_MODULE, + .submit_bio = btt_submit_bio, .rw_page = btt_rw_page, .getgeo = btt_getgeo, .revalidate_disk = nvdimm_revalidate_disk, @@ -1523,7 +1524,7 @@ static int btt_blk_init(struct btt *btt) struct nd_namespace_common *ndns = nd_btt->ndns; /* create a new disk and request queue for btt */ - btt->btt_queue = blk_alloc_queue(btt_make_request, NUMA_NO_NODE); + btt->btt_queue = blk_alloc_queue(NUMA_NO_NODE); if (!btt->btt_queue) return -ENOMEM; diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index d25e66fd942dd6..94790e6e0e4ce1 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -189,7 +189,7 @@ static blk_status_t pmem_do_write(struct pmem_device *pmem, return rc; } -static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t pmem_submit_bio(struct bio *bio) { int ret = 0; blk_status_t rc = 0; @@ -281,6 +281,7 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, static const struct block_device_operations pmem_fops = { .owner = THIS_MODULE, + .submit_bio = pmem_submit_bio, .rw_page = pmem_rw_page, .revalidate_disk = nvdimm_revalidate_disk, }; @@ -423,7 +424,7 @@ static int pmem_attach_disk(struct device *dev, return -EBUSY; } - q = blk_alloc_queue(pmem_make_request, dev_to_node(dev)); + q = blk_alloc_queue(dev_to_node(dev)); if (!q) return -ENOMEM; diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 6810c8812aed26..5192a024dc1b9c 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2178,6 +2178,7 @@ static void nvme_ns_head_release(struct gendisk *disk, fmode_t mode) const struct block_device_operations nvme_ns_head_ops = { .owner = THIS_MODULE, + .submit_bio = nvme_ns_head_submit_bio, .open = nvme_ns_head_open, .release = nvme_ns_head_release, .ioctl = nvme_ioctl, diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 5a5205ea570a77..89afcf943bf846 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -291,8 +291,7 @@ static bool nvme_available_path(struct nvme_ns_head *head) return false; } -static blk_qc_t nvme_ns_head_make_request(struct request_queue *q, - struct bio *bio) +blk_qc_t nvme_ns_head_submit_bio(struct bio *bio) { struct nvme_ns_head *head = bio->bi_disk->private_data; struct device *dev = disk_to_dev(head->disk); @@ -374,7 +373,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct 
nvme_ns_head *head) if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || !multipath) return 0; - q = blk_alloc_queue(nvme_ns_head_make_request, ctrl->numa_node); + q = blk_alloc_queue(ctrl->numa_node); if (!q) goto out; blk_queue_flag_set(QUEUE_FLAG_NONROT, q); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 61780c38f51fdb..9f2b0e0b455871 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -586,6 +586,7 @@ void nvme_mpath_stop(struct nvme_ctrl *ctrl); bool nvme_mpath_clear_current_path(struct nvme_ns *ns); void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl); struct nvme_ns *nvme_find_path(struct nvme_ns_head *head); +blk_qc_t nvme_ns_head_submit_bio(struct bio *bio); static inline void nvme_mpath_check_last_path(struct nvme_ns *ns) { diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c index dfe21eb7276021..35666c3537dea9 100644 --- a/drivers/s390/block/dcssblk.c +++ b/drivers/s390/block/dcssblk.c @@ -31,8 +31,7 @@ static int dcssblk_open(struct block_device *bdev, fmode_t mode); static void dcssblk_release(struct gendisk *disk, fmode_t mode); -static blk_qc_t dcssblk_make_request(struct request_queue *q, - struct bio *bio); +static blk_qc_t dcssblk_submit_bio(struct bio *bio); static long dcssblk_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, void **kaddr, pfn_t *pfn); @@ -41,6 +40,7 @@ static char dcssblk_segments[DCSSBLK_PARM_LEN] = "\0"; static int dcssblk_major; static const struct block_device_operations dcssblk_devops = { .owner = THIS_MODULE, + .submit_bio = dcssblk_submit_bio, .open = dcssblk_open, .release = dcssblk_release, }; @@ -651,8 +651,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char } dev_info->gd->major = dcssblk_major; dev_info->gd->fops = &dcssblk_devops; - dev_info->dcssblk_queue = - blk_alloc_queue(dcssblk_make_request, NUMA_NO_NODE); + dev_info->dcssblk_queue = blk_alloc_queue(NUMA_NO_NODE); dev_info->gd->queue = dev_info->dcssblk_queue; dev_info->gd->private_data = dev_info; blk_queue_logical_block_size(dev_info->dcssblk_queue, 4096); @@ -868,7 +867,7 @@ dcssblk_release(struct gendisk *disk, fmode_t mode) } static blk_qc_t -dcssblk_make_request(struct request_queue *q, struct bio *bio) +dcssblk_submit_bio(struct bio *bio) { struct dcssblk_dev_info *dev_info; struct bio_vec bvec; diff --git a/drivers/s390/block/xpram.c b/drivers/s390/block/xpram.c index 5456f0ad5a40a4..c2536f7767b366 100644 --- a/drivers/s390/block/xpram.c +++ b/drivers/s390/block/xpram.c @@ -182,7 +182,7 @@ static unsigned long xpram_highest_page_index(void) /* * Block device make request function. 
*/ -static blk_qc_t xpram_make_request(struct request_queue *q, struct bio *bio) +static blk_qc_t xpram_submit_bio(struct bio *bio) { xpram_device_t *xdev = bio->bi_disk->private_data; struct bio_vec bvec; @@ -250,6 +250,7 @@ static int xpram_getgeo(struct block_device *bdev, struct hd_geometry *geo) static const struct block_device_operations xpram_devops = { .owner = THIS_MODULE, + .submit_bio = xpram_submit_bio, .getgeo = xpram_getgeo, }; @@ -343,8 +344,7 @@ static int __init xpram_setup_blkdev(void) xpram_disks[i] = alloc_disk(1); if (!xpram_disks[i]) goto out; - xpram_queues[i] = blk_alloc_queue(xpram_make_request, - NUMA_NO_NODE); + xpram_queues[i] = blk_alloc_queue(NUMA_NO_NODE); if (!xpram_queues[i]) { put_disk(xpram_disks[i]); goto out; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index faa340b70123b1..23230c1d031e0a 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -596,6 +596,6 @@ static inline void blk_mq_cleanup_rq(struct request *rq) rq->q->mq_ops->cleanup_rq(rq); } -blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio); +blk_qc_t blk_mq_submit_bio(struct bio *bio); #endif diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index d002defc178934..083ffc5bc51b09 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -286,8 +286,6 @@ static inline unsigned short req_get_ioprio(struct request *req) struct blk_queue_ctx; -typedef blk_qc_t (make_request_fn) (struct request_queue *q, struct bio *bio); - struct bio_vec; enum blk_eh_timer_return { @@ -398,8 +396,6 @@ struct request_queue { struct blk_queue_stats *stats; struct rq_qos *rq_qos; - make_request_fn *make_request_fn; - const struct blk_mq_ops *mq_ops; /* sw queues */ @@ -1162,7 +1158,7 @@ static inline int blk_rq_map_sg(struct request_queue *q, struct request *rq, extern void blk_dump_rq_flags(struct request *, char *); bool __must_check blk_get_queue(struct request_queue *); -struct request_queue *blk_alloc_queue(make_request_fn make_request, int node_id); +struct request_queue *blk_alloc_queue(int node_id); extern void blk_put_queue(struct request_queue *); extern void blk_set_queue_dying(struct request_queue *); @@ -1778,6 +1774,7 @@ static inline void blk_ksm_unregister(struct request_queue *q) { } struct block_device_operations { + blk_qc_t (*submit_bio) (struct bio *bio); int (*open) (struct block_device *, fmode_t); void (*release) (struct gendisk *, fmode_t); int (*rw_page)(struct block_device *, sector_t, struct page *, unsigned int); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index ee8ec2e68055af..1db223710b284a 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -631,7 +631,6 @@ static inline int nvm_next_ppa_in_chk(struct nvm_tgt_dev *dev, return last; } -typedef blk_qc_t (nvm_tgt_make_rq_fn)(struct request_queue *, struct bio *); typedef sector_t (nvm_tgt_capacity_fn)(void *); typedef void *(nvm_tgt_init_fn)(struct nvm_tgt_dev *, struct gendisk *, int flags); @@ -650,7 +649,7 @@ struct nvm_tgt_type { int flags; /* target entry points */ - nvm_tgt_make_rq_fn *make_rq; + const struct block_device_operations *bops; nvm_tgt_capacity_fn *capacity; /* module-specific init/teardown */ From patchwork Wed Jul 1 08:59:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635757 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2F04D739 for ; Wed, 1 Jul 2020 09:00:28 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0D19320775 for ; Wed, 1 Jul 2020 09:00:28 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="YMd8VgMe" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0D19320775 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id 0E7A61149FF6D; Wed, 1 Jul 2020 02:00:27 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 4B08011460B21 for ; Wed, 1 Jul 2020 02:00:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MJrioLwiDMaTjHlhHFnNGiOK8Kz2X3a88EROOcQ4RTQ=; b=YMd8VgMeJERK/+kU4txHGSvn2a dg0zui97e2oH6gbvK3p3mzPgBebmRrl8PLGez7WgqdsXkX6wZs+Gug6BuJcWng7y3GNiNLuij4PZM zjuLkpy1T+v5jtqrfjAMFxLszmWXAjA28s2A1ceP0P9D2e9pg0Hy/aW0XUUyhiE/8yDl7pZ/lKktS J0FG20cg69mspjUz0xLRJh+r3M9HZwkf2sPNAtNNOUIjr+bgMeqU6f2a7qsv/gqRdGPFbtl3DQ12q 2nO7zq44/xMlG4BeBlyMViuCLoxY6NERxQRmcqACuKVL7hQm4vksNz+R5ZeYHaM8on1/aweugPmPK op+BeoZQ==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbH-0008Go-67; Wed, 01 Jul 2020 09:00:15 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 17/20] block: rename generic_make_request to submit_bio_noacct Date: Wed, 1 Jul 2020 10:59:44 +0200 Message-Id: <20200701085947.3354405-18-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: MXLMEXEEWQGV7HIABW6D4U5OCM6MLJFK X-Message-ID-Hash: MXLMEXEEWQGV7HIABW6D4U5OCM6MLJFK X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." 
Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: generic_make_request has always been very confusingly misnamed, so rename it to submit_bio_noacct to make it clear that it is submit_bio minus accounting and a few checks. Signed-off-by: Christoph Hellwig Acked-by: Coly Li Acked-by: Song Liu --- Documentation/block/biodoc.rst | 2 +- .../fault-injection/fault-injection.rst | 2 +- Documentation/trace/ftrace.rst | 4 +-- block/bio.c | 14 ++++---- block/blk-core.c | 32 +++++++++---------- block/blk-crypto-fallback.c | 2 +- block/blk-crypto.c | 2 +- block/blk-merge.c | 2 +- block/blk-throttle.c | 4 +-- block/bounce.c | 2 +- drivers/block/drbd/drbd_int.h | 6 ++-- drivers/block/drbd/drbd_main.c | 2 +- drivers/block/drbd/drbd_receiver.c | 2 +- drivers/block/drbd/drbd_req.c | 2 +- drivers/block/drbd/drbd_worker.c | 2 +- drivers/block/pktcdvd.c | 2 +- drivers/lightnvm/pblk-read.c | 2 +- drivers/md/bcache/bcache.h | 2 +- drivers/md/bcache/btree.c | 2 +- drivers/md/bcache/request.c | 7 ++-- drivers/md/dm-cache-target.c | 6 ++-- drivers/md/dm-clone-target.c | 10 +++--- drivers/md/dm-crypt.c | 6 ++-- drivers/md/dm-delay.c | 2 +- drivers/md/dm-era-target.c | 2 +- drivers/md/dm-integrity.c | 4 +-- drivers/md/dm-mpath.c | 2 +- drivers/md/dm-raid1.c | 2 +- drivers/md/dm-snap-persistent.c | 2 +- drivers/md/dm-snap.c | 6 ++-- drivers/md/dm-thin.c | 4 +-- drivers/md/dm-verity-target.c | 2 +- drivers/md/dm-writecache.c | 2 +- drivers/md/dm-zoned-target.c | 2 +- drivers/md/dm.c | 10 +++--- drivers/md/md-faulty.c | 4 +-- drivers/md/md-linear.c | 4 +-- drivers/md/md-multipath.c | 4 +-- drivers/md/raid0.c | 8 ++--- drivers/md/raid1.c | 14 ++++---- drivers/md/raid10.c | 28 ++++++++-------- drivers/md/raid5.c | 10 +++--- drivers/nvme/host/multipath.c | 2 +- include/linux/blkdev.h | 2 +- 44 files changed, 115 insertions(+), 118 deletions(-) diff --git a/Documentation/block/biodoc.rst b/Documentation/block/biodoc.rst index 267384159bf793..afda5e30a82e5a 100644 --- a/Documentation/block/biodoc.rst +++ b/Documentation/block/biodoc.rst @@ -1036,7 +1036,7 @@ Now the generic block layer performs partition-remapping early and thus provides drivers with a sector number relative to whole device, rather than having to take partition number into account in order to arrive at the true sector number. The routine blk_partition_remap() is invoked by -generic_make_request even before invoking the queue specific ->submit_bio, +submit_bio_noacct even before invoking the queue specific ->submit_bio, so the i/o scheduler also gets to operate on whole disk sector numbers. This should typically not require changes to block drivers, it just never gets to invoke its own partition sector offset calculations since all bios diff --git a/Documentation/fault-injection/fault-injection.rst b/Documentation/fault-injection/fault-injection.rst index f51bb21d20e44b..f850ad018b70a8 100644 --- a/Documentation/fault-injection/fault-injection.rst +++ b/Documentation/fault-injection/fault-injection.rst @@ -24,7 +24,7 @@ Available fault injection capabilities injects disk IO errors on devices permitted by setting /sys/block//make-it-fail or - /sys/block///make-it-fail. (generic_make_request()) + /sys/block///make-it-fail. 
(submit_bio_noacct()) - fail_mmc_request diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst index 430a16283103d4..80ba765a82379e 100644 --- a/Documentation/trace/ftrace.rst +++ b/Documentation/trace/ftrace.rst @@ -1453,7 +1453,7 @@ function-trace, we get a much larger output:: => __blk_run_queue_uncond => __blk_run_queue => blk_queue_bio - => generic_make_request + => submit_bio_noacct => submit_bio => submit_bh => __ext3_get_inode_loc @@ -1738,7 +1738,7 @@ tracers. => __blk_run_queue_uncond => __blk_run_queue => blk_queue_bio - => generic_make_request + => submit_bio_noacct => submit_bio => submit_bh => ext3_bread diff --git a/block/bio.c b/block/bio.c index fc1299f9d86a24..ef91782fd668ce 100644 --- a/block/bio.c +++ b/block/bio.c @@ -358,7 +358,7 @@ static void bio_alloc_rescue(struct work_struct *work) if (!bio) break; - generic_make_request(bio); + submit_bio_noacct(bio); } } @@ -416,19 +416,19 @@ static void punt_bios_to_rescuer(struct bio_set *bs) * submit the previously allocated bio for IO before attempting to allocate * a new one. Failure to do so can cause deadlocks under memory pressure. * - * Note that when running under generic_make_request() (i.e. any block + * Note that when running under submit_bio_noacct() (i.e. any block * driver), bios are not submitted until after you return - see the code in - * generic_make_request() that converts recursion into iteration, to prevent + * submit_bio_noacct() that converts recursion into iteration, to prevent * stack overflows. * * This would normally mean allocating multiple bios under - * generic_make_request() would be susceptible to deadlocks, but we have + * submit_bio_noacct() would be susceptible to deadlocks, but we have * deadlock avoidance code that resubmits any blocked bios from a rescuer * thread. * * However, we do not guarantee forward progress for allocations from other * mempools. Doing multiple allocations from the same mempool under - * generic_make_request() should be avoided - instead, use bio_set's front_pad + * submit_bio_noacct() should be avoided - instead, use bio_set's front_pad * for per bio allocations. * * RETURNS: @@ -457,14 +457,14 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, unsigned int nr_iovecs, nr_iovecs > 0)) return NULL; /* - * generic_make_request() converts recursion to iteration; this + * submit_bio_noacct() converts recursion to iteration; this * means if we're running beneath it, any bios we allocate and * submit will not be submitted (and thus freed) until after we * return. * * This exposes us to a potential deadlock if we allocate * multiple bios from the same bio_set() while running - * underneath generic_make_request(). If we were to allocate + * underneath submit_bio_noacct(). If we were to allocate * multiple bios (say a stacking block driver that was splitting * bios), we would deadlock if we exhausted the mempool's * reserve. 
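For reference, the end state that the earlier conversions in this series converge on looks roughly like the sketch below: the handler moves from the request_queue to block_device_operations, blk_alloc_queue() only takes a NUMA node, and the driver recovers its private state from the bio's gendisk. This is an illustrative sketch only, not code from the series; struct example_dev, the example_* names and the trivial "complete everything" behaviour are made up for the example, and add_disk()/teardown are omitted.

#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/module.h>

struct example_dev {
	struct request_queue *queue;
	struct gendisk *disk;
};

/* Replaces the old make_request_fn; there is no request_queue argument. */
static blk_qc_t example_submit_bio(struct bio *bio)
{
	/* Private state comes from the gendisk, not queue->queuedata. */
	struct example_dev *dev = bio->bi_disk->private_data;

	/*
	 * A real driver would perform I/O on behalf of dev here; this
	 * sketch just completes the bio successfully.
	 */
	(void)dev;
	bio_endio(bio);
	return BLK_QC_T_NONE;
}

static const struct block_device_operations example_ops = {
	.owner		= THIS_MODULE,
	.submit_bio	= example_submit_bio,
};

static int example_init_disk(struct example_dev *dev)
{
	/* blk_alloc_queue() no longer takes a make_request callback. */
	dev->queue = blk_alloc_queue(NUMA_NO_NODE);
	if (!dev->queue)
		return -ENOMEM;

	dev->disk = alloc_disk(1);
	if (!dev->disk) {
		blk_cleanup_queue(dev->queue);
		return -ENOMEM;
	}

	dev->disk->fops = &example_ops;
	dev->disk->queue = dev->queue;
	dev->disk->private_data = dev;
	return 0;
}

Dispatching on the gendisk rather than on a queue callback is what lets several of the earlier conversions switch from queue->queuedata to disk->private_data, and lets lightnvm replace its per-target make_rq hook with tt->bops.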
diff --git a/block/blk-core.c b/block/blk-core.c index cb07a726dd7117..ff9a88d2d244cb 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -956,8 +956,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q, return BLK_STS_OK; } -static noinline_for_stack bool -generic_make_request_checks(struct bio *bio) +static noinline_for_stack bool submit_bio_checks(struct bio *bio) { struct request_queue *q = bio->bi_disk->queue; blk_status_t status = BLK_STS_IOERR; @@ -985,9 +984,8 @@ generic_make_request_checks(struct bio *bio) } /* - * Filter flush bio's early so that make_request based - * drivers without flush support don't have to worry - * about them. + * Filter flush bio's early so that bio based drivers without flush + * support don't have to worry about them. */ if (op_is_flush(bio->bi_opf) && !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) { @@ -1072,7 +1070,7 @@ generic_make_request_checks(struct bio *bio) return false; } -static blk_qc_t do_make_request(struct bio *bio) +static blk_qc_t __submit_bio(struct bio *bio) { struct gendisk *disk = bio->bi_disk; blk_qc_t ret = BLK_QC_T_NONE; @@ -1087,7 +1085,7 @@ static blk_qc_t do_make_request(struct bio *bio) } /** - * generic_make_request - re-submit a bio to the block device layer for I/O + * submit_bio_noacct - re-submit a bio to the block device layer for I/O * @bio: The bio describing the location in memory and on the device. * * This is a version of submit_bio() that shall only be used for I/O that is @@ -1095,7 +1093,7 @@ static blk_qc_t do_make_request(struct bio *bio) * systems and other upper level users of the block layer should use * submit_bio() instead. */ -blk_qc_t generic_make_request(struct bio *bio) +blk_qc_t submit_bio_noacct(struct bio *bio) { /* * bio_list_on_stack[0] contains bios submitted by the current @@ -1106,7 +1104,7 @@ blk_qc_t generic_make_request(struct bio *bio) struct bio_list bio_list_on_stack[2]; blk_qc_t ret = BLK_QC_T_NONE; - if (!generic_make_request_checks(bio)) + if (!submit_bio_checks(bio)) goto out; /* @@ -1114,7 +1112,7 @@ blk_qc_t generic_make_request(struct bio *bio) * stack usage with stacked devices could be a problem. So use * current->bio_list to keep a list of requests submited by a * ->submit_bio method. current->bio_list is also used as a - * flag to say if generic_make_request is currently active in this + * flag to say if submit_bio_noacct is currently active in this * task or not. If it is NULL, then no make_request is active. If * it is non-NULL, then a make_request is active, and new requests * should be added at the tail @@ -1132,7 +1130,7 @@ blk_qc_t generic_make_request(struct bio *bio) * we assign bio_list to a pointer to the bio_list_on_stack, * thus initialising the bio_list of new bios to be * added. ->submit_bio() may indeed add some more bios - * through a recursive call to generic_make_request. If it + * through a recursive call to submit_bio_noacct. If it * did, we find a non-NULL value in bio_list and re-enter the loop * from the top. 
In this case we really did just take the bio * of the top of the list (no pretending) and so remove it from @@ -1150,7 +1148,7 @@ blk_qc_t generic_make_request(struct bio *bio) /* Create a fresh bio_list for all subordinate requests */ bio_list_on_stack[1] = bio_list_on_stack[0]; bio_list_init(&bio_list_on_stack[0]); - ret = do_make_request(bio); + ret = __submit_bio(bio); /* sort new bios into those for a lower level * and those for the same level @@ -1174,13 +1172,13 @@ blk_qc_t generic_make_request(struct bio *bio) out: return ret; } -EXPORT_SYMBOL(generic_make_request); +EXPORT_SYMBOL(submit_bio_noacct); /** * direct_make_request - hand a buffer directly to its device driver for I/O * @bio: The bio describing the location in memory and on the device. * - * This function behaves like generic_make_request(), but does not protect + * This function behaves like submit_bio_noacct(), but does not protect * against recursion. Must only be used if the called driver is known * to be blk-mq based. */ @@ -1192,7 +1190,7 @@ blk_qc_t direct_make_request(struct bio *bio) bio_io_error(bio); return BLK_QC_T_NONE; } - if (!generic_make_request_checks(bio)) + if (!submit_bio_checks(bio)) return BLK_QC_T_NONE; if (unlikely(bio_queue_enter(bio))) return BLK_QC_T_NONE; @@ -1263,13 +1261,13 @@ blk_qc_t submit_bio(struct bio *bio) blk_qc_t ret; psi_memstall_enter(&pflags); - ret = generic_make_request(bio); + ret = submit_bio_noacct(bio); psi_memstall_leave(&pflags); return ret; } - return generic_make_request(bio); + return submit_bio_noacct(bio); } EXPORT_SYMBOL(submit_bio); diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c index 6e49688a2d8023..c162b754efbd6a 100644 --- a/block/blk-crypto-fallback.c +++ b/block/blk-crypto-fallback.c @@ -228,7 +228,7 @@ static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr) return false; } bio_chain(split_bio, bio); - generic_make_request(bio); + submit_bio_noacct(bio); *bio_ptr = split_bio; } diff --git a/block/blk-crypto.c b/block/blk-crypto.c index 6533c9b36ab80a..2d5e60023b08bb 100644 --- a/block/blk-crypto.c +++ b/block/blk-crypto.c @@ -239,7 +239,7 @@ void __blk_crypto_free_request(struct request *rq) * kernel crypto API. When the crypto API fallback is used for encryption, * blk-crypto may choose to split the bio into 2 - the first one that will * continue to be processed and the second one that will be resubmitted via - * generic_make_request. A bounce bio will be allocated to encrypt the contents + * submit_bio_noacct. A bounce bio will be allocated to encrypt the contents * of the aforementioned "first one", and *bio_ptr will be updated to this * bounce bio. 
* diff --git a/block/blk-merge.c b/block/blk-merge.c index 20fa2290604105..5196dc14527016 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -338,7 +338,7 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs) bio_chain(split, *bio); trace_block_split(q, split, (*bio)->bi_iter.bi_sector); - generic_make_request(*bio); + submit_bio_noacct(*bio); *bio = split; } } diff --git a/block/blk-throttle.c b/block/blk-throttle.c index ad37043297ed58..fee3325edf27b9 100644 --- a/block/blk-throttle.c +++ b/block/blk-throttle.c @@ -1339,8 +1339,8 @@ static void blk_throtl_dispatch_work_fn(struct work_struct *work) if (!bio_list_empty(&bio_list_on_stack)) { blk_start_plug(&plug); - while((bio = bio_list_pop(&bio_list_on_stack))) - generic_make_request(bio); + while ((bio = bio_list_pop(&bio_list_on_stack))) + submit_bio_noacct(bio); blk_finish_plug(&plug); } } diff --git a/block/bounce.c b/block/bounce.c index c3aaed07012467..431be88a024050 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -309,7 +309,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, if (!passthrough && sectors < bio_sectors(*bio_orig)) { bio = bio_split(*bio_orig, sectors, GFP_NOIO, &bounce_bio_split); bio_chain(bio, *bio_orig); - generic_make_request(*bio_orig); + submit_bio_noacct(*bio_orig); *bio_orig = bio; } bio = bounce_clone_bio(*bio_orig, GFP_NOIO, passthrough ? NULL : diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h index 0327408da79c7a..fe6cb99eb91764 100644 --- a/drivers/block/drbd/drbd_int.h +++ b/drivers/block/drbd/drbd_int.h @@ -1576,12 +1576,12 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size); /* * used to submit our private bio */ -static inline void drbd_generic_make_request(struct drbd_device *device, +static inline void drbd_submit_bio_noacct(struct drbd_device *device, int fault_type, struct bio *bio) { __release(local); if (!bio->bi_disk) { - drbd_err(device, "drbd_generic_make_request: bio->bi_disk == NULL\n"); + drbd_err(device, "drbd_submit_bio_noacct: bio->bi_disk == NULL\n"); bio->bi_status = BLK_STS_IOERR; bio_endio(bio); return; @@ -1590,7 +1590,7 @@ static inline void drbd_generic_make_request(struct drbd_device *device, if (drbd_insert_fault(device, fault_type)) bio_io_error(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); } void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backing_dev *bdev, diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 2b05de0896e282..7c34cc0ad8ccdf 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -2325,7 +2325,7 @@ static void do_retry(struct work_struct *ws) * workqueues instead. */ - /* We are not just doing generic_make_request(), + /* We are not just doing submit_bio_noacct(), * as we want to keep the start_time information. 
*/ inc_ap_bio(device); __drbd_make_request(device, bio, start_jif); diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index 3a3f2b6a821f39..c74f561b4eab51 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -1723,7 +1723,7 @@ int drbd_submit_peer_request(struct drbd_device *device, bios = bios->bi_next; bio->bi_next = NULL; - drbd_generic_make_request(device, fault_type, bio); + drbd_submit_bio_noacct(device, fault_type, bio); } while (bios); return 0; diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c index c7e14c9a6e5f83..674be09b2da94a 100644 --- a/drivers/block/drbd/drbd_req.c +++ b/drivers/block/drbd/drbd_req.c @@ -1164,7 +1164,7 @@ drbd_submit_req_private_bio(struct drbd_request *req) else if (bio_op(bio) == REQ_OP_DISCARD) drbd_process_discard_or_zeroes_req(req, EE_TRIM); else - generic_make_request(bio); + submit_bio_noacct(bio); put_ldev(device); } else bio_io_error(bio); diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c index 2b89c9f2ca7075..7c903de5c4e103 100644 --- a/drivers/block/drbd/drbd_worker.c +++ b/drivers/block/drbd/drbd_worker.c @@ -1525,7 +1525,7 @@ int w_restart_disk_io(struct drbd_work *w, int cancel) drbd_req_make_private_bio(req, req->master_bio); bio_set_dev(req->private_bio, device->ldev->backing_bdev); - generic_make_request(req->private_bio); + submit_bio_noacct(req->private_bio); return 0; } diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c index 5588bd4cd267e8..4becc1efe775fc 100644 --- a/drivers/block/pktcdvd.c +++ b/drivers/block/pktcdvd.c @@ -913,7 +913,7 @@ static void pkt_iosched_process_queue(struct pktcdvd_device *pd) } atomic_inc(&pd->cdrw.pending_bios); - generic_make_request(bio); + submit_bio_noacct(bio); } } diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c index 140927ebf41e9a..c28537a489bc10 100644 --- a/drivers/lightnvm/pblk-read.c +++ b/drivers/lightnvm/pblk-read.c @@ -320,7 +320,7 @@ void pblk_submit_read(struct pblk *pblk, struct bio *bio) split_bio = bio_split(bio, nr_secs * NR_PHY_IN_LOG, GFP_KERNEL, &pblk_bio_set); bio_chain(split_bio, bio); - generic_make_request(bio); + submit_bio_noacct(bio); /* New bio contains first N sectors of the previous one, so * we can continue to use existing rqd, but we need to shrink diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h index 221e0191b6870f..3c708e8b5e2d34 100644 --- a/drivers/md/bcache/bcache.h +++ b/drivers/md/bcache/bcache.h @@ -929,7 +929,7 @@ static inline void closure_bio_submit(struct cache_set *c, bio_endio(bio); return; } - generic_make_request(bio); + submit_bio_noacct(bio); } /* diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index 6548a601edf0e4..d5c51e33204679 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -959,7 +959,7 @@ static struct btree *mca_alloc(struct cache_set *c, struct btree_op *op, * bch_btree_node_get - find a btree node in the cache and lock it, reading it * in from disk if necessary. * - * If IO is necessary and running under generic_make_request, returns -EAGAIN. + * If IO is necessary and running under submit_bio_noacct, returns -EAGAIN. * * The btree node will have either a read or a write lock held, depending on * level and op->lock. 
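Most of the driver hunks that follow are mechanical renames of the same idiom: a stacking driver splits off the part of a bio it can handle, chains the remainder to it, and feeds the remainder back to the block layer. A condensed sketch of that pattern is below; it is an illustration only (example_limit_bio and the max_sectors argument are hypothetical), not code taken from any single driver.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/gfp.h>

/* Returns a bio of at most max_sectors; the rest is resubmitted. */
static struct bio *example_limit_bio(struct bio *bio, unsigned int max_sectors,
				     struct bio_set *bs)
{
	if (bio_sectors(bio) > max_sectors) {
		struct bio *split = bio_split(bio, max_sectors, GFP_NOIO, bs);

		/*
		 * Chain the front part to the remainder so the original
		 * request only completes once both halves have completed.
		 */
		bio_chain(split, bio);

		/*
		 * Resubmit the remainder; this call used to be spelled
		 * generic_make_request().  It does not recurse: the bio is
		 * queued on current->bio_list and only submitted once the
		 * current ->submit_bio invocation returns.
		 */
		submit_bio_noacct(bio);
		bio = split;
	}

	return bio;
}

Because the resubmitted remainder is queued rather than processed recursively, stack usage stays bounded no matter how often a bio is split, which is the property the bio.c comments earlier in this patch rely on.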
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index fc5702b10074d6..dd012ebface012 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -1115,7 +1115,7 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio) !blk_queue_discard(bdev_get_queue(dc->bdev))) bio->bi_end_io(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); } static void quit_max_writeback_rate(struct cache_set *c, @@ -1197,7 +1197,7 @@ blk_qc_t cached_dev_submit_bio(struct bio *bio) if (!bio->bi_iter.bi_size) { /* * can't call bch_journal_meta from under - * generic_make_request + * submit_bio_noacct */ continue_at_nobarrier(&s->cl, cached_dev_nodata, @@ -1311,8 +1311,7 @@ blk_qc_t flash_dev_submit_bio(struct bio *bio) if (!bio->bi_iter.bi_size) { /* - * can't call bch_journal_meta from under - * generic_make_request + * can't call bch_journal_meta from under submit_bio_noacct */ continue_at_nobarrier(&s->cl, flash_dev_nodata, diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index d3bb355819a421..9eccced928960a 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -886,7 +886,7 @@ static void accounted_complete(struct cache *cache, struct bio *bio) static void accounted_request(struct cache *cache, struct bio *bio) { accounted_begin(cache, bio); - generic_make_request(bio); + submit_bio_noacct(bio); } static void issue_op(struct bio *bio, void *context) @@ -1792,7 +1792,7 @@ static bool process_bio(struct cache *cache, struct bio *bio) bool commit_needed; if (map_bio(cache, bio, get_bio_block(cache, bio), &commit_needed) == DM_MAPIO_REMAPPED) - generic_make_request(bio); + submit_bio_noacct(bio); return commit_needed; } @@ -1858,7 +1858,7 @@ static bool process_discard_bio(struct cache *cache, struct bio *bio) if (cache->features.discard_passdown) { remap_to_origin(cache, bio); - generic_make_request(bio); + submit_bio_noacct(bio); } else bio_endio(bio); diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c index 5ce96ddf1ce1eb..59ed8a67c2e34f 100644 --- a/drivers/md/dm-clone-target.c +++ b/drivers/md/dm-clone-target.c @@ -330,7 +330,7 @@ static void submit_bios(struct bio_list *bios) blk_start_plug(&plug); while ((bio = bio_list_pop(bios))) - generic_make_request(bio); + submit_bio_noacct(bio); blk_finish_plug(&plug); } @@ -346,7 +346,7 @@ static void submit_bios(struct bio_list *bios) static void issue_bio(struct clone *clone, struct bio *bio) { if (!bio_triggers_commit(clone, bio)) { - generic_make_request(bio); + submit_bio_noacct(bio); return; } @@ -473,7 +473,7 @@ static void complete_discard_bio(struct clone *clone, struct bio *bio, bool succ bio_region_range(clone, bio, &rs, &nr_regions); trim_bio(bio, region_to_sector(clone, rs), nr_regions << clone->region_shift); - generic_make_request(bio); + submit_bio_noacct(bio); } else bio_endio(bio); } @@ -865,7 +865,7 @@ static void hydration_overwrite(struct dm_clone_region_hydration *hd, struct bio bio->bi_private = hd; atomic_inc(&hd->clone->hydrations_in_flight); - generic_make_request(bio); + submit_bio_noacct(bio); } /* @@ -1281,7 +1281,7 @@ static void process_deferred_flush_bios(struct clone *clone) */ bio_endio(bio); } else { - generic_make_request(bio); + submit_bio_noacct(bio); } } } diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 000ddfab5ba058..ad324abb8c497e 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1789,7 +1789,7 @@ static int kcryptd_io_read(struct dm_crypt_io 
*io, gfp_t gfp) return 1; } - generic_make_request(clone); + submit_bio_noacct(clone); return 0; } @@ -1815,7 +1815,7 @@ static void kcryptd_io_write(struct dm_crypt_io *io) { struct bio *clone = io->ctx.bio_out; - generic_make_request(clone); + submit_bio_noacct(clone); } #define crypt_io_from_node(node) rb_entry((node), struct dm_crypt_io, rb_node) @@ -1893,7 +1893,7 @@ static void kcryptd_crypt_write_io_submit(struct dm_crypt_io *io, int async) clone->bi_iter.bi_sector = cc->start + io->sector; if (likely(!async) && test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags)) { - generic_make_request(clone); + submit_bio_noacct(clone); return; } diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c index f496213f8b6753..2628a832787b04 100644 --- a/drivers/md/dm-delay.c +++ b/drivers/md/dm-delay.c @@ -72,7 +72,7 @@ static void flush_bios(struct bio *bio) while (bio) { n = bio->bi_next; bio->bi_next = NULL; - generic_make_request(bio); + submit_bio_noacct(bio); bio = n; } } diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c index bdb84b8e71621d..566ddbdb16a4ef 100644 --- a/drivers/md/dm-era-target.c +++ b/drivers/md/dm-era-target.c @@ -1265,7 +1265,7 @@ static void process_deferred_bios(struct era *era) bio_io_error(bio); else while ((bio = bio_list_pop(&marked_bios))) - generic_make_request(bio); + submit_bio_noacct(bio); } static void process_rpc_calls(struct era *era) diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index 81dc5ff0890956..ae866e469e1b98 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -2115,12 +2115,12 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map dio->in_flight = (atomic_t)ATOMIC_INIT(1); dio->completion = NULL; - generic_make_request(bio); + submit_bio_noacct(bio); return; } - generic_make_request(bio); + submit_bio_noacct(bio); if (need_sync_io) { wait_for_completion_io(&read_comp); diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 78cff42d987ee5..73bb23de6336f1 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -677,7 +677,7 @@ static void process_queued_bios(struct work_struct *work) bio_endio(bio); break; case DM_MAPIO_REMAPPED: - generic_make_request(bio); + submit_bio_noacct(bio); break; case DM_MAPIO_SUBMITTED: break; diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index 2f655d9f420064..fa09bc4e4c54a1 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -779,7 +779,7 @@ static void do_writes(struct mirror_set *ms, struct bio_list *writes) wakeup_mirrord(ms); } else { map_bio(get_default_mirror(ms), bio); - generic_make_request(bio); + submit_bio_noacct(bio); } } } diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c index 963d3774c93e28..2d1d4a4c399cdc 100644 --- a/drivers/md/dm-snap-persistent.c +++ b/drivers/md/dm-snap-persistent.c @@ -252,7 +252,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op, /* * Issue the synchronous I/O from a different thread - * to avoid generic_make_request recursion. + * to avoid submit_bio_noacct recursion. 
*/ INIT_WORK_ONSTACK(&req.work, do_metadata); queue_work(ps->metadata_wq, &req.work); diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c index 6b11a266299f14..4668b2cd98f4e2 100644 --- a/drivers/md/dm-snap.c +++ b/drivers/md/dm-snap.c @@ -1568,7 +1568,7 @@ static void flush_bios(struct bio *bio) while (bio) { n = bio->bi_next; bio->bi_next = NULL; - generic_make_request(bio); + submit_bio_noacct(bio); bio = n; } } @@ -1588,7 +1588,7 @@ static void retry_origin_bios(struct dm_snapshot *s, struct bio *bio) bio->bi_next = NULL; r = do_origin(s->origin, bio, false); if (r == DM_MAPIO_REMAPPED) - generic_make_request(bio); + submit_bio_noacct(bio); bio = n; } } @@ -1829,7 +1829,7 @@ static void start_full_bio(struct dm_snap_pending_exception *pe, bio->bi_end_io = full_bio_end_io; bio->bi_private = callback_data; - generic_make_request(bio); + submit_bio_noacct(bio); } static struct dm_snap_pending_exception * diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c index fa8d5464c1fb51..fe2de28887096f 100644 --- a/drivers/md/dm-thin.c +++ b/drivers/md/dm-thin.c @@ -758,7 +758,7 @@ static void issue(struct thin_c *tc, struct bio *bio) struct pool *pool = tc->pool; if (!bio_triggers_commit(tc, bio)) { - generic_make_request(bio); + submit_bio_noacct(bio); return; } @@ -2394,7 +2394,7 @@ static void process_deferred_bios(struct pool *pool) if (bio->bi_opf & REQ_PREFLUSH) bio_endio(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); } } diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index eec9f252e9354b..75fa4d9b761717 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -681,7 +681,7 @@ static int verity_map(struct dm_target *ti, struct bio *bio) verity_submit_prefetch(v, io); - generic_make_request(bio); + submit_bio_noacct(bio); return DM_MAPIO_SUBMITTED; } diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c index 74f3c506f08487..62421554b83851 100644 --- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -1238,7 +1238,7 @@ static int writecache_flush_thread(void *data) bio_end_sector(bio)); wc_unlock(wc); bio_set_dev(bio, wc->dev->bdev); - generic_make_request(bio); + submit_bio_noacct(bio); } else { writecache_flush(wc); wc_unlock(wc); diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index a907a9446c0b5c..05a3cfefe93728 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -140,7 +140,7 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone, bio_advance(bio, clone->bi_iter.bi_size); refcount_inc(&bioctx->ref); - generic_make_request(clone); + submit_bio_noacct(clone); if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone)) zone->wp_block += nr_blocks; diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 5acfaba3700dfc..b32b539dbace56 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1305,7 +1305,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) if (md->type == DM_TYPE_NVME_BIO_BASED) ret = direct_make_request(clone); else - ret = generic_make_request(clone); + ret = submit_bio_noacct(clone); break; case DM_MAPIO_KILL: free_tio(tio); @@ -1652,7 +1652,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, error = __split_and_process_non_flush(&ci); if (current->bio_list && ci.sector_count && !error) { /* - * Remainder must be passed to generic_make_request() + * Remainder must be passed to submit_bio_noacct() * so that it gets handled *after* bios already submitted * have been completely 
processed. * We take a clone of the original to store in @@ -1677,7 +1677,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, bio_chain(b, bio); trace_block_split(md->queue, b, bio->bi_iter.bi_sector); - ret = generic_make_request(bio); + ret = submit_bio_noacct(bio); break; } } @@ -1745,7 +1745,7 @@ static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struc bio_chain(split, *bio); trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector); - generic_make_request(*bio); + submit_bio_noacct(*bio); *bio = split; } } @@ -2500,7 +2500,7 @@ static void dm_wq_work(struct work_struct *work) break; if (dm_request_based(md)) - (void) generic_make_request(c); + (void) submit_bio_noacct(c); else (void) dm_process_bio(md, map, c); } diff --git a/drivers/md/md-faulty.c b/drivers/md/md-faulty.c index 50ad4ba86f0e74..fda4cb3f936f39 100644 --- a/drivers/md/md-faulty.c +++ b/drivers/md/md-faulty.c @@ -169,7 +169,7 @@ static bool faulty_make_request(struct mddev *mddev, struct bio *bio) if (bio_data_dir(bio) == WRITE) { /* write request */ if (atomic_read(&conf->counters[WriteAll])) { - /* special case - don't decrement, don't generic_make_request, + /* special case - don't decrement, don't submit_bio_noacct, * just fail immediately */ bio_io_error(bio); @@ -214,7 +214,7 @@ static bool faulty_make_request(struct mddev *mddev, struct bio *bio) } else bio_set_dev(bio, conf->rdev->bdev); - generic_make_request(bio); + submit_bio_noacct(bio); return true; } diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c index 26c75c0199fa1b..8efada3ee16f30 100644 --- a/drivers/md/md-linear.c +++ b/drivers/md/md-linear.c @@ -267,7 +267,7 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio) struct bio *split = bio_split(bio, end_sector - bio_sector, GFP_NOIO, &mddev->bio_set); bio_chain(split, bio); - generic_make_request(bio); + submit_bio_noacct(bio); bio = split; } @@ -286,7 +286,7 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio) bio_sector); mddev_check_writesame(mddev, bio); mddev_check_write_zeroes(mddev, bio); - generic_make_request(bio); + submit_bio_noacct(bio); } return true; diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c index 152f9e65a22665..277fdfd9ee5480 100644 --- a/drivers/md/md-multipath.c +++ b/drivers/md/md-multipath.c @@ -131,7 +131,7 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio) mp_bh->bio.bi_private = mp_bh; mddev_check_writesame(mddev, &mp_bh->bio); mddev_check_write_zeroes(mddev, &mp_bh->bio); - generic_make_request(&mp_bh->bio); + submit_bio_noacct(&mp_bh->bio); return true; } @@ -348,7 +348,7 @@ static void multipathd(struct md_thread *thread) bio->bi_opf |= REQ_FAILFAST_TRANSPORT; bio->bi_end_io = multipath_end_request; bio->bi_private = mp_bh; - generic_make_request(bio); + submit_bio_noacct(bio); } } spin_unlock_irqrestore(&conf->device_lock, flags); diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 322386ff5d225d..e9e91c8d8afcea 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -495,7 +495,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio) zone->zone_end - bio->bi_iter.bi_sector, GFP_NOIO, &mddev->bio_set); bio_chain(split, bio); - generic_make_request(bio); + submit_bio_noacct(bio); bio = split; end = zone->zone_end; } else @@ -559,7 +559,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio) trace_block_bio_remap(bdev_get_queue(rdev->bdev), discard_bio, disk_devt(mddev->gendisk), 
bio->bi_iter.bi_sector); - generic_make_request(discard_bio); + submit_bio_noacct(discard_bio); } bio_endio(bio); } @@ -600,7 +600,7 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio) struct bio *split = bio_split(bio, sectors, GFP_NOIO, &mddev->bio_set); bio_chain(split, bio); - generic_make_request(bio); + submit_bio_noacct(bio); bio = split; } @@ -633,7 +633,7 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio) disk_devt(mddev->gendisk), bio_sector); mddev_check_writesame(mddev, bio); mddev_check_write_zeroes(mddev, bio); - generic_make_request(bio); + submit_bio_noacct(bio); return true; } diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index dcd27f3da84eca..2aa2649cca660e 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -834,7 +834,7 @@ static void flush_bio_list(struct r1conf *conf, struct bio *bio) /* Just ignore it */ bio_endio(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); bio = next; cond_resched(); } @@ -1312,7 +1312,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio, struct bio *split = bio_split(bio, max_sectors, gfp, &conf->bio_split); bio_chain(split, bio); - generic_make_request(bio); + submit_bio_noacct(bio); bio = split; r1_bio->master_bio = bio; r1_bio->sectors = max_sectors; @@ -1338,7 +1338,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio, trace_block_bio_remap(read_bio->bi_disk->queue, read_bio, disk_devt(mddev->gendisk), r1_bio->sector); - generic_make_request(read_bio); + submit_bio_noacct(read_bio); } static void raid1_write_request(struct mddev *mddev, struct bio *bio, @@ -1483,7 +1483,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio, struct bio *split = bio_split(bio, max_sectors, GFP_NOIO, &conf->bio_split); bio_chain(split, bio); - generic_make_request(bio); + submit_bio_noacct(bio); bio = split; r1_bio->master_bio = bio; r1_bio->sectors = max_sectors; @@ -2240,7 +2240,7 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio) atomic_inc(&r1_bio->remaining); md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio)); - generic_make_request(wbio); + submit_bio_noacct(wbio); } put_sync_write_buf(r1_bio, 1); @@ -2926,7 +2926,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr, md_sync_acct_bio(bio, nr_sectors); if (read_targets == 1) bio->bi_opf &= ~MD_FAILFAST; - generic_make_request(bio); + submit_bio_noacct(bio); } } } else { @@ -2935,7 +2935,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr, md_sync_acct_bio(bio, nr_sectors); if (read_targets == 1) bio->bi_opf &= ~MD_FAILFAST; - generic_make_request(bio); + submit_bio_noacct(bio); } return nr_sectors; } diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index ec136e44aef7f8..e45fd56cf58450 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -917,7 +917,7 @@ static void flush_pending_writes(struct r10conf *conf) /* Just ignore it */ bio_endio(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); bio = next; } blk_finish_plug(&plug); @@ -1102,7 +1102,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule) /* Just ignore it */ bio_endio(bio); else - generic_make_request(bio); + submit_bio_noacct(bio); bio = next; } kfree(plug); @@ -1194,7 +1194,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio, gfp, &conf->bio_split); bio_chain(split, bio); allow_barrier(conf); - generic_make_request(bio); + submit_bio_noacct(bio); 
wait_barrier(conf); bio = split; r10_bio->master_bio = bio; @@ -1221,7 +1221,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio, trace_block_bio_remap(read_bio->bi_disk->queue, read_bio, disk_devt(mddev->gendisk), r10_bio->sector); - generic_make_request(read_bio); + submit_bio_noacct(read_bio); return; } @@ -1479,7 +1479,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, GFP_NOIO, &conf->bio_split); bio_chain(split, bio); allow_barrier(conf); - generic_make_request(bio); + submit_bio_noacct(bio); wait_barrier(conf); bio = split; r10_bio->master_bio = bio; @@ -2099,7 +2099,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio) tbio->bi_opf |= MD_FAILFAST; tbio->bi_iter.bi_sector += conf->mirrors[d].rdev->data_offset; bio_set_dev(tbio, conf->mirrors[d].rdev->bdev); - generic_make_request(tbio); + submit_bio_noacct(tbio); } /* Now write out to any replacement devices @@ -2118,7 +2118,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio) atomic_inc(&r10_bio->remaining); md_sync_acct(conf->mirrors[d].replacement->bdev, bio_sectors(tbio)); - generic_make_request(tbio); + submit_bio_noacct(tbio); } done: @@ -2241,7 +2241,7 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio) wbio = r10_bio->devs[1].bio; wbio2 = r10_bio->devs[1].repl_bio; /* Need to test wbio2->bi_end_io before we call - * generic_make_request as if the former is NULL, + * submit_bio_noacct as if the former is NULL, * the latter is free to free wbio2. */ if (wbio2 && !wbio2->bi_end_io) @@ -2249,13 +2249,13 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio) if (wbio->bi_end_io) { atomic_inc(&conf->mirrors[d].rdev->nr_pending); md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio)); - generic_make_request(wbio); + submit_bio_noacct(wbio); } if (wbio2) { atomic_inc(&conf->mirrors[d].replacement->nr_pending); md_sync_acct(conf->mirrors[d].replacement->bdev, bio_sectors(wbio2)); - generic_make_request(wbio2); + submit_bio_noacct(wbio2); } } @@ -2889,7 +2889,7 @@ static void raid10_set_cluster_sync_high(struct r10conf *conf) * a number of r10_bio structures, one for each out-of-sync device. * As we setup these structures, we collect all bio's together into a list * which we then process collectively to add pages, and then process again - * to pass to generic_make_request. + * to pass to submit_bio_noacct. * * The r10_bio structures are linked using a borrowed master_bio pointer. * This link is counted in ->remaining. 
When the r10_bio that points to NULL @@ -3496,7 +3496,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr, if (bio->bi_end_io == end_sync_read) { md_sync_acct_bio(bio, nr_sectors); bio->bi_status = 0; - generic_make_request(bio); + submit_bio_noacct(bio); } } @@ -4654,7 +4654,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, md_sync_acct_bio(read_bio, r10_bio->sectors); atomic_inc(&r10_bio->remaining); read_bio->bi_next = NULL; - generic_make_request(read_bio); + submit_bio_noacct(read_bio); sectors_done += nr_sectors; if (sector_nr <= last) goto read_more; @@ -4717,7 +4717,7 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio) md_sync_acct_bio(b, r10_bio->sectors); atomic_inc(&r10_bio->remaining); b->bi_next = NULL; - generic_make_request(b); + submit_bio_noacct(b); } end_reshape_request(r10_bio); } diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index ab8067f9ce8c68..8dea4398b191ae 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -873,7 +873,7 @@ static void dispatch_bio_list(struct bio_list *tmp) struct bio *bio; while ((bio = bio_list_pop(tmp))) - generic_make_request(bio); + submit_bio_noacct(bio); } static int cmp_stripe(void *priv, struct list_head *a, struct list_head *b) @@ -1151,7 +1151,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s) if (should_defer && op_is_write(op)) bio_list_add(&pending_bios, bi); else - generic_make_request(bi); + submit_bio_noacct(bi); } if (rrdev) { if (s->syncing || s->expanding || s->expanded @@ -1201,7 +1201,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s) if (should_defer && op_is_write(op)) bio_list_add(&pending_bios, rbi); else - generic_make_request(rbi); + submit_bio_noacct(rbi); } if (!rdev && !rrdev) { if (op_is_write(op)) @@ -5289,7 +5289,7 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio) trace_block_bio_remap(align_bi->bi_disk->queue, align_bi, disk_devt(mddev->gendisk), raid_bio->bi_iter.bi_sector); - generic_make_request(align_bi); + submit_bio_noacct(align_bi); return 1; } else { rcu_read_unlock(); @@ -5309,7 +5309,7 @@ static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio) struct r5conf *conf = mddev->private; split = bio_split(raid_bio, sectors, GFP_NOIO, &conf->bio_split); bio_chain(split, raid_bio); - generic_make_request(raid_bio); + submit_bio_noacct(raid_bio); raid_bio = split; } diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 89afcf943bf846..f07fa47c251d9d 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -351,7 +351,7 @@ static void nvme_requeue_work(struct work_struct *work) * path. 
*/ bio->bi_disk = head->disk; - generic_make_request(bio); + submit_bio_noacct(bio); } } diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 083ffc5bc51b09..b73cfa6a5141df 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -852,7 +852,7 @@ static inline void rq_flush_dcache_pages(struct request *rq) extern int blk_register_queue(struct gendisk *disk); extern void blk_unregister_queue(struct gendisk *disk); -extern blk_qc_t generic_make_request(struct bio *bio); +blk_qc_t submit_bio_noacct(struct bio *bio); extern blk_qc_t direct_make_request(struct bio *bio); extern void blk_rq_init(struct request_queue *q, struct request *rq); extern void blk_put_request(struct request *); From patchwork Wed Jul 1 08:59:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635761 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A498B739 for ; Wed, 1 Jul 2020 09:00:31 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8339C2074D for ; Wed, 1 Jul 2020 09:00:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="RKMWPwGh" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8339C2074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id 3A07811487766; Wed, 1 Jul 2020 02:00:29 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 06BD31149FF6B for ; Wed, 1 Jul 2020 02:00:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=yZp3ZhmmnZVm/xMn8H8i+YIbZxG0gRGZfGVMIZjaPzw=; b=RKMWPwGhSFWRqAqKPhX/CZ8Tjr 2y9UV+7WeqYw079kzvJhfBLKNUSnR39UjjcWcPFdEXxijNON689BbqBUoAHoa4MINvOmGyGIo8p6D WBJBVcdmy65C0w0U6CrySV4utK7ZZ6HyASSo1xa6KLt62iitIX6XPwRC25GJIHaUvU32tDpJxF/IZ VW63cu8DySq2vkkwy/xd7AUzFnf/7KsER6SY0OiH44xnptYrxnSXEmH3ZsHYzvYnVn7Ql6UghoGbc Rt1XjbpS7dVpsBnv4qKhpvsABOXO94ealDA3iuzJa4NI/JBfp2v9rUYzGuLm+Ykbcfs8xzOsSpYeC ylv9vMYg==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbJ-0008HS-16; Wed, 01 Jul 2020 09:00:17 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 18/20] block: refator submit_bio_noacct Date: Wed, 1 Jul 2020 10:59:45 +0200 Message-Id: <20200701085947.3354405-19-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: 
<20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: X5CUA3RFRXLQ4QEE4Q2LUZHVLDRAQJZK X-Message-ID-Hash: X5CUA3RFRXLQ4QEE4Q2LUZHVLDRAQJZK X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: Split out a __submit_bio_noacct helper for the actual de-recursion algorithm, and simplify the loop by using a continue when we can't enter the queue for a bio. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 143 +++++++++++++++++++++++++---------------------- 1 file changed, 75 insertions(+), 68 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index ff9a88d2d244cb..57b5dc00d44cb1 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1084,6 +1084,74 @@ static blk_qc_t __submit_bio(struct bio *bio) return ret; } +/* + * The loop in this function may be a bit non-obvious, and so deserves some + * explanation: + * + * - Before entering the loop, bio->bi_next is NULL (as all callers ensure + * that), so we have a list with a single bio. + * - We pretend that we have just taken it off a longer list, so we assign + * bio_list to a pointer to the bio_list_on_stack, thus initialising the + * bio_list of new bios to be added. ->submit_bio() may indeed add some more + * bios through a recursive call to submit_bio_noacct. If it did, we find a + * non-NULL value in bio_list and re-enter the loop from the top. + * - In this case we really did just take the bio of the top of the list (no + * pretending) and so remove it from bio_list, and call into ->submit_bio() + * again. + * + * bio_list_on_stack[0] contains bios submitted by the current ->submit_bio. + * bio_list_on_stack[1] contains bios that were submitted before the current + * ->submit_bio_bio, but that haven't been processed yet. + */ +static blk_qc_t __submit_bio_noacct(struct bio *bio) +{ + struct bio_list bio_list_on_stack[2]; + blk_qc_t ret = BLK_QC_T_NONE; + + BUG_ON(bio->bi_next); + + bio_list_init(&bio_list_on_stack[0]); + current->bio_list = bio_list_on_stack; + + do { + struct request_queue *q = bio->bi_disk->queue; + struct bio_list lower, same; + + if (unlikely(bio_queue_enter(bio) != 0)) + continue; + + /* + * Create a fresh bio_list for all subordinate requests. + */ + bio_list_on_stack[1] = bio_list_on_stack[0]; + bio_list_init(&bio_list_on_stack[0]); + + ret = __submit_bio(bio); + + /* + * Sort new bios into those for a lower level and those for the + * same level. + */ + bio_list_init(&lower); + bio_list_init(&same); + while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL) + if (q == bio->bi_disk->queue) + bio_list_add(&same, bio); + else + bio_list_add(&lower, bio); + + /* + * Now assemble so we handle the lowest level first. 
+ */ + bio_list_merge(&bio_list_on_stack[0], &lower); + bio_list_merge(&bio_list_on_stack[0], &same); + bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]); + } while ((bio = bio_list_pop(&bio_list_on_stack[0]))); + + current->bio_list = NULL; + return ret; +} + /** * submit_bio_noacct - re-submit a bio to the block device layer for I/O * @bio: The bio describing the location in memory and on the device. @@ -1095,82 +1163,21 @@ static blk_qc_t __submit_bio(struct bio *bio) */ blk_qc_t submit_bio_noacct(struct bio *bio) { - /* - * bio_list_on_stack[0] contains bios submitted by the current - * ->submit_bio. - * bio_list_on_stack[1] contains bios that were submitted before the - * current ->submit_bio_bio, but that haven't been processed yet. - */ - struct bio_list bio_list_on_stack[2]; - blk_qc_t ret = BLK_QC_T_NONE; - if (!submit_bio_checks(bio)) - goto out; + return BLK_QC_T_NONE; /* - * We only want one ->submit_bio to be active at a time, else - * stack usage with stacked devices could be a problem. So use - * current->bio_list to keep a list of requests submited by a - * ->submit_bio method. current->bio_list is also used as a - * flag to say if submit_bio_noacct is currently active in this - * task or not. If it is NULL, then no make_request is active. If - * it is non-NULL, then a make_request is active, and new requests - * should be added at the tail + * We only want one ->submit_bio to be active at a time, else stack + * usage with stacked devices could be a problem. Use current->bio_list + * to collect a list of requests submited by a ->submit_bio method while + * it is active, and then process them after it returned. */ if (current->bio_list) { bio_list_add(¤t->bio_list[0], bio); - goto out; + return BLK_QC_T_NONE; } - /* following loop may be a bit non-obvious, and so deserves some - * explanation. - * Before entering the loop, bio->bi_next is NULL (as all callers - * ensure that) so we have a list with a single bio. - * We pretend that we have just taken it off a longer list, so - * we assign bio_list to a pointer to the bio_list_on_stack, - * thus initialising the bio_list of new bios to be - * added. ->submit_bio() may indeed add some more bios - * through a recursive call to submit_bio_noacct. If it - * did, we find a non-NULL value in bio_list and re-enter the loop - * from the top. In this case we really did just take the bio - * of the top of the list (no pretending) and so remove it from - * bio_list, and call into ->submit_bio() again. 
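What this de-recursion buys a stacked, bio-based driver, as a hedged sketch (the example_dev structure and remapping are hypothetical, not from the patch): while such a driver's ->submit_bio method runs, current->bio_list is non-NULL, so a nested submit_bio_noacct() call only queues the remapped bio; the caller's loop drains it after the method returns, keeping the call depth constant however deep the device stack is.

#include <linux/blkdev.h>

struct example_dev {			/* hypothetical stacked-driver state */
	struct gendisk *lower_disk;	/* the device we remap I/O onto */
};

static blk_qc_t example_stacked_submit_bio(struct bio *bio)
{
	struct example_dev *dev = bio->bi_disk->private_data;

	/* remap to the lower device and resubmit */
	bio->bi_disk = dev->lower_disk;
	/*
	 * Called from inside ->submit_bio this does not recurse: the bio is
	 * appended to current->bio_list and drained by the caller's loop
	 * once this method returns.
	 */
	submit_bio_noacct(bio);
	return BLK_QC_T_NONE;
}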
- */ - BUG_ON(bio->bi_next); - bio_list_init(&bio_list_on_stack[0]); - current->bio_list = bio_list_on_stack; - do { - struct request_queue *q = bio->bi_disk->queue; - - if (likely(bio_queue_enter(bio) == 0)) { - struct bio_list lower, same; - - /* Create a fresh bio_list for all subordinate requests */ - bio_list_on_stack[1] = bio_list_on_stack[0]; - bio_list_init(&bio_list_on_stack[0]); - ret = __submit_bio(bio); - - /* sort new bios into those for a lower level - * and those for the same level - */ - bio_list_init(&lower); - bio_list_init(&same); - while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL) - if (q == bio->bi_disk->queue) - bio_list_add(&same, bio); - else - bio_list_add(&lower, bio); - /* now assemble so we handle the lowest level first */ - bio_list_merge(&bio_list_on_stack[0], &lower); - bio_list_merge(&bio_list_on_stack[0], &same); - bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]); - } - bio = bio_list_pop(&bio_list_on_stack[0]); - } while (bio); - current->bio_list = NULL; /* deactivate */ - -out: - return ret; + return __submit_bio_noacct(bio); } EXPORT_SYMBOL(submit_bio_noacct); From patchwork Wed Jul 1 08:59:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635759 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 04F1E739 for ; Wed, 1 Jul 2020 09:00:30 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D765D20775 for ; Wed, 1 Jul 2020 09:00:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="qMNQ5x4R" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D765D20775 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id 244EC114726D0; Wed, 1 Jul 2020 02:00:29 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id E380311487766 for ; Wed, 1 Jul 2020 02:00:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EAMIpv1ziuqNgf5ZNOHoxjY+dqzuD4FouWA6YFyHhkE=; b=qMNQ5x4RLz46SfC5VFJYmHu5Nx xI0sjPMkrZu4JzWH34erLzcJLJaVi42hoTRxgTqPpAaZ10BnqAfleFt5C4BDjFM7Jqls0VeMH/gqf JA3ksJqj7QKrohDk4U7Nu5sAUxcmERKiJBmNcC4cwjoQwwewNksjinjd1ESmGAyAqTxprSKh519+R qO3AP4glQ4JIPWzLlAmJTJ/p5U8MhH7mj4rvJOfk+i3Ndn0uebCLnEpXHTGMMoRlA6B0XeegOCH4b q01IOa4xLW4RLaoayu0NTlJoWxQpHxYHTgi94RpyBco2yaAblICORPz0jkVk33ytG9sq9oRItFIdp U2gEkKCg==; Received: from 
[2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbK-0008Hs-Cg; Wed, 01 Jul 2020 09:00:18 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 19/20] block: shortcut __submit_bio_noacct for blk-mq drivers Date: Wed, 1 Jul 2020 10:59:46 +0200 Message-Id: <20200701085947.3354405-20-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Message-ID-Hash: 6JWK3KNQXPPED3BEZU5G33AHFAZBNB56 X-Message-ID-Hash: 6JWK3KNQXPPED3BEZU5G33AHFAZBNB56 X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: For blk-mq drivers bios can only be inserted for the same queue. So bypass the complicated sorting logic in __submit_bio_noacct with a blk-mq simpler submission helper. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/block/blk-core.c b/block/blk-core.c index 57b5dc00d44cb1..2ff166f0d24ee3 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1152,6 +1152,34 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio) return ret; } +static blk_qc_t __submit_bio_noacct_mq(struct bio *bio) +{ + struct gendisk *disk = bio->bi_disk; + struct bio_list bio_list; + blk_qc_t ret = BLK_QC_T_NONE; + + bio_list_init(&bio_list); + current->bio_list = &bio_list; + + do { + WARN_ON_ONCE(bio->bi_disk != disk); + + if (unlikely(bio_queue_enter(bio) != 0)) + continue; + + if (!blk_crypto_bio_prep(&bio)) { + blk_queue_exit(disk->queue); + ret = BLK_QC_T_NONE; + continue; + } + + ret = blk_mq_submit_bio(bio); + } while ((bio = bio_list_pop(&bio_list))); + + current->bio_list = NULL; + return ret; +} + /** * submit_bio_noacct - re-submit a bio to the block device layer for I/O * @bio: The bio describing the location in memory and on the device. 
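For context on the check the next hunk adds, a hedged sketch of the distinction it relies on (the identifiers are placeholders): bio-based drivers install a ->submit_bio method in their block_device_operations, while blk-mq drivers leave it NULL, so their bios can be fed straight to blk_mq_submit_bio() via __submit_bio_noacct_mq().

#include <linux/module.h>
#include <linux/bio.h>
#include <linux/blkdev.h>

static blk_qc_t example_bio_based_submit_bio(struct bio *bio)
{
	/* hypothetical driver-specific handling would go here */
	bio_io_error(bio);
	return BLK_QC_T_NONE;
}

static const struct block_device_operations example_bio_based_fops = {
	.owner		= THIS_MODULE,
	.submit_bio	= example_bio_based_submit_bio,	/* bio-based: routed through __submit_bio_noacct() */
};

static const struct block_device_operations example_blk_mq_fops = {
	.owner		= THIS_MODULE,
	/* no ->submit_bio: submit_bio_noacct() takes the __submit_bio_noacct_mq() shortcut */
};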
@@ -1177,6 +1205,8 @@ blk_qc_t submit_bio_noacct(struct bio *bio) return BLK_QC_T_NONE; } + if (!bio->bi_disk->fops->submit_bio) + return __submit_bio_noacct_mq(bio); return __submit_bio_noacct(bio); } EXPORT_SYMBOL(submit_bio_noacct); From patchwork Wed Jul 1 08:59:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11635763 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9FDD792A for ; Wed, 1 Jul 2020 09:00:33 +0000 (UTC) Received: from ml01.01.org (ml01.01.org [198.145.21.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7E85720775 for ; Wed, 1 Jul 2020 09:00:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="uk7IalP7" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7E85720775 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-nvdimm-bounces@lists.01.org Received: from ml01.vlan13.01.org (localhost [IPv6:::1]) by ml01.01.org (Postfix) with ESMTP id 63C011149FF6C; Wed, 1 Jul 2020 02:00:30 -0700 (PDT) Received-SPF: None (mailfrom) identity=mailfrom; client-ip=2001:8b0:10b:1236::1; helo=casper.infradead.org; envelope-from=batv+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org; receiver= Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id ECA0211460B22 for ; Wed, 1 Jul 2020 02:00:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jQY+NtFCO+7CU3/nFb1b4IXvqab2qyR+s502t/FTrN0=; b=uk7IalP7MNGMM6uU4dAQlW8WyT wAAF3+qlfzX6YNhcR3wK+W1kCSwSAAIh+p6a51QmR/WkvI5iNXX6RT/rUdJ1Xa1ZGQGj78s7Yqvqv apKuRzsOW0hTYSSpzgapI7XjWd8J5KsEfQ3xUFfoxf0IJVlsoz/OdGiccYlHINoaSVCTGbcgukSEb eDReKFk2IXHSvZ0m2RrUUS0NHvjl0G8Haam0Ucf3tnKRRxbqUGybDQOBJMKimJ9oSqmYsG3MeDOsT +mgRZL3+n+xb4vpMz97ijSIURtS2Bit7lRbBO0vSx/zQv2yJuSSYpUXX7qRfYpSCkzrXWbLSNMus7 gE2BAOEw==; Received: from [2001:4bb8:184:76e3:ea38:596b:3e9e:422a] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux)) id 1jqYbL-0008II-NY; Wed, 01 Jul 2020 09:00:20 +0000 From: Christoph Hellwig To: Jens Axboe Subject: [PATCH 20/20] block: remove direct_make_request Date: Wed, 1 Jul 2020 10:59:47 +0200 Message-Id: <20200701085947.3354405-21-hch@lst.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701085947.3354405-1-hch@lst.de> References: <20200701085947.3354405-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. 
See http://www.infradead.org/rpr.html Message-ID-Hash: KQ43PCDPQ2ICS4NCSJDEW22WNA5Y77B3 X-Message-ID-Hash: KQ43PCDPQ2ICS4NCSJDEW22WNA5Y77B3 X-MailFrom: BATV+501e1de201b53739768b+6156+infradead.org+hch@casper.srs.infradead.org X-Mailman-Rule-Hits: nonmember-moderation X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation CC: dm-devel@redhat.com, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org, drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org, linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org, linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: Now that submit_bio_noacct has a decent blk-mq fast path there is no more need for this bypass. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 28 ---------------------------- drivers/md/dm.c | 5 +---- drivers/nvme/host/multipath.c | 2 +- include/linux/blkdev.h | 1 - 4 files changed, 2 insertions(+), 34 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 2ff166f0d24ee3..bf882b8d84450c 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1211,34 +1211,6 @@ blk_qc_t submit_bio_noacct(struct bio *bio) } EXPORT_SYMBOL(submit_bio_noacct); -/** - * direct_make_request - hand a buffer directly to its device driver for I/O - * @bio: The bio describing the location in memory and on the device. - * - * This function behaves like submit_bio_noacct(), but does not protect - * against recursion. Must only be used if the called driver is known - * to be blk-mq based. - */ -blk_qc_t direct_make_request(struct bio *bio) -{ - struct gendisk *disk = bio->bi_disk; - - if (WARN_ON_ONCE(!disk->queue->mq_ops)) { - bio_io_error(bio); - return BLK_QC_T_NONE; - } - if (!submit_bio_checks(bio)) - return BLK_QC_T_NONE; - if (unlikely(bio_queue_enter(bio))) - return BLK_QC_T_NONE; - if (!blk_crypto_bio_prep(&bio)) { - blk_queue_exit(disk->queue); - return BLK_QC_T_NONE; - } - return blk_mq_submit_bio(bio); -} -EXPORT_SYMBOL_GPL(direct_make_request); - /** * submit_bio - submit a bio to the block device layer for I/O * @bio: The &struct bio which describes the I/O diff --git a/drivers/md/dm.c b/drivers/md/dm.c index b32b539dbace56..2cb33896198c4c 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1302,10 +1302,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) /* the bio has been remapped so dispatch it */ trace_block_bio_remap(clone->bi_disk->queue, clone, bio_dev(io->orig_bio), sector); - if (md->type == DM_TYPE_NVME_BIO_BASED) - ret = direct_make_request(clone); - else - ret = submit_bio_noacct(clone); + ret = submit_bio_noacct(clone); break; case DM_MAPIO_KILL: free_tio(tio); diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index f07fa47c251d9d..a986ac52c4cc7f 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -314,7 +314,7 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio) trace_block_bio_remap(bio->bi_disk->queue, bio, disk_devt(ns->head->disk), bio->bi_iter.bi_sector); - ret = direct_make_request(bio); + ret = submit_bio_noacct(bio); } else if (nvme_available_path(head)) { dev_warn_ratelimited(dev, "no usable path - requeuing I/O\n"); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index b73cfa6a5141df..1cc913ffdbe21e 100644 --- 
a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -853,7 +853,6 @@ static inline void rq_flush_dcache_pages(struct request *rq) extern int blk_register_queue(struct gendisk *disk); extern void blk_unregister_queue(struct gendisk *disk); blk_qc_t submit_bio_noacct(struct bio *bio); -extern blk_qc_t direct_make_request(struct bio *bio); extern void blk_rq_init(struct request_queue *q, struct request *rq); extern void blk_put_request(struct request *); extern struct request *blk_get_request(struct request_queue *, unsigned int op,