From patchwork Mon May 22 22:09:55 2023
X-Patchwork-Submitter: Sarthak Kukreti
X-Patchwork-Id: 13251179
X-Patchwork-Delegate: snitzer@redhat.com
From: Sarthak Kukreti
To: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Jens Axboe, Theodore Ts'o, Sarthak Kukreti, "Michael S. Tsirkin",
	"Darrick J. Wong", Jason Wang, Bart Van Assche, Mike Snitzer,
	Christoph Hellwig, Andreas Dilger, Stefan Hajnoczi, Brian Foster,
	Alasdair Kergon
Date: Mon, 22 May 2023 15:09:55 -0700
Message-ID: <20230522221000.603769-1-sarthakkukreti@chromium.org>
In-Reply-To: <20230522163710.GA11607@frogsfrogsfrogs>
References: <20230522163710.GA11607@frogsfrogsfrogs>
Subject: [dm-devel] [PATCH v7 5/5] loop: Add support for provision requests

On Mon, May 22, 2023 at 9:37 AM Darrick J. Wong wrote:
>
> If someone calls fallocate(UNSHARE_RANGE) on a loop bdev, shouldn't
> there be a way to pass that through to the fallocate call to the backing
> file?
>
> --D
>
Yeah, I think we could add a REQ_UNSHARE bit (similar to REQ_NOUNMAP) to
pass down the intent to the backing file (and possibly beyond...). I took
a stab at implementing it as a follow up patch so that there's less review
churn on the current series.
If it looks good, I can add it to the end of the series (or incorporate this
into the existing block and loop patches):

From: Sarthak Kukreti
Date: Mon, 22 May 2023 14:18:15 -0700
Subject: [PATCH] block: Pass unshare intent via REQ_OP_PROVISION

Allow REQ_OP_PROVISION to pass in an extra REQ_UNSHARE bit to annotate
unshare requests to underlying layers. Layers that support
FALLOC_FL_UNSHARE will be able to use this as an indicator of which
fallocate() mode to use.

Signed-off-by: Sarthak Kukreti
---
 block/blk-lib.c           |  6 +++++-
 block/fops.c              |  6 +++++-
 drivers/block/loop.c      | 35 +++++++++++++++++++++++++++++------
 include/linux/blk_types.h |  3 +++
 include/linux/blkdev.h    |  3 ++-
 5 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 3cff5fb654f5..bea6f5a700b3 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -350,6 +350,7 @@ EXPORT_SYMBOL(blkdev_issue_secure_erase);
  * @sector:	start sector
  * @nr_sects:	number of sectors to provision
  * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @flags:	controls detailed behavior
  *
  * Description:
  *  Issues a provision request to the block device for the range of sectors.
@@ -357,7 +358,7 @@ EXPORT_SYMBOL(blkdev_issue_secure_erase);
  *  underlying storage pool to allocate space for this block range.
  */
 int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
-		sector_t nr_sects, gfp_t gfp)
+		sector_t nr_sects, gfp_t gfp, unsigned flags)
 {
 	sector_t bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
 	unsigned int max_sectors = bdev_max_provision_sectors(bdev);
@@ -380,6 +381,9 @@ int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
 		bio->bi_iter.bi_sector = sector;
 		bio->bi_iter.bi_size = req_sects << SECTOR_SHIFT;
 
+		if (flags & BLKDEV_UNSHARE_RANGE)
+			bio->bi_opf |= REQ_UNSHARE;
+
 		sector += req_sects;
 		nr_sects -= req_sects;
 		if (!nr_sects) {
diff --git a/block/fops.c b/block/fops.c
index be2e41f160bf..6848756f0557 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -659,7 +659,11 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	case FALLOC_FL_KEEP_SIZE:
 	case FALLOC_FL_UNSHARE_RANGE | FALLOC_FL_KEEP_SIZE:
 		error = blkdev_issue_provision(bdev, start >> SECTOR_SHIFT,
-					       len >> SECTOR_SHIFT, GFP_KERNEL);
+					       len >> SECTOR_SHIFT, GFP_KERNEL,
+					       (mode &
+						FALLOC_FL_UNSHARE_RANGE) ?
+						       BLKDEV_UNSHARE_RANGE :
+						       0);
 		break;
 	case FALLOC_FL_ZERO_RANGE:
 	case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE:
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 7fe1a6629754..c844b145d666 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -306,6 +306,30 @@ static int lo_read_simple(struct loop_device *lo, struct request *rq,
 	return 0;
 }
 
+static bool validate_fallocate_mode(struct loop_device *lo, int mode)
+{
+	bool ret = true;
+
+	switch (mode) {
+	case FALLOC_FL_PUNCH_HOLE:
+	case FALLOC_FL_ZERO_RANGE:
+		if (!bdev_max_discard_sectors(lo->lo_device))
+			ret = false;
+		break;
+	case 0:
+	case FALLOC_FL_UNSHARE_RANGE:
+		if (!bdev_max_provision_sectors(lo->lo_device))
+			ret = false;
+		break;
+
+	default:
+		ret = false;
+	}
+
+	return ret;
+}
+
+
 static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
 			int mode)
 {
@@ -316,11 +340,7 @@ static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
 	struct file *file = lo->lo_backing_file;
 	int ret;
 
-	if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_ZERO_RANGE) &&
-	    !bdev_max_discard_sectors(lo->lo_device))
-		return -EOPNOTSUPP;
-
-	if (mode == 0 && !bdev_max_provision_sectors(lo->lo_device))
+	if (!validate_fallocate_mode(lo, mode))
 		return -EOPNOTSUPP;
 
 	mode |= FALLOC_FL_KEEP_SIZE;
@@ -493,7 +513,10 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
 	case REQ_OP_DISCARD:
 		return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
 	case REQ_OP_PROVISION:
-		return lo_fallocate(lo, rq, pos, 0);
+		return lo_fallocate(lo, rq, pos,
+				    (rq->cmd_flags & REQ_UNSHARE) ?
+				    FALLOC_FL_UNSHARE_RANGE :
+				    0);
 	case REQ_OP_WRITE:
 		if (cmd->use_aio)
 			return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index b7bb0226fdee..1a536fd897cb 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -423,6 +423,8 @@ enum req_flag_bits {
 	 */
 	/* for REQ_OP_WRITE_ZEROES: */
 	__REQ_NOUNMAP,		/* do not free blocks when zeroing */
+	/* for REQ_OP_PROVISION: */
+	__REQ_UNSHARE,		/* unshare blocks */
 
 	__REQ_NR_BITS,		/* stops here */
 };
@@ -451,6 +453,7 @@ enum req_flag_bits {
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
+#define REQ_UNSHARE	(__force blk_opf_t)(1ULL << __REQ_UNSHARE)
 
 #define REQ_FAILFAST_MASK \
 	(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 462ce586d46f..60c09b0d3fc9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1049,10 +1049,11 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp);
 
 extern int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
-		sector_t nr_sects, gfp_t gfp_mask);
+		sector_t nr_sects, gfp_t gfp_mask, unsigned int flags);
 
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */
+#define BLKDEV_UNSHARE_RANGE	(1 << 2)  /* unshare range on provision */
 
 extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
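
To illustrate how the flag would be exercised end to end: an unshare fallocate()
against the loop block device hits the new blkdev_fallocate() case,
blkdev_issue_provision() tags the REQ_OP_PROVISION bios with REQ_UNSHARE via
BLKDEV_UNSHARE_RANGE, and the loop driver then forwards FALLOC_FL_UNSHARE_RANGE
to the backing file. A minimal, untested userspace sketch (the device path
/dev/loop0 and the 1 MiB range are illustrative assumptions; on older glibc the
FALLOC_FL_* flags come from <linux/falloc.h> instead of <fcntl.h>):

/* Untested illustration: unshare-provision the first 1 MiB of a loop device.
 * Assumes /dev/loop0 is already bound to a backing file on a filesystem that
 * supports FALLOC_FL_UNSHARE_RANGE (e.g. a reflink-capable one).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/loop0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Matches the FALLOC_FL_UNSHARE_RANGE | FALLOC_FL_KEEP_SIZE case in
	 * blkdev_fallocate(); the block layer turns this into provision bios
	 * with REQ_UNSHARE set.
	 */
	if (fallocate(fd, FALLOC_FL_UNSHARE_RANGE | FALLOC_FL_KEEP_SIZE,
		      0, 1 << 20) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}

On kernels without this series, blkdev_fallocate() rejects that mode with
EOPNOTSUPP, so a caller probing for support should be prepared to handle that
error.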