
[v7,4/9] block: Add core atomic write support

Message ID 20240602140912.970947-5-john.g.garry@oracle.com (mailing list archive)
State Superseded
Series: block atomic writes

Commit Message

John Garry June 2, 2024, 2:09 p.m. UTC
Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag

New request_queue limits are added, as follows:
- atomic_write_hw_max is set by the block driver and is the maximum length
  of an atomic write which the device may support. It is not
  necessarily a power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max and
  max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
  and atomic_write_max_sectors would be the limit on a merged atomic write
  request size. This value is not capped at max_sectors, as max_sectors
  can be controlled from userspace, and allowing that to indirectly cap
  atomic_write_unit_max_bytes and the other atomic write limits would only
  cause trouble.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
  min/max length of an atomic write unit which the device may support. They
  both must be a power-of-2. Typically atomic_write_hw_unit_max will hold
  the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
  atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
  Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
  indicates a boundary in LBA space; an atomic write which straddles that
  boundary is no longer executed atomically by the disk. The value must be
  a power-of-2. Note that it would be acceptable to enforce a rule that
  atomic_write_hw_boundary_sectors is a multiple of
  atomic_write_hw_unit_max, but the resultant code would be more
  complicated.

All atomic write limits default to 0 to indicate no atomic write support.
Even though Linux assumes that a logical block can always be written
atomically, we ignore this here as it is not of particular interest.
Stacked devices are simply not supported for now.
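
For illustration, a block driver only fills in the atomic_write_hw_*
fields; the derived limits are computed by the block core. A minimal
sketch with hypothetical values (foo_set_atomic_write_limits() and the
numbers are made up for this example):

static void foo_set_atomic_write_limits(struct queue_limits *lim)
{
	/* hypothetical example values; a real driver derives these from HW */
	lim->atomic_write_hw_unit_min = 4096;	/* 4 KiB */
	lim->atomic_write_hw_unit_max = 65536;	/* 64 KiB */
	lim->atomic_write_hw_max = 65536;	/* typically same as unit_max */
	lim->atomic_write_hw_boundary = 0;	/* no boundary reported */
}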

An atomic write must always be submitted to the block driver as part of a
single request. As such, only a single BIO must be submitted to the block
layer for an atomic write. When a single atomic write BIO is submitted, it
cannot be split. Consequently, atomic_write_unit_{min,max}_bytes are
limited by the maximum BIO size which is guaranteed never to require
splitting. This maximum size is calculated from the request_queue max
segments limit and the number of bvecs a BIO can hold, BIO_MAX_VECS.
Currently we rely on userspace issuing a pwritev2() write with iovcnt=1 -
as such, we can rely on each segment containing PAGE_SIZE of data, apart
from the first and last segments, which may each hold only a logical block
size of data. The first and last segments will be logical block size
length/aligned, as we also rely on the direct IO alignment rules.
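
As a purely illustrative example, with 4 KiB pages, a 512 B logical block
size and max_segments = 128 (below BIO_MAX_VECS = 256), the guaranteed BIO
size is 2 * 512 + 126 * 4096 = 517120 bytes; assuming max_hw_sectors does
not impose a smaller cap, this is rounded down to the power-of-2 262144
bytes (256 KiB), which then bounds atomic_write_unit_{min,max}.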

New sysfs files are added to report the following atomic write limits:
- atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in
				bytes
- atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in
				bytes
- atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in
				bytes
- atomic_write_max_bytes      - same as atomic_write_max_sectors in bytes

Atomic writes may only be merged with other atomic writes and only under
the following conditions:
- total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary (see the example below)
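
As a worked example, with a 64 KiB atomic write boundary: an atomic write
request covering bytes 32 KiB to 48 KiB - 1 may be back-merged with an
adjacent 16 KiB atomic write, as the result still ends at 64 KiB - 1,
inside the same boundary window; merging a further 4 KiB would cross the
64 KiB boundary and is rejected.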

Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. If a bdev is a partition, the
partition start must be aligned with both atomic_write_unit_min_sectors
and atomic_write_hw_boundary_sectors.
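
For example, a partition starting at sector 2048 (1 MiB into the disk)
passes this check for atomic_write_unit_min = 4 KiB and
atomic_write_hw_boundary = 64 KiB, since 1 MiB is a multiple of the larger
of the two values.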

FSes will rely on the block layer to validate that a submitted atomic
write BIO is of a valid size, so add blk_validate_atomic_write_op_size()
for this purpose. Userspace expects an atomic write of an invalid size to
be rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
BLK_STS_INVAL for when a BIO needs to be split, as this should mean an
invalid size BIO.
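
For reference, a minimal userspace sketch of issuing such a write,
assuming the RWF_ATOMIC pwritev2() flag added elsewhere in this series and
a buffer/offset which already satisfies the advertised unit limits:

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>
/* RWF_ATOMIC is assumed to come from the uapi headers of this series */

static ssize_t atomic_pwrite(int fd, void *buf, size_t len, off_t off)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* a single iovec, as the block layer currently relies upon */
	return pwritev2(fd, &iov, 1, off, RWF_ATOMIC);
}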

Flag REQ_ATOMIC is used for indicating an atomic write.

Co-developed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 Documentation/ABI/stable/sysfs-block | 53 ++++++++++++++++
 block/blk-core.c                     | 19 ++++++
 block/blk-merge.c                    | 95 +++++++++++++++++++++++++++-
 block/blk-settings.c                 | 52 +++++++++++++++
 block/blk-sysfs.c                    | 33 ++++++++++
 block/blk.h                          |  3 +
 include/linux/blk_types.h            |  8 ++-
 include/linux/blkdev.h               | 54 ++++++++++++++++
 8 files changed, 315 insertions(+), 2 deletions(-)

Comments

Hannes Reinecke June 3, 2024, 9:26 a.m. UTC | #1
On 6/2/24 16:09, John Garry wrote:
> Add atomic write support, as follows:
> - add helper functions to get request_queue atomic write limits
> - report request_queue atomic write support limits to sysfs and update Doc
> - support to safely merge atomic writes
> - deal with splitting atomic writes
> - misc helper functions
> - add a per-request atomic write flag
> 
> New request_queue limits are added, as follows:
> - atomic_write_hw_max is set by the block driver and is the maximum length
>    of an atomic write which the device may support. It is not
>    necessarily a power-of-2.
> - atomic_write_max_sectors is derived from atomic_write_hw_max_sectors and
>    max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
>    and atomic_write_max_sectors would be the limit on a merged atomic write
>    request size. This value is not capped at max_sectors, as the value in
>    max_sectors can be controlled from userspace, and it would only cause
>    trouble if userspace could limit atomic_write_unit_max_bytes and the
>    other atomic write limits.
> - atomic_write_hw_unit_{min,max} are set by the block driver and are the
>    min/max length of an atomic write unit which the device may support. They
>    both must be a power-of-2. Typically atomic_write_hw_unit_max will hold
>    the same value as atomic_write_hw_max.
> - atomic_write_unit_{min,max} are derived from
>    atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
>    Both min and max values must be a power-of-2.
> - atomic_write_hw_boundary is set by the block driver. If non-zero, it
>    indicates an LBA space boundary at which an atomic write straddles no
>    longer is atomically executed by the disk. The value must be a
>    power-of-2. Note that it would be acceptable to enforce a rule that
>    atomic_write_hw_boundary_sectors is a multiple of
>    atomic_write_hw_unit_max, but the resultant code would be more
>    complicated.
> 
> All atomic writes limits are by default set 0 to indicate no atomic write
> support. Even though it is assumed by Linux that a logical block can always
> be atomically written, we ignore this as it is not of particular interest.
> Stacked devices are just not supported either for now.
> 
> An atomic write must always be submitted to the block driver as part of a
> single request. As such, only a single BIO must be submitted to the block
> layer for an atomic write. When a single atomic write BIO is submitted, it
> cannot be split. As such, atomic_write_unit_{max, min}_bytes are limited
> by the maximum guaranteed BIO size which will not be required to be split.
> This max size is calculated by request_queue max segments and the number
> of bvecs a BIO can fit, BIO_MAX_VECS. Currently we rely on userspace
> issuing a write with iovcnt=1 for pwritev2() - as such, we can rely on each
> segment containing PAGE_SIZE of data, apart from the first+last, which each
> can fit logical block size of data. The first+last will be LBS
> length/aligned as we rely on direct IO alignment rules also.
> 
> New sysfs files are added to report the following atomic write limits:
> - atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in
> 				bytes
> - atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in
> 				bytes
> - atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in
> 				bytes
> - atomic_write_max_bytes      - same as atomic_write_max_sectors in bytes
> 
> Atomic writes may only be merged with other atomic writes and only under
> the following conditions:
> - total resultant request length <= atomic_write_max_bytes
> - the merged write does not straddle a boundary
> 
> Helper function bdev_can_atomic_write() is added to indicate whether
> atomic writes may be issued to a bdev. If a bdev is a partition, the
> partition start must be aligned with both atomic_write_unit_min_sectors
> and atomic_write_hw_boundary_sectors.
> 
> FSes will rely on the block layer to validate that an atomic write BIO
> submitted will be of valid size, so add blk_validate_atomic_write_op_size()
> for this purpose. Userspace expects an atomic write which is of invalid
> size to be rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
> BLK_STS_INVAL for when a BIO needs to be split, as this should mean an
> invalid size BIO.
> 
> Flag REQ_ATOMIC is used for indicating an atomic write.
> 
> Co-developed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
>   Documentation/ABI/stable/sysfs-block | 53 ++++++++++++++++
>   block/blk-core.c                     | 19 ++++++
>   block/blk-merge.c                    | 95 +++++++++++++++++++++++++++-
>   block/blk-settings.c                 | 52 +++++++++++++++
>   block/blk-sysfs.c                    | 33 ++++++++++
>   block/blk.h                          |  3 +
>   include/linux/blk_types.h            |  8 ++-
>   include/linux/blkdev.h               | 54 ++++++++++++++++
>   8 files changed, 315 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
> index 831f19a32e08..cea8856f798d 100644
> --- a/Documentation/ABI/stable/sysfs-block
> +++ b/Documentation/ABI/stable/sysfs-block
> @@ -21,6 +21,59 @@ Description:
>   		device is offset from the internal allocation unit's
>   		natural alignment.
>   
> +What:		/sys/block/<disk>/atomic_write_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
> +Description:
> +		[RO] This parameter specifies the maximum atomic write
> +		size reported by the device. This parameter is relevant
> +		for merging of writes, where a merged atomic write
> +		operation must not exceed this number of bytes.
> +		This parameter may be greater than the value in
> +		atomic_write_unit_max_bytes as
> +		atomic_write_unit_max_bytes will be rounded down to a
> +		power-of-two and atomic_write_unit_max_bytes may also be
> +		limited by some other queue limits, such as max_segments.
> +		This parameter - along with atomic_write_unit_min_bytes
> +		and atomic_write_unit_max_bytes - will not be larger than
> +		max_hw_sectors_kb, but may be larger than max_sectors_kb.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_min_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
> +Description:
> +		[RO] This parameter specifies the smallest block which can
> +		be written atomically with an atomic write operation. All
> +		atomic write operations must begin at a
> +		atomic_write_unit_min boundary and must be multiples of
> +		atomic_write_unit_min. This value must be a power-of-two.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
> +Description:
> +		[RO] This parameter defines the largest block which can be
> +		written atomically with an atomic write operation. This
> +		value must be a multiple of atomic_write_unit_min and must
> +		be a power-of-two. This value will not be larger than
> +		atomic_write_max_bytes.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_boundary_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
> +Description:
> +		[RO] A device may need to internally split an atomic write I/O
> +		which straddles a given logical block address boundary. This
> +		parameter specifies the size in bytes of the atomic boundary if
> +		one is reported by the device. This value must be a
> +		power-of-two and at least the size as in
> +		atomic_write_unit_max_bytes.
> +		Any attempt to merge atomic write I/Os must not result in a
> +		merged I/O which crosses this boundary (if any).
> +
>   
>   What:		/sys/block/<disk>/diskseq
>   Date:		February 2021
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 82c3ae22d76d..d9f58fe71758 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -174,6 +174,8 @@ static const struct {
>   	/* Command duration limit device-side timeout */
>   	[BLK_STS_DURATION_LIMIT]	= { -ETIME, "duration limit exceeded" },
>   
> +	[BLK_STS_INVAL]		= { -EINVAL,	"invalid" },
> +
>   	/* everything else not covered above: */
>   	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
>   };
> @@ -739,6 +741,18 @@ void submit_bio_noacct_nocheck(struct bio *bio)
>   		__submit_bio_noacct(bio);
>   }
>   
> +static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
> +						 struct bio *bio)
> +{
> +	if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q))
> +		return BLK_STS_INVAL;
> +
> +	if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q))
> +		return BLK_STS_INVAL;
> +
> +	return BLK_STS_OK;
> +}
> +
>   /**
>    * submit_bio_noacct - re-submit a bio to the block device layer for I/O
>    * @bio:  The bio describing the location in memory and on the device.
> @@ -797,6 +811,11 @@ void submit_bio_noacct(struct bio *bio)
>   	switch (bio_op(bio)) {
>   	case REQ_OP_READ:
>   	case REQ_OP_WRITE:
> +		if (bio->bi_opf & REQ_ATOMIC) {
> +			status = blk_validate_atomic_write_op_size(q, bio);
> +			if (status != BLK_STS_OK)
> +				goto end_io;
> +		}
>   		break;
>   	case REQ_OP_FLUSH:
>   		/*
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 8957e08e020c..ad07759ca147 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -18,6 +18,46 @@
>   #include "blk-rq-qos.h"
>   #include "blk-throttle.h"
>   
> +/*
> + * rq_straddles_atomic_write_boundary - check for boundary violation
> + * @rq: request to check
> + * @front: data size to be appended to front
> + * @back: data size to be appended to back
> + *
> + * Determine whether merging a request or bio into another request will result
> + * in a merged request which straddles an atomic write boundary.
> + *
> + * The value @front_adjust is the data which would be appended to the front of
> + * @rq, while the value @back_adjust is the data which would be appended to the
> + * back of @rq. Callers will typically only have either @front_adjust or
> + * @back_adjust as non-zero.
> + *
> + */
> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
> +					unsigned int front_adjust,
> +					unsigned int back_adjust)
> +{
> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
> +	u64 mask, start_rq_pos, end_rq_pos;
> +
> +	if (!boundary)
> +		return false;
> +
> +	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
> +	end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
> +
> +	start_rq_pos -= front_adjust;
> +	end_rq_pos += back_adjust;
> +
> +	mask = ~(boundary - 1);
> +
> +	/* Top bits are different, so crossed a boundary */
> +	if ((start_rq_pos & mask) != (end_rq_pos & mask))
> +		return true;
> +
> +	return false;
> +}

But isn't that precisely what 'chunk_sectors' is doing?
IE ensuring that requests never cross that boundary?

Q1: Shouldn't we rather use/modify/adapt chunk_sectors for this thing?
Q2: If we don't, shouldn't we align the atomic write boundary to the 
chunk_sectors setting to ensure both match up?

Cheers,

Hannes
John Garry June 3, 2024, 11:38 a.m. UTC | #2
On 03/06/2024 10:26, Hannes Reinecke wrote:
>>
>> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
>> +                    unsigned int front_adjust,
>> +                    unsigned int back_adjust)
>> +{
>> +    unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>> +    u64 mask, start_rq_pos, end_rq_pos;
>> +
>> +    if (!boundary)
>> +        return false;
>> +
>> +    start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
>> +    end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
>> +
>> +    start_rq_pos -= front_adjust;
>> +    end_rq_pos += back_adjust;
>> +
>> +    mask = ~(boundary - 1);
>> +
>> +    /* Top bits are different, so crossed a boundary */
>> +    if ((start_rq_pos & mask) != (end_rq_pos & mask))
>> +        return true;
>> +
>> +    return false;
>> +}
> 
> But isn't that precisely what 'chunk_sectors' is doing?
> IE ensuring that requests never cross that boundary?
> 

> Q1: Shouldn't we rather use/modify/adapt chunk_sectors for this thing?

So you are saying that we can re-use blk_chunk_sectors_left() to 
determine whether merging a bio/req would cross the boundary, right?

It seems ok in principle - we would just need to ensure that it is 
watertight.

> Q2: If we don't, shouldn't we align the atomic write boundary to the 
> chunk_sectors setting to ensure both match up?

Yeah, right. But we can only handle what the HW tells us.

The atomic write boundary is only relevant to NVMe. NVMe NOIOB - which 
we use to set chunk_sectors - is an IO optimization hint, AFAIK. However 
the atomic write boundary is a hard limit. So if NOIOB is not aligned 
with the atomic write boundary - which seems unlikely - then the atomic 
write boundary takes priority.

Thanks,
John
Hannes Reinecke June 3, 2024, 12:31 p.m. UTC | #3
On 6/3/24 13:38, John Garry wrote:
> On 03/06/2024 10:26, Hannes Reinecke wrote:
>>>
>>> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
>>> +                    unsigned int front_adjust,
>>> +                    unsigned int back_adjust)
>>> +{
>>> +    unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>>> +    u64 mask, start_rq_pos, end_rq_pos;
>>> +
>>> +    if (!boundary)
>>> +        return false;
>>> +
>>> +    start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
>>> +    end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
>>> +
>>> +    start_rq_pos -= front_adjust;
>>> +    end_rq_pos += back_adjust;
>>> +
>>> +    mask = ~(boundary - 1);
>>> +
>>> +    /* Top bits are different, so crossed a boundary */
>>> +    if ((start_rq_pos & mask) != (end_rq_pos & mask))
>>> +        return true;
>>> +
>>> +    return false;
>>> +}
>>
>> But isn't that precisely what 'chunk_sectors' is doing?
>> IE ensuring that requests never cross that boundary?
>>
> 
>> Q1: Shouldn't we rather use/modify/adapt chunk_sectors for this thing?
> 
> So you are saying that we can re-use blk_chunk_sectors_left() to 
> determine whether merging a bio/req would cross the boundary, right?
> 
> It seems ok in principle - we would just need to ensure that it is 
> watertight.
> 

We currently use chunk_sectors for quite a few different things, most
notably zone boundaries, NOIOB, raid stripes etc.
So I don't have an issue adding another use-case for it.

>> Q2: If we don't, shouldn't we align the atomic write boundary to the 
>> chunk_sectors setting to ensure both match up?
> 
> Yeah, right. But we can only handle what HW tells.
> 
> The atomic write boundary is only relevant to NVMe. NVMe NOIOB - which 
> we use to set chunk_sectors - is an IO optimization hint, AFAIK. However 
> the atomic write boundary is a hard limit. So if NOIOB is not aligned 
> with the atomic write boundary - which seems unlikely - then the atomic 
> write boundary takes priority.
> 
Which is what I said; we need to check. And I would treat a NOIOB value 
not aligned to the atomic write boundary as an error.

But the real issue here is that the atomic write boundary only matters
for requests, and not for the entire queue.
So using chunk_sectors is out of the question as this would affect all
requests, and my comment was actually wrong.
I'll retract it.

Cheers,

Hannes
John Garry June 3, 2024, 1:29 p.m. UTC | #4
On 03/06/2024 13:31, Hannes Reinecke wrote:
>>
>> It seems ok in principle - we would just need to ensure that it is 
>> watertight.
>>
> 
> We currently use chunk_sectors for quite some different things, most 
> notably zones boundaries, NIOIB, raid stripes etc.
> So I don't have an issue adding another use-case for it.
> 
>>> Q2: If we don't, shouldn't we align the atomic write boundary to the 
>>> chunk_sectors setting to ensure both match up?
>>
>> Yeah, right. But we can only handle what HW tells.
>>
>> The atomic write boundary is only relevant to NVMe. NVMe NOIOB - which 
>> we use to set chunk_sectors - is an IO optimization hint, AFAIK. 
>> However the atomic write boundary is a hard limit. So if NOIOB is not 
>> aligned with the atomic write boundary - which seems unlikely - then 
>> the atomic write boundary takes priority.
>>
> Which is what I said; we need to check. And I would treat a NOIOB value 
> not aligned to the atomic write boundary as an error.

Yeah, maybe we can reject that in blk_validate_limits(), by error'ing or 
disabling atomic writes there.

> 
> But the real issue here is that the atomic write boundary only matters
> for requests, and not for the entire queue.
> So using chunk_sectors is out of question as this would affect all 
> requests, and my comment was actually wrong.
> I'll retract it.

I think that some of the logic could be re-used. 
rq_straddles_atomic_write_boundary() is checked in merging of reqs/bios 
(to see if the resultant req straddles a boundary).

So instead of saying: "will the resultant req straddle a boundary",
re-using a path like blk_rq_get_max_sectors() -> blk_chunk_sectors_left(),
we check "is there space within the boundary limit to add this req/bio".
We need to take care of front and back merges, though.
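
Purely as a sketch of that "space left within the boundary" idea for a
back merge - assuming the blk_boundary_sectors_left() rename proposed
later in this thread, and noting a front merge would also need the new
start recomputed - it could look something like:

static bool atomic_back_merge_fits(struct request *rq, unsigned int bio_bytes)
{
	unsigned int boundary_bytes = queue_atomic_write_boundary_bytes(rq->q);
	unsigned int left_sectors;

	if (!boundary_bytes)
		return true;

	/* space remaining from the request start to the next boundary */
	left_sectors = blk_boundary_sectors_left(blk_rq_pos(rq),
						 boundary_bytes >> SECTOR_SHIFT);

	return blk_rq_bytes(rq) + bio_bytes <= left_sectors << SECTOR_SHIFT;
}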

Thanks,
John
Christoph Hellwig June 5, 2024, 8:31 a.m. UTC | #5
On Mon, Jun 03, 2024 at 02:31:04PM +0200, Hannes Reinecke wrote:
> We currently use chunk_sectors for quite some different things, most 
> notably zones boundaries, NIOIB, raid stripes etc.
> So I don't have an issue adding another use-case for it.

So as soon as a device supports atomic/untorn writes you limit all
I/O, including reads, to the boundaries?  I can't see how that would
ever make sense.
Christoph Hellwig June 5, 2024, 8:32 a.m. UTC | #6
On Mon, Jun 03, 2024 at 02:29:26PM +0100, John Garry wrote:
> I think that some of the logic could be re-used. 
> rq_straddles_atomic_write_boundary() is checked in merging of reqs/bios (to 
> see if the resultant req straddles a boundary).
>
> So instead of saying: "will the resultant req straddle a boundary", 
> re-using path like blk_rq_get_max_sectors() -> blk_chunk_sectors_left(), we 
> check "is there space within the boundary limit to add this req/bio". We 
> need to take care of front and back merges, though.

Yes, we've used the trick to pass in the relevant limit in explicitly
to reuse infrastructure in other places, e.g. max_hw_sectors vs
max_zone_append_sectors for adding to a bio while respecting hardware
limits.
John Garry June 5, 2024, 11:21 a.m. UTC | #7
On 05/06/2024 09:32, Christoph Hellwig wrote:
> On Mon, Jun 03, 2024 at 02:29:26PM +0100, John Garry wrote:
>> I think that some of the logic could be re-used.
>> rq_straddles_atomic_write_boundary() is checked in merging of reqs/bios (to
>> see if the resultant req straddles a boundary).
>>
>> So instead of saying: "will the resultant req straddle a boundary",
>> re-using path like blk_rq_get_max_sectors() -> blk_chunk_sectors_left(), we
>> check "is there space within the boundary limit to add this req/bio". We
>> need to take care of front and back merges, though.
> 
> Yes, we've used the trick to pass in the relevant limit in explicitly
> to reuse infrastructure in other places, e.g. max_hw_sectors vs
> max_zone_append_sectors for adding to a bio while respecting hardware
> limits.
> 

I assume that you are talking about something like 
queue_limits_max_zone_append_sectors().

Anyway, below is the prep patch I was considering for this re-use. It's 
just renaming any infrastructure for "chunk_sectors" to generic 
"boundary_sectors".

------>8-------

The purpose of the chunk_sectors limit is to ensure that a mergeable
request fits within the boundary of the chunk_sectors value.

Such a feature will be useful for other request_queue boundary limits, 
so generalize the chunk_sectors merge code.

This idea was proposed by Hannes Reinecke.

Signed-off-by: John Garry <john.g.garry@oracle.com>

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8957e08e020c..6574c8b64ecc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -168,11 +168,12 @@ static inline unsigned get_max_io_size(struct bio *bio,
  	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
  	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
  	unsigned max_sectors = lim->max_sectors, start, end;
+	unsigned int boundary_sectors = lim->chunk_sectors;

-	if (lim->chunk_sectors) {
+	if (boundary_sectors) {
  		max_sectors = min(max_sectors,
-			blk_chunk_sectors_left(bio->bi_iter.bi_sector,
-					       lim->chunk_sectors));
+			blk_boundary_sectors_left(bio->bi_iter.bi_sector,
+					      boundary_sectors));
  	}

  	start = bio->bi_iter.bi_sector & (pbs - 1);
@@ -588,19 +589,19 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
  						  sector_t offset)
  {
  	struct request_queue *q = rq->q;
-	unsigned int max_sectors;
+	unsigned int max_sectors, boundary_sectors = q->limits.chunk_sectors;

  	if (blk_rq_is_passthrough(rq))
  		return q->limits.max_hw_sectors;

  	max_sectors = blk_queue_get_max_sectors(rq);

-	if (!q->limits.chunk_sectors ||
+	if (!boundary_sectors ||
  	    req_op(rq) == REQ_OP_DISCARD ||
  	    req_op(rq) == REQ_OP_SECURE_ERASE)
  		return max_sectors;
  	return min(max_sectors,
-		   blk_chunk_sectors_left(offset, q->limits.chunk_sectors));
+		   blk_boundary_sectors_left(offset, boundary_sectors));
  }

  static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 13037d6a6f62..b648253c2300 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1188,7 +1188,7 @@ static sector_t __max_io_len(struct dm_target *ti, sector_t sector,
  		return len;
  	return min_t(sector_t, len,
  		min(max_sectors ? : queue_max_sectors(ti->table->md->queue),
-		    blk_chunk_sectors_left(target_offset, max_granularity)));
+		    blk_boundary_sectors_left(target_offset, max_granularity)));
  }

  static inline sector_t max_io_len(struct dm_target *ti, sector_t sector)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ac8e0cb2353a..7657698b47f4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -866,14 +866,14 @@ static inline bool bio_straddles_zones(struct bio *bio)
  }

  /*
- * Return how much of the chunk is left to be used for I/O at a given offset.
+ * Return how much within the boundary is left to be used for I/O at a given offset.
   */
-static inline unsigned int blk_chunk_sectors_left(sector_t offset,
-		unsigned int chunk_sectors)
+static inline unsigned int blk_boundary_sectors_left(sector_t offset,
+		unsigned int boundary_sectors)
  {
-	if (unlikely(!is_power_of_2(chunk_sectors)))
-		return chunk_sectors - sector_div(offset, chunk_sectors);
-	return chunk_sectors - (offset & (chunk_sectors - 1));
+	if (unlikely(!is_power_of_2(boundary_sectors)))
+		return boundary_sectors - sector_div(offset, boundary_sectors);
+	return boundary_sectors - (offset & (boundary_sectors - 1));
  }

  /**
Christoph Hellwig June 6, 2024, 5:44 a.m. UTC | #8
On Wed, Jun 05, 2024 at 12:21:51PM +0100, John Garry wrote:
> I assume that you are talking about something like 
> queue_limits_max_zone_append_sectors().

Or rather how that is passed to bio_add_hw_page, yes.

> Anyway, below is the prep patch I was considering for this re-use. It's 
> just renaming any infrastructure for "chunk_sectors" to generic 
> "boundary_sectors".

Looks reasonable.  Note that the comment above blk_boundary_sectors_left
could use a line break.

Patch

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 831f19a32e08..cea8856f798d 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -21,6 +21,59 @@  Description:
 		device is offset from the internal allocation unit's
 		natural alignment.
 
+What:		/sys/block/<disk>/atomic_write_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter specifies the maximum atomic write
+		size reported by the device. This parameter is relevant
+		for merging of writes, where a merged atomic write
+		operation must not exceed this number of bytes.
+		This parameter may be greater than the value in
+		atomic_write_unit_max_bytes as
+		atomic_write_unit_max_bytes will be rounded down to a
+		power-of-two and atomic_write_unit_max_bytes may also be
+		limited by some other queue limits, such as max_segments.
+		This parameter - along with atomic_write_unit_min_bytes
+		and atomic_write_unit_max_bytes - will not be larger than
+		max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_min_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter specifies the smallest block which can
+		be written atomically with an atomic write operation. All
+		atomic write operations must begin at a
+		atomic_write_unit_min boundary and must be multiples of
+		atomic_write_unit_min. This value must be a power-of-two.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] This parameter defines the largest block which can be
+		written atomically with an atomic write operation. This
+		value must be a multiple of atomic_write_unit_min and must
+		be a power-of-two. This value will not be larger than
+		atomic_write_max_bytes.
+
+
+What:		/sys/block/<disk>/atomic_write_boundary_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <himanshu.madhani@oracle.com>
+Description:
+		[RO] A device may need to internally split an atomic write I/O
+		which straddles a given logical block address boundary. This
+		parameter specifies the size in bytes of the atomic boundary if
+		one is reported by the device. This value must be a
+		power-of-two and at least the size as in
+		atomic_write_unit_max_bytes.
+		Any attempt to merge atomic write I/Os must not result in a
+		merged I/O which crosses this boundary (if any).
+
 
 What:		/sys/block/<disk>/diskseq
 Date:		February 2021
diff --git a/block/blk-core.c b/block/blk-core.c
index 82c3ae22d76d..d9f58fe71758 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -174,6 +174,8 @@  static const struct {
 	/* Command duration limit device-side timeout */
 	[BLK_STS_DURATION_LIMIT]	= { -ETIME, "duration limit exceeded" },
 
+	[BLK_STS_INVAL]		= { -EINVAL,	"invalid" },
+
 	/* everything else not covered above: */
 	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
 };
@@ -739,6 +741,18 @@  void submit_bio_noacct_nocheck(struct bio *bio)
 		__submit_bio_noacct(bio);
 }
 
+static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
+						 struct bio *bio)
+{
+	if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q))
+		return BLK_STS_INVAL;
+
+	if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q))
+		return BLK_STS_INVAL;
+
+	return BLK_STS_OK;
+}
+
 /**
  * submit_bio_noacct - re-submit a bio to the block device layer for I/O
  * @bio:  The bio describing the location in memory and on the device.
@@ -797,6 +811,11 @@  void submit_bio_noacct(struct bio *bio)
 	switch (bio_op(bio)) {
 	case REQ_OP_READ:
 	case REQ_OP_WRITE:
+		if (bio->bi_opf & REQ_ATOMIC) {
+			status = blk_validate_atomic_write_op_size(q, bio);
+			if (status != BLK_STS_OK)
+				goto end_io;
+		}
 		break;
 	case REQ_OP_FLUSH:
 		/*
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8957e08e020c..ad07759ca147 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -18,6 +18,46 @@ 
 #include "blk-rq-qos.h"
 #include "blk-throttle.h"
 
+/*
+ * rq_straddles_atomic_write_boundary - check for boundary violation
+ * @rq: request to check
+ * @front: data size to be appended to front
+ * @back: data size to be appended to back
+ *
+ * Determine whether merging a request or bio into another request will result
+ * in a merged request which straddles an atomic write boundary.
+ *
+ * The value @front_adjust is the data which would be appended to the front of
+ * @rq, while the value @back_adjust is the data which would be appended to the
+ * back of @rq. Callers will typically only have either @front_adjust or
+ * @back_adjust as non-zero.
+ *
+ */
+static bool rq_straddles_atomic_write_boundary(struct request *rq,
+					unsigned int front_adjust,
+					unsigned int back_adjust)
+{
+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
+	u64 mask, start_rq_pos, end_rq_pos;
+
+	if (!boundary)
+		return false;
+
+	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
+	end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
+
+	start_rq_pos -= front_adjust;
+	end_rq_pos += back_adjust;
+
+	mask = ~(boundary - 1);
+
+	/* Top bits are different, so crossed a boundary */
+	if ((start_rq_pos & mask) != (end_rq_pos & mask))
+		return true;
+
+	return false;
+}
+
 static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
 {
 	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
@@ -167,7 +207,16 @@  static inline unsigned get_max_io_size(struct bio *bio,
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
-	unsigned max_sectors = lim->max_sectors, start, end;
+	unsigned max_sectors, start, end;
+
+	/*
+	 * We ignore lim->max_sectors for atomic writes simply because
+	 * it may less than the bio size, which we cannot tolerate.
+	 */
+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
 
 	if (lim->chunk_sectors) {
 		max_sectors = min(max_sectors,
@@ -305,6 +354,11 @@  struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	*segs = nsegs;
 	return NULL;
 split:
+	if (bio->bi_opf & REQ_ATOMIC) {
+		bio->bi_status = BLK_STS_INVAL;
+		bio_endio(bio);
+		return ERR_PTR(-EINVAL);
+	}
 	/*
 	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
 	 * with EAGAIN if splitting is required and return an error pointer.
@@ -646,6 +700,13 @@  int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, bio->bi_iter.bi_size)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -665,6 +726,13 @@  static int ll_front_merge_fn(struct request *req, struct bio *bio,
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				bio->bi_iter.bi_size, 0)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -701,6 +769,13 @@  static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
 		return 0;
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, blk_rq_bytes(next))) {
+			return 0;
+		}
+	}
+
 	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
 	if (total_phys_segments > blk_rq_get_max_segments(req))
 		return 0;
@@ -798,6 +873,18 @@  static enum elv_merge blk_try_req_merge(struct request *req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+					      struct bio *bio)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+					   struct request *next)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.
@@ -821,6 +908,9 @@  static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!blk_atomic_write_mergeable_rqs(req, next))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -952,6 +1042,9 @@  bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+		return false;
+
 	return true;
 }
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 996f247fc98e..25d3ca2e2f0d 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -97,6 +97,41 @@  static int blk_validate_zoned_limits(struct queue_limits *lim)
 	return 0;
 }
 
+/*
+ * Returns max guaranteed bytes which we can fit in a bio.
+ *
+ * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
+ * from first and last segments.
+ */
+static
+unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *limits)
+{
+	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
+	unsigned int length;
+
+	length = min(max_segments, 2) * limits->logical_block_size;
+	if (max_segments > 2)
+		length += (max_segments - 2) * PAGE_SIZE;
+
+	return length;
+}
+
+static void blk_atomic_writes_update_limits(struct queue_limits *limits)
+{
+	unsigned int unit_limit = min(limits->max_hw_sectors << SECTOR_SHIFT,
+					blk_queue_max_guaranteed_bio(limits));
+
+	unit_limit = rounddown_pow_of_two(unit_limit);
+
+	limits->atomic_write_max_sectors =
+		min(limits->atomic_write_hw_max >> SECTOR_SHIFT,
+			limits->max_hw_sectors);
+	limits->atomic_write_unit_min =
+		min(limits->atomic_write_hw_unit_min, unit_limit);
+	limits->atomic_write_unit_max =
+		min(limits->atomic_write_hw_unit_max, unit_limit);
+}
+
 /*
  * Check that the limits in lim are valid, initialize defaults for unset
  * values, and cap values based on others where needed.
@@ -230,6 +265,23 @@  static int blk_validate_limits(struct queue_limits *lim)
 		lim->misaligned = 0;
 	}
 
+	/*
+	 * The atomic write boundary size just needs to be a multiple of
+	 * unit_max (and not necessarily a power-of-2), so this following check
+	 * could be relaxed in future.
+	 * Furthermore, if needed, unit_max could be reduced so that the
+	 * boundary size was compliant (with a !power-of-2 boundary).
+	 */
+	if (lim->atomic_write_hw_boundary &&
+	    !is_power_of_2(lim->atomic_write_hw_boundary)) {
+
+		lim->atomic_write_hw_max = 0;
+		lim->atomic_write_hw_boundary = 0;
+		lim->atomic_write_hw_unit_min = 0;
+		lim->atomic_write_hw_unit_max = 0;
+	}
+	blk_atomic_writes_update_limits(lim);
+
 	return blk_validate_zoned_limits(lim);
 }
 
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f0f9314ab65c..42fbbaa52ccf 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -118,6 +118,30 @@  static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);
@@ -495,6 +519,11 @@  QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -618,6 +647,10 @@  static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_atomic_write_max_bytes_entry.attr,
+	&queue_atomic_write_boundary_entry.attr,
+	&queue_atomic_write_unit_min_entry.attr,
+	&queue_atomic_write_unit_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/blk.h b/block/blk.h
index 75c1683fc320..b2fa42657f62 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -193,6 +193,9 @@  static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (rq->cmd_flags & REQ_ATOMIC)
+		return q->limits.atomic_write_max_sectors;
+
 	return q->limits.max_sectors;
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 781c4500491b..632edd71f8c6 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -162,6 +162,11 @@  typedef u16 blk_short_t;
  */
 #define BLK_STS_DURATION_LIMIT	((__force blk_status_t)17)
 
+/*
+ * Invalid size or alignment.
+ */
+#define BLK_STS_INVAL	((__force blk_status_t)19)
+
 /**
  * blk_path_error - returns true if error may be path related
  * @error: status the request was completed with
@@ -370,7 +375,7 @@  enum req_flag_bits {
 	__REQ_SWAP,		/* swap I/O */
 	__REQ_DRV,		/* for driver use */
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
-
+	__REQ_ATOMIC,		/* for atomic write operations */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -402,6 +407,7 @@  enum req_flag_bits {
 #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
 #define REQ_DRV		(__force blk_opf_t)(1ULL << __REQ_DRV)
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
+#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ac8e0cb2353a..565acbd3adcb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,15 @@  struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	/* atomic write limits */
+	unsigned int		atomic_write_hw_max;
+	unsigned int		atomic_write_max_sectors;
+	unsigned int		atomic_write_hw_boundary;
+	unsigned int		atomic_write_hw_unit_min;
+	unsigned int		atomic_write_unit_min;
+	unsigned int		atomic_write_hw_unit_max;
+	unsigned int		atomic_write_unit_max;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -1354,6 +1363,30 @@  static inline int queue_dma_alignment(const struct request_queue *q)
 	return q ? q->limits.dma_alignment : 511;
 }
 
+static inline unsigned int
+queue_atomic_write_unit_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_max;
+}
+
+static inline unsigned int
+queue_atomic_write_unit_min_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_min;
+}
+
+static inline unsigned int
+queue_atomic_write_boundary_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_hw_boundary;
+}
+
+static inline unsigned int
+queue_atomic_write_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
+}
+
 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
 {
 	return queue_dma_alignment(bdev_get_queue(bdev));
@@ -1595,6 +1628,27 @@  struct io_comp_batch {
 	void (*complete)(struct io_comp_batch *);
 };
 
+static inline bool bdev_can_atomic_write(struct block_device *bdev)
+{
+	struct request_queue *bd_queue = bdev->bd_queue;
+	struct queue_limits *limits = &bd_queue->limits;
+
+	if (!limits->atomic_write_unit_min)
+		return false;
+
+	if (bdev_is_partition(bdev)) {
+		sector_t bd_start_sect = bdev->bd_start_sect;
+		unsigned int alignment =
+			max(limits->atomic_write_unit_min,
+			    limits->atomic_write_hw_boundary);
+
+		if (!IS_ALIGNED(bd_start_sect, alignment >> SECTOR_SHIFT))
+			return false;
+	}
+
+	return true;
+}
+
 #define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
 
 #endif /* _LINUX_BLKDEV_H */