Message ID | 20240219130109.341523-11-john.g.garry@oracle.com
---|---
State | Superseded
Series | block atomic writes
On Mon, Feb 19, 2024 at 01:01:08PM +0000, John Garry wrote:
> From: Alan Adamson <alan.adamson@oracle.com>
>
> Add support to set block layer request_queue atomic write limits. The
> limits will be derived from either the namespace or controller atomic
> parameters.
>
> NVMe atomic-related parameters are grouped into "normal" and "power-fail"
> (or PF) classes of parameter. For atomic write support, only PF parameters
> are of interest. The "normal" parameters are concerned with racing reads
> and writes (which also applies to PF). See NVM Command Set Specification
> Revision 1.0d section 2.1.4 for reference.
>
> Whether to use per-namespace or controller atomic parameters is decided by
> NSFEAT bit 1 - see Figure 97: Identify - Identify Namespace Data Structure,
> NVM Command Set.
>
> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> are provided for a write which straddles this per-LBA-space boundary. The
> block layer merging policy is such that no merges may occur in which the
> resultant request would straddle such a boundary.
>
> Unlike SCSI, NVMe specifies no granularity or alignment rules. In addition,
> again unlike SCSI, there is no dedicated atomic write command - a write
> which adheres to the atomic size limit and boundary is implicitly atomic.
>
> If NSFEAT bit 1 is set, the following parameters are of interest:
> - NAWUPF (Namespace Atomic Write Unit Power Fail)
> - NABSPF (Namespace Atomic Boundary Size Power Fail)
> - NABO (Namespace Atomic Boundary Offset)
>
> and we set request_queue limits as follows:
> - atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
> - atomic_write_max_bytes = NAWUPF
> - atomic_write_boundary = NABSPF
>
> In the unlikely scenario that NABO is non-zero, atomic writes will not
> be supported at all, as dealing with this adds extra complexity. This
> policy may change in future.
>
> In all cases, atomic_write_unit_min is set to the logical block size.
>
> If NSFEAT bit 1 is unset, the following parameter is of interest:
> - AWUPF (Atomic Write Unit Power Fail)
>
> and we set request_queue limits as follows:
> - atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
> - atomic_write_max_bytes = AWUPF
> - atomic_write_boundary = 0
>
> The block layer requires that the atomic_write_boundary value is a
> power-of-2. However, it is really only required that atomic_write_boundary
> be a multiple of atomic_write_unit_max. As such, if NABSPF were not a
> power-of-2, atomic_write_unit_max could be reduced such that it divided
> NABSPF evenly. However, this complexity will not yet be supported.
>
> A helper function, nvme_valid_atomic_write(), is also added for the
> submission path to verify that a request submitted to the driver will
> actually be executed atomically.

Maybe patch 11 should be folded into this one. No biggie, the series as
a whole looks good.

Reviewed-by: Keith Busch <kbusch@kernel.org>
On Mon, Feb 19, 2024 at 12:21:14PM -0700, Keith Busch wrote:
> Maybe patch 11 should be folded into this one. No biggie, the series as
> a whole looks good.

I did suggest just that last round already..
On 20/02/2024 06:55, Christoph Hellwig wrote:
> On Mon, Feb 19, 2024 at 12:21:14PM -0700, Keith Busch wrote:
>> Maybe patch 11 should be folded into this one. No biggie, the series as
>> a whole looks good.
>
> I did suggest just that last round already..

I created the helper function and folded it in, but not the callsite.
Not folding in the callsite change did not make sense to me, but that
was my understanding of the request. Anyway, I can fold everything into
this patch.

Thanks,
John
Thanks for writing a good commit message!

> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> are provided for a write which straddles this per-LBA-space boundary. The
> block layer merging policy is such that no merges may occur in which the
> resultant request would straddle such a boundary.
>
> Unlike SCSI, NVMe specifies no granularity or alignment rules.

Well, the boundary really is sort of a granularity and alignment,
isn't it?
On 20/02/2024 08:31, Christoph Hellwig wrote:
> Thanks for writing a good commit message!
>
>> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
>> are provided for a write which straddles this per-LBA-space boundary. The
>> block layer merging policy is such that no merges may occur in which the
>> resultant request would straddle such a boundary.
>>
>> Unlike SCSI, NVMe specifies no granularity or alignment rules.
>
> Well, the boundary really is sort of a granularity and alignment,
> isn't it?

NVMe does indeed have the boundary rule, but it is not really the same
as SCSI granularity and alignment. Anyway, I can reword that statement
to be clearer.

Thanks,
John
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0a96362912ce..c5bc663c8582 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -934,6 +934,31 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
+__maybe_unused
+static bool nvme_valid_atomic_write(struct request *req)
+{
+	struct request_queue *q = req->q;
+	u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
+
+	if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
+		return false;
+
+	if (boundary_bytes) {
+		u64 mask = boundary_bytes - 1, imask = ~mask;
+		u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
+		u64 end = start + blk_rq_bytes(req) - 1;
+
+		/* If greater then must be crossing a boundary */
+		if (blk_rq_bytes(req) > boundary_bytes)
+			return false;
+
+		if ((start & imask) != (end & imask))
+			return false;
+	}
+
+	return true;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 		struct request *req, struct nvme_command *cmnd,
 		enum nvme_opcode op)
@@ -1960,6 +1985,45 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 	blk_queue_write_cache(q, vwc, vwc);
 }
 
+static void nvme_update_atomic_write_disk_info(struct nvme_ctrl *ctrl,
+		struct gendisk *disk, struct nvme_id_ns *id, u32 bs,
+		u32 atomic_bs)
+{
+	unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
+	struct request_queue *q = disk->queue;
+
+	if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+		if (le16_to_cpu(id->nabspf))
+			boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+
+		/*
+		 * The boundary size just needs to be a multiple of unit_max
+		 * (and not necessarily a power-of-2), so this could be relaxed
+		 * in the block layer in future.
+		 * Furthermore, if needed, unit_max could be reduced so that the
+		 * boundary size was compliant.
+		 */
+		if (!boundary || is_power_of_2(boundary)) {
+			max_bytes = atomic_bs;
+			unit_min = bs;
+			unit_max = rounddown_pow_of_two(atomic_bs);
+		} else {
+			dev_notice(ctrl->device, "Unsupported atomic write boundary (%d)\n",
+					boundary);
+			boundary = 0;
+		}
+	} else if (ctrl->subsys->awupf) {
+		max_bytes = atomic_bs;
+		unit_min = bs;
+		unit_max = rounddown_pow_of_two(atomic_bs);
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_bytes);
+	blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
+	blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
+	blk_queue_atomic_write_boundary_bytes(q, boundary);
+}
+
 static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 		struct nvme_ns_head *head, struct nvme_id_ns *id)
 {
@@ -1990,6 +2054,9 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 			atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
 		else
 			atomic_bs = (1 + ctrl->subsys->awupf) * bs;
+
+		nvme_update_atomic_write_disk_info(ctrl, disk, id, bs,
+				atomic_bs);
 	}
 
 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {