Message ID | 20240823103811.2421-8-anuj20.g@samsung.com (mailing list archive) |
---|---|
State | Not Applicable |
Series | [v3,01/10] block: define set of integrity flags to be inherited by cloned bip |
This patch came in twice, once with a "block" and once with a "block,nvme" prefix. One should be fine, and I think just "block" is the right one.

How do we communicate what flags can be combined to the upper layer and userspace given the SCSI limitations here?

> --- a/include/linux/bio-integrity.h
> +++ b/include/linux/bio-integrity.h
> @@ -11,6 +11,9 @@ enum bip_flags {
>  	BIP_DISK_NOCHECK	= 1 << 3, /* disable disk integrity checking */
>  	BIP_IP_CHECKSUM		= 1 << 4, /* IP checksum */
>  	BIP_COPY_USER		= 1 << 5, /* Kernel bounce buffer in use */
> +	BIP_CHECK_GUARD		= 1 << 6,
> +	BIP_CHECK_REFTAG	= 1 << 7,
> +	BIP_CHECK_APPTAG	= 1 << 8,

Maybe describe the flags here like the other ones?
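For illustration, one way the new entries could be documented, matching the comment style of the existing flags (the comment wording for the three new flags is a suggestion, not taken from the patch):

```c
enum bip_flags {
	BIP_BLOCK_INTEGRITY	= 1 << 0, /* block layer owns integrity data */
	BIP_MAPPED_INTEGRITY	= 1 << 1, /* ref tag has been remapped */
	BIP_CTRL_NOCHECK	= 1 << 2, /* disable HBA integrity checking */
	BIP_DISK_NOCHECK	= 1 << 3, /* disable disk integrity checking */
	BIP_IP_CHECKSUM		= 1 << 4, /* IP checksum */
	BIP_COPY_USER		= 1 << 5, /* Kernel bounce buffer in use */
	/* suggested wording below, not from the patch: */
	BIP_CHECK_GUARD		= 1 << 6, /* guard tag to be checked by device */
	BIP_CHECK_REFTAG	= 1 << 7, /* ref tag to be checked by device */
	BIP_CHECK_APPTAG	= 1 << 8, /* app tag to be checked by device */
};
```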
On 8/24/2024 2:05 PM, Christoph Hellwig wrote:
> How do we communicate what flags can be combined to the upper layer
> and userspace given the SCSI limitations here?

Will adding a new read-only sysfs attribute (under the group [*]) that publishes all valid combinations be fine?

With Guard/Reftag/Apptag, we get 6 combinations. For NVMe, all can be valid. For SCSI, at most 4 can be valid. And we factor the PI type in while listing what is valid. For example: 010 or 001 is not valid for SCSI and should not be shown by this.

[*]
const struct attribute_group blk_integrity_attr_group = {
	.name = "integrity",
	.attrs = integrity_attrs,
};
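As a rough sketch of the proposal (the attribute name, output format, and hard-coded combination list are invented here for illustration; a real version would derive them from the profile's csum_type, flags, and PI type):

```c
/*
 * Hypothetical sketch of the proposed attribute, in the style of the
 * existing entries in block/blk-integrity.c. The name and output
 * format are invented; dev_to_bi() is the file-local helper the
 * existing integrity attributes use.
 */
static ssize_t check_combinations_show(struct device *dev,
				       struct device_attribute *attr,
				       char *page)
{
	struct blk_integrity *bi = dev_to_bi(dev);

	if (bi->csum_type == BLK_INTEGRITY_CSUM_NONE)
		return sysfs_emit(page, "none\n");

	/* one guard/reftag/apptag bitmask per line; NVMe-style example */
	return sysfs_emit(page, "100\n110\n101\n111\n");
}
static DEVICE_ATTR_RO(check_combinations);
```

The resulting &dev_attr_check_combinations.attr entry would then be added to the existing integrity_attrs[] array referenced by blk_integrity_attr_group.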
Kanchan,

> With Guard/Reftag/Apptag, we get 6 combinations. For NVMe, all can be
> valid. For SCSI, at most 4 can be valid. And we factor the PI type in
> while listing what is valid. For example: 010 or 001 is not valid for
> SCSI and should not be shown by this.

I thought we had tentatively agreed to let the block layer integrity flags only describe what the controller should do? And then let sd.c decide what to do about RDPROTECT/WRPROTECT (since host-to-target is a different protection envelope anyway). That is kind of how it works already.
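For context, sd.c already derives the RDPROTECT/WRPROTECT value from BIP flags. The snippet below is condensed from sd_setup_protect_cmnd() (reference-tag handling omitted for brevity) and shows the decision point the new BIP_CHECK_* flags would feed into:

```c
/* Condensed from sd.c:sd_setup_protect_cmnd(); ref-tag handling omitted. */
static unsigned char sd_setup_protect_cmnd(struct scsi_cmnd *scmd,
					   unsigned int dix, unsigned int dif)
{
	struct bio *bio = scsi_cmd_to_rq(scmd)->bio;
	unsigned int protect = 0;

	if (dix) {				/* DIX Type 0, 1, 2, 3 */
		if (bio_integrity_flagged(bio, BIP_IP_CHECKSUM))
			scmd->prot_flags |= SCSI_PROT_IP_CHECKSUM;

		if (!bio_integrity_flagged(bio, BIP_CTRL_NOCHECK))
			scmd->prot_flags |= SCSI_PROT_GUARD_CHECK;
	}

	if (dif) {				/* DIF Type 1, 2, 3 */
		scmd->prot_flags |= SCSI_PROT_TRANSFER_PI;

		if (bio_integrity_flagged(bio, BIP_DISK_NOCHECK))
			protect = 3 << 5;	/* disable target PI checking */
		else
			protect = 1 << 5;	/* enable target PI checking */
	}

	return protect;
}
```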
On Wed, Aug 28, 2024 at 07:12:22PM +0530, Kanchan Joshi wrote:
> On 8/24/2024 2:05 PM, Christoph Hellwig wrote:
> > How do we communicate what flags can be combined to the upper layer
> > and userspace given the SCSI limitations here?
>
> Will adding a new read-only sysfs attribute (under the group [*]) that
> publishes all valid combinations be fine?

That's not a good application interface. Finding the sysfs file is already a pain for direct users of the block device, and almost impossible when going through a file system.
On Wed, Aug 28, 2024 at 11:16:53PM -0400, Martin K. Petersen wrote:
> I thought we had tentatively agreed to let the block layer integrity
> flags only describe what the controller should do? And then let sd.c
> decide what to do about RDPROTECT/WRPROTECT (since host-to-target is a
> different protection envelope anyway). That is kind of how it works
> already.

Yes, that should be easier to manage.
On Thu, Aug 29, 2024 at 8:47 AM Martin K. Petersen
<martin.petersen@oracle.com> wrote:
>
> Kanchan,
>
> > With Guard/Reftag/Apptag, we get 6 combinations. For NVMe, all can be
> > valid. For SCSI, at most 4 can be valid. And we factor the PI type in
> > while listing what is valid. For example: 010 or 001 is not valid for
> > SCSI and should not be shown by this.
>
> I thought we had tentatively agreed to let the block layer integrity
> flags only describe what the controller should do? And then let sd.c
> decide what to do about RDPROTECT/WRPROTECT (since host-to-target is a
> different protection envelope anyway). That is kind of how it works
> already.

Do you see this patch (and this set of flags) as fine? If not, which specific flags do you suggest should be introduced?
Martin, Christoph,

On 29/08/24 06:59PM, Anuj gupta wrote:
> On Thu, Aug 29, 2024 at 8:47 AM Martin K. Petersen
> <martin.petersen@oracle.com> wrote:
> >
> > Kanchan,
> >
> > > With Guard/Reftag/Apptag, we get 6 combinations. For NVMe, all can
> > > be valid. For SCSI, at most 4 can be valid. And we factor the PI
> > > type in while listing what is valid. For example: 010 or 001 is
> > > not valid for SCSI and should not be shown by this.
> >
> > I thought we had tentatively agreed to let the block layer integrity
> > flags only describe what the controller should do? And then let sd.c
> > decide what to do about RDPROTECT/WRPROTECT (since host-to-target is
> > a different protection envelope anyway). That is kind of how it works
> > already.
>
> Do you see this patch (and this set of flags) as fine? If not, which
> specific flags do you suggest should be introduced?

While other things are sorted for the next iteration, it's not fully clear what we are missing for this part. Can you comment on the above?
Anuj,

> Do you see this patch (and this set of flags) as fine? If not, which
> specific flags do you suggest should be introduced?

The three exposed BIP flags are fine. We'll deal with the target flag peculiarities in the SCSI disk driver.
```diff
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index aaf67eb427ab..7fbf8c307a36 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -444,6 +444,11 @@ bool bio_integrity_prep(struct bio *bio)
 	if (bi->csum_type == BLK_INTEGRITY_CSUM_IP)
 		bip->bip_flags |= BIP_IP_CHECKSUM;
 
+	/* describe what tags to check in payload */
+	if (bi->csum_type)
+		bip->bip_flags |= BIP_CHECK_GUARD;
+	if (bi->flags & BLK_INTEGRITY_REF_TAG)
+		bip->bip_flags |= BIP_CHECK_REFTAG;
 	if (bio_integrity_add_page(bio, virt_to_page(buf), len,
 			offset_in_page(buf)) < len) {
 		printk(KERN_ERR "could not attach integrity payload\n");
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 33fa01c599ad..d4c366df8f12 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1002,19 +1002,13 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 				return BLK_STS_NOTSUPP;
 			control |= NVME_RW_PRINFO_PRACT;
 		}
-
-		switch (ns->head->pi_type) {
-		case NVME_NS_DPS_PI_TYPE3:
+		if (bio_integrity_flagged(req->bio, BIP_CHECK_GUARD))
 			control |= NVME_RW_PRINFO_PRCHK_GUARD;
-			break;
-		case NVME_NS_DPS_PI_TYPE1:
-		case NVME_NS_DPS_PI_TYPE2:
-			control |= NVME_RW_PRINFO_PRCHK_GUARD |
-					NVME_RW_PRINFO_PRCHK_REF;
+		if (bio_integrity_flagged(req->bio, BIP_CHECK_REFTAG)) {
+			control |= NVME_RW_PRINFO_PRCHK_REF;
 			if (op == nvme_cmd_zone_append)
 				control |= NVME_RW_APPEND_PIREMAP;
 			nvme_set_ref_tag(ns, cmnd, req);
-			break;
 		}
 	}
 
diff --git a/include/linux/bio-integrity.h b/include/linux/bio-integrity.h
index a1a9031a5985..c7c0121689e1 100644
--- a/include/linux/bio-integrity.h
+++ b/include/linux/bio-integrity.h
@@ -11,6 +11,9 @@ enum bip_flags {
 	BIP_DISK_NOCHECK	= 1 << 3, /* disable disk integrity checking */
 	BIP_IP_CHECKSUM		= 1 << 4, /* IP checksum */
 	BIP_COPY_USER		= 1 << 5, /* Kernel bounce buffer in use */
+	BIP_CHECK_GUARD		= 1 << 6,
+	BIP_CHECK_REFTAG	= 1 << 7,
+	BIP_CHECK_APPTAG	= 1 << 8,
 };
 
 struct bio_integrity_payload {
@@ -40,7 +43,8 @@ struct uio_meta {
 };
 
 #define BIP_CLONE_FLAGS (BIP_MAPPED_INTEGRITY | BIP_CTRL_NOCHECK | \
-			 BIP_DISK_NOCHECK | BIP_IP_CHECKSUM)
+			 BIP_DISK_NOCHECK | BIP_IP_CHECKSUM | \
+			 BIP_CHECK_GUARD | BIP_CHECK_REFTAG | BIP_CHECK_APPTAG)
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
```
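The quoted hunks only wire up the guard and reference tag checks; gating the NVMe application-tag check on BIP_CHECK_APPTAG would presumably follow the same pattern in nvme_setup_rw(). A sketch of that shape (not part of this patch; a real implementation would also have to load the expected tag and mask into the command):

```c
/* Sketch, not from this patch: same pattern for the application tag. */
if (bio_integrity_flagged(req->bio, BIP_CHECK_APPTAG))
	control |= NVME_RW_PRINFO_PRCHK_APP;
```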