
[RFC,4/4] nvme: enable logical block size > PAGE_SIZE

Message ID 20230621083823.1724337-5-p.raghav@samsung.com (mailing list archive)
State New, archived
Series minimum folio order support in filemap

Commit Message

Pankaj Raghav June 21, 2023, 8:38 a.m. UTC
Don't set the capacity to zero when the logical block size is larger than
PAGE_SIZE, as a block device using the iomap aops supports allocating its
block cache with a minimum folio order.

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 drivers/nvme/host/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
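
For context, a rough sketch (not part of this patch) of how the block
device's page-cache mapping could be constrained to a minimum folio order
once the iomap aops are in place. mapping_set_folio_orders() stands in for
the kind of helper this series introduces; the exact name and signature are
assumed here and may differ:

#include <linux/blkdev.h>
#include <linux/pagemap.h>

/*
 * Sketch only: make the block device's page cache allocate folios no
 * smaller than the logical block size, so an LBA size > PAGE_SIZE can
 * still go through the page cache.
 */
static void bdev_require_min_folio_order(struct block_device *bdev,
					 unsigned int lba_shift)
{
	struct address_space *mapping = bdev->bd_inode->i_mapping;
	unsigned int min_order = 0;

	if (lba_shift > PAGE_SHIFT)
		min_order = lba_shift - PAGE_SHIFT;

	/*
	 * Assumed helper from this series: limit folio allocation to
	 * orders in [min_order, MAX_PAGECACHE_ORDER].
	 */
	mapping_set_folio_orders(mapping, min_order, MAX_PAGECACHE_ORDER);
}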

Comments

Hannes Reinecke June 21, 2023, 9:07 a.m. UTC | #1
On 6/21/23 10:38, Pankaj Raghav wrote:
> Don't set the capacity to zero when the logical block size is larger than
> PAGE_SIZE, as a block device using the iomap aops supports allocating its
> block cache with a minimum folio order.
> 
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>   drivers/nvme/host/core.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 98bfb3d9c22a..36cf610f938c 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1886,7 +1886,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>   	 * The block layer can't support LBA sizes larger than the page size
>   	 * yet, so catch this early and don't allow block I/O.
>   	 */
> -	if (ns->lba_shift > PAGE_SHIFT) {
> +	if ((ns->lba_shift > PAGE_SHIFT) && IS_ENABLED(CONFIG_BUFFER_HEAD)) {
>   		capacity = 0;
>   		bs = (1 << 9);
>   	}
Again, I can't see why this would be contingent on CONFIG_BUFFER_HEAD.
I'll be rebasing my patchset on your mapping_set_orders() patches and 
repost.

Cheers,

Hannes
Pankaj Raghav June 21, 2023, 10:47 a.m. UTC | #2
On 2023-06-21 11:07, Hannes Reinecke wrote:
> On 6/21/23 10:38, Pankaj Raghav wrote:
>> Don't set the capacity to zero when the logical block size is larger than
>> PAGE_SIZE, as a block device using the iomap aops supports allocating its
>> block cache with a minimum folio order.
>>
>> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
>> ---
>>   drivers/nvme/host/core.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 98bfb3d9c22a..36cf610f938c 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -1886,7 +1886,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>>        * The block layer can't support LBA sizes larger than the page size
>>        * yet, so catch this early and don't allow block I/O.
>>        */
>> -    if (ns->lba_shift > PAGE_SHIFT) {
>> +    if ((ns->lba_shift > PAGE_SHIFT) && IS_ENABLED(CONFIG_BUFFER_HEAD)) {
>>           capacity = 0;
>>           bs = (1 << 9);
>>       }
> Again, I can't see why this would be contingent on CONFIG_BUFFER_HEAD.
> I'll be rebasing my patchset on your mapping_set_orders() patches and repost.
> 

As I explained in the previous email, I hit a BUG from buffer.c when I don't make it conditional.
The hope is that once we move to iomap-based aops for the block cache, we can drop the
`if (ns->lba_shift > PAGE_SHIFT)` block entirely, with no dependence on CONFIG_BUFFER_HEAD.
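
To illustrate, a rough sketch of what that end state could look like (my
illustration, not code from the series; the real nvme_update_disk_info()
does more than this):

static void nvme_update_disk_info_sketch(struct gendisk *disk,
					 struct nvme_ns *ns,
					 struct nvme_id_ns *id)
{
	sector_t capacity = nvme_lba_to_sect(ns, le64_to_cpu(id->nsze));
	unsigned int bs = 1 << ns->lba_shift;	/* may exceed PAGE_SIZE */

	/* No capacity = 0 fallback, no CONFIG_BUFFER_HEAD check. */
	blk_queue_logical_block_size(disk->queue, bs);
	set_capacity_and_notify(disk, capacity);
	/* ... physical block size, queue limits, etc. unchanged ... */
}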

> Cheers,
> 
> Hannes
>

Patch

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 98bfb3d9c22a..36cf610f938c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1886,7 +1886,7 @@  static void nvme_update_disk_info(struct gendisk *disk,
 	 * The block layer can't support LBA sizes larger than the page size
 	 * yet, so catch this early and don't allow block I/O.
 	 */
-	if (ns->lba_shift > PAGE_SHIFT) {
+	if ((ns->lba_shift > PAGE_SHIFT) && IS_ENABLED(CONFIG_BUFFER_HEAD)) {
 		capacity = 0;
 		bs = (1 << 9);
 	}