From patchwork Tue Aug 6 12:06:45 2024
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 13754717
From: Daniel Wagner
Date: Tue, 06 Aug 2024 14:06:45 +0200
Subject: [PATCH v3 13/15] virtio: blk/scsi: use block layer helpers to
 calculate num of queues
X-Mailing-List: linux-block@vger.kernel.org
Message-Id: <20240806-isolcpus-io-queues-v3-13-da0eecfeaf8b@suse.de>
References: <20240806-isolcpus-io-queues-v3-0-da0eecfeaf8b@suse.de>
In-Reply-To: <20240806-isolcpus-io-queues-v3-0-da0eecfeaf8b@suse.de>
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
 Christoph Hellwig, "Martin K. Petersen", John Garry, "Michael S. Tsirkin",
 Jason Wang, Kashyap Desai, Sumit Saxena, Shivasharan S,
 Chandrakanth patil, Sathya Prakash Veerichetty, Suganath Prabu Subramani,
 Nilesh Javali, GR-QLogic-Storage-Upstream@marvell.com, Jonathan Corbet
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke, Sridhar Balaraman,
 "brookxu.cn", Ming Lei, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-scsi@vger.kernel.org, virtualization@lists.linux.dev,
 megaraidlinux.pdl@broadcom.com, mpi3mr-linuxdrv.pdl@broadcom.com,
 MPT-FusionLinux.pdl@broadcom.com, storagedev@microchip.com,
 linux-doc@vger.kernel.org, Daniel Wagner
X-Mailer: b4 0.14.0

Multiqueue devices should only allocate queues for the housekeeping CPUs
when isolcpus=io_queue is set. This prevents the isolated CPUs from being
disturbed by OS workload.

Use the block layer helpers which calculate the correct number of queues
to use when isolcpus is enabled.

Signed-off-by: Daniel Wagner
---
 drivers/block/virtio_blk.c | 5 ++---
 drivers/scsi/virtio_scsi.c | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 8f68037da00b..3584a1e00791 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -982,9 +982,8 @@ static int init_vq(struct virtio_blk *vblk)
 		return -EINVAL;
 	}
 
-	num_vqs = min_t(unsigned int,
-			min_not_zero(num_request_queues, nr_cpu_ids),
-			num_vqs);
+	num_vqs = blk_mq_num_possible_queues(
+			min_not_zero(num_request_queues, num_vqs));
 
 	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
 
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 8013a082598b..1c1feec59781 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -921,6 +921,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	/* We need to know how many queues before we allocate. */
 	num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
 	num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);
+	num_queues = blk_mq_num_possible_queues(num_queues);
 
 	num_targets = virtscsi_config_get(vdev, max_target) + 1;