From patchwork Wed Jun 28 12:57:28 2017
X-Patchwork-Submitter: Dongli Zhang
X-Patchwork-Id: 9814227
From: Dongli Zhang <dongli.zhang@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: jgross@suse.com, boris.ostrovsky@oracle.com, roger.pau@citrix.com
Date: Wed, 28 Jun 2017 20:57:28 +0800
Message-Id: <1498654648-9970-1-git-send-email-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH v2 1/1] xen/blkfront: always allocate grants first from per-queue persistent grants

This patch partially reverts 3df0e50 ("xen/blkfront: pseudo support for
multi hardware queues/rings"). The xen-blkfront queue/ring can hang due
to grant allocation failure when gnttab_free_head is almost empty while
many persistent grants are reserved for this queue/ring. As persistent
grant management has been per-queue since 73716df ("xen/blkfront: make
persistent grants pool per-queue"), we should always allocate from the
per-queue persistent grants first.

Signed-off-by: Dongli Zhang
Acked-by: Roger Pau Monné
---
Changed since v1:
  * use "max_grefs - rinfo->persistent_gnts_c" as callback argument
---
 drivers/block/xen-blkfront.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3945963..4544a1c 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -713,6 +713,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	 * existing persistent grants, or if we have to get new grants,
 	 * as there are not sufficiently many free.
 	 */
+	bool new_persistent_gnts = false;
 	struct scatterlist *sg;
 	int num_sg, max_grefs, num_grant;
 
@@ -724,19 +725,21 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	 */
 	max_grefs += INDIRECT_GREFS(max_grefs);
 
-	/*
-	 * We have to reserve 'max_grefs' grants because persistent
-	 * grants are shared by all rings.
-	 */
-	if (max_grefs > 0)
-		if (gnttab_alloc_grant_references(max_grefs, &setup.gref_head) < 0) {
+	/* Check if we have enough persistent grants to allocate a request */
+	if (rinfo->persistent_gnts_c < max_grefs) {
+		new_persistent_gnts = true;
+
+		if (gnttab_alloc_grant_references(
+		    max_grefs - rinfo->persistent_gnts_c,
+		    &setup.gref_head) < 0) {
 			gnttab_request_free_callback(
 				&rinfo->callback,
 				blkif_restart_queue_callback,
 				rinfo,
-				max_grefs);
+				max_grefs - rinfo->persistent_gnts_c);
 			return 1;
 		}
+	}
 
 	/* Fill out a communications ring structure. */
 	id = blkif_ring_get_request(rinfo, req, &ring_req);
@@ -837,7 +840,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	if (unlikely(require_extra_req))
 		rinfo->shadow[extra_id].req = *extra_ring_req;
 
-	if (max_grefs > 0)
+	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
 
 	return 0;
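
For readers skimming the logic without the surrounding driver code, here is a
minimal, illustrative-only sketch in plain userspace C of the decision the new
code makes in blkif_queue_rw_req(): reserve grant references only for the
shortfall beyond the per-queue persistent pool, and remember (via
new_persistent_gnts) whether anything was reserved so it can be freed later.
All names in the sketch (ring_state, reserve_grants, free_head_avail) are
hypothetical stand-ins, not the real blkfront/gnttab API.

/*
 * Illustrative-only sketch, not the in-kernel API: every name below
 * (ring_state, reserve_grants, free_head_avail) is a hypothetical
 * stand-in for the real blkfront/gnttab symbols.
 */
#include <stdbool.h>
#include <stdio.h>

struct ring_state {
	int persistent_gnts_c;   /* grants already cached by this queue/ring */
	int free_head_avail;     /* grants left in the shared free pool */
};

/* Return true if the request can proceed, false if it must wait. */
static bool reserve_grants(struct ring_state *ring, int max_grefs,
			   bool *new_persistent_gnts)
{
	int shortfall;

	*new_persistent_gnts = false;

	/* The per-queue persistent grants already cover the request. */
	if (ring->persistent_gnts_c >= max_grefs)
		return true;

	/* Only the shortfall has to come from the shared free pool. */
	shortfall = max_grefs - ring->persistent_gnts_c;
	if (ring->free_head_avail < shortfall)
		return false;   /* the real code registers a restart callback here */

	ring->free_head_avail -= shortfall;
	*new_persistent_gnts = true;
	return true;
}

int main(void)
{
	/*
	 * 40 grants needed, 32 cached per-queue, only 10 free system-wide:
	 * reserving all 40 from the free pool (the old behaviour) would
	 * stall, but reserving just the 8-grant shortfall succeeds.
	 */
	struct ring_state ring = { .persistent_gnts_c = 32, .free_head_avail = 10 };
	bool new_gnts;

	printf("ok=%d new=%d free_left=%d\n",
	       reserve_grants(&ring, 40, &new_gnts), new_gnts,
	       ring.free_head_avail);
	return 0;
}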