From patchwork Thu Feb 9 14:48:32 2017
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 9564721
From: Jeff Layton <jlayton@redhat.com>
To: ceph-devel@vger.kernel.org
Cc: zyan@redhat.com, sage@redhat.com, idryomov@gmail.com, jspray@redhat.com
Subject: [PATCH v4 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
Date: Thu, 9 Feb 2017 09:48:32 -0500
Message-Id: <20170209144836.12525-3-jlayton@redhat.com>
In-Reply-To: <20170209144836.12525-1-jlayton@redhat.com>
References: <20170209144836.12525-1-jlayton@redhat.com>
X-Mailing-List: ceph-devel@vger.kernel.org

When a Ceph pool or cluster hits capacity, a flag is set in the OSD map to
indicate that, and a new map is sprayed around the cluster. With cephfs we
want to shut down any in-progress abortable requests with an -ENOSPC error,
as they'd just hang otherwise.

Add a new ceph_osdc_abort_on_full helper function to handle this. It will
first check whether there is an out-of-space condition in the cluster. It
will then walk the per-OSD request trees and abort any request that has
r_abort_on_full set, completing it with -ENOSPC. Call this new function
directly whenever we get a new OSD map.
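For illustration only, here is a minimal, self-contained userspace sketch of
the walk the helper performs. Everything in it is a stand-in: fake_req and
pool_is_full are hypothetical, a plain linked list replaces the per-OSD
rbtrees, and there is no locking. It shows the two key points of the
traversal: the next pointer must be saved before completion frees the current
request, and the newest map epoch among the aborted requests is tracked.

	#include <errno.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for a submitted OSD request; only the fields the walk needs. */
	struct fake_req {
		uint64_t tid;
		uint32_t epoch;         /* stands in for r_replay_version.epoch */
		int64_t pool;
		bool abort_on_full;     /* stands in for r_abort_on_full */
		struct fake_req *next;
	};

	/* Pretend pool 7 is the one that went full. */
	static bool pool_is_full(int64_t pool)
	{
		return pool == 7;
	}

	/* Unlink and "complete" a request, loosely like complete_request(). */
	static void complete_req(struct fake_req **head, struct fake_req *req,
				 int err)
	{
		struct fake_req **p;

		for (p = head; *p; p = &(*p)->next) {
			if (*p == req) {
				*p = req->next;
				break;
			}
		}
		printf("tid %llu completed with %d\n",
		       (unsigned long long)req->tid, err);
		free(req);
	}

	static void abort_on_full(struct fake_req **head, bool osdmap_full)
	{
		uint32_t latest_epoch = 0;
		struct fake_req *req = *head, *next;

		while (req) {
			next = req->next; /* save next before req may be freed */
			if (req->abort_on_full &&
			    (osdmap_full || pool_is_full(req->pool))) {
				if (req->epoch > latest_epoch)
					latest_epoch = req->epoch;
				complete_req(head, req, -ENOSPC);
			}
			req = next;
		}
		printf("latest epoch among aborted requests: %u\n", latest_epoch);
	}

	int main(void)
	{
		struct fake_req *head = NULL;
		uint64_t tid;

		/* Queue three requests: two on full pool 7, one on healthy pool 3. */
		for (tid = 1; tid <= 3; tid++) {
			struct fake_req *r = calloc(1, sizeof(*r));

			r->tid = tid;
			r->epoch = 100 + (uint32_t)tid;
			r->pool = (tid == 2) ? 3 : 7;
			r->abort_on_full = true;
			r->next = head;
			head = r;
		}
		abort_on_full(&head, false); /* map not full, but pool 7 is */
		return 0;
	}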
Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 net/ceph/osd_client.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index f68bb42da240..cdb0b58c4c99 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1777,6 +1777,47 @@ static void complete_request(struct ceph_osd_request *req, int err)
 	ceph_osdc_put_request(req);
 }
 
+/*
+ * Drop all pending requests that are stalled waiting on a full condition to
+ * clear, and complete them with ENOSPC as the return code.
+ */
+static void ceph_osdc_abort_on_full(struct ceph_osd_client *osdc)
+{
+	struct ceph_osd_request *req;
+	struct ceph_osd *osd;
+	struct rb_node *m, *n;
+	u32 latest_epoch = 0;
+	bool osdmap_full = ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL);
+
+	dout("enter abort_on_full\n");
+
+	if (!osdmap_full && !have_pool_full(osdc))
+		goto out;
+
+	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
+		osd = rb_entry(n, struct ceph_osd, o_node);
+		mutex_lock(&osd->lock);
+		m = rb_first(&osd->o_requests);
+		while (m) {
+			req = rb_entry(m, struct ceph_osd_request, r_node);
+			m = rb_next(m);
+
+			if (req->r_abort_on_full &&
+			    (osdmap_full || pool_full(osdc, req->r_t.base_oloc.pool))) {
+				u32 cur_epoch = le32_to_cpu(req->r_replay_version.epoch);
+
+				dout("%s: abort tid=%llu flags 0x%x\n", __func__, req->r_tid, req->r_flags);
+				complete_request(req, -ENOSPC);
+				if (cur_epoch > latest_epoch)
+					latest_epoch = cur_epoch;
+			}
+		}
+		mutex_unlock(&osd->lock);
+	}
+out:
+	dout("return abort_on_full latest_epoch=%u\n", latest_epoch);
+}
+
 static void cancel_map_check(struct ceph_osd_request *req)
 {
 	struct ceph_osd_client *osdc = req->r_osdc;
@@ -3292,6 +3333,7 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg)
 		ceph_monc_got_map(&osdc->client->monc, CEPH_SUB_OSDMAP,
 				  osdc->osdmap->epoch);
+		ceph_osdc_abort_on_full(osdc);
 		up_write(&osdc->lock);
 		wake_up_all(&osdc->client->auth_wq);
 		return;
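Note that nothing in this patch sets r_abort_on_full itself; that wiring
happens elsewhere in the series, on the submitting side. As a hedged,
compilable stand-in for what such a caller might look like (stub types only,
not the real libceph API):

	#include <stdbool.h>
	#include <stdio.h>

	struct osd_request_stub {
		bool r_abort_on_full; /* mirrors ceph_osd_request::r_abort_on_full */
	};

	static void submit(struct osd_request_stub *req)
	{
		printf("submitted, abort_on_full=%d\n", req->r_abort_on_full);
	}

	int main(void)
	{
		struct osd_request_stub req = { 0 };

		/* A request that must not block forever on a full cluster opts
		 * in, so ceph_osdc_abort_on_full() may fail it with -ENOSPC. */
		req.r_abort_on_full = true;
		submit(&req);
		return 0;
	}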