From patchwork Fri Sep 9 18:42:11 2016
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9324119
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, Alexei Starovoitov
Subject: [PATCH v3 5/5] sbitmap: randomize initial alloc_hint values
Date: Fri, 9 Sep 2016 11:42:11 -0700
Message-Id: <38c544e501d976df00a7e13b3041fcfca2a117ed.1473446095.git.osandov@fb.com>
X-Mailer: git-send-email 2.9.3
From: Omar Sandoval <osandov@fb.com>

In order to get good cache behavior from an sbitmap, we want each CPU
to stick to its own cacheline(s) as much as possible. This might happen
naturally as the bitmap gets filled up and the per-CPU alloc_hint values
spread out, but we really want this behavior from the start. blk-mq
apparently intended to do this, but the code was never wired up. Get rid
of the dead code and make it part of the sbitmap library.

Signed-off-by: Omar Sandoval <osandov@fb.com>
---
 block/blk-mq-tag.c | 8 --------
 block/blk-mq-tag.h | 1 -
 lib/sbitmap.c      | 6 ++++++
 3 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index e1c2bed..cef618f 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -7,7 +7,6 @@
  */
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/random.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
@@ -419,13 +418,6 @@ void blk_mq_free_tags(struct blk_mq_tags *tags)
 	kfree(tags);
 }
 
-void blk_mq_tag_init_last_tag(struct blk_mq_tags *tags, unsigned int *tag)
-{
-	unsigned int depth = tags->nr_tags - tags->nr_reserved_tags;
-
-	*tag = prandom_u32() % depth;
-}
-
 int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int tdepth)
 {
 	tdepth -= tags->nr_reserved_tags;
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index f90b850..09f4cc0 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -30,7 +30,6 @@ extern void blk_mq_put_tag(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 		unsigned int tag);
 extern bool blk_mq_has_free_tags(struct blk_mq_tags *tags);
 extern ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page);
-extern void blk_mq_tag_init_last_tag(struct blk_mq_tags *tags, unsigned int *last_tag);
 extern int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int depth);
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 3a91269..d873bb0a 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/random.h>
 #include <linux/sbitmap.h>
 
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
@@ -208,6 +209,11 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 		return -ENOMEM;
 	}
 
+	if (depth && !round_robin) {
+		for_each_possible_cpu(i)
+			*per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth;
+	}
+
 	sbq->wake_batch = SBQ_WAKE_BATCH;
 	if (sbq->wake_batch > depth / SBQ_WAIT_QUEUES)
 		sbq->wake_batch = max(1U, depth / SBQ_WAIT_QUEUES);
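For anyone following along outside the kernel tree, the effect of the new
initialization can be sketched in plain userspace C. This is only an
illustration, not kernel code: NR_CPUS, DEPTH, and BITS_PER_WORD are
made-up values, rand() stands in for prandom_u32(), and an ordinary array
stands in for the per-CPU alloc_hint.

/*
 * Standalone userspace sketch of the patch's idea: seed each "CPU's"
 * allocation hint with a random offset below the bitmap depth, so the
 * CPUs start scanning in different bitmap words instead of all
 * contending on word 0.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NR_CPUS		8	/* illustrative CPU count */
#define DEPTH		512	/* illustrative total bits in the map */
#define BITS_PER_WORD	64	/* bits per word, roughly one cacheline's chunk */

int main(void)
{
	unsigned int alloc_hint[NR_CPUS];	/* stand-in for per-CPU alloc_hint */
	int cpu;

	srand((unsigned int)time(NULL));

	/* Same shape as the patch: hint = random value below depth, per CPU. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		alloc_hint[cpu] = (unsigned int)rand() % DEPTH;

	/*
	 * Each CPU's first allocation starts scanning at its hint, so the
	 * CPUs likely begin in different words rather than all in word 0.
	 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d: hint %3u -> word %u\n", cpu,
		       alloc_hint[cpu], alloc_hint[cpu] / BITS_PER_WORD);

	return 0;
}

Because the hints are random, the spread is probabilistic rather than
guaranteed, but on average the CPUs start in different words (and hence
different cachelines) from the very first allocation, which is the
behavior the commit message asks for from the start.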