From patchwork Tue Aug 21 19:06:58 2018
X-Patchwork-Submitter: Kashyap Desai
X-Patchwork-Id: 10572303
From: Kashyap Desai
Date: Tue, 21 Aug 2018 13:06:58 -0600
Subject: [RFC0/1] megaraid_sas : IRQ polling using threaded interrupt
To: linux-scsi, linux-block@vger.kernel.org, Peter Rivera, Steve Hagan
X-Mailing-List: linux-block@vger.kernel.org

Hi,

I am referring to the thread below for the interrupt polling discussion for storage devices (NVMe/HBA):
http://lists.infradead.org/pipermail/linux-nvme/2017-January/007749.html

I am trying to evaluate a similar concept to reduce interrupts using irq_poll and a threaded ISR. Below are the high-level changes I made in my experiment; rough, untested sketches of both variants are appended at the end of this mail for illustration.

In the megaraid_sas driver, I schedule IRQ polling using irq_poll_sched() and disable that particular IRQ line. I was expecting more completions after disabling the IRQ line, but in most cases only one completion happened from the irq_poll context. This must be because host-side processing is much faster than I/O processing at the back-end device. A similar observation was posted in the thread mentioned above.

I added a manual wait using udelay() to wait a bit longer in the irq_poll context. With the additional wait, I am able to reduce interrupts per second, and the impact on latency is not visible since I chose udelay(1). One drawback I see is that overall CPU utilization goes up, since the driver is burning extra cycles in the delay just to reduce interrupts.

To overcome the CPU utilization issue, I switched to a threaded ISR and replaced udelay() with usleep(). With threaded-interrupt-based polling, CPU utilization goes back to normal.

I am trying to understand the drawbacks of using a threaded ISR and doing interrupt polling from it. One problem I understand is that a threaded ISR can be preempted by other higher-priority contexts (most likely soft/hard interrupts). If the interrupt thread is running on some CPU-x and another device also raises an interrupt on the same CPU-x, latency can be introduced because CPU-x will complete that hard ISR before it can run the interrupt thread.

Thanks,
Kashyap
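
---

For reference, here is a minimal, untested sketch of the irq_poll variant. This is not the actual megaraid_sas patch: struct reply_queue_ctx and process_replies() are hypothetical stand-ins for the driver's per-reply-queue context and reply-processing routine.

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/irq_poll.h>
#include <linux/delay.h>

struct reply_queue_ctx {
	struct irq_poll iopoll;
	unsigned int irq;	/* IRQ line (MSI-x vector) of this reply queue */
	void *hw_queue;		/* driver-specific reply queue state */
};

/*
 * Stand-in for the driver routine that walks the reply descriptors and
 * completes finished commands.  Returns the number of completions handled.
 */
static int process_replies(void *hw_queue)
{
	return 0;	/* placeholder only */
}

/* Hard IRQ handler: mask the line and defer completion work to irq_poll. */
static irqreturn_t reply_queue_isr(int irq, void *data)
{
	struct reply_queue_ctx *ctx = data;

	disable_irq_nosync(ctx->irq);
	irq_poll_sched(&ctx->iopoll);
	return IRQ_HANDLED;
}

/* irq_poll callback: drain completions, waiting briefly when the queue is empty. */
static int reply_queue_poll(struct irq_poll *iop, int budget)
{
	struct reply_queue_ctx *ctx =
		container_of(iop, struct reply_queue_ctx, iopoll);
	int done = 0, n;

	while (done < budget) {
		n = process_replies(ctx->hw_queue);
		if (!n) {
			/*
			 * Nothing pending: the host usually outruns the
			 * device, so wait a little for more completions
			 * instead of unmasking the IRQ right away.  This
			 * is the udelay(1) wait described above; it trades
			 * CPU time for fewer interrupts.
			 */
			udelay(1);
			n = process_replies(ctx->hw_queue);
			if (!n)
				break;
		}
		done += n;
	}

	if (done < budget) {
		/* Queue drained: stop polling and unmask the IRQ line. */
		irq_poll_complete(iop);
		enable_irq(ctx->irq);
	}
	return done;
}

/*
 * Queue setup would register the poll callback, e.g.:
 *	irq_poll_init(&ctx->iopoll, weight, reply_queue_poll);
 */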
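And a sketch of the threaded-ISR variant, reusing struct reply_queue_ctx and process_replies() from the sketch above. I am assuming usleep_range() is the sleeping primitive meant by usleep() here. With IRQF_ONESHOT the line stays masked until the thread function returns, so no explicit disable_irq()/enable_irq() pair is needed in this variant.

#include <linux/interrupt.h>
#include <linux/delay.h>

#define IDLE_POLL_LIMIT	4	/* arbitrary tuning knob for this sketch */

/* Hard IRQ handler: just wake the interrupt thread. */
static irqreturn_t reply_queue_hardirq(int irq, void *data)
{
	return IRQ_WAKE_THREAD;
}

/* Interrupt thread: poll until the queue stays empty for a few iterations. */
static irqreturn_t reply_queue_thread_fn(int irq, void *data)
{
	struct reply_queue_ctx *ctx = data;
	int idle = 0;

	while (idle < IDLE_POLL_LIMIT) {
		if (process_replies(ctx->hw_queue)) {
			idle = 0;
			continue;
		}
		idle++;
		/*
		 * Sleeping wait instead of udelay(): the CPU can run other
		 * work while we wait for more completions, which is why CPU
		 * utilization drops back to normal with this variant.
		 */
		usleep_range(1, 2);
	}

	return IRQ_HANDLED;
}

/*
 * Registration, e.g. during probe.  IRQF_ONESHOT keeps the IRQ line masked
 * until the thread function returns, so the line stays disabled for the
 * whole polling window:
 *
 *	request_threaded_irq(ctx->irq, reply_queue_hardirq,
 *			     reply_queue_thread_fn, IRQF_ONESHOT,
 *			     "megaraid_sas-poll", ctx);
 */

The interrupt thread still has to compete with hard IRQ and softirq processing on the same CPU, which is exactly the latency concern raised above.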