From patchwork Tue Apr 12 18:36:15 2016
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 8814411
From: Jens Axboe <axboe@fb.com>
To: linux-block@vger.kernel.org
Subject: [PATCH 20/20] block: kill blk_queue_flush()
Date: Tue, 12 Apr 2016 12:36:15 -0600
Message-ID: <1460486175-25724-21-git-send-email-axboe@fb.com>
In-Reply-To: <1460486175-25724-1-git-send-email-axboe@fb.com>
References: <1460486175-25724-1-git-send-email-axboe@fb.com>

We don't have any drivers left using it, so kill it off. Update
documentation to use the newer blk_queue_write_cache().
Signed-off-by: Jens Axboe <axboe@fb.com>
---
 Documentation/block/writeback_cache_control.txt |  4 ++--
 block/blk-settings.c                            | 20 --------------------
 include/linux/blkdev.h                          |  1 -
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/Documentation/block/writeback_cache_control.txt b/Documentation/block/writeback_cache_control.txt
index 83407d36630a..59e0516cbf6b 100644
--- a/Documentation/block/writeback_cache_control.txt
+++ b/Documentation/block/writeback_cache_control.txt
@@ -71,7 +71,7 @@ requests that have a payload. For devices with volatile write caches the
 driver needs to tell the block layer that it supports flushing caches by
 doing:
 
-	blk_queue_flush(sdkp->disk->queue, REQ_FLUSH);
+	blk_queue_write_cache(sdkp->disk->queue, true, false);
 
 and handle empty REQ_FLUSH requests in its prep_fn/request_fn. Note that
 REQ_FLUSH requests with a payload are automatically turned into a sequence
@@ -79,7 +79,7 @@ of an empty REQ_FLUSH request followed by the actual write by the block layer.
 For devices that also support the FUA bit the block layer needs
 to be told to pass through the REQ_FUA bit using:
 
-	blk_queue_flush(sdkp->disk->queue, REQ_FLUSH | REQ_FUA);
+	blk_queue_write_cache(sdkp->disk->queue, true, true);
 
 and the driver must handle write requests that have the REQ_FUA bit set
 in prep_fn/request_fn. If the FUA bit is not natively supported the block
diff --git a/block/blk-settings.c b/block/blk-settings.c
index c903bee43cf8..80d9327a214b 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -820,26 +820,6 @@ void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
 }
 EXPORT_SYMBOL(blk_queue_update_dma_alignment);
 
-/**
- * blk_queue_flush - configure queue's cache flush capability
- * @q:		the request queue for the device
- * @flush:	0, REQ_FLUSH or REQ_FLUSH | REQ_FUA
- *
- * Tell block layer cache flush capability of @q. If it supports
- * flushing, REQ_FLUSH should be set. If it supports bypassing
- * write cache for individual writes, REQ_FUA should be set.
- */
-void blk_queue_flush(struct request_queue *q, unsigned int flush)
-{
-	WARN_ON_ONCE(flush & ~(REQ_FLUSH | REQ_FUA));
-
-	if (WARN_ON_ONCE(!(flush & REQ_FLUSH) && (flush & REQ_FUA)))
-		flush &= ~REQ_FUA;
-
-	q->flush_flags = flush & (REQ_FLUSH | REQ_FUA);
-}
-EXPORT_SYMBOL_GPL(blk_queue_flush);
-
 void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
 {
 	q->flush_not_queueable = !queueable;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 18b7ff2184dd..2a4cfce5c66b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1009,7 +1009,6 @@ extern void blk_queue_update_dma_alignment(struct request_queue *, int);
 extern void blk_queue_softirq_done(struct request_queue *, softirq_done_fn *);
 extern void blk_queue_rq_timed_out(struct request_queue *, rq_timed_out_fn *);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
-extern void blk_queue_flush(struct request_queue *q, unsigned int flush);
 extern void blk_queue_flush_queueable(struct request_queue *q, bool queueable);
 extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
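
For anyone converting a remaining out-of-tree driver, here is a minimal
sketch of the pattern the updated documentation describes, written against
the 4.6-era legacy request path. The driver name and helpers below are
hypothetical; only blk_queue_write_cache(), blk_fetch_request(), and the
REQ_FLUSH/REQ_FUA bits are real kernel interfaces.

#include <linux/blkdev.h>

/* Hypothetical helpers a real driver would implement. */
static void mydrv_flush_cache(struct request *rq);
static void mydrv_do_write(struct request *rq, bool fua);

/*
 * Capability setup. The removed call expressed the same two
 * capabilities as flag bits:
 *
 *	blk_queue_flush(disk->queue, REQ_FLUSH | REQ_FUA);
 *
 * blk_queue_write_cache() takes them as booleans instead: "device has
 * a volatile write cache" and "device honours REQ_FUA natively".
 */
static void mydrv_setup_cache(struct gendisk *disk, bool vwc, bool fua)
{
	blk_queue_write_cache(disk->queue, vwc, fua);
}

/*
 * request_fn side, unchanged by this series: empty REQ_FLUSH requests
 * write out the volatile cache, and REQ_FUA writes must reach stable
 * media before completing. If fua was registered as false above, the
 * block layer emulates FUA with a flush after the write, so the driver
 * never sees the bit.
 */
static void mydrv_request_fn(struct request_queue *q)
{
	struct request *rq;

	while ((rq = blk_fetch_request(q)) != NULL) {
		if (rq->cmd_flags & REQ_FLUSH)	/* empty flush request */
			mydrv_flush_cache(rq);
		else
			mydrv_do_write(rq, rq->cmd_flags & REQ_FUA);
	}
}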