From patchwork Thu Apr 4 16:19:52 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2393861
Message-ID: <515DA828.7070206@inktank.com>
Date: Thu, 04 Apr 2013 11:19:52 -0500
From: Alex Elder
To: "ceph-devel@vger.kernel.org"
Subject: [PATCH 7/9] ceph: kill ceph alloc_page_vec()
References: <515DA755.2090504@inktank.com>
In-Reply-To: <515DA755.2090504@inktank.com>
X-Mailing-List: ceph-devel@vger.kernel.org

There is a helper function alloc_page_vec() that, despite its
generic-sounding name, depends heavily on an osd request structure
being populated with certain information.

There is only one place this function is used, and it ends up being
a bit simpler to just open-code what it does, so get rid of the
helper.

The real motivation for this is deferring the building of the osd
request message, and this is a step in that direction.

Signed-off-by: Alex Elder
---
 fs/ceph/addr.c | 45 ++++++++++++++++++---------------------------
 1 file changed, 18 insertions(+), 27 deletions(-)
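For readers following the change, the logic being open-coded boils down to
"try an exact-size kmalloc() of the page-pointer array, and fall back to the
writeback pagevec mempool if that fails."  The sketch below is illustrative
only -- it is not code from this patch, and the helper name, wb_pool
parameter, and from_pool out-parameter are invented for the example:

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/mempool.h>
#include <linux/mm_types.h>
#include <linux/bug.h>

/*
 * Illustrative sketch (not from the patch): allocate an array of
 * num_pages page pointers with kmalloc(); if that fails, fall back to
 * the pre-sized writeback pagevec mempool.  The from_pool flag records
 * which allocator was used, so the completion path can release the
 * array the same way it was obtained.
 */
static struct page **alloc_pagevec_sketch(mempool_t *wb_pool, int num_pages,
                                          bool *from_pool)
{
        struct page **pages;

        *from_pool = false;
        pages = kmalloc(num_pages * sizeof(*pages), GFP_NOFS);
        if (!pages) {
                /* mempool elements presumably cover the largest write */
                pages = mempool_alloc(wb_pool, GFP_NOFS);
                WARN_ON(!pages);
                *from_pool = true;
        }
        return pages;
}

The exact-size kmalloc() is tried first because, as the removed comment
notes, the request length may be well under the maximum write size, so the
mempool (whose elements presumably cover the largest pagevec) is kept as
the fallback only.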
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 5b4ac17..e976c6d 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -631,29 +631,6 @@ static void writepages_finish(struct ceph_osd_request *req,
         ceph_osdc_put_request(req);
 }
 
-/*
- * allocate a page vec, either directly, or if necessary, via a the
- * mempool. we avoid the mempool if we can because req->r_data_out.length
- * may be less than the maximum write size.
- */
-static void alloc_page_vec(struct ceph_fs_client *fsc,
-                           struct ceph_osd_request *req)
-{
-        size_t size;
-        int num_pages;
-
-        num_pages = calc_pages_for((u64)req->r_data_out.alignment,
-                                   (u64)req->r_data_out.length);
-        size = sizeof (struct page *) * num_pages;
-        req->r_data_out.pages = kmalloc(size, GFP_NOFS);
-        if (!req->r_data_out.pages) {
-                req->r_data_out.pages = mempool_alloc(fsc->wb_pagevec_pool,
-                                                      GFP_NOFS);
-                req->r_data_out.pages_from_pool = 1;
-                WARN_ON(!req->r_data_out.pages);
-        }
-}
-
 static struct ceph_osd_request *
 ceph_writepages_osd_request(struct inode *inode, u64 offset, u64 *len,
                             struct ceph_snap_context *snapc,
@@ -851,6 +828,9 @@ get_more_pages:
                 if (locked_pages == 0) {
                         struct ceph_vino vino;
                         int num_ops = do_sync ? 2 : 1;
+                        size_t size;
+                        struct page **pages;
+                        mempool_t *pool = NULL;
 
                         /* prepare async write request */
                         offset = (u64) page_offset(page);
@@ -870,13 +850,24 @@ get_more_pages:
                                             num_ops, ops, snapc, vino.snap,
                                             &inode->i_mtime);
 
+                        req->r_callback = writepages_finish;
+                        req->r_inode = inode;
+
+                        max_pages = calc_pages_for(0, (u64)len);
+                        size = max_pages * sizeof (*pages);
+                        pages = kmalloc(size, GFP_NOFS);
+                        if (!pages) {
+                                pool = fsc->wb_pagevec_pool;
+
+                                pages = mempool_alloc(pool, GFP_NOFS);
+                                WARN_ON(!pages);
+                        }
+
+                        req->r_data_out.pages = pages;
+                        req->r_data_out.pages_from_pool = !!pool;
                         req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGES;
                         req->r_data_out.length = len;
                         req->r_data_out.alignment = 0;
-                        max_pages = calc_pages_for(0, (u64)len);
-                        alloc_page_vec(fsc, req);
-                        req->r_callback = writepages_finish;
-                        req->r_inode = inode;
                 }
 
                 /* note position of first page in pvec */
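For reference, calc_pages_for(off, len) used above just counts how many pages
the byte range spans; the arithmetic is roughly the following (a sketch, not
the exact libceph code; pages_spanned is an invented name).  With off == 0,
as in the new call, it reduces to DIV_ROUND_UP(len, PAGE_SIZE), and
max_pages * sizeof(*pages) is then the size of the pointer array handed to
req->r_data_out.pages:

#include <linux/types.h>
#include <linux/mm.h>

/*
 * Sketch of the page-count arithmetic (not the exact libceph
 * implementation): number of pages touched by the byte range
 * [off, off + len) -- the index just past the last page, minus
 * the index of the first page.
 */
static inline u64 pages_spanned(u64 off, u64 len)
{
        return ((off + len + PAGE_SIZE - 1) >> PAGE_SHIFT) -
               (off >> PAGE_SHIFT);
}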