From patchwork Thu Aug  1 11:51:32 2013
From: Sha Zhengju <handai.szj@taobao.com>
To: linux-fsdevel@vger.kernel.org, ceph-devel@vger.kernel.org,
    linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: sage@inktank.com, mhocko@suse.cz, kamezawa.hiroyu@jp.fujitsu.com,
    glommer@gmail.com, gthelen@google.com, fengguang.wu@intel.com,
    akpm@linux-foundation.org, Sha Zhengju
Subject: [PATCH V5 2/8] fs/ceph: use vfs __set_page_dirty_nobuffers interface
 instead of doing it inside filesystem
Date: Thu, 1 Aug 2013 19:51:32 +0800
Message-Id: <1375357892-10188-1-git-send-email-handai.szj@taobao.com>
In-Reply-To: <1375357402-9811-1-git-send-email-handai.szj@taobao.com>
References: <1375357402-9811-1-git-send-email-handai.szj@taobao.com>
List-ID: ceph-devel@vger.kernel.org

From: Sha Zhengju <handai.szj@taobao.com>

The following patches will add memcg dirty page accounting around
__set_page_dirty_{buffers,nobuffers} in the VFS layer, so it is better to
use the VFS interface here rather than duplicating those details inside
the filesystem.
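For reference, a minimal sketch (not part of this patch; example_set_page_dirty
and the trailing bookkeeping are illustrative only) of the filesystem-side
pattern after such a conversion:

  static int example_set_page_dirty(struct page *page)
  {
          /*
           * __set_page_dirty_nobuffers() does the generic work that was
           * previously open-coded here: it tags the page dirty in the
           * mapping's radix tree, calls account_page_dirtied() and
           * __mark_inode_dirty(), and returns 0 if the page was already
           * dirty.
           */
          if (!__set_page_dirty_nobuffers(page))
                  return 0;       /* already dirty, nothing more to do */

          /* ... filesystem-private bookkeeping (e.g. snap context refs) ... */
          return 1;
  }

With the generic steps centralized in the VFS helper, the memcg accounting
only needs to be added in one place.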
Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
---
 fs/ceph/addr.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 3e68ac1..1445bf1 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -76,7 +76,7 @@ static int ceph_set_page_dirty(struct page *page)
 	if (unlikely(!mapping))
 		return !TestSetPageDirty(page);
 
-	if (TestSetPageDirty(page)) {
+	if (!__set_page_dirty_nobuffers(page)) {
 		dout("%p set_page_dirty %p idx %lu -- already dirty\n",
 		     mapping->host, page, page->index);
 		return 0;
@@ -107,14 +107,7 @@ static int ceph_set_page_dirty(struct page *page)
 	     snapc, snapc->seq, snapc->num_snaps);
 	spin_unlock(&ci->i_ceph_lock);
 
-	/* now adjust page */
-	spin_lock_irq(&mapping->tree_lock);
 	if (page->mapping) {	/* Race with truncate? */
-		WARN_ON_ONCE(!PageUptodate(page));
-		account_page_dirtied(page, page->mapping);
-		radix_tree_tag_set(&mapping->page_tree,
-				page_index(page), PAGECACHE_TAG_DIRTY);
-
 		/*
 		 * Reference snap context in page->private.  Also set
 		 * PagePrivate so that we get invalidatepage callback.
@@ -126,14 +119,10 @@ static int ceph_set_page_dirty(struct page *page)
 		undo = 1;
 	}
 
-	spin_unlock_irq(&mapping->tree_lock);
-
 	if (undo)
 		/* whoops, we failed to dirty the page */
 		ceph_put_wrbuffer_cap_refs(ci, 1, snapc);
 
-	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-
 	BUG_ON(!PageDirty(page));
 	return 1;
 }