From patchwork Mon Mar 2 14:24:53 2015
From: Wang Nan <wangnan0@huawei.com>
Subject: [RFC PATCH v4 15/34] early kprobes: use stop_machine() based optimization method for early kprobes.
Date: Mon, 2 Mar 2015 22:24:53 +0800
Message-ID: <1425306312-3437-16-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
References: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
Cc: x86@kernel.org, lizefan@huawei.com, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

schedule_delayed_work() does not work until the scheduler and timer are ready. For early kprobes, calling do_optimize_kprobes() directly makes things simpler; arch code should use stop_machine() to ensure there is no conflict between code modification and execution. To avoid a lock ordering problem, call do_optimize_kprobes() before leaving register_kprobe() instead of from kick_kprobe_optimizer().
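The patch leaves it to arch code to serialize code modification against execution via stop_machine(). As a rough sketch (not part of this patch), arch code might patch an instruction word under stop_machine() along the following lines; the names patch_args, do_patch_text, and patch_text_stop_machine are hypothetical, and real arch implementations differ in detail:

```c
#include <linux/stop_machine.h>
#include <linux/cacheflush.h>

/* Hypothetical container for one pending text modification. */
struct patch_args {
	void *addr;	/* instruction address to rewrite */
	u32 insn;	/* new instruction word */
};

static int do_patch_text(void *data)
{
	struct patch_args *args = data;

	/*
	 * All other CPUs are spinning inside stop_machine() with
	 * interrupts disabled, so nothing can execute the old
	 * instruction while we rewrite it.
	 */
	*(u32 *)args->addr = args->insn;
	flush_icache_range((unsigned long)args->addr,
			   (unsigned long)args->addr + sizeof(u32));
	return 0;
}

static void patch_text_stop_machine(void *addr, u32 insn)
{
	struct patch_args args = { .addr = addr, .insn = insn };

	/* cpus == NULL: run on any one CPU, hold the rest in a known state */
	stop_machine(do_patch_text, &args, NULL);
}
```

Because stop_machine() needs no timers or workqueues of its own, this style of patching is usable before the scheduler is fully up, which is exactly the window early kprobes run in.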
Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/kprobes.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index ab3640b..2d178fc 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -546,7 +546,16 @@ static void do_free_cleaned_kprobes(void)
 /* Start optimizer after OPTIMIZE_DELAY passed */
 static void kick_kprobe_optimizer(void)
 {
-	schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY);
+	/*
+	 * For early kprobes, the scheduler and timer may not be ready.
+	 * Use do_optimize_kprobes() and let it choose the stop_machine()
+	 * based optimizer. Instead of directly calling it here, let
+	 * optimization be done in register_kprobe(), because we may hold
+	 * many (and different) locks here in different situations, which
+	 * makes things relatively complex.
+	 */
+	if (likely(!kprobes_is_early()))
+		schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY);
 }
 
 /* Kprobe jump optimizer */
@@ -1595,6 +1604,16 @@ int register_kprobe(struct kprobe *p)
 	/* Try to optimize kprobe */
 	try_to_optimize_kprobe(p);
 
+	/*
+	 * Optimize early kprobes here because of the locking order.
+	 * See the comment in kick_kprobe_optimizer().
+	 */
+	if (unlikely(kprobes_is_early())) {
+		mutex_lock(&module_mutex);
+		do_optimize_kprobes();
+		mutex_unlock(&module_mutex);
+	}
+
 out:
 	mutex_unlock(&kprobe_mutex);