From patchwork Mon Mar 2 14:24:51 2015
From: Wang Nan <wangnan0@huawei.com>
Subject: [RFC PATCH v4 13/34] early kprobes: alloc optimized kprobe before memory system is ready.
Date: Mon, 2 Mar 2015 22:24:51 +0800
Message-ID: <1425306312-3437-14-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
References: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
Cc: x86@kernel.org, lizefan@huawei.com, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Create static slots of 'struct optimized_kprobe' and allocate such structures from the slots for early kprobes, which are registered before the memory system is ready and therefore cannot use kzalloc(). This patch enables optimization of early kprobes.
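DEFINE_EKPROBE_ALLOC_OPS is introduced by an earlier patch in this series and is not shown here. As a rough sketch only, it is expected to expand to a static-slot allocator along the following lines; the pool size (EK_SLOTS), the bitmap bookkeeping, and the helper bodies below are illustrative assumptions, not the macro's actual definition:

#define EK_SLOTS	16	/* pool size: illustrative assumption */

/*
 * Hypothetical expansion of DEFINE_EKPROBE_ALLOC_OPS(type, name, mod):
 * a fixed, statically allocated pool plus an in-use bitmap, usable
 * before the slab allocator is up. Early boot is assumed to be
 * single-threaded, so the bitmap search is not locked.
 */
#define DEFINE_EKPROBE_ALLOC_OPS(__type, __name, __mod)			\
__mod __type __name##_slots[EK_SLOTS];					\
static unsigned long __name##_used[BITS_TO_LONGS(EK_SLOTS)];		\
									\
__mod __type *ek_alloc_##__name(void)					\
{									\
	int i = find_first_zero_bit(__name##_used, EK_SLOTS);		\
									\
	if (i >= EK_SLOTS)						\
		return NULL;	/* pool exhausted */			\
	set_bit(i, __name##_used);					\
	memset(&__name##_slots[i], 0, sizeof(__type));			\
	return &__name##_slots[i];					\
}									\
									\
/* Return nonzero (and release the slot) iff @p came from the pool. */	\
__mod int ek_free_##__name(__type *p)					\
{									\
	long i = p - __name##_slots;					\
									\
	if (i < 0 || i >= EK_SLOTS)					\
		return 0;	/* not ours: caller kfree()s it */	\
	clear_bit(i, __name##_used);					\
	return 1;							\
}

Under these assumptions, ek_alloc_early_aggr_kprobe() hands out a zeroed slot until the pool is exhausted, and ek_free_early_aggr_kprobe() returns 0 for pointers that did not come from the pool, which is why the patch below guards kfree() with !ek_free_early_aggr_kprobe().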
Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/kprobes.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 1eb3000..ab3640b 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -362,6 +362,7 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
 }
 
 #ifdef CONFIG_OPTPROBES
+DEFINE_EKPROBE_ALLOC_OPS(struct optimized_kprobe, early_aggr_kprobe, static)
 /* NOTE: change this value only with kprobe_mutex held */
 static bool kprobes_allow_optimization;
 
@@ -391,7 +392,8 @@ static void free_aggr_kprobe(struct kprobe *p)
 	op = container_of(p, struct optimized_kprobe, kp);
 	arch_remove_optimized_kprobe(op);
 	arch_remove_kprobe(p);
-	kfree(op);
+	if (likely(!ek_free_early_aggr_kprobe(op)))
+		kfree(op);
 }
 
 /* Return true(!0) if the kprobe is ready for optimization. */
@@ -746,7 +748,11 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
 {
 	struct optimized_kprobe *op;
 
-	op = kzalloc(sizeof(struct optimized_kprobe), GFP_KERNEL);
+	if (unlikely(kprobes_is_early()))
+		op = ek_alloc_early_aggr_kprobe();
+	else
+		op = kzalloc(sizeof(struct optimized_kprobe), GFP_KERNEL);
+
 	if (!op)
 		return NULL;
 
@@ -784,7 +790,8 @@ static void try_to_optimize_kprobe(struct kprobe *p)
 	if (!arch_prepared_optinsn(&op->optinsn)) {
 		/* If failed to setup optimizing, fallback to kprobe */
 		arch_remove_optimized_kprobe(op);
-		kfree(op);
+		if (likely(!ek_free_early_aggr_kprobe(op)))
+			kfree(op);
 		goto out;
 	}
 
@@ -914,6 +921,7 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
 #define __disarm_kprobe(p, o)	arch_disarm_kprobe(p)
 #define kprobe_disarmed(p)	kprobe_disabled(p)
 #define wait_for_kprobe_optimizer()	do {} while (0)
+DEFINE_EKPROBE_ALLOC_OPS(struct kprobe, early_aggr_kprobe, static)
 
 /* There should be no unused kprobes can be reused without optimization */
 static void reuse_unused_kprobe(struct kprobe *ap)
@@ -925,11 +933,14 @@ static void reuse_unused_kprobe(struct kprobe *ap)
 static void free_aggr_kprobe(struct kprobe *p)
 {
 	arch_remove_kprobe(p);
-	kfree(p);
+	if (likely(!ek_free_early_aggr_kprobe(p)))
+		kfree(p);
 }
 
 static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
 {
+	if (unlikely(kprobes_is_early()))
+		return ek_alloc_early_aggr_kprobe();
 	return kzalloc(sizeof(struct kprobe), GFP_KERNEL);
 }
 #endif /* CONFIG_OPTPROBES */
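Note that the DEFINE_EKPROBE_ALLOC_OPS line appears once in each branch of #ifdef CONFIG_OPTPROBES: with optprobes enabled the static pool holds 'struct optimized_kprobe', otherwise plain 'struct kprobe'. The generated alloc/free helper names are therefore identical in both configurations, so free_aggr_kprobe() and alloc_aggr_kprobe() need no further #ifdefs.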