From patchwork Thu Jul 12 17:29:38 2018
X-Patchwork-Submitter: Johannes Weiner <hannes@cmpxchg.org>
X-Patchwork-Id: 10522047
From: Johannes Weiner <hannes@cmpxchg.org>
To: Ingo Molnar, Peter Zijlstra, Andrew Morton, Linus Torvalds
Cc: Tejun Heo, Suren Baghdasaryan, Vinayak Menon, Christopher Lameter,
    Mike Galbraith, Shakeel Butt, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 06/10] sched: sched.h: make rq locking and clock functions available in stats.h
Date: Thu, 12 Jul 2018 13:29:38 -0400
Message-Id: <20180712172942.10094-7-hannes@cmpxchg.org>
In-Reply-To: <20180712172942.10094-1-hannes@cmpxchg.org>
References: <20180712172942.10094-1-hannes@cmpxchg.org>

kernel/sched/sched.h includes "stats.h" half-way through the file. The
next patch introduces users of sched.h's rq locking functions and
update_rq_clock() in kernel/sched/stats.h.

Move those definitions up in the file so they are available in stats.h.
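
For illustration, a minimal sketch of the include-order constraint being
fixed (the file and function names here are made up, not the actual
sched.h/stats.h contents): an inline function in a header that is
#included half-way through a file can only use declarations that appear
textually above that #include.

	/* outer.h -- stands in for sched.h */
	#ifndef OUTER_H
	#define OUTER_H

	struct rq { unsigned long clock; };

	#include "inner.h"	/* like the mid-file #include "stats.h" */

	/* defined only below the include -- invisible to inner.h */
	static inline void rq_lock_sketch(struct rq *rq) { (void)rq; }

	#endif

	/* inner.h -- stands in for stats.h */
	#ifndef INNER_H
	#define INNER_H

	static inline void stats_user_sketch(struct rq *rq)
	{
		rq_lock_sketch(rq);	/* error: not yet declared here */
	}

	#endif

Hoisting the definition above the #include, as done below for the rq
locking helpers and update_rq_clock(), makes it visible to inner.h.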
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/sched/sched.h | 164 +++++++++++++++++++++----------------------
 1 file changed, 82 insertions(+), 82 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cb467c221b15..b8f038497240 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -919,6 +919,8 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
 
+extern void update_rq_clock(struct rq *rq);
+
 static inline u64 __rq_clock_broken(struct rq *rq)
 {
 	return READ_ONCE(rq->clock);
@@ -1037,6 +1039,86 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 #endif
 }
 
+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(rq->lock);
+
+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(p->pi_lock)
+	__acquires(rq->lock);
+
+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
+static inline void
+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
+}
+
+static inline void
+rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irq(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_relock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_repin_lock(rq, rf);
+}
+
+static inline void
+rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
+}
+
+static inline void
+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irq(&rq->lock);
+}
+
+static inline void
+rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
 #ifdef CONFIG_NUMA
 enum numa_topology_type {
 	NUMA_DIRECT,
@@ -1670,8 +1752,6 @@ static inline void sub_nr_running(struct rq *rq, unsigned count)
 	sched_update_tick_dependency(rq);
 }
 
-extern void update_rq_clock(struct rq *rq);
-
 extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
 extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
 
@@ -1752,86 +1832,6 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
 static inline void sched_avg_update(struct rq *rq) { }
 #endif
 
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock);
-
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock);
-
-static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
-static inline void
-task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-}
-
-static inline void
-rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irq(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_relock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_repin_lock(rq, rf);
-}
-
-static inline void
-rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-}
-
-static inline void
-rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irq(&rq->lock);
-}
-
-static inline void
-rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
 #ifdef CONFIG_SMP
 #ifdef CONFIG_PREEMPT
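
As a rough sketch of the usage this move enables in stats.h (the
function below is hypothetical, standing in for the consumer the next
patch adds; task_rq_lock(), update_rq_clock() and task_rq_unlock() are
the real interfaces from the hunks above):

	/* hypothetical stats.h-style consumer */
	static inline void stats_sample_rq(struct task_struct *p)
	{
		struct rq_flags rf;
		struct rq *rq;

		rq = task_rq_lock(p, &rf);	/* takes p->pi_lock + rq->lock */
		update_rq_clock(rq);		/* visible here after this patch */
		/* ... read or update per-rq statistics under the lock ... */
		task_rq_unlock(rq, p, &rf);
	}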