From patchwork Tue Aug 28 17:22:55 2018
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 10578895
From: Johannes Weiner <hannes@cmpxchg.org>
To: Ingo Molnar, Peter Zijlstra, Andrew Morton, Linus Torvalds
Cc: Tejun Heo, Suren Baghdasaryan, Daniel Drake, Vinayak Menon,
    Christopher Lameter, Peter Enderborg, Shakeel Butt, Mike Galbraith,
    linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH 6/9] sched: sched.h: make rq locking and clock functions available in stats.h
Date: Tue, 28 Aug 2018 13:22:55 -0400
Message-Id: <20180828172258.3185-7-hannes@cmpxchg.org>
In-Reply-To: <20180828172258.3185-1-hannes@cmpxchg.org>
References: <20180828172258.3185-1-hannes@cmpxchg.org>
kernel/sched/sched.h includes "stats.h" half-way through the file. The
next patch introduces users of sched.h's rq locking functions and
update_rq_clock() in kernel/sched/stats.h. Move those definitions up in
the file so they are available in stats.h.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/sched/sched.h | 164 +++++++++++++++++++++----------------------
 1 file changed, 82 insertions(+), 82 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c7742dcc136c..eb9b1326906c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -926,6 +926,8 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
 
+extern void update_rq_clock(struct rq *rq);
+
 static inline u64 __rq_clock_broken(struct rq *rq)
 {
 	return READ_ONCE(rq->clock);
@@ -1044,6 +1046,86 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 #endif
 }
 
+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(rq->lock);
+
+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(p->pi_lock)
+	__acquires(rq->lock);
+
+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
+static inline void
+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
+}
+
+static inline void
+rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irq(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_relock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_repin_lock(rq, rf);
+}
+
+static inline void
+rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
+}
+
+static inline void
+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irq(&rq->lock);
+}
+
+static inline void
+rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
 #ifdef CONFIG_NUMA
 enum numa_topology_type {
 	NUMA_DIRECT,
@@ -1683,8 +1765,6 @@ static inline void sub_nr_running(struct rq *rq, unsigned count)
 	sched_update_tick_dependency(rq);
 }
 
-extern void update_rq_clock(struct rq *rq);
-
 extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
 extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
 
@@ -1765,86 +1845,6 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
 static inline void sched_avg_update(struct rq *rq) { }
 #endif
 
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock);
-
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock);
-
-static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
-static inline void
-task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-}
-
-static inline void
-rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irq(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_relock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_repin_lock(rq, rf);
-}
-
-static inline void
-rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-}
-
-static inline void
-rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irq(&rq->lock);
-}
-
-static inline void
-rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
 #ifdef CONFIG_SMP
 #ifdef CONFIG_PREEMPT
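
To illustrate why these definitions have to sit above the "stats.h" include:
inline code in kernel/sched/stats.h will start taking rq locks and refreshing
the rq clock through these helpers. Below is a minimal sketch of such a user,
assuming it lives in stats.h after this move; the helper name
example_task_rq_clock() is purely illustrative and is not taken from this
series.

/*
 * Illustrative sketch only (hypothetical stats.h user): this can only
 * compile if task_rq_lock()/task_rq_unlock() and update_rq_clock() are
 * declared before sched.h includes "stats.h", which this patch arranges.
 */
static inline u64 example_task_rq_clock(struct task_struct *p)
{
	struct rq_flags rf;
	struct rq *rq;
	u64 now;

	rq = task_rq_lock(p, &rf);	/* takes p->pi_lock and rq->lock */
	update_rq_clock(rq);		/* make rq->clock current before reading */
	now = rq_clock(rq);
	task_rq_unlock(rq, p, &rf);

	return now;
}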