[01/13] perf/core: Fix several typos

Message ID 20240319180005.246930-2-visitorckw@gmail.com (mailing list archive)
State Superseded, archived
Delegated to: Mike Snitzer
Series treewide: Refactor heap related implementation

Commit Message

Kuan-Wei Chiu March 19, 2024, 5:59 p.m. UTC
Replace 'artifically' with 'artificially'.
Replace 'irrespecive' with 'irrespective'.
Replace 'futher' with 'further'.
Replace 'sufficent' with 'sufficient'.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 kernel/events/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Ian Rogers March 19, 2024, 7:31 p.m. UTC | #1
On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Replace 'artifically' with 'artificially'.
> Replace 'irrespecive' with 'irrespective'.
> Replace 'futher' with 'further'.
> Replace 'sufficent' with 'sufficient'.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

Patch

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 724e6d7e128f..10ac2db83f14 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -534,7 +534,7 @@ void perf_sample_event_took(u64 sample_len_ns)
 	__this_cpu_write(running_sample_length, running_len);
 
 	/*
-	 * Note: this will be biased artifically low until we have
+	 * Note: this will be biased artificially low until we have
 	 * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us
 	 * from having to maintain a count.
 	 */
@@ -596,10 +596,10 @@ static inline u64 perf_event_clock(struct perf_event *event)
  *
  * Event groups make things a little more complicated, but not terribly so. The
  * rules for a group are that if the group leader is OFF the entire group is
- * OFF, irrespecive of what the group member states are. This results in
+ * OFF, irrespective of what the group member states are. This results in
  * __perf_effective_state().
  *
- * A futher ramification is that when a group leader flips between OFF and
+ * A further ramification is that when a group leader flips between OFF and
  * !OFF, we need to update all group member times.
  *
  *
@@ -891,7 +891,7 @@ static int perf_cgroup_ensure_storage(struct perf_event *event,
 	int cpu, heap_size, ret = 0;
 
 	/*
-	 * Allow storage to have sufficent space for an iterator for each
+	 * Allow storage to have sufficient space for an iterator for each
 	 * possibly nested cgroup plus an iterator for events with no cgroup.
 	 */
 	for (heap_size = 1; css; css = css->parent)
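
For context on the first hunk: the comment being fixed describes the moving-average bookkeeping in perf_sample_event_took(), which keeps a decaying running sum instead of a true average so that no sample count has to be maintained. Below is a minimal user-space sketch of that technique, not the kernel code itself (a single global stands in for the kernel's per-CPU variable; NR_ACCUMULATED_SAMPLES matches the value defined in kernel/events/core.c):

#include <stdio.h>

#define NR_ACCUMULATED_SAMPLES 128

static unsigned long long running_len;	/* running sum; starts at 0 */

/* Fold one sample in: decay the sum by 1/N, then add the new sample. */
static void sample_took(unsigned long long sample_len_ns)
{
	running_len -= running_len / NR_ACCUMULATED_SAMPLES;
	running_len += sample_len_ns;
}

/* Approximate average: the sum hovers around N times the true average. */
static unsigned long long avg_len(void)
{
	return running_len / NR_ACCUMULATED_SAMPLES;
}

int main(void)
{
	for (int i = 1; i <= 256; i++) {
		sample_took(1000);	/* constant 1000 ns samples */
		if (i == 16 || i == 128 || i == 256)
			printf("after %3d samples: avg ~= %llu ns\n",
			       i, avg_len());
	}
	return 0;
}

Because running_len starts at zero, the estimate only climbs gradually toward the true 1000 ns (roughly an eighth of the way there after 16 samples, about two-thirds after 128), which is exactly the "biased artificially low until we have seen NR_ACCUMULATED_SAMPLES" behaviour the comment warns about.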
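
The second hunk's comment documents the group rule that __perf_effective_state() implements: a member can never be "more enabled" than its leader, so an OFF leader makes the whole group OFF. A stripped-down sketch of that rule (enum values abridged from include/linux/perf_event.h; the real struct perf_event carries far more state):

enum perf_event_state {
	PERF_EVENT_STATE_ERROR		= -2,
	PERF_EVENT_STATE_OFF		= -1,
	PERF_EVENT_STATE_INACTIVE	=  0,
	PERF_EVENT_STATE_ACTIVE		=  1,
};

struct perf_event {
	enum perf_event_state state;
	struct perf_event *group_leader;
};

static enum perf_event_state
__perf_effective_state(struct perf_event *event)
{
	struct perf_event *leader = event->group_leader;

	/* A leader at OFF or below drags the whole group down with it,
	 * irrespective of what the member's own state says. */
	if (leader->state <= PERF_EVENT_STATE_OFF)
		return leader->state;

	return event->state;
}

This is also why the comment notes the further ramification that flipping a leader between OFF and !OFF requires updating all member times: every member's effective state changes at once, even though none of their own ->state fields did.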
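
The last hunk's context cuts off before the loop body; in the kernel the loop simply increments heap_size, so the storage is sized as one iterator per (possibly nested) cgroup ancestor plus one for events with no cgroup. A standalone sketch of that sizing, with the cgroup hierarchy reduced to a bare parent link:

struct cgroup_subsys_state {
	struct cgroup_subsys_state *parent;
};

/* heap_size starts at 1 for the "no cgroup" iterator and gains one
 * slot per ancestor on the walk to the root: nesting depth + 1. */
static int cgroup_heap_size(struct cgroup_subsys_state *css)
{
	int heap_size;

	for (heap_size = 1; css; css = css->parent)
		heap_size++;

	return heap_size;
}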