From patchwork Sun Oct 31 06:03:02 2021
X-Patchwork-Submitter: Quanfa Fu <fuqf0919@gmail.com>
X-Patchwork-Id: 12594963
From: Quanfa Fu <fuqf0919@gmail.com>
To: akpm@linux-foundation.org, naoya.horiguchi@nec.com, cl@linux.com,
	penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
	vbabka@suse.cz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Quanfa Fu <fuqf0919@gmail.com>
Subject: [PATCH] mm: Fix some comment errors
Date: Sun, 31 Oct 2021 14:03:02 +0800
Message-Id: <20211031060302.146914-1-fuqf0919@gmail.com>
X-Mailer: git-send-email 2.25.1

Fix some comment errors in mm/khugepaged.c, mm/memory-failure.c,
mm/slab_common.c and mm/swap.c.

Signed-off-by: Quanfa Fu <fuqf0919@gmail.com>
---
 mm/khugepaged.c     | 2 +-
 mm/memory-failure.c | 4 ++--
 mm/slab_common.c    | 2 +-
 mm/swap.c           | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8a8b3aa92937..f482a7861141 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1306,7 +1306,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		/*
 		 * Record which node the original page is from and save this
 		 * information to khugepaged_node_load[].
-		 * Khupaged will allocate hugepage from the node has the max
+		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
 		node = page_to_nid(page);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bdbbb32211a5..21fa983e52e4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1227,7 +1227,7 @@ static int get_any_page(struct page *p, unsigned long flags)
  *
  * get_hwpoison_page() takes a page refcount of an error page to handle memory
  * error on it, after checking that the error page is in a well-defined state
- * (defined as a page-type we can successfully handle the memor error on it,
+ * (defined as a page-type we can successfully handle the memory error on it,
  * such as LRU page and hugetlb page).
  *
  * Memory error handling could be triggered at any time on any type of page,
@@ -1653,7 +1653,7 @@ int memory_failure(unsigned long pfn, int flags)
 
 	/*
 	 * We need/can do nothing about count=0 pages.
-	 * 1) it's a free page, and therefore in safe hand:
+	 * 1) it's a freed page, and therefore in safe hand:
 	 *    prep_new_page() will be the gate keeper.
 	 * 2) it's part of a non-compound high order page.
 	 *    Implies some kernel user: cannot stop them from
diff --git a/mm/slab_common.c b/mm/slab_common.c
index ec2bb0beed75..e845a8286f2c 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -832,7 +832,7 @@ void __init setup_kmalloc_cache_index_table(void)
 
 	if (KMALLOC_MIN_SIZE >= 64) {
 		/*
-		 * The 96 byte size cache is not used if the alignment
+		 * The 96 byte sized cache is not used if the alignment
 		 * is 64 byte.
 		 */
 		for (i = 64 + 8; i <= 96; i += 8)
diff --git a/mm/swap.c b/mm/swap.c
index af3cad4e5378..0ab1aa4a79b6 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -866,7 +866,7 @@ void lru_cache_disable(void)
 	 * all online CPUs so any calls of lru_cache_disabled wrapped by
 	 * local_lock or preemption disabled would be ordered by that.
 	 * The atomic operation doesn't need to have stronger ordering
-	 * requirements because that is enforeced by the scheduling
+	 * requirements because that is enforced by the scheduling
 	 * guarantees.
 	 */
 	__lru_add_drain_all(true);
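
P.S. (editor's aside, not part of the patch): the khugepaged comment corrected
above describes counting, in khugepaged_node_load[], how many of the scanned
pages came from each NUMA node and then allocating the huge page on the node
with the highest hit count. A minimal standalone C sketch of that selection
idea follows; MAX_NODES, node_load[] and the sample values below are made-up
stand-ins for illustration, not the kernel's own definitions.

/*
 * Illustrative sketch only: pick the node with the highest hit count,
 * mirroring the idea described in the khugepaged comment above.
 */
#include <stdio.h>

#define MAX_NODES 8

static int node_load[MAX_NODES];

/* Return the node id with the largest recorded hit count. */
static int pick_target_node(void)
{
	int nid, target = 0, max_hits = 0;

	for (nid = 0; nid < MAX_NODES; nid++) {
		if (node_load[nid] > max_hits) {
			max_hits = node_load[nid];
			target = nid;
		}
	}
	return target;
}

int main(void)
{
	node_load[1] = 3;	/* 3 pages seen on node 1 */
	node_load[2] = 5;	/* 5 pages seen on node 2: the max */
	printf("allocate hugepage on node %d\n", pick_target_node());
	return 0;
}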