From patchwork Fri Jan 17 22:11:48 2020
X-Patchwork-Submitter: David Rientjes
X-Patchwork-Id: 11339909
Date: Fri, 17 Jan 2020 14:11:48 -0800 (PST)
From: David Rientjes
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch v2] mm, thp: fix defrag setting if newline is not used
In-Reply-To: 
Message-ID: 
References: <20200116191609.3972fd5301cf364a27381923@linux-foundation.org>
 <025511aa-4721-2edb-d658-78d6368a9101@suse.cz>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0

If the thp defrag setting "defer" is written to the sysfs file *without* a
trailing newline, it is interpreted as the "defer+madvise" option.  This is
because the current code does prefix matching: when five characters are
written without a newline, they are compared against the first five bytes of
the "defer+madvise" option, which is tested first, and that option is used
instead.

Use the more appropriate sysfs_streq() that handles the trailing newline for
us.  Since this doubles as a nice cleanup, do it in enabled_store() as well.

Fixes: 21440d7eb904 ("mm, thp: add new defer+madvise defrag option")
Cc: Vlastimil Babka
Cc: Mel Gorman
Suggested-by: Andrew Morton
Signed-off-by: David Rientjes
Acked-by: Vlastimil Babka
---
Latest 5.5-rc6 doesn't boot for me, something to be debugged separately, so
this was tested on 5.4.  There are no changes in this area between the two
kernels, however.
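For reference, a minimal userspace sketch of the misparse (not kernel code;
the helper old_prefix_match() and the hard-coded buffer are purely
illustrative): writing the five bytes "defer" with no newline compares equal
to the first five bytes of "defer+madvise", which is tested first.

#include <stdio.h>
#include <string.h>

/* Mimics the old check: compare only the first min(strlen(opt), count) bytes. */
static int old_prefix_match(const char *opt, const char *buf, size_t count)
{
	size_t n = strlen(opt) < count ? strlen(opt) : count;

	return !memcmp(opt, buf, n);
}

int main(void)
{
	const char *buf = "defer";	/* e.g. `echo -n defer > .../defrag` */
	size_t count = 5;		/* no trailing newline */

	/* Options tested in the same order as defrag_store(). */
	if (old_prefix_match("always", buf, count))
		puts("matched \"always\"");
	else if (old_prefix_match("defer+madvise", buf, count))
		puts("old code: \"defer\" (no newline) selects \"defer+madvise\"");
	else if (old_prefix_match("defer", buf, count))
		puts("matched \"defer\"");

	return 0;
}

sysfs_streq(), by contrast, compares the whole strings and treats a trailing
newline as equivalent to the end of the string, so both "defer" and "defer\n"
match only the "defer" option.
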
 mm/huge_memory.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 13cc93785006..1c61dea937bc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -177,16 +177,13 @@ static ssize_t enabled_store(struct kobject *kobj,
 {
 	ssize_t ret = count;
 
-	if (!memcmp("always", buf,
-		    min(sizeof("always")-1, count))) {
+	if (sysfs_streq(buf, "always")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("madvise", buf,
-			   min(sizeof("madvise")-1, count))) {
+	} else if (sysfs_streq(buf, "madvise")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("never", buf,
-			   min(sizeof("never")-1, count))) {
+	} else if (sysfs_streq(buf, "never")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
 	} else
@@ -250,32 +247,27 @@ static ssize_t defrag_store(struct kobject *kobj,
 			    struct kobj_attribute *attr, const char *buf, size_t count)
 {
-	if (!memcmp("always", buf,
-		    min(sizeof("always")-1, count))) {
+	if (sysfs_streq(buf, "always")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("defer+madvise", buf,
-		    min(sizeof("defer+madvise")-1, count))) {
+	} else if (sysfs_streq(buf, "defer+madvise")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("defer", buf,
-		    min(sizeof("defer")-1, count))) {
+	} else if (sysfs_streq(buf, "defer")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("madvise", buf,
-			   min(sizeof("madvise")-1, count))) {
+	} else if (sysfs_streq(buf, "madvise")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
 		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
-	} else if (!memcmp("never", buf,
-			   min(sizeof("never")-1, count))) {
+	} else if (sysfs_streq(buf, "never")) {
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
 		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);