From patchwork Fri May 25 01:16:57 2018
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 10425851
Date: Thu, 24 May 2018 21:16:57 -0400
From: Pavel Tatashin
To: Timofey Titovets
Cc: linux-mm@kvack.org, Sioh Lee, Andrea Arcangeli, kvm@vger.kernel.org
Subject: Re: [PATCH V6 2/2 RESEND] ksm: replace jhash2 with faster hash
Message-ID: <20180525011657.4qxrosmm3xjzo24w@xakep.localdomain>
References: <20180418193220.4603-1-timofey.titovets@synesis.ru> <20180418193220.4603-3-timofey.titovets@synesis.ru> <20180522202242.otvdunkl75yfhkt4@xakep.localdomain>

Hi Timofey,

> > Do you have performance numbers of crc32c without acceleration?
>
> Yes, https://lkml.org/lkml/2017/12/30/222
>
> The experimental results (each value is the average of the measured
> values):
> crc32c_intel: 1084.10ns
> crc32c (no hardware acceleration): 7012.51ns
> xxhash32: 2227.75ns
> xxhash64: 1413.16ns
> jhash2: 5128.30ns

Excellent, thank you for this data.

> > I understand that losing half of the hash result might be acceptable in
> > this case, but I am not really sure how XORing one more time can
> > possibly make the hash function worse, could you please elaborate?
>
> IIRC, because xor is symmetric,
> i.e. shift:
> 0b01011010 >> 4 = 0b0101
> and xor:
> 0b0101 ^ 0b1010 = 0b1111
> Xor will decrease randomness/entropy and will lead to hash collisions.

Makes perfect sense. Yes, XORing two random numbers reduces entropy.

> It is possible to move the decision from lazy load to ksm_thread;
> that would allow us to run the benchmark without slowing down boot.
>
> But for that to work, ksm must start later, after the init of crypto.

After studying this dependency some more, I agree that it is OK to choose
the hash function where it is chosen now, but I still disagree that we
must measure the performance at runtime.

> crc32c with no hw is slower compared to jhash2 on x86, so I think on
> other arches the result will be the same.

Agreed.

Below is your patch, updated with my suggested changes:

1. Remove the hard dependency on crc32c; use it only when it is available.
2. Do not spend time measuring performance at runtime; use crc32c only if
   a hardware-optimized implementation of it is available (see the
   /proc/crypto sketch below this list).
3. Replace the selection logic with static branches.
4. Fix a couple of minor bugs: fastest_hash_setup() and crc32c_available()
   were marked as __init functions, and thus could be unmapped by the time
   they are run for the first time. I think a section mismatch warning
   would catch those.

Also removed the dead code "desc.flags = 0", replaced desc with shash, and
removed the unnecessary file-scope "static struct shash_desc desc", which
takes it out of the data page. Fixed a few spelling errors and made other
minor changes to pass ./scripts/checkpatch.pl.
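Regarding change 2: a quick way to see from userspace which crc32c
implementations the kernel has registered, and at what priority, is to
parse /proc/crypto. The sketch below is only illustrative and is not part
of the patch; it assumes the usual "field : value" record layout of
/proc/crypto.

/*
 * Illustrative only (not part of the patch): list the crc32c
 * implementations the kernel has registered, and their priorities, by
 * parsing /proc/crypto.  Assumes the usual "field : value" record layout.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256], name[128] = "", driver[128] = "";
	int prio;
	FILE *f = fopen("/proc/crypto", "r");

	if (!f) {
		perror("/proc/crypto");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "name : %127s", name) == 1)
			continue;
		if (sscanf(line, "driver : %127s", driver) == 1)
			continue;
		if (sscanf(line, "priority : %d", &prio) == 1 &&
		    strcmp(name, "crc32c") == 0)
			/*
			 * Generic implementations register with priority
			 * 100; accelerated drivers (e.g. crc32c-intel) are
			 * higher, which is what the patch keys off.
			 */
			printf("%-16s priority %d\n", driver, prio);
	}

	fclose(f);
	return 0;
}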
The patch is untested, but should work. Please let me know if you agree
with the changes. If so, you can test and resubmit the series.

Thank you,
Pavel

Patch:
==========================================================================

From d5f9ecb89ac5de7467fe587b6ccdead39ee00049 Mon Sep 17 00:00:00 2001
From: Timofey Titovets
Date: Wed, 18 Apr 2018 22:32:20 +0300
Subject: [PATCH] ksm: replace jhash2 with faster hash

1. Pick up Sioh Lee's crc32 patch, after some long conversation
2. Merge with my work on xxhash
3. Add autoselect code to choose the fastest hash helper

The base idea is the same: replace jhash2 with something faster.

Perf numbers:
Intel(R) Xeon(R) CPU E5-2420 v2 @ 2.20GHz
ksm: crc32c hash() 12081 MB/s
ksm: xxh64  hash()  8770 MB/s
ksm: xxh32  hash()  4529 MB/s
ksm: jhash2 hash()  1569 MB/s

As jhash2 will always be slower (for data sizes like PAGE_SIZE), just drop
it from the choice.

Add a function to autoselect the hash algorithm during the first page
merging run. Move the init of zero_checksum from init time to the first
call of fasthash():
1. KSM init runs during early kernel init; running perf-testing stuff on
   the main kernel boot thread looks bad to me.
2. The crypto subsystem is not available that early in boot, so crc32c,
   even if compiled in, is not available.

As crypto and ksm init run at the subsys_initcall() (4) kernel init level,
all possible consumers will run later, at levels 5+.

Output after KSM's first attempt to hash a page:
ksm: using crc32c as hash function

Thanks.

Changes:
  v1 -> v2:
    - Move xxhash() to xxhash.h/c and separate patches
  v2 -> v3:
    - Move xxhash() xxhash.c -> xxhash.h
    - Replace xxhash_t with 'unsigned long'
    - Update kerneldoc above xxhash()
  v3 -> v4:
    - Merge xxhash/crc32 patches
    - Replace crc32 with crc32c (crc32 has the same speed as jhash2)
    - Add auto speed test and auto choice of fastest hash function
  v4 -> v5:
    - Pick up missed xxhash patch
    - Update code with compile-time chosen xxhash
    - Add more macros to make the code more readable
    - As now only xxhash or crc32c can be used, on crc32c allocation error
      skip the speed test and fall back to xxhash
    - To work around the too-early-init problem (crc32c not available),
      move zero_checksum init to the first call of fasthash()
    - Don't allocate a page for hash testing, use the arch zero page for
      that
  v5 -> v6:
    - Use libcrc32c instead of the CRYPTO API, mainly for code/Kconfig
      deps simplification
    - Add crc32c_available(): libcrc32c will BUG_ON on crc32c problems,
      so test crc32c availability with crc32c_available()
    - Simplify choice_fastest_hash()
    - Simplify fasthash()
    - struct rmap_item && stable_node have sizeof == 64 on x86_64, which
      makes them cache friendly. As we don't suffer from hash collisions,
      change the hash type from unsigned long back to u32.
    - Fix kbuild robot warning, make all local functions static

Signed-off-by: Timofey Titovets
Signed-off-by: leesioh
Reviewed-by: Pavel Tatashin
CC: Andrea Arcangeli
CC: linux-mm@kvack.org
CC: kvm@vger.kernel.org
---
 mm/Kconfig |  1 +
 mm/ksm.c   | 49 +++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index e14c01513bfd..6f46cda5c8ed 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -298,6 +298,7 @@ config MMU_NOTIFIER
 config KSM
 	bool "Enable KSM for page merging"
 	depends on MMU
+	select XXHASH
 	help
 	  Enable Kernel Samepage Merging: KSM periodically scans those areas
 	  of an application's address space that an app has advised may be
diff --git a/mm/ksm.c b/mm/ksm.c
index e3cbf9a92f3c..1ce4ce6dc313 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -25,7 +25,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -39,6 +38,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #include
 #include "internal.h"
@@ -284,6 +286,47 @@ static DEFINE_SPINLOCK(ksm_mmlist_lock);
 		sizeof(struct __struct), __alignof__(struct __struct),\
 		(__flags), NULL)
 
+static DEFINE_STATIC_KEY_FALSE(ksm_use_crc32c);
+static DEFINE_STATIC_KEY_FALSE(ksm_use_xxhash);
+
+static void fasthash_setup(void)
+{
+	struct crypto_shash *shash = crypto_alloc_shash("crc32c", 0, 0);
+
+	if (!IS_ERR(shash)) {
+		/* Use crc32c if any non-generic version is available.
+		 * Generic crypto algorithms have priority 100.
+		 */
+		if (crypto_tfm_alg_priority(&shash->base) > 100) {
+			static_branch_enable(&ksm_use_crc32c);
+			pr_info("ksm: using crc32c as hash function");
+		}
+		crypto_free_shash(shash);
+	}
+
+	if (!static_branch_likely(&ksm_use_crc32c)) {
+		static_branch_enable(&ksm_use_xxhash);
+		pr_info("ksm: using xxhash as hash function");
+	}
+}
+
+static u32 fasthash(const void *input, size_t length)
+{
+	if (static_branch_likely(&ksm_use_crc32c))
+		return crc32c(0, input, length);
+
+	if (static_branch_likely(&ksm_use_xxhash))
+		return (u32)xxhash(input, length, 0);
+
+	/* Is done only once on the first call of fasthash() */
+	fasthash_setup();
+
+	/* Now, that we know the hash alg., calculate checksum for zero page */
+	zero_checksum = fasthash(ZERO_PAGE(0), PAGE_SIZE);
+
+	return fasthash(input, length);
+}
+
 static int __init ksm_slab_init(void)
 {
 	rmap_item_cache = KSM_KMEM_CACHE(rmap_item, 0);
@@ -979,7 +1022,7 @@ static u32 calc_checksum(struct page *page)
 {
 	u32 checksum;
 	void *addr = kmap_atomic(page);
-	checksum = jhash2(addr, PAGE_SIZE / 4, 17);
+	checksum = fasthash(addr, PAGE_SIZE);
 	kunmap_atomic(addr);
 	return checksum;
 }
@@ -3100,8 +3143,6 @@ static int __init ksm_init(void)
 	struct task_struct *ksm_thread;
 	int err;
 
-	/* The correct value depends on page size and endianness */
-	zero_checksum = calc_checksum(ZERO_PAGE(0));
 	/* Default to false for backwards compatibility */
 	ksm_use_zero_pages = false;
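As an aside on the design: the "pick once on first use" flow of fasthash()
above can be modelled in plain userspace C with a function pointer and a
first-call check. This is only a rough model, not the patch's actual
mechanism; the kernel version uses static keys precisely to avoid an
indirect call and an extra branch on the hot path. hash_crc32c(),
hash_xxhash() and hw_crc32c_available() below are placeholders, not real
kernel or library APIs.

/*
 * Rough userspace model of the fasthash() flow in the patch above: the
 * first call picks an implementation and fills in zero_checksum, later
 * calls go straight to the chosen function.  hash_crc32c(), hash_xxhash()
 * and hw_crc32c_available() are placeholders, not real kernel/library APIs.
 */
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static uint32_t hash_crc32c(const void *p, size_t len)
{
	(void)p; (void)len;
	return 1;	/* placeholder */
}

static uint32_t hash_xxhash(const void *p, size_t len)
{
	(void)p; (void)len;
	return 2;	/* placeholder */
}

static int hw_crc32c_available(void)
{
	return 0;	/* placeholder for the "priority > 100" check */
}

static uint32_t (*fasthash_fn)(const void *, size_t);
static uint32_t zero_checksum;

static uint32_t fasthash(const void *input, size_t length)
{
	if (!fasthash_fn) {
		/* First call only: choose the implementation once ... */
		fasthash_fn = hw_crc32c_available() ? hash_crc32c : hash_xxhash;
		/* ... then compute the zero-page checksum with it. */
		static const unsigned char zero_page[PAGE_SIZE];
		zero_checksum = fasthash_fn(zero_page, sizeof(zero_page));
	}
	return fasthash_fn(input, length);
}

int main(void)
{
	unsigned char page[PAGE_SIZE];
	uint32_t h;

	memset(page, 0, sizeof(page));
	h = fasthash(page, sizeof(page));	/* first call selects the hash */
	printf("page hash: %" PRIu32 ", zero_checksum: %" PRIu32 "\n",
	       h, zero_checksum);
	return 0;
}

The property being modelled is the same one the commit message describes:
nothing is computed at boot, and zero_checksum is filled in with whatever
hash implementation the first caller ends up selecting.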