From patchwork Fri Dec 30 14:31:27 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pavel Butsykin
X-Patchwork-Id: 9492245
From: Pavel Butsykin
Date: Fri, 30 Dec 2016 17:31:27 +0300
Message-ID: <20161230143142.18214-4-pbutsykin@virtuozzo.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20161230143142.18214-1-pbutsykin@virtuozzo.com>
References: <20161230143142.18214-1-pbutsykin@virtuozzo.com>
Subject: [Qemu-devel] [PATCH v2 03/18] util/rbcache: range-based cache core
Cc: kwolf@redhat.com, den@openvz.org, armbru@redhat.com, mreitz@redhat.com

RBCache provides functionality to cache data from block devices (mainly).
A range is used as the main key for searching and storing data. The cache
is based on red-black trees, so the basic operations search, insert and
delete are performed in O(log n).

It is important to note that QEMU usually does not require a data cache,
but in practice there are already cases where caching small amounts of
data can improve performance. Red-black trees were chosen as the data
structure because they are fairly simple and efficient even for a small
number of elements. Therefore, with a minimum range of 512 bytes, the
recommended cache size is no more than 8-16 MB. Also note that this cache
implementation allows ranges of different lengths to be stored without
alignment.

The generic cache core can easily be used to implement different caching
policies at the block level, such as read-ahead. It can also be used in
some special cases, for example to cache data in qcow2 during sequential
allocating writes to an image with a backing file.

Signed-off-by: Pavel Butsykin
---
 MAINTAINERS            |   6 ++
 include/qemu/rbcache.h | 128 +++++++++++++++++++++++++
 util/Makefile.objs     |   1 +
 util/rbcache.c         | 253 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 388 insertions(+)
 create mode 100644 include/qemu/rbcache.h
 create mode 100644 util/rbcache.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 228278c1ca..01f4afa1e4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1472,6 +1472,12 @@ F: include/qemu/rbtree.h
 F: include/qemu/rbtree_augmented.h
 F: util/rbtree.c
 
+Range-Based Cache
+M: Denis V. Lunev
+S: Supported
+F: include/qemu/rbcache.h
+F: util/rbcache.c
+
 UUID
 M: Fam Zheng
 S: Supported
diff --git a/include/qemu/rbcache.h b/include/qemu/rbcache.h
new file mode 100644
index 0000000000..24f7c1cb80
--- /dev/null
+++ b/include/qemu/rbcache.h
@@ -0,0 +1,128 @@
+/*
+ * QEMU Range-Based Cache core
+ *
+ * Copyright (C) 2015-2016 Parallels IP Holdings GmbH.
+ *
+ * Author: Pavel Butsykin
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+
+#ifndef RBCACHE_H
+#define RBCACHE_H
+
+#include "qemu/rbtree.h"
+#include "qemu/queue.h"
+
+typedef struct RBCacheNode {
+    struct RbNode rb_node;
+    uint64_t offset;
+    uint64_t bytes;
+    QTAILQ_ENTRY(RBCacheNode) entry;
+} RBCacheNode;
+
+typedef struct RBCache RBCache;
+
+/* These callbacks are used to extend the common structure RBCacheNode. The
+ * alloc callback should initialize only the fields of the expanded node; the
+ * common part of the node is initialized by RBCache (see rbcache_node_alloc()).
+ */
+typedef RBCacheNode *RBNodeAlloc(uint64_t offset, uint64_t bytes, void *opaque);
+typedef void RBNodeFree(RBCacheNode *node, void *opaque);
+
+
+enum eviction_type {
+    RBCACHE_FIFO,
+    RBCACHE_LRU,
+};
+
+/**
+ * rbcache_search:
+ * @rbcache: the cache object.
+ * @offset: the start of the range.
+ * @bytes: the size of the range.
+ *
+ * Returns the node corresponding to the range (offset, bytes), or NULL if
+ * the node was not found. In the case when the range covers multiple nodes,
+ * it returns the node with the lowest offset.
+ */
+void *rbcache_search(RBCache *rbcache, uint64_t offset, uint64_t bytes);
+
+/**
+ * rbcache_insert:
+ * @rbcache: the cache object.
+ * @node: a new node for the cache.
+ *
+ * Returns the new node, or the old node if a node describing the same range
+ * already exists. In case of partial overlaps, the existing overlapping node
+ * with the lowest offset is returned.
+ */
+void *rbcache_insert(RBCache *rbcache, RBCacheNode *node);
+
+/**
+ * rbcache_search_and_insert:
+ * @rbcache: the cache object.
+ * @offset: the start of the range.
+ * @bytes: the size of the range.
+ *
+ * rbcache_search_and_insert() is like rbcache_insert(), except that the new
+ * node is allocated inside the function. Returns the new node, or the old
+ * node if a node describing the same range already exists. In case of partial
+ * overlaps, the existing overlapping node with the lowest offset is returned.
+ */
+void *rbcache_search_and_insert(RBCache *rbcache, uint64_t offset,
+                                uint64_t bytes);
+
+/**
+ * rbcache_remove:
+ * @rbcache: the cache object.
+ * @node: a node to remove.
+ *
+ * Removes the cached range owned by the node and frees the node.
+ */
+void rbcache_remove(RBCache *rbcache, RBCacheNode *node);
+
+/**
+ * rbcache_node_alloc:
+ * @rbcache: the cache object.
+ * @offset: the start of the range.
+ * @bytes: the size of the range.
+ *
+ * Returns an allocated and initialized node.
+ */
+RBCacheNode *rbcache_node_alloc(RBCache *rbcache, uint64_t offset,
+                                uint64_t bytes);
+
+/**
+ * rbcache_node_free:
+ * @rbcache: the cache object.
+ * @node: a node to free.
+ *
+ * Frees the node.
+ */
+void rbcache_node_free(RBCache *rbcache, RBCacheNode *node);
+
+/**
+ * rbcache_create:
+ * @alloc: callback used to allocate a node; allows the caller to extend the
+ *         allocation and the capabilities of the node.
+ * @free: callback used to release a node; must be used together with the
+ *        alloc callback.
+ * @limit_size: maximum cache size in bytes.
+ * @eviction_type: method of memory limitation.
+ * @opaque: the opaque pointer to pass to the callbacks.
+ *
+ * Returns the cache object.
+ */
+RBCache *rbcache_create(RBNodeAlloc *alloc, RBNodeFree *free,
+                        uint64_t limit_size, int eviction_type, void *opaque);
+
+/**
+ * rbcache_destroy:
+ * @rbcache: the cache object.
+ *
+ * Cleans up the cache object created with rbcache_create().
+ */
+void rbcache_destroy(RBCache *rbcache);
+
+#endif /* RBCACHE_H */
diff --git a/util/Makefile.objs b/util/Makefile.objs
index a5607cb88f..e9f545ddbf 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -37,3 +37,4 @@ util-obj-y += qdist.o
 util-obj-y += qht.o
 util-obj-y += range.o
 util-obj-y += rbtree.o
+util-obj-y += rbcache.o
diff --git a/util/rbcache.c b/util/rbcache.c
new file mode 100644
index 0000000000..2f1f860f76
--- /dev/null
+++ b/util/rbcache.c
@@ -0,0 +1,253 @@
+/*
+ * QEMU Range-Based Cache core
+ *
+ * Copyright (C) 2015-2016 Parallels IP Holdings GmbH.
+ *
+ * Author: Pavel Butsykin
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/rbcache.h"
+
+/* RBCache provides functionality to cache data from block devices (mainly).
+ * A range is used as the main key for searching and storing data. The cache
+ * is based on red-black trees, so the basic operations search, insert and
+ * delete are performed in O(log n).
+ *
+ * It is important to note that QEMU usually does not require a data cache,
+ * but in practice there are already cases where caching small amounts of data
+ * can improve performance. Red-black trees were chosen as the data structure
+ * because they are fairly simple and efficient even for a small number of
+ * elements. Therefore, with a minimum range of 512 bytes, the recommended
+ * cache size is no more than 8-16 MB. Also note that this cache implementation
+ * allows ranges of different lengths to be stored without alignment.
+ */
+
+struct RBCache {
+    struct RbRoot root;
+    RBNodeAlloc *alloc;
+    RBNodeFree *free;
+    uint64_t limit_size;
+    uint64_t cur_size;
+    enum eviction_type eviction_type;
+    void *opaque;
+
+    QTAILQ_HEAD(RBCacheNodeHead, RBCacheNode) queue;
+};
+
+static int node_key_cmp(const RBCacheNode *node1, const RBCacheNode *node2)
+{
+    assert(node1 != NULL);
+    assert(node2 != NULL);
+
+    if (node1->offset >= node2->offset + node2->bytes) {
+        return 1;
+    }
+    if (node1->offset + node1->bytes <= node2->offset) {
+        return -1;
+    }
+
+    return 0;
+}
+
+/* Find the leftmost node that intersects the given target_offset. */
+static RBCacheNode *node_previous(RBCacheNode *node, uint64_t target_offset)
+{
+    while (node) {
+        struct RbNode *prev_rb_node = rb_prev(&node->rb_node);
+        RBCacheNode *prev_node;
+        if (prev_rb_node == NULL) {
+            break;
+        }
+        prev_node = container_of(prev_rb_node, RBCacheNode, rb_node);
+        if (prev_node->offset + prev_node->bytes <= target_offset) {
+            break;
+        }
+        node = prev_node;
+    }
+
+    assert(node != NULL);
+
+    return node;
+}
+
+RBCacheNode *rbcache_node_alloc(RBCache *rbcache, uint64_t offset,
+                                uint64_t bytes)
+{
+    RBCacheNode *node;
+
+    if (rbcache->alloc) {
+        node = rbcache->alloc(offset, bytes, rbcache->opaque);
+    } else {
+        node = g_new(RBCacheNode, 1);
+    }
+
+    node->offset = offset;
+    node->bytes = bytes;
+
+    return node;
+}
+
+void rbcache_node_free(RBCache *rbcache, RBCacheNode *node)
+{
+    if (rbcache->free) {
+        rbcache->free(node, rbcache->opaque);
+    } else {
+        g_free(node);
+    }
+}
+
+static void rbcache_try_shrink(RBCache *rbcache)
+{
+    while (rbcache->cur_size > rbcache->limit_size) {
+        RBCacheNode *node;
+        assert(!QTAILQ_EMPTY(&rbcache->queue));
+
+        node = QTAILQ_LAST(&rbcache->queue, RBCacheNodeHead);
+
+        rbcache_remove(rbcache, node);
+    }
+}
+
+static inline void node_move_in_queue(RBCache *rbcache, RBCacheNode *node)
+{
+    if (rbcache->eviction_type == RBCACHE_LRU) {
+        QTAILQ_REMOVE(&rbcache->queue, node, entry);
+        QTAILQ_INSERT_HEAD(&rbcache->queue, node, entry);
+    }
+}
+
+/*
+ * Adds a new node to the tree if the range of the node doesn't overlap with
+ * existing nodes, and returns the new node. If the new node overlaps with
+ * another existing node, the tree is not changed and the function returns a
+ * pointer to the existing node. If the new node covers multiple nodes, the
+ * leftmost of those nodes is returned.
+ */
+static RBCacheNode *node_insert(RBCache *rbcache, RBCacheNode *node, bool alloc)
+{
+    struct RbNode **new, *parent = NULL;
+
+    assert(rbcache != NULL);
+    assert(node->bytes != 0);
+
+    /* Figure out where to put the new node */
+    new = &(rbcache->root.rb_node);
+    while (*new) {
+        RBCacheNode *this = container_of(*new, RBCacheNode, rb_node);
+        int result = node_key_cmp(node, this);
+        if (result == 0) {
+            this = node_previous(this, node->offset);
+            node_move_in_queue(rbcache, this);
+            return this;
+        }
+        parent = *new;
+        new = result < 0 ? &((*new)->rb_left) : &((*new)->rb_right);
+    }
+
+    if (alloc) {
+        node = rbcache_node_alloc(rbcache, node->offset, node->bytes);
+    }
+    /* Add new node and rebalance tree. */
+    rb_link_node(&node->rb_node, parent, new);
+    rb_insert_color(&node->rb_node, &rbcache->root);
+
+    rbcache->cur_size += node->bytes;
+
+    rbcache_try_shrink(rbcache);
+
+    QTAILQ_INSERT_HEAD(&rbcache->queue, node, entry);
+
+    return node;
+}
+
+void *rbcache_search(RBCache *rbcache, uint64_t offset, uint64_t bytes)
+{
+    struct RbNode *rb_node;
+    RBCacheNode node = {
+        .offset = offset,
+        .bytes = bytes,
+    };
+
+    assert(rbcache != NULL);
+
+    rb_node = rbcache->root.rb_node;
+    while (rb_node) {
+        RBCacheNode *this = container_of(rb_node, RBCacheNode, rb_node);
+        int32_t result = node_key_cmp(&node, this);
+        if (result == 0) {
+            this = node_previous(this, offset);
+            node_move_in_queue(rbcache, this);
+            return this;
+        }
+        rb_node = result < 0 ? rb_node->rb_left : rb_node->rb_right;
+    }
+    return NULL;
+}
+
+void *rbcache_insert(RBCache *rbcache, RBCacheNode *node)
+{
+    return node_insert(rbcache, node, false);
+}
+
+void *rbcache_search_and_insert(RBCache *rbcache, uint64_t offset,
+                                uint64_t bytes)
+{
+    RBCacheNode node = {
+        .offset = offset,
+        .bytes = bytes,
+    };
+
+    return node_insert(rbcache, &node, true);
+}
+
+void rbcache_remove(RBCache *rbcache, RBCacheNode *node)
+{
+    assert(rbcache->cur_size >= node->bytes);
+
+    rbcache->cur_size -= node->bytes;
+    rb_erase(&node->rb_node, &rbcache->root);
+
+    QTAILQ_REMOVE(&rbcache->queue, node, entry);
+
+    rbcache_node_free(rbcache, node);
+}
+
+RBCache *rbcache_create(RBNodeAlloc *alloc, RBNodeFree *free,
+                        uint64_t limit_size, int eviction_type, void *opaque)
+{
+    RBCache *rbcache = g_new(RBCache, 1);
+
+    /* We can't use only one callback, it must be both or neither */
+    assert(!(!alloc ^ !free));
+
+    *rbcache = (RBCache) {
+        .root          = RB_ROOT,
+        .alloc         = alloc,
+        .free          = free,
+        .limit_size    = limit_size,
+        .eviction_type = eviction_type,
+        .opaque        = opaque,
+        .queue         = QTAILQ_HEAD_INITIALIZER(rbcache->queue),
+    };
+
+    return rbcache;
+}
+
+void rbcache_destroy(RBCache *rbcache)
+{
+    RBCacheNode *node, *next;
+
+    assert(rbcache != NULL);
+
+    QTAILQ_FOREACH_SAFE(node, &rbcache->queue, entry, next) {
+        QTAILQ_REMOVE(&rbcache->queue, node, entry);
+        rbcache_node_free(rbcache, node);
+    }
+
+    g_free(rbcache);
+}
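
For illustration, here is a minimal usage sketch of the API added by this
patch, using the default node allocation (alloc and free callbacks left as
NULL). The function name, offsets, sizes and the 8 MB limit are illustrative
only and are not part of the patch.

#include "qemu/osdep.h"
#include "qemu/rbcache.h"

static void rbcache_usage_example(void)
{
    /* FIFO eviction, no custom callbacks, 8 MB limit (illustrative). */
    RBCache *cache = rbcache_create(NULL, NULL, 8 * 1024 * 1024,
                                    RBCACHE_FIFO, NULL);
    RBCacheNode *node;

    /* Cache the range [4096, 4096 + 512). A new node is allocated and
     * inserted because nothing overlaps this range yet. */
    node = rbcache_search_and_insert(cache, 4096, 512);

    /* A lookup for any range overlapping [4096, 4608) returns the node
     * above; a non-overlapping range returns NULL. */
    assert(rbcache_search(cache, 4096, 512) == node);
    assert(rbcache_search(cache, 0, 512) == NULL);

    /* Drop the range explicitly (eviction would otherwise do this once
     * cur_size exceeds limit_size). */
    rbcache_remove(cache, node);

    rbcache_destroy(cache);
}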
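
The RBNodeAlloc/RBNodeFree callbacks exist so that a user can embed
RBCacheNode in a larger, caller-defined structure. The sketch below shows one
way that could look; MyCachedRange, my_range_alloc, my_range_free and
my_cache_new are hypothetical names, and the 16 MB LRU configuration is only
an example, not something defined by this patch.

#include "qemu/osdep.h"
#include "qemu/rbcache.h"

/* Hypothetical extended node. Placing the common RBCacheNode first lets the
 * pointers returned by rbcache_search()/rbcache_insert() be used directly as
 * MyCachedRange pointers. */
typedef struct MyCachedRange {
    RBCacheNode common;
    uint8_t *data;          /* per-range payload owned by the caller */
} MyCachedRange;

static RBCacheNode *my_range_alloc(uint64_t offset, uint64_t bytes,
                                   void *opaque)
{
    MyCachedRange *range = g_new(MyCachedRange, 1);

    /* Initialize only the extended fields; offset and bytes of the common
     * part are filled in by rbcache_node_alloc(). The opaque pointer is
     * unused in this sketch. */
    range->data = g_malloc(bytes);

    return &range->common;
}

static void my_range_free(RBCacheNode *node, void *opaque)
{
    MyCachedRange *range = container_of(node, MyCachedRange, common);

    g_free(range->data);
    g_free(range);
}

/* LRU cache whose nodes carry a data buffer (limit size illustrative). */
static RBCache *my_cache_new(void)
{
    return rbcache_create(my_range_alloc, my_range_free,
                          16 * 1024 * 1024, RBCACHE_LRU, NULL);
}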