From patchwork Fri Mar 1 04:38:44 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578006
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Matthew Sakai
Subject: [PATCH 01/10] dm vdo errors: remove unused error codes
Date: Thu, 29 Feb 2024 23:38:44 -0500
Message-ID: <71e1a606ed732a0ee77aecd3d73e75f63903cc7e.1709267597.git.msakai@redhat.com>

Also define VDO_SUCCESS in a more central location, and rename error
block constants for clarity.

Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/errors.c       |  4 ----
 drivers/md/dm-vdo/errors.h       | 13 +++----------
 drivers/md/dm-vdo/status-codes.c | 10 ----------
 drivers/md/dm-vdo/status-codes.h | 32 +++++---------------------
 4 files changed, 8 insertions(+), 51 deletions(-)

diff --git a/drivers/md/dm-vdo/errors.c b/drivers/md/dm-vdo/errors.c
index dc1f0533bd7a..df2498553312 100644
--- a/drivers/md/dm-vdo/errors.c
+++ b/drivers/md/dm-vdo/errors.c
@@ -58,13 +58,9 @@ static const struct error_info error_list[] = {
 	{ "UDS_DUPLICATE_NAME", "Attempt to enter the same name into a delta index twice" },
 	{ "UDS_ASSERTION_FAILED", "Assertion failed" },
 	{ "UDS_QUEUED", "Request queued" },
-	{ "UDS_BUFFER_ERROR", "Buffer error" },
-	{ "UDS_NO_DIRECTORY", "Expected directory is missing" },
 	{ "UDS_ALREADY_REGISTERED", "Error range already registered" },
 	{ "UDS_OUT_OF_RANGE", "Cannot access data outside specified limits" },
-	{ "UDS_EMODULE_LOAD", "Could not load modules" },
 	{ "UDS_DISABLED", "UDS library context is disabled" },
-	{ "UDS_UNKNOWN_ERROR", "Unknown error" },
 	{ "UDS_UNSUPPORTED_VERSION", "Unsupported version" },
 	{ "UDS_CORRUPT_DATA", "Some index structure is corrupt" },
 	{ "UDS_NO_INDEX", "No index found" },
diff --git a/drivers/md/dm-vdo/errors.h b/drivers/md/dm-vdo/errors.h
index cf15d7243204..c6c085b26a0e 100644
--- a/drivers/md/dm-vdo/errors.h
+++ b/drivers/md/dm-vdo/errors.h
@@ -9,12 +9,13 @@
 #include
 #include
 
-/* Custom error codes and error-related utilities for UDS */
+/* Custom error codes and error-related utilities */
+#define VDO_SUCCESS 0
 
 /* Valid status codes for internal UDS functions. */
 enum uds_status_codes {
 	/* Successful return */
-	UDS_SUCCESS = 0,
+	UDS_SUCCESS = VDO_SUCCESS,
 	/* Used as a base value for reporting internal errors */
 	UDS_ERROR_CODE_BASE = 1024,
 	/* Index overflow */
@@ -29,20 +30,12 @@ enum uds_status_codes {
 	UDS_ASSERTION_FAILED,
 	/* A request has been queued for later processing (not an error) */
 	UDS_QUEUED,
-	/* A problem has occurred with a buffer */
-	UDS_BUFFER_ERROR,
-	/* No directory was found where one was expected */
-	UDS_NO_DIRECTORY,
 	/* This error range has already been registered */
 	UDS_ALREADY_REGISTERED,
 	/* Attempt to read or write data outside the valid range */
 	UDS_OUT_OF_RANGE,
-	/* Could not load modules */
-	UDS_EMODULE_LOAD,
 	/* The index session is disabled */
 	UDS_DISABLED,
-	/* Unknown error */
-	UDS_UNKNOWN_ERROR,
 	/* The index configuration or volume format is no longer supported */
 	UDS_UNSUPPORTED_VERSION,
 	/* Some index structure is corrupt */
diff --git a/drivers/md/dm-vdo/status-codes.c b/drivers/md/dm-vdo/status-codes.c
index efba1ead0aca..92c42b8bbb8b 100644
--- a/drivers/md/dm-vdo/status-codes.c
+++ b/drivers/md/dm-vdo/status-codes.c
@@ -15,22 +15,16 @@ const struct error_info vdo_status_list[] = {
 	{ "VDO_OUT_OF_RANGE", "Out of range" },
 	{ "VDO_REF_COUNT_INVALID", "Reference count would become invalid" },
 	{ "VDO_NO_SPACE", "Out of space" },
-	{ "VDO_UNEXPECTED_EOF", "Unexpected EOF on block read" },
 	{ "VDO_BAD_CONFIGURATION", "Bad configuration option" },
-	{ "VDO_SOCKET_ERROR", "Socket error" },
-	{ "VDO_BAD_ALIGNMENT", "Mis-aligned block reference" },
 	{ "VDO_COMPONENT_BUSY", "Prior operation still in progress" },
 	{ "VDO_BAD_PAGE", "Corrupt or incorrect page" },
 	{ "VDO_UNSUPPORTED_VERSION", "Unsupported component version" },
 	{ "VDO_INCORRECT_COMPONENT", "Component id mismatch in decoder" },
 	{ "VDO_PARAMETER_MISMATCH", "Parameters have conflicting values" },
-	{ "VDO_BLOCK_SIZE_TOO_SMALL", "The block size is too small" },
 	{ "VDO_UNKNOWN_PARTITION", "No partition exists with a given id" },
 	{ "VDO_PARTITION_EXISTS", "A partition already exists with a given id" },
-	{ "VDO_NOT_READ_ONLY", "The device is not in read-only mode" },
 	{ "VDO_INCREMENT_TOO_SMALL", "Physical block growth of too few blocks" },
 	{ "VDO_CHECKSUM_MISMATCH", "Incorrect checksum" },
-	{ "VDO_RECOVERY_JOURNAL_FULL", "The recovery journal is full" },
 	{ "VDO_LOCK_ERROR", "A lock is held incorrectly" },
 	{ "VDO_READ_ONLY", "The device is in read-only mode" },
 	{ "VDO_SHUTTING_DOWN", "The device is shutting down" },
@@ -38,11 +32,7 @@ const struct error_info vdo_status_list[] = {
 	{ "VDO_TOO_MANY_SLABS", "Exceeds maximum number of slabs supported" },
 	{ "VDO_INVALID_FRAGMENT", "Compressed block fragment is invalid" },
 	{ "VDO_RETRY_AFTER_REBUILD", "Retry operation after rebuilding finishes" },
-	{ "VDO_UNKNOWN_COMMAND", "The extended command is not known" },
-	{ "VDO_COMMAND_ERROR", "Bad extended command parameters" },
-	{ "VDO_CANNOT_DETERMINE_SIZE", "Cannot determine config sizes to fit" },
 	{ "VDO_BAD_MAPPING", "Invalid page mapping" },
-	{ "VDO_READ_CACHE_BUSY", "Read cache has no free slots" },
 	{ "VDO_BIO_CREATION_FAILED", "Bio creation failed" },
 	{ "VDO_BAD_MAGIC", "Bad magic number" },
 	{ "VDO_BAD_NONCE", "Bad nonce" },
diff --git a/drivers/md/dm-vdo/status-codes.h b/drivers/md/dm-vdo/status-codes.h
index 5d1e8bbe54b4..eb847def8eb4 100644
--- a/drivers/md/dm-vdo/status-codes.h
+++ b/drivers/md/dm-vdo/status-codes.h
@@ -9,17 +9,15 @@
 #include "errors.h"
 
 enum {
-	UDS_BLOCK_SIZE = UDS_ERROR_CODE_BLOCK_END - UDS_ERROR_CODE_BASE,
-	VDO_BLOCK_START = UDS_ERROR_CODE_BLOCK_END,
-	VDO_BLOCK_END = VDO_BLOCK_START + UDS_BLOCK_SIZE,
+	UDS_ERRORS_BLOCK_SIZE = UDS_ERROR_CODE_BLOCK_END - UDS_ERROR_CODE_BASE,
+	VDO_ERRORS_BLOCK_START = UDS_ERROR_CODE_BLOCK_END,
+	VDO_ERRORS_BLOCK_END = VDO_ERRORS_BLOCK_START + UDS_ERRORS_BLOCK_SIZE,
 };
 
 /* VDO-specific status codes. */
 enum vdo_status_codes {
-	/* successful result */
-	VDO_SUCCESS = UDS_SUCCESS,
 	/* base of all VDO errors */
-	VDO_STATUS_CODE_BASE = VDO_BLOCK_START,
+	VDO_STATUS_CODE_BASE = VDO_ERRORS_BLOCK_START,
 	/* we haven't written this yet */
 	VDO_NOT_IMPLEMENTED = VDO_STATUS_CODE_BASE,
 	/* input out of range */
@@ -28,14 +26,8 @@ enum vdo_status_codes {
 	VDO_REF_COUNT_INVALID,
 	/* a free block could not be allocated */
 	VDO_NO_SPACE,
-	/* unexpected EOF on block read */
-	VDO_UNEXPECTED_EOF,
 	/* improper or missing configuration option */
 	VDO_BAD_CONFIGURATION,
-	/* socket opening or binding problem */
-	VDO_SOCKET_ERROR,
-	/* read or write on non-aligned offset */
-	VDO_BAD_ALIGNMENT,
 	/* prior operation still in progress */
 	VDO_COMPONENT_BUSY,
 	/* page contents incorrect or corrupt data */
@@ -46,20 +38,14 @@ enum vdo_status_codes {
 	VDO_INCORRECT_COMPONENT,
 	/* parameters have conflicting values */
 	VDO_PARAMETER_MISMATCH,
-	/* the block size is too small */
-	VDO_BLOCK_SIZE_TOO_SMALL,
 	/* no partition exists with a given id */
 	VDO_UNKNOWN_PARTITION,
 	/* a partition already exists with a given id */
 	VDO_PARTITION_EXISTS,
-	/* the VDO is not in read-only mode */
-	VDO_NOT_READ_ONLY,
 	/* physical block growth of too few blocks */
 	VDO_INCREMENT_TOO_SMALL,
 	/* incorrect checksum */
 	VDO_CHECKSUM_MISMATCH,
-	/* the recovery journal is full */
-	VDO_RECOVERY_JOURNAL_FULL,
 	/* a lock is held incorrectly */
 	VDO_LOCK_ERROR,
 	/* the VDO is in read-only mode */
@@ -74,16 +60,8 @@ enum vdo_status_codes {
 	VDO_INVALID_FRAGMENT,
 	/* action is unsupported while rebuilding */
 	VDO_RETRY_AFTER_REBUILD,
-	/* the extended command is not known */
-	VDO_UNKNOWN_COMMAND,
-	/* bad extended command parameters */
-	VDO_COMMAND_ERROR,
-	/* cannot determine sizes to fit */
-	VDO_CANNOT_DETERMINE_SIZE,
 	/* a block map entry is invalid */
 	VDO_BAD_MAPPING,
-	/* read cache has no free slots */
-	VDO_READ_CACHE_BUSY,
 	/* bio_add_page failed */
 	VDO_BIO_CREATION_FAILED,
 	/* bad magic number */
@@ -98,7 +76,7 @@ enum vdo_status_codes {
 	VDO_CANT_ADD_SYSFS_NODE,
 	/* one more than last error code */
 	VDO_STATUS_CODE_LAST,
-	VDO_STATUS_CODE_BLOCK_END = VDO_BLOCK_END
+	VDO_STATUS_CODE_BLOCK_END = VDO_ERRORS_BLOCK_END
 };
 
 extern const struct error_info vdo_status_list[];
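
The renames above preserve a simple invariant: the VDO status-code block starts exactly where the UDS error-code block ends and spans the same number of codes, so the two blocks stay adjacent in one numbering space. Below is a minimal user-space C sketch of that arithmetic; the value used for UDS_ERROR_CODE_BLOCK_END is a stand-in for illustration only (the real value is defined in errors.h).

#include <assert.h>
#include <stdio.h>

/* Stand-in values; the real constants live in drivers/md/dm-vdo/errors.h. */
#define UDS_ERROR_CODE_BASE		1024
#define UDS_ERROR_CODE_BLOCK_END	(UDS_ERROR_CODE_BASE + 64)

enum {
	UDS_ERRORS_BLOCK_SIZE = UDS_ERROR_CODE_BLOCK_END - UDS_ERROR_CODE_BASE,
	VDO_ERRORS_BLOCK_START = UDS_ERROR_CODE_BLOCK_END,
	VDO_ERRORS_BLOCK_END = VDO_ERRORS_BLOCK_START + UDS_ERRORS_BLOCK_SIZE,
};

int main(void)
{
	/* The VDO block begins where the UDS block ends... */
	assert(VDO_ERRORS_BLOCK_START == UDS_ERROR_CODE_BLOCK_END);
	/* ...and both blocks have the same size. */
	assert(VDO_ERRORS_BLOCK_END - VDO_ERRORS_BLOCK_START == UDS_ERRORS_BLOCK_SIZE);
	printf("UDS block [%d, %d), VDO block [%d, %d)\n",
	       UDS_ERROR_CODE_BASE, UDS_ERROR_CODE_BLOCK_END,
	       VDO_ERRORS_BLOCK_START, VDO_ERRORS_BLOCK_END);
	return 0;
}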
From patchwork Fri Mar 1 04:38:45 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578010
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 02/10] dm vdo memory-alloc: return VDO_SUCCESS on success
Date: Thu, 29 Feb 2024 23:38:45 -0500
Message-ID: <6105d788062f5239a4d58469982a2a87967b455c.1709267597.git.msakai@redhat.com>

From: Mike Snitzer

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/memory-alloc.c | 24 ++++++++++++------------
 drivers/md/dm-vdo/memory-alloc.h |  8 ++++----
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm-vdo/memory-alloc.c b/drivers/md/dm-vdo/memory-alloc.c
index 8d5df3e45a24..dd5acc582fb3 100644
--- a/drivers/md/dm-vdo/memory-alloc.c
+++ b/drivers/md/dm-vdo/memory-alloc.c
@@ -194,7 +194,7 @@ static inline bool use_kmalloc(size_t size)
  * @what: What is being allocated (for error logging)
  * @ptr: A pointer to hold the allocated memory
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 int vdo_allocate_memory(size_t size, size_t align, const char *what, void *ptr)
 {
@@ -216,12 +216,12 @@ int vdo_allocate_memory(size_t size, size_t align, const char *what, void *ptr)
 	unsigned long start_time;
 	void *p = NULL;
 
-	if (ptr == NULL)
-		return UDS_INVALID_ARGUMENT;
+	if (unlikely(ptr == NULL))
+		return -EINVAL;
 
 	if (size == 0) {
 		*((void **) ptr) = NULL;
-		return UDS_SUCCESS;
+		return VDO_SUCCESS;
 	}
 
 	if (allocations_restricted)
@@ -245,7 +245,7 @@ int vdo_allocate_memory(size_t size, size_t align, const char *what, void *ptr)
 	} else {
 		struct vmalloc_block_info *block;
 
-		if (vdo_allocate(1, struct vmalloc_block_info, __func__, &block) == UDS_SUCCESS) {
+		if (vdo_allocate(1, struct vmalloc_block_info, __func__, &block) == VDO_SUCCESS) {
 			/*
 			 * It is possible for __vmalloc to fail to allocate memory because there
 			 * are no pages available. A short sleep may allow the page reclaimer
@@ -290,7 +290,7 @@ int vdo_allocate_memory(size_t size, size_t align, const char *what, void *ptr)
 	}
 
 	*((void **) ptr) = p;
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 /*
@@ -335,7 +335,7 @@ void vdo_free(void *ptr)
  * @what: What is being allocated (for error logging)
  * @new_ptr: A pointer to hold the reallocated pointer
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 int vdo_reallocate_memory(void *ptr, size_t old_size, size_t size, const char *what,
 			  void *new_ptr)
@@ -345,11 +345,11 @@ int vdo_reallocate_memory(void *ptr, size_t old_size, size_t size, const char *w
 	if (size == 0) {
 		vdo_free(ptr);
 		*(void **) new_ptr = NULL;
-		return UDS_SUCCESS;
+		return VDO_SUCCESS;
 	}
 
 	result = vdo_allocate(size, char, what, new_ptr);
-	if (result != UDS_SUCCESS)
+	if (result != VDO_SUCCESS)
 		return result;
 
 	if (ptr != NULL) {
@@ -360,7 +360,7 @@ int vdo_reallocate_memory(void *ptr, size_t old_size, size_t size, const char *w
 		vdo_free(ptr);
 	}
 
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 int vdo_duplicate_string(const char *string, const char *what, char **new_string)
@@ -369,12 +369,12 @@ int vdo_duplicate_string(const char *string, const char *what, char **new_string
 	u8 *dup;
 
 	result = vdo_allocate(strlen(string) + 1, u8, what, &dup);
-	if (result != UDS_SUCCESS)
+	if (result != VDO_SUCCESS)
 		return result;
 
 	memcpy(dup, string, strlen(string) + 1);
 	*new_string = dup;
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 void vdo_memory_init(void)
diff --git a/drivers/md/dm-vdo/memory-alloc.h b/drivers/md/dm-vdo/memory-alloc.h
index 1401843db5e0..0093d9f940d9 100644
--- a/drivers/md/dm-vdo/memory-alloc.h
+++ b/drivers/md/dm-vdo/memory-alloc.h
@@ -35,7 +35,7 @@ int __must_check vdo_allocate_memory(size_t size, size_t align, const char *what
  * @what: What is being allocated (for error logging)
  * @ptr: A pointer to hold the allocated memory
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 static inline int __vdo_do_allocation(size_t count, size_t size, size_t extra,
 				      size_t align, const char *what, void *ptr)
@@ -65,7 +65,7 @@ static inline int __vdo_do_allocation(size_t count, size_t size, size_t extra,
  * @WHAT: What is being allocated (for error logging)
  * @PTR: A pointer to hold the allocated memory
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 #define vdo_allocate(COUNT, TYPE, WHAT, PTR) \
 	__vdo_do_allocation(COUNT, sizeof(TYPE), 0, __alignof__(TYPE), WHAT, PTR)
@@ -81,7 +81,7 @@ static inline int __vdo_do_allocation(size_t count, size_t size, size_t extra,
  * @WHAT: What is being allocated (for error logging)
  * @PTR: A pointer to hold the allocated memory
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 #define vdo_allocate_extended(TYPE1, COUNT, TYPE2, WHAT, PTR)	\
 	__extension__({						\
@@ -105,7 +105,7 @@ static inline int __vdo_do_allocation(size_t count, size_t size, size_t extra,
  * @what: What is being allocated (for error logging)
  * @ptr: A pointer to hold the allocated memory
  *
- * Return: UDS_SUCCESS or an error code
+ * Return: VDO_SUCCESS or an error code
  */
 static inline int __must_check vdo_allocate_cache_aligned(size_t size, const char *what, void *ptr)
 {
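
The caller-visible contract after this patch: vdo_allocate() and friends report success as VDO_SUCCESS, a NULL output pointer fails fast with -EINVAL, and a zero-size request succeeds with a NULL result. A sketch of the resulting caller pattern, modeled on vdo_duplicate_string() above (duplicate_name() is a hypothetical caller, not part of the series; the vdo_allocate() usage mirrors the real macro from memory-alloc.h):

static int duplicate_name(const char *name, char **name_ptr)
{
	char *copy;
	int result;

	/* Success is now VDO_SUCCESS, not UDS_SUCCESS. */
	result = vdo_allocate(strlen(name) + 1, char, __func__, &copy);
	if (result != VDO_SUCCESS)
		return result;

	memcpy(copy, name, strlen(name) + 1);
	*name_ptr = copy;
	return VDO_SUCCESS;
}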
From patchwork Fri Mar 1 04:38:46 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578011
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 03/10] dm vdo: check for VDO_SUCCESS return value from memory-alloc functions
Date: Thu, 29 Feb 2024 23:38:46 -0500
Message-ID: <7df964ed1e80ee3e2cf32a1c7057da1a3adad46d.1709267597.git.msakai@redhat.com>

From: Mike Snitzer

VDO_SUCCESS and UDS_SUCCESS were used interchangeably; update all
callers of VDO's memory-alloc functions to consistently check for
VDO_SUCCESS.

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/block-map.c                 |  6 +++---
 drivers/md/dm-vdo/data-vio.c                  |  2 +-
 drivers/md/dm-vdo/dm-vdo-target.c             | 18 +++++++++---------
 drivers/md/dm-vdo/encodings.c                 |  2 +-
 drivers/md/dm-vdo/funnel-queue.c              |  2 +-
 drivers/md/dm-vdo/funnel-workqueue.c          |  6 +++---
 drivers/md/dm-vdo/indexer/chapter-index.c     |  2 +-
 drivers/md/dm-vdo/indexer/config.c            |  2 +-
 drivers/md/dm-vdo/indexer/delta-index.c       | 10 +++++-----
 .../md/dm-vdo/indexer/funnel-requestqueue.c   |  2 +-
 drivers/md/dm-vdo/indexer/geometry.c          |  2 +-
 drivers/md/dm-vdo/indexer/index-layout.c      | 18 +++++++++---------
 drivers/md/dm-vdo/indexer/index-page-map.c    |  8 ++++----
 drivers/md/dm-vdo/indexer/index-session.c     |  2 +-
 drivers/md/dm-vdo/indexer/index.c             | 12 ++++++------
 drivers/md/dm-vdo/indexer/io-factory.c        |  6 +++---
 drivers/md/dm-vdo/indexer/open-chapter.c      |  4 ++--
 drivers/md/dm-vdo/indexer/radix-sort.c        |  2 +-
 drivers/md/dm-vdo/indexer/sparse-cache.c      |  8 ++++----
 drivers/md/dm-vdo/indexer/volume-index.c      |  6 +++---
 drivers/md/dm-vdo/indexer/volume.c            | 14 +++++++-------
 drivers/md/dm-vdo/int-map.c                   | 18 +++++++++---------
 drivers/md/dm-vdo/io-submitter.c              |  2 +-
 drivers/md/dm-vdo/message-stats.c             |  2 +-
 drivers/md/dm-vdo/slab-depot.c                |  2 +-
 drivers/md/dm-vdo/thread-utils.c              |  2 +-
 drivers/md/dm-vdo/uds-sysfs.c                 |  2 +-
 drivers/md/dm-vdo/vdo.c                       |  2 +-
 28 files changed, 82 insertions(+), 82 deletions(-)

diff --git a/drivers/md/dm-vdo/block-map.c b/drivers/md/dm-vdo/block-map.c index b09974ad41d2..c4719fb30f86 100644 --- a/drivers/md/dm-vdo/block-map.c +++ b/drivers/md/dm-vdo/block-map.c @@ -223,11 +223,11 @@ static int __must_check allocate_cache_components(struct vdo_page_cache *cache) result = vdo_allocate(cache->page_count, struct page_info, "page infos", &cache->infos); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate_memory(size, VDO_BLOCK_SIZE, "cache pages", &cache->pages); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_int_map_create(cache->page_count, &cache->page_map); @@ -2874,7 +2874,7 @@ int vdo_decode_block_map(struct
block_map_state_2_0 state, block_count_t logical result = vdo_allocate_extended(struct block_map, vdo->thread_config.logical_zone_count, struct block_map_zone, __func__, &map); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; map->vdo = vdo; diff --git a/drivers/md/dm-vdo/data-vio.c b/drivers/md/dm-vdo/data-vio.c index dcdd767e40e5..3d5054e61330 100644 --- a/drivers/md/dm-vdo/data-vio.c +++ b/drivers/md/dm-vdo/data-vio.c @@ -847,7 +847,7 @@ int make_data_vio_pool(struct vdo *vdo, data_vio_count_t pool_size, result = vdo_allocate_extended(struct data_vio_pool, pool_size, struct data_vio, __func__, &pool); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; ASSERT_LOG_ONLY((discard_limit <= pool_size), diff --git a/drivers/md/dm-vdo/dm-vdo-target.c b/drivers/md/dm-vdo/dm-vdo-target.c index 86c30fbd75ca..90ba379f8d3e 100644 --- a/drivers/md/dm-vdo/dm-vdo-target.c +++ b/drivers/md/dm-vdo/dm-vdo-target.c @@ -280,7 +280,7 @@ static int split_string(const char *string, char separator, char ***substring_ar result = vdo_allocate(substring_count + 1, char *, "string-splitting array", &substrings); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; for (s = string; *s != 0; s++) { @@ -289,7 +289,7 @@ static int split_string(const char *string, char separator, char ***substring_ar result = vdo_allocate(length + 1, char, "split string", &substrings[current_substring]); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { free_string_array(substrings); return result; } @@ -310,7 +310,7 @@ static int split_string(const char *string, char separator, char ***substring_ar result = vdo_allocate(length + 1, char, "split string", &substrings[current_substring]); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { free_string_array(substrings); return result; } @@ -1527,7 +1527,7 @@ static size_t get_bit_array_size(unsigned int bit_count) * Since the array is initially NULL, this also initializes the array the first time we allocate an * instance number. * - * Return: UDS_SUCCESS or an error code from the allocation + * Return: VDO_SUCCESS or an error code from the allocation */ static int grow_bit_array(void) { @@ -1540,19 +1540,19 @@ static int grow_bit_array(void) get_bit_array_size(instances.bit_count), get_bit_array_size(new_count), "instance number bit array", &new_words); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; instances.bit_count = new_count; instances.words = new_words; - return UDS_SUCCESS; + return VDO_SUCCESS; } /** * allocate_instance() - Allocate an instance number. * @instance_ptr: A point to hold the instance number * - * Return: UDS_SUCCESS or an error code + * Return: VDO_SUCCESS or an error code * * This function must be called while holding the instances lock. */ @@ -1564,7 +1564,7 @@ static int allocate_instance(unsigned int *instance_ptr) /* If there are no unallocated instances, grow the bit array. 
*/ if (instances.count >= instances.bit_count) { result = grow_bit_array(); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; } @@ -1587,7 +1587,7 @@ static int allocate_instance(unsigned int *instance_ptr) instances.count++; instances.next = instance + 1; *instance_ptr = instance; - return UDS_SUCCESS; + return VDO_SUCCESS; } static int construct_new_vdo_registered(struct dm_target *ti, unsigned int argc, diff --git a/drivers/md/dm-vdo/encodings.c b/drivers/md/dm-vdo/encodings.c index 56d94339d2af..a97771fe0a43 100644 --- a/drivers/md/dm-vdo/encodings.c +++ b/drivers/md/dm-vdo/encodings.c @@ -800,7 +800,7 @@ static int allocate_partition(struct layout *layout, u8 id, int result; result = vdo_allocate(1, struct partition, __func__, &partition); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; partition->id = id; diff --git a/drivers/md/dm-vdo/funnel-queue.c b/drivers/md/dm-vdo/funnel-queue.c index 67f7b52ecc86..ce0e801fd955 100644 --- a/drivers/md/dm-vdo/funnel-queue.c +++ b/drivers/md/dm-vdo/funnel-queue.c @@ -15,7 +15,7 @@ int uds_make_funnel_queue(struct funnel_queue **queue_ptr) struct funnel_queue *queue; result = vdo_allocate(1, struct funnel_queue, "funnel queue", &queue); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* diff --git a/drivers/md/dm-vdo/funnel-workqueue.c b/drivers/md/dm-vdo/funnel-workqueue.c index ebf8dce67086..a88f5c93eae5 100644 --- a/drivers/md/dm-vdo/funnel-workqueue.c +++ b/drivers/md/dm-vdo/funnel-workqueue.c @@ -324,7 +324,7 @@ static int make_simple_work_queue(const char *thread_name_prefix, const char *na VDO_WORK_Q_MAX_PRIORITY); result = vdo_allocate(1, struct simple_work_queue, "simple work queue", &queue); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; queue->private = private; @@ -401,12 +401,12 @@ int vdo_make_work_queue(const char *thread_name_prefix, const char *name, result = vdo_allocate(1, struct round_robin_work_queue, "round-robin work queue", &queue); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate(thread_count, struct simple_work_queue *, "subordinate work queues", &queue->service_queues); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { vdo_free(queue); return result; } diff --git a/drivers/md/dm-vdo/indexer/chapter-index.c b/drivers/md/dm-vdo/indexer/chapter-index.c index 9477150362ae..68d86028dbb7 100644 --- a/drivers/md/dm-vdo/indexer/chapter-index.c +++ b/drivers/md/dm-vdo/indexer/chapter-index.c @@ -21,7 +21,7 @@ int uds_make_open_chapter_index(struct open_chapter_index **chapter_index, struct open_chapter_index *index; result = vdo_allocate(1, struct open_chapter_index, "open chapter index", &index); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* diff --git a/drivers/md/dm-vdo/indexer/config.c b/drivers/md/dm-vdo/indexer/config.c index cd20ee8b9a02..5da39043b9ae 100644 --- a/drivers/md/dm-vdo/indexer/config.c +++ b/drivers/md/dm-vdo/indexer/config.c @@ -326,7 +326,7 @@ int uds_make_configuration(const struct uds_parameters *params, return result; result = vdo_allocate(1, struct uds_configuration, __func__, &config); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = uds_make_index_geometry(DEFAULT_BYTES_PER_PAGE, record_pages_per_chapter, diff --git a/drivers/md/dm-vdo/indexer/delta-index.c b/drivers/md/dm-vdo/indexer/delta-index.c index 11f7b85b6710..5bba9a48c5a0 100644 --- a/drivers/md/dm-vdo/indexer/delta-index.c +++ 
b/drivers/md/dm-vdo/indexer/delta-index.c @@ -312,18 +312,18 @@ static int initialize_delta_zone(struct delta_zone *delta_zone, size_t size, int result; result = vdo_allocate(size, u8, "delta list", &delta_zone->memory); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate(list_count + 2, u64, "delta list temp", &delta_zone->new_offsets); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* Allocate the delta lists. */ result = vdo_allocate(list_count + 2, struct delta_list, "delta lists", &delta_zone->delta_lists); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; compute_coding_constants(mean_delta, &delta_zone->min_bits, @@ -354,7 +354,7 @@ int uds_initialize_delta_index(struct delta_index *delta_index, unsigned int zon result = vdo_allocate(zone_count, struct delta_zone, "Delta Index Zones", &delta_index->delta_zones); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; delta_index->zone_count = zone_count; @@ -1048,7 +1048,7 @@ int uds_finish_restoring_delta_index(struct delta_index *delta_index, u8 *data; result = vdo_allocate(DELTA_LIST_MAX_BYTE_COUNT, u8, __func__, &data); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; for (z = 0; z < reader_count; z++) { diff --git a/drivers/md/dm-vdo/indexer/funnel-requestqueue.c b/drivers/md/dm-vdo/indexer/funnel-requestqueue.c index 95a402ec31c9..eee7b980960f 100644 --- a/drivers/md/dm-vdo/indexer/funnel-requestqueue.c +++ b/drivers/md/dm-vdo/indexer/funnel-requestqueue.c @@ -199,7 +199,7 @@ int uds_make_request_queue(const char *queue_name, struct uds_request_queue *queue; result = vdo_allocate(1, struct uds_request_queue, __func__, &queue); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; queue->processor = processor; diff --git a/drivers/md/dm-vdo/indexer/geometry.c b/drivers/md/dm-vdo/indexer/geometry.c index c735e6cb4425..c0575612e820 100644 --- a/drivers/md/dm-vdo/indexer/geometry.c +++ b/drivers/md/dm-vdo/indexer/geometry.c @@ -62,7 +62,7 @@ int uds_make_index_geometry(size_t bytes_per_page, u32 record_pages_per_chapter, struct index_geometry *geometry; result = vdo_allocate(1, struct index_geometry, "geometry", &geometry); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; geometry->bytes_per_page = bytes_per_page; diff --git a/drivers/md/dm-vdo/indexer/index-layout.c b/drivers/md/dm-vdo/indexer/index-layout.c index c1bcff03cc55..01e0db4184aa 100644 --- a/drivers/md/dm-vdo/indexer/index-layout.c +++ b/drivers/md/dm-vdo/indexer/index-layout.c @@ -487,7 +487,7 @@ static int __must_check make_index_save_region_table(struct index_save_layout *i result = vdo_allocate_extended(struct region_table, region_count, struct layout_region, "layout region table for ISL", &table); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; lr = &table->regions[0]; @@ -546,7 +546,7 @@ static int __must_check write_index_save_header(struct index_save_layout *isl, size_t offset = 0; result = vdo_allocate(table->encoded_size, u8, "index save data", &buffer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; encode_region_table(buffer, &offset, table); @@ -670,7 +670,7 @@ static int __must_check make_layout_region_table(struct index_layout *layout, result = vdo_allocate_extended(struct region_table, region_count, struct layout_region, "layout region table", &table); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; lr = 
&table->regions[0]; @@ -716,7 +716,7 @@ static int __must_check write_layout_header(struct index_layout *layout, size_t offset = 0; result = vdo_allocate(table->encoded_size, u8, "layout data", &buffer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; encode_region_table(buffer, &offset, table); @@ -807,7 +807,7 @@ static int create_index_layout(struct index_layout *layout, struct uds_configura result = vdo_allocate(sizes.save_count, struct index_save_layout, __func__, &layout->index.saves); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; initialize_layout(layout, &sizes); @@ -1165,7 +1165,7 @@ static int __must_check load_region_table(struct buffered_reader *reader, result = vdo_allocate_extended(struct region_table, header.region_count, struct layout_region, "single file layout region table", &table); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; table->header = header; @@ -1202,7 +1202,7 @@ static int __must_check read_super_block_data(struct buffered_reader *reader, size_t offset = 0; result = vdo_allocate(saved_size, u8, "super block data", &buffer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = uds_read_from_buffered_reader(reader, buffer, saved_size); @@ -1337,7 +1337,7 @@ static int __must_check reconstitute_layout(struct index_layout *layout, result = vdo_allocate(layout->super.max_saves, struct index_save_layout, __func__, &layout->index.saves); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; layout->total_blocks = table->header.region_blocks; @@ -1696,7 +1696,7 @@ int uds_make_index_layout(struct uds_configuration *config, bool new_layout, return result; result = vdo_allocate(1, struct index_layout, __func__, &layout); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = create_layout_factory(layout, config); diff --git a/drivers/md/dm-vdo/indexer/index-page-map.c b/drivers/md/dm-vdo/indexer/index-page-map.c index ddb6d843cbd9..c5d1b9995846 100644 --- a/drivers/md/dm-vdo/indexer/index-page-map.c +++ b/drivers/md/dm-vdo/indexer/index-page-map.c @@ -39,14 +39,14 @@ int uds_make_index_page_map(const struct index_geometry *geometry, struct index_page_map *map; result = vdo_allocate(1, struct index_page_map, "page map", &map); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; map->geometry = geometry; map->entries_per_chapter = geometry->index_pages_per_chapter - 1; result = vdo_allocate(get_entry_count(geometry), u16, "Index Page Map Entries", &map->entries); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_index_page_map(map); return result; } @@ -119,7 +119,7 @@ int uds_write_index_page_map(struct index_page_map *map, struct buffered_writer u32 i; result = vdo_allocate(saved_size, u8, "page map data", &buffer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; memcpy(buffer, PAGE_MAP_MAGIC, PAGE_MAP_MAGIC_LENGTH); @@ -146,7 +146,7 @@ int uds_read_index_page_map(struct index_page_map *map, struct buffered_reader * u32 i; result = vdo_allocate(saved_size, u8, "page map data", &buffer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = uds_read_from_buffered_reader(reader, buffer, saved_size); diff --git a/drivers/md/dm-vdo/indexer/index-session.c b/drivers/md/dm-vdo/indexer/index-session.c index 0f920a583021..22445dcb3fe0 100644 --- a/drivers/md/dm-vdo/indexer/index-session.c +++ 
b/drivers/md/dm-vdo/indexer/index-session.c @@ -222,7 +222,7 @@ static int __must_check make_empty_index_session(struct uds_index_session **inde struct uds_index_session *session; result = vdo_allocate(1, struct uds_index_session, __func__, &session); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; mutex_init(&session->request_mutex); diff --git a/drivers/md/dm-vdo/indexer/index.c b/drivers/md/dm-vdo/indexer/index.c index c576033b8a53..243a9deab4de 100644 --- a/drivers/md/dm-vdo/indexer/index.c +++ b/drivers/md/dm-vdo/indexer/index.c @@ -89,7 +89,7 @@ static int launch_zone_message(struct uds_zone_message message, unsigned int zon struct uds_request *request; result = vdo_allocate(1, struct uds_request, __func__, &request); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; request->index = index; @@ -770,7 +770,7 @@ static int make_chapter_writer(struct uds_index *index, result = vdo_allocate_extended(struct chapter_writer, index->zone_count, struct open_chapter_zone *, "Chapter Writer", &writer); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; writer->index = index; @@ -779,7 +779,7 @@ static int make_chapter_writer(struct uds_index *index, result = vdo_allocate_cache_aligned(collated_records_size, "collated records", &writer->collated_records); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { free_chapter_writer(writer); return result; } @@ -1127,7 +1127,7 @@ static int make_index_zone(struct uds_index *index, unsigned int zone_number) struct index_zone *zone; result = vdo_allocate(1, struct index_zone, "index zone", &zone); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = uds_make_open_chapter(index->volume->geometry, index->zone_count, @@ -1165,7 +1165,7 @@ int uds_make_index(struct uds_configuration *config, enum uds_open_index_type op result = vdo_allocate_extended(struct uds_index, config->zone_count, struct uds_request_queue *, "index", &index); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; index->zone_count = config->zone_count; @@ -1178,7 +1178,7 @@ int uds_make_index(struct uds_configuration *config, enum uds_open_index_type op result = vdo_allocate(index->zone_count, struct index_zone *, "zones", &index->zones); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_index(index); return result; } diff --git a/drivers/md/dm-vdo/indexer/io-factory.c b/drivers/md/dm-vdo/indexer/io-factory.c index 749c950c0189..0dcf6d596653 100644 --- a/drivers/md/dm-vdo/indexer/io-factory.c +++ b/drivers/md/dm-vdo/indexer/io-factory.c @@ -65,7 +65,7 @@ int uds_make_io_factory(struct block_device *bdev, struct io_factory **factory_p struct io_factory *factory; result = vdo_allocate(1, struct io_factory, __func__, &factory); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; factory->bdev = bdev; @@ -145,7 +145,7 @@ int uds_make_buffered_reader(struct io_factory *factory, off_t offset, u64 block return result; result = vdo_allocate(1, struct buffered_reader, "buffered reader", &reader); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { dm_bufio_client_destroy(client); return result; } @@ -283,7 +283,7 @@ int uds_make_buffered_writer(struct io_factory *factory, off_t offset, u64 block return result; result = vdo_allocate(1, struct buffered_writer, "buffered writer", &writer); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { dm_bufio_client_destroy(client); return result; } diff --git 
a/drivers/md/dm-vdo/indexer/open-chapter.c b/drivers/md/dm-vdo/indexer/open-chapter.c index 4a4dc94915dd..46b7bc1ac324 100644 --- a/drivers/md/dm-vdo/indexer/open-chapter.c +++ b/drivers/md/dm-vdo/indexer/open-chapter.c @@ -71,14 +71,14 @@ int uds_make_open_chapter(const struct index_geometry *geometry, unsigned int zo result = vdo_allocate_extended(struct open_chapter_zone, slot_count, struct open_chapter_zone_slot, "open chapter", &open_chapter); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; open_chapter->slot_count = slot_count; open_chapter->capacity = capacity; result = vdo_allocate_cache_aligned(records_size(open_chapter), "record pages", &open_chapter->records); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_open_chapter(open_chapter); return result; } diff --git a/drivers/md/dm-vdo/indexer/radix-sort.c b/drivers/md/dm-vdo/indexer/radix-sort.c index 74ea18b8e9be..66b8c706a1ef 100644 --- a/drivers/md/dm-vdo/indexer/radix-sort.c +++ b/drivers/md/dm-vdo/indexer/radix-sort.c @@ -213,7 +213,7 @@ int uds_make_radix_sorter(unsigned int count, struct radix_sorter **sorter) result = vdo_allocate_extended(struct radix_sorter, stack_size, struct task, __func__, &radix_sorter); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; radix_sorter->count = count; diff --git a/drivers/md/dm-vdo/indexer/sparse-cache.c b/drivers/md/dm-vdo/indexer/sparse-cache.c index e297ba2d6ceb..28920167827c 100644 --- a/drivers/md/dm-vdo/indexer/sparse-cache.c +++ b/drivers/md/dm-vdo/indexer/sparse-cache.c @@ -224,7 +224,7 @@ static int __must_check initialize_cached_chapter_index(struct cached_chapter_in result = vdo_allocate(chapter->index_pages_count, struct delta_index_page, __func__, &chapter->index_pages); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; return vdo_allocate(chapter->index_pages_count, struct dm_buffer *, @@ -242,7 +242,7 @@ static int __must_check make_search_list(struct sparse_cache *cache, bytes = (sizeof(struct search_list) + (cache->capacity * sizeof(struct cached_chapter_index *))); result = vdo_allocate_cache_aligned(bytes, "search list", &list); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; list->capacity = cache->capacity; @@ -265,7 +265,7 @@ int uds_make_sparse_cache(const struct index_geometry *geometry, unsigned int ca bytes = (sizeof(struct sparse_cache) + (capacity * sizeof(struct cached_chapter_index))); result = vdo_allocate_cache_aligned(bytes, "sparse cache", &cache); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; cache->geometry = geometry; @@ -296,7 +296,7 @@ int uds_make_sparse_cache(const struct index_geometry *geometry, unsigned int ca /* purge_search_list() needs some temporary lists for sorting. */ result = vdo_allocate(capacity * 2, struct cached_chapter_index *, "scratch entries", &cache->scratch_entries); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) goto out; *cache_ptr = cache; diff --git a/drivers/md/dm-vdo/indexer/volume-index.c b/drivers/md/dm-vdo/indexer/volume-index.c index 1a762e6dd709..1cc9ac4fe510 100644 --- a/drivers/md/dm-vdo/indexer/volume-index.c +++ b/drivers/md/dm-vdo/indexer/volume-index.c @@ -1213,7 +1213,7 @@ static int initialize_volume_sub_index(const struct uds_configuration *config, /* The following arrays are initialized to all zeros. 
*/ result = vdo_allocate(params.list_count, u64, "first chapter to flush", &sub_index->flush_chapters); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; return vdo_allocate(zone_count, struct volume_sub_index_zone, @@ -1229,7 +1229,7 @@ int uds_make_volume_index(const struct uds_configuration *config, u64 volume_non int result; result = vdo_allocate(1, struct volume_index, "volume index", &volume_index); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; volume_index->zone_count = config->zone_count; @@ -1251,7 +1251,7 @@ int uds_make_volume_index(const struct uds_configuration *config, u64 volume_non result = vdo_allocate(config->zone_count, struct volume_index_zone, "volume index zones", &volume_index->zones); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_volume_index(volume_index); return result; } diff --git a/drivers/md/dm-vdo/indexer/volume.c b/drivers/md/dm-vdo/indexer/volume.c index 2d8901732f5d..959dd82ef665 100644 --- a/drivers/md/dm-vdo/indexer/volume.c +++ b/drivers/md/dm-vdo/indexer/volume.c @@ -1509,22 +1509,22 @@ static int __must_check initialize_page_cache(struct page_cache *cache, result = vdo_allocate(VOLUME_CACHE_MAX_QUEUED_READS, struct queued_read, "volume read queue", &cache->read_queue); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate(cache->zone_count, struct search_pending_counter, "Volume Cache Zones", &cache->search_pending_counters); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate(cache->indexable_pages, u16, "page cache index", &cache->index); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; result = vdo_allocate(cache->cache_slots, struct cached_page, "page cache cache", &cache->cache); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* Initialize index values to invalid values. */ @@ -1547,7 +1547,7 @@ int uds_make_volume(const struct uds_configuration *config, struct index_layout int result; result = vdo_allocate(1, struct volume, "volume", &volume); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; volume->nonce = uds_get_volume_nonce(layout); @@ -1586,7 +1586,7 @@ int uds_make_volume(const struct uds_configuration *config, struct index_layout result = vdo_allocate(geometry->records_per_page, const struct uds_volume_record *, "record pointers", &volume->record_pointers); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_volume(volume); return result; } @@ -1626,7 +1626,7 @@ int uds_make_volume(const struct uds_configuration *config, struct index_layout result = vdo_allocate(config->read_threads, struct thread *, "reader threads", &volume->reader_threads); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_free_volume(volume); return result; } diff --git a/drivers/md/dm-vdo/int-map.c b/drivers/md/dm-vdo/int-map.c index 0bd742ecbe2e..9849d12f2a36 100644 --- a/drivers/md/dm-vdo/int-map.c +++ b/drivers/md/dm-vdo/int-map.c @@ -152,7 +152,7 @@ static u64 hash_key(u64 key) * @map: The map to initialize. * @capacity: The initial capacity of the map. * - * Return: UDS_SUCCESS or an error code. + * Return: VDO_SUCCESS or an error code. */ static int allocate_buckets(struct int_map *map, size_t capacity) { @@ -174,7 +174,7 @@ static int allocate_buckets(struct int_map *map, size_t capacity) * tells the map to use its own small default). * @map_ptr: Output, a pointer to hold the new int_map. 
* - * Return: UDS_SUCCESS or an error code. + * Return: VDO_SUCCESS or an error code. */ int vdo_int_map_create(size_t initial_capacity, struct int_map **map_ptr) { @@ -183,7 +183,7 @@ int vdo_int_map_create(size_t initial_capacity, struct int_map **map_ptr) size_t capacity; result = vdo_allocate(1, struct int_map, "struct int_map", &map); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* Use the default capacity if the caller did not specify one. */ @@ -196,13 +196,13 @@ int vdo_int_map_create(size_t initial_capacity, struct int_map **map_ptr) capacity = capacity * 100 / DEFAULT_LOAD; result = allocate_buckets(map, capacity); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { vdo_int_map_free(vdo_forget(map)); return result; } *map_ptr = map; - return UDS_SUCCESS; + return VDO_SUCCESS; } /** @@ -368,7 +368,7 @@ void *vdo_int_map_get(struct int_map *map, u64 key) * * Resizes and rehashes all the existing entries, storing them in the new buckets. * - * Return: UDS_SUCCESS or an error code. + * Return: VDO_SUCCESS or an error code. */ static int resize_buckets(struct int_map *map) { @@ -384,7 +384,7 @@ static int resize_buckets(struct int_map *map) uds_log_info("%s: attempting resize from %zu to %zu, current size=%zu", __func__, map->capacity, new_capacity, map->size); result = allocate_buckets(map, new_capacity); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { *map = old_map; return result; } @@ -407,7 +407,7 @@ static int resize_buckets(struct int_map *map) /* Destroy the old bucket array. */ vdo_free(vdo_forget(old_map.buckets)); - return UDS_SUCCESS; + return VDO_SUCCESS; } /** @@ -647,7 +647,7 @@ int vdo_int_map_put(struct int_map *map, u64 key, void *new_value, bool update, * large maps). */ result = resize_buckets(map); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; /* diff --git a/drivers/md/dm-vdo/io-submitter.c b/drivers/md/dm-vdo/io-submitter.c index 23549b7e9e6d..b0f1ba810cd0 100644 --- a/drivers/md/dm-vdo/io-submitter.c +++ b/drivers/md/dm-vdo/io-submitter.c @@ -383,7 +383,7 @@ int vdo_make_io_submitter(unsigned int thread_count, unsigned int rotation_inter result = vdo_allocate_extended(struct io_submitter, thread_count, struct bio_queue_data, "bio submission data", &io_submitter); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; io_submitter->bio_queue_rotation_interval = rotation_interval; diff --git a/drivers/md/dm-vdo/message-stats.c b/drivers/md/dm-vdo/message-stats.c index ec24fff2a21b..18c9d2af8aed 100644 --- a/drivers/md/dm-vdo/message-stats.c +++ b/drivers/md/dm-vdo/message-stats.c @@ -420,7 +420,7 @@ int vdo_write_stats(struct vdo *vdo, char *buf, unsigned int maxlen) int result; result = vdo_allocate(1, struct vdo_statistics, __func__, &stats); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_log_error("Cannot allocate memory to write VDO statistics"); return result; } diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c index 2d2cccf89edb..97208c9e0062 100644 --- a/drivers/md/dm-vdo/slab-depot.c +++ b/drivers/md/dm-vdo/slab-depot.c @@ -2379,7 +2379,7 @@ static int allocate_slab_counters(struct vdo_slab *slab) bytes = (slab->reference_block_count * COUNTS_PER_BLOCK) + (2 * BYTES_PER_WORD); result = vdo_allocate(bytes, vdo_refcount_t, "ref counts array", &slab->counters); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { vdo_free(vdo_forget(slab->reference_blocks)); return result; } diff --git 
a/drivers/md/dm-vdo/thread-utils.c b/drivers/md/dm-vdo/thread-utils.c index ad7682784459..b4aa71fffdbf 100644 --- a/drivers/md/dm-vdo/thread-utils.c +++ b/drivers/md/dm-vdo/thread-utils.c @@ -83,7 +83,7 @@ int vdo_create_thread(void (*thread_function)(void *), void *thread_data, int result; result = vdo_allocate(1, struct thread, __func__, &thread); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { uds_log_warning("Error allocating memory for %s", name); return result; } diff --git a/drivers/md/dm-vdo/uds-sysfs.c b/drivers/md/dm-vdo/uds-sysfs.c index d1d5a30b3717..753d81d6f207 100644 --- a/drivers/md/dm-vdo/uds-sysfs.c +++ b/drivers/md/dm-vdo/uds-sysfs.c @@ -35,7 +35,7 @@ static char *buffer_to_string(const char *buf, size_t length) { char *string; - if (vdo_allocate(length + 1, char, __func__, &string) != UDS_SUCCESS) + if (vdo_allocate(length + 1, char, __func__, &string) != VDO_SUCCESS) return NULL; memcpy(string, buf, length); diff --git a/drivers/md/dm-vdo/vdo.c b/drivers/md/dm-vdo/vdo.c index ae62f260c5ec..b4dd0634a5cb 100644 --- a/drivers/md/dm-vdo/vdo.c +++ b/drivers/md/dm-vdo/vdo.c @@ -545,7 +545,7 @@ int vdo_make(unsigned int instance, struct device_config *config, char **reason, *reason = "Unspecified error"; result = vdo_allocate(1, struct vdo, __func__, &vdo); - if (result != UDS_SUCCESS) { + if (result != VDO_SUCCESS) { *reason = "Cannot allocate VDO"; return result; }
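
The conversion above repeats one pattern across all 28 files: compare each memory-alloc result against VDO_SUCCESS, and on failure free any partially constructed object before propagating the error, as in uds_make_index_page_map() and uds_make_volume() above. A sketch of that pattern with a hypothetical two-step constructor (struct bar, its fields, and make_bar() are invented for illustration; the vdo_allocate()/vdo_free() calls mirror the real interface):

struct bar {
	size_t entry_count;
	u64 *entries;
};

static int make_bar(size_t entry_count, struct bar **bar_ptr)
{
	struct bar *bar;
	int result;

	result = vdo_allocate(1, struct bar, __func__, &bar);
	if (result != VDO_SUCCESS)
		return result;

	bar->entry_count = entry_count;
	result = vdo_allocate(entry_count, u64, "bar entries", &bar->entries);
	if (result != VDO_SUCCESS) {
		/* Free the partially constructed object before returning. */
		vdo_free(bar);
		return result;
	}

	*bar_ptr = bar;
	return VDO_SUCCESS;
}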
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 04/10] dm vdo int-map: return VDO_SUCCESS on success
Date: Thu, 29 Feb 2024 23:38:47 -0500
Message-ID: <9b65d46a3f8d833db1ce7a77e69598c238bee95d.1709267597.git.msakai@redhat.com>

From: Mike Snitzer

Update all callers to check for VDO_SUCCESS (most already did).
Also fix whitespace for update_mapping() parameters.

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/block-map.c    |  4 ++--
 drivers/md/dm-vdo/int-map.c      | 20 ++++++++------------
 drivers/md/dm-vdo/io-submitter.c |  4 ++--
 3 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm-vdo/block-map.c b/drivers/md/dm-vdo/block-map.c
index c4719fb30f86..320e76527e2b 100644
--- a/drivers/md/dm-vdo/block-map.c
+++ b/drivers/md/dm-vdo/block-map.c
@@ -231,7 +231,7 @@ static int __must_check allocate_cache_components(struct vdo_page_cache *cache)
 		return result;
 
 	result = vdo_int_map_create(cache->page_count, &cache->page_map);
-	if (result != UDS_SUCCESS)
+	if (result != VDO_SUCCESS)
 		return result;
 
 	return initialize_info(cache);
@@ -390,7 +390,7 @@ static int __must_check set_info_pbn(struct page_info *info, physical_block_numb
 	if (pbn != NO_PAGE) {
 		result = vdo_int_map_put(cache->page_map, pbn, info, true, NULL);
-		if (result != UDS_SUCCESS)
+		if (result != VDO_SUCCESS)
 			return result;
 	}
 
 	return VDO_SUCCESS;
diff --git a/drivers/md/dm-vdo/int-map.c b/drivers/md/dm-vdo/int-map.c
index 9849d12f2a36..a909a11204c1 100644
--- a/drivers/md/dm-vdo/int-map.c
+++ b/drivers/md/dm-vdo/int-map.c
@@ -397,7 +397,7 @@ static int resize_buckets(struct int_map *map)
 			continue;
 
 		result = vdo_int_map_put(map, entry->key, entry->value, true, NULL);
-		if (result != UDS_SUCCESS) {
+		if (result != VDO_SUCCESS) {
 			/* Destroy the new partial map and restore the map from the stack.
 */
			vdo_free(vdo_forget(map->buckets));
			*map = old_map;
@@ -525,12 +525,8 @@ static struct bucket *move_empty_bucket(struct int_map *map __always_unused,
 *
 * Return: true if the map contains a mapping for the key, false if it does not.
 */
-static bool update_mapping(struct int_map *map,
-			   struct bucket *neighborhood,
-			   u64 key,
-			   void *new_value,
-			   bool update,
-			   void **old_value_ptr)
+static bool update_mapping(struct int_map *map, struct bucket *neighborhood,
+			   u64 key, void *new_value, bool update, void **old_value_ptr)
 {
 	struct bucket *bucket = search_hop_list(map, neighborhood, key, NULL);
@@ -609,15 +605,15 @@ static struct bucket *find_or_make_vacancy(struct int_map *map,
 * update is true. In either case the old value is returned. If the map does not already contain a
 * value for the specified key, the new value is added regardless of the value of update.
 *
- * Return: UDS_SUCCESS or an error code.
+ * Return: VDO_SUCCESS or an error code.
 */
 int vdo_int_map_put(struct int_map *map, u64 key, void *new_value, bool update,
 		    void **old_value_ptr)
 {
 	struct bucket *neighborhood, *bucket;
 
-	if (new_value == NULL)
-		return UDS_INVALID_ARGUMENT;
+	if (unlikely(new_value == NULL))
+		return -EINVAL;
 
 	/*
 	 * Select the bucket at the start of the neighborhood that must contain any entry for the
@@ -630,7 +626,7 @@ int vdo_int_map_put(struct int_map *map, u64 key, void *new_value, bool update,
 	 * optionally update it, returning the old value.
 	 */
 	if (update_mapping(map, neighborhood, key, new_value, update, old_value_ptr))
-		return UDS_SUCCESS;
+		return VDO_SUCCESS;
 
 	/*
 	 * Find an empty bucket in the desired neighborhood for the new entry or re-arrange entries
@@ -666,7 +662,7 @@ int vdo_int_map_put(struct int_map *map, u64 key, void *new_value, bool update,
 	/* There was no existing entry, so there was no old value to be returned. */
 	if (old_value_ptr != NULL)
 		*old_value_ptr = NULL;
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 /**
diff --git a/drivers/md/dm-vdo/io-submitter.c b/drivers/md/dm-vdo/io-submitter.c
index b0f1ba810cd0..e82b4a8c6fc4 100644
--- a/drivers/md/dm-vdo/io-submitter.c
+++ b/drivers/md/dm-vdo/io-submitter.c
@@ -300,7 +300,7 @@ static bool try_bio_map_merge(struct vio *vio)
 	mutex_unlock(&bio_queue_data->lock);
 
 	/* We don't care about failure of int_map_put in this case. */
-	ASSERT_LOG_ONLY(result == UDS_SUCCESS, "bio map insertion succeeds");
+	ASSERT_LOG_ONLY(result == VDO_SUCCESS, "bio map insertion succeeds");
 	return merged;
 }
@@ -403,7 +403,7 @@ int vdo_make_io_submitter(unsigned int thread_count, unsigned int rotation_inter
 	 */
 	result = vdo_int_map_create(max_requests_active * 2,
 				    &bio_queue_data->map);
-	if (result != 0) {
+	if (result != VDO_SUCCESS) {
 		/*
 		 * Clean up the partially initialized bio-queue entirely and indicate that
 		 * initialization failed.
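As an aside between patches: taken together, the int-map changes above settle on one calling convention, sketched below. This fragment is illustrative only — the wrapper function and its parameters are hypothetical, while the vdo_int_map_* calls and the VDO_SUCCESS/-EINVAL results are taken from the diffs.

/*
 * Illustrative caller sketch (not part of the series): create a map,
 * install one entry, and propagate any failure. VDO_SUCCESS is zero,
 * and vdo_int_map_put() now rejects a NULL value with -EINVAL instead
 * of UDS_INVALID_ARGUMENT.
 */
static int make_map_with_entry(size_t capacity, u64 key, void *value,
			       struct int_map **map_ptr)
{
	struct int_map *map;
	int result;

	result = vdo_int_map_create(capacity, &map);
	if (result != VDO_SUCCESS)
		return result;

	result = vdo_int_map_put(map, key, value, true, NULL);
	if (result != VDO_SUCCESS) {
		vdo_int_map_free(vdo_forget(map));
		return result;
	}

	*map_ptr = map;
	return VDO_SUCCESS;
}

Because VDO_SUCCESS is zero, callers that previously tested result != 0 (like the vdo_make_io_submitter() check above) behave the same; the patch just makes the constant explicit.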
From patchwork Fri Mar 1 04:38:48 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578005
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 05/10] dm vdo thread-utils: return VDO_SUCCESS on vdo_create_thread success
Date: Thu, 29 Feb 2024 23:38:48 -0500

From: Mike Snitzer

Update all callers to check for VDO_SUCCESS.

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/indexer/funnel-requestqueue.c | 2 +-
 drivers/md/dm-vdo/indexer/index.c               | 2 +-
 drivers/md/dm-vdo/indexer/volume.c              | 2 +-
 drivers/md/dm-vdo/thread-utils.c                | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-vdo/indexer/funnel-requestqueue.c b/drivers/md/dm-vdo/indexer/funnel-requestqueue.c
index eee7b980960f..84c7c1ae1333 100644
--- a/drivers/md/dm-vdo/indexer/funnel-requestqueue.c
+++ b/drivers/md/dm-vdo/indexer/funnel-requestqueue.c
@@ -221,7 +221,7 @@ int uds_make_request_queue(const char *queue_name,
 	result = vdo_create_thread(request_queue_worker, queue, queue_name,
 				   &queue->thread);
-	if (result != UDS_SUCCESS) {
+	if (result != VDO_SUCCESS) {
 		uds_request_queue_finish(queue);
 		return result;
 	}
diff --git a/drivers/md/dm-vdo/indexer/index.c b/drivers/md/dm-vdo/indexer/index.c
index 243a9deab4de..226713221105 100644
--- a/drivers/md/dm-vdo/indexer/index.c
+++ b/drivers/md/dm-vdo/indexer/index.c
@@ -798,7 +798,7 @@ static int make_chapter_writer(struct uds_index *index,
 			       writer->open_chapter_index->memory_size);
 	result = vdo_create_thread(close_chapters, writer, "writer", &writer->thread);
-	if (result != UDS_SUCCESS) {
+	if (result != VDO_SUCCESS) {
 		free_chapter_writer(writer);
 		return result;
 	}
diff --git a/drivers/md/dm-vdo/indexer/volume.c b/drivers/md/dm-vdo/indexer/volume.c
index 959dd82ef665..0a4beef8ac8d 100644
--- a/drivers/md/dm-vdo/indexer/volume.c
+++ b/drivers/md/dm-vdo/indexer/volume.c
@@ -1634,7 +1634,7 @@ int uds_make_volume(const struct uds_configuration *config, struct index_layout
 	for (i = 0; i < config->read_threads; i++) {
 		result = vdo_create_thread(read_thread_function, (void *) volume,
 					   "reader", &volume->reader_threads[i]);
-		if (result != UDS_SUCCESS) {
+		if (result != VDO_SUCCESS) {
 			uds_free_volume(volume);
 			return result;
 		}
diff --git a/drivers/md/dm-vdo/thread-utils.c b/drivers/md/dm-vdo/thread-utils.c
index b4aa71fffdbf..c822df86f731 100644
--- a/drivers/md/dm-vdo/thread-utils.c
+++ b/drivers/md/dm-vdo/thread-utils.c
@@ -119,7 +119,7 @@ int vdo_create_thread(void (*thread_function)(void *), void *thread_data,
 	}
 
 	*new_thread = thread;
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 void vdo_join_threads(struct thread *thread)
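Again as context, the caller shape this patch standardizes, with hypothetical names: the state struct and worker function below are invented for illustration, and only vdo_create_thread() with its VDO_SUCCESS result comes from the source.

/*
 * Illustrative sketch; real callers pass request_queue_worker,
 * close_chapters, or read_thread_function as the thread body.
 */
struct worker_state {
	struct thread *thread;
};

static void worker_function(void *arg)
{
	/* Hypothetical thread body. */
}

static int start_worker(struct worker_state *state)
{
	int result;

	result = vdo_create_thread(worker_function, state, "worker",
				   &state->thread);
	if (result != VDO_SUCCESS)
		return result;

	return VDO_SUCCESS;
}

Note that each failing caller in the patch tears down what it had already built (uds_request_queue_finish(), free_chapter_writer(), uds_free_volume()) before propagating the error code unchanged.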
From patchwork Fri Mar 1 04:38:49 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578002
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 06/10] dm-vdo funnel-workqueue: return VDO_SUCCESS from make_simple_work_queue
Date: Thu, 29 Feb 2024 23:38:49 -0500
Message-ID: <24d79921a860066337c6c307654c141d6bb1c849.1709267597.git.msakai@redhat.com>

From: Mike Snitzer

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/funnel-workqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-vdo/funnel-workqueue.c b/drivers/md/dm-vdo/funnel-workqueue.c
index a88f5c93eae5..a923432f0a37 100644
--- a/drivers/md/dm-vdo/funnel-workqueue.c
+++ b/drivers/md/dm-vdo/funnel-workqueue.c
@@ -367,7 +367,7 @@ static int make_simple_work_queue(const char *thread_name_prefix, const char *na
 	wait_for_completion(&started);
 	*queue_ptr = queue;
-	return UDS_SUCCESS;
+	return VDO_SUCCESS;
 }
 
 /**
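Before the bulk rename in the next patch, a hedged sketch of the assertion interface it moves to. This is an assumed shape rather than the real permassert.h definitions, and vdo_assertion_failed() is a stand-in for whatever failure hook the actual macros call; what the diffs do show is that VDO_ASSERT() yields a status comparable against VDO_SUCCESS, while VDO_ASSERT_LOG_ONLY() is used purely for its logging side effect.

/* Assumed shape, for illustration only; not the actual definitions. */
int vdo_assertion_failed(const char *expression, const char *file, int line,
			 const char *format, ...);

#define VDO_ASSERT(expr, ...)						\
	((expr) ? VDO_SUCCESS						\
		: vdo_assertion_failed(#expr, __FILE__, __LINE__, __VA_ARGS__))

/* Like VDO_ASSERT(), but no status is returned to the caller. */
#define VDO_ASSERT_LOG_ONLY(expr, ...)					\
	((void) VDO_ASSERT(expr, __VA_ARGS__))

/* Placeholder aliases re-introduced for dm-vdo/indexer, per the patch. */
#define ASSERT(...) VDO_ASSERT(__VA_ARGS__)
#define ASSERT_LOG_ONLY(...) VDO_ASSERT_LOG_ONLY(__VA_ARGS__)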
From patchwork Fri Mar 1 04:38:50 2024
X-Patchwork-Submitter: Matthew Sakai
X-Patchwork-Id: 13578012
X-Patchwork-Delegate: snitzer@redhat.com
From: Matthew Sakai
To: dm-devel@lists.linux.dev
Cc: Mike Snitzer, Matthew Sakai
Subject: [PATCH 07/10] dm vdo permassert: audit all of ASSERT to test for VDO_SUCCESS
Date: Thu, 29 Feb 2024 23:38:50 -0500

From: Mike Snitzer

Also rename ASSERT to VDO_ASSERT and ASSERT_LOG_ONLY to VDO_ASSERT_LOG_ONLY.
But re-introduce ASSERT and ASSERT_LOG_ONLY as a placeholder for the benefit
of dm-vdo/indexer.

Signed-off-by: Mike Snitzer
Signed-off-by: Matthew Sakai
---
 drivers/md/dm-vdo/action-manager.c   |   8 +-
 drivers/md/dm-vdo/block-map.c        | 118 +++++++++----------
 drivers/md/dm-vdo/completion.c       |  10 +-
 drivers/md/dm-vdo/completion.h       |   6 +-
 drivers/md/dm-vdo/data-vio.c         | 108 +++++++++---------
 drivers/md/dm-vdo/data-vio.h         |  68 +++++------
 drivers/md/dm-vdo/dedupe.c           | 165 +++++++++++++--------------
 drivers/md/dm-vdo/dm-vdo-target.c    |  38 +++---
 drivers/md/dm-vdo/encodings.c        | 156 ++++++++++++-------------
 drivers/md/dm-vdo/errors.c           |   5 +-
 drivers/md/dm-vdo/flush.c            |  22 ++--
 drivers/md/dm-vdo/funnel-workqueue.c |  22 ++--
 drivers/md/dm-vdo/io-submitter.c     |   8 +-
 drivers/md/dm-vdo/logical-zone.c     |  22 ++--
 drivers/md/dm-vdo/memory-alloc.c     |  12 +-
 drivers/md/dm-vdo/packer.c           |  12 +-
 drivers/md/dm-vdo/permassert.h       |  15 ++-
 drivers/md/dm-vdo/physical-zone.c    |  48 ++++----
 drivers/md/dm-vdo/priority-table.c   |   4 +-
 drivers/md/dm-vdo/recovery-journal.c |  60 +++++-----
 drivers/md/dm-vdo/repair.c           |  12 +-
 drivers/md/dm-vdo/slab-depot.c       | 116 +++++++++----------
 drivers/md/dm-vdo/thread-registry.c  |   4 +-
 drivers/md/dm-vdo/vdo.c              |  32 +++---
 drivers/md/dm-vdo/vio.c              |  40 +++----
 drivers/md/dm-vdo/vio.h              |   8 +-
 26 files changed, 561 insertions(+), 558 deletions(-)

diff --git a/drivers/md/dm-vdo/action-manager.c b/drivers/md/dm-vdo/action-manager.c
index 709be4c17d27..a0e5e7077d13 100644
--- a/drivers/md/dm-vdo/action-manager.c
+++ b/drivers/md/dm-vdo/action-manager.c
@@ -177,8 +177,8 @@ static void apply_to_zone(struct vdo_completion *completion)
 	zone_count_t zone;
 	struct action_manager *manager = as_action_manager(completion);
 
-	ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == get_acting_zone_thread_id(manager)),
-			"%s() called on acting zones's thread", __func__);
+	VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == get_acting_zone_thread_id(manager)),
+			    "%s() called on acting zones's thread", __func__);
 
 	zone = manager->acting_zone++;
 	if (manager->acting_zone == manager->zones) {
@@ -357,8 +357,8 @@ bool vdo_schedule_operation_with_context(struct action_manager *manager,
 {
 	struct action *current_action;
 
-	ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == manager->initiator_thread_id),
-			"action initiated from correct thread");
+	VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == manager->initiator_thread_id),
+			    "action initiated from correct thread");
 	if (!manager->current_action->in_use) {
 		current_action = manager->current_action;
 	} else if (!manager->current_action->next->in_use) {
diff --git a/drivers/md/dm-vdo/block-map.c
b/drivers/md/dm-vdo/block-map.c index 320e76527e2b..b70294d8bb61 100644 --- a/drivers/md/dm-vdo/block-map.c +++ b/drivers/md/dm-vdo/block-map.c @@ -246,16 +246,16 @@ static inline void assert_on_cache_thread(struct vdo_page_cache *cache, { thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((thread_id == cache->zone->thread_id), - "%s() must only be called on cache thread %d, not thread %d", - function_name, cache->zone->thread_id, thread_id); + VDO_ASSERT_LOG_ONLY((thread_id == cache->zone->thread_id), + "%s() must only be called on cache thread %d, not thread %d", + function_name, cache->zone->thread_id, thread_id); } /** assert_io_allowed() - Assert that a page cache may issue I/O. */ static inline void assert_io_allowed(struct vdo_page_cache *cache) { - ASSERT_LOG_ONLY(!vdo_is_state_quiescent(&cache->zone->state), - "VDO page cache may issue I/O"); + VDO_ASSERT_LOG_ONLY(!vdo_is_state_quiescent(&cache->zone->state), + "VDO page cache may issue I/O"); } /** report_cache_pressure() - Log and, if enabled, report cache pressure. */ @@ -287,9 +287,9 @@ static const char * __must_check get_page_state_name(enum vdo_page_buffer_state BUILD_BUG_ON(ARRAY_SIZE(state_names) != PAGE_STATE_COUNT); - result = ASSERT(state < ARRAY_SIZE(state_names), - "Unknown page_state value %d", state); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(state < ARRAY_SIZE(state_names), + "Unknown page_state value %d", state); + if (result != VDO_SUCCESS) return "[UNKNOWN PAGE STATE]"; return state_names[state]; @@ -378,8 +378,8 @@ static int __must_check set_info_pbn(struct page_info *info, physical_block_numb struct vdo_page_cache *cache = info->cache; /* Either the new or the old page number must be NO_PAGE. */ - int result = ASSERT((pbn == NO_PAGE) || (info->pbn == NO_PAGE), - "Must free a page before reusing it."); + int result = VDO_ASSERT((pbn == NO_PAGE) || (info->pbn == NO_PAGE), + "Must free a page before reusing it."); if (result != VDO_SUCCESS) return result; @@ -401,13 +401,13 @@ static int reset_page_info(struct page_info *info) { int result; - result = ASSERT(info->busy == 0, "VDO Page must not be busy"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(info->busy == 0, "VDO Page must not be busy"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(!vdo_waitq_has_waiters(&info->waiting), - "VDO Page must not have waiters"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(!vdo_waitq_has_waiters(&info->waiting), + "VDO Page must not have waiters"); + if (result != VDO_SUCCESS) return result; result = set_info_pbn(info, NO_PAGE); @@ -592,29 +592,29 @@ static int __must_check validate_completed_page(struct vdo_page_completion *comp { int result; - result = ASSERT(completion->ready, "VDO Page completion not ready"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(completion->ready, "VDO Page completion not ready"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(completion->info != NULL, - "VDO Page Completion must be complete"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(completion->info != NULL, + "VDO Page Completion must be complete"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(completion->info->pbn == completion->pbn, - "VDO Page Completion pbn must be consistent"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(completion->info->pbn == completion->pbn, + "VDO Page Completion pbn must be consistent"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(is_valid(completion->info), - "VDO Page Completion page 
must be valid"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(is_valid(completion->info), + "VDO Page Completion page must be valid"); + if (result != VDO_SUCCESS) return result; if (writable) { - result = ASSERT(completion->writable, - "VDO Page Completion must be writable"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(completion->writable, + "VDO Page Completion must be writable"); + if (result != VDO_SUCCESS) return result; } @@ -776,7 +776,7 @@ static int __must_check launch_page_load(struct page_info *info, if (result != VDO_SUCCESS) return result; - result = ASSERT((info->busy == 0), "Page is not busy before loading."); + result = VDO_ASSERT((info->busy == 0), "Page is not busy before loading."); if (result != VDO_SUCCESS) return result; @@ -949,8 +949,8 @@ static void discard_a_page(struct vdo_page_cache *cache) return; } - ASSERT_LOG_ONLY(!is_in_flight(info), - "page selected for discard is not in flight"); + VDO_ASSERT_LOG_ONLY(!is_in_flight(info), + "page selected for discard is not in flight"); cache->discard_count++; info->write_status = WRITE_STATUS_DISCARD; @@ -1153,8 +1153,8 @@ void vdo_release_page_completion(struct vdo_completion *completion) discard_info = page_completion->info; } - ASSERT_LOG_ONLY((page_completion->waiter.next_waiter == NULL), - "Page being released after leaving all queues"); + VDO_ASSERT_LOG_ONLY((page_completion->waiter.next_waiter == NULL), + "Page being released after leaving all queues"); page_completion->info = NULL; cache = page_completion->cache; @@ -1217,8 +1217,8 @@ void vdo_get_page(struct vdo_page_completion *page_completion, struct page_info *info; assert_on_cache_thread(cache, __func__); - ASSERT_LOG_ONLY((page_completion->waiter.next_waiter == NULL), - "New page completion was not already on a wait queue"); + VDO_ASSERT_LOG_ONLY((page_completion->waiter.next_waiter == NULL), + "New page completion was not already on a wait queue"); *page_completion = (struct vdo_page_completion) { .pbn = pbn, @@ -1265,7 +1265,7 @@ void vdo_get_page(struct vdo_page_completion *page_completion, } /* Something horrible has gone wrong. */ - ASSERT_LOG_ONLY(false, "Info found in a usable state."); + VDO_ASSERT_LOG_ONLY(false, "Info found in a usable state."); } /* The page must be fetched. */ @@ -1334,7 +1334,7 @@ int vdo_invalidate_page_cache(struct vdo_page_cache *cache) /* Make sure we don't throw away any dirty pages. 
*/ for (info = cache->infos; info < cache->infos + cache->page_count; info++) { - int result = ASSERT(!is_dirty(info), "cache must have no dirty pages"); + int result = VDO_ASSERT(!is_dirty(info), "cache must have no dirty pages"); if (result != VDO_SUCCESS) return result; @@ -1440,10 +1440,10 @@ static bool __must_check is_not_older(struct block_map_zone *zone, u8 a, u8 b) { int result; - result = ASSERT((in_cyclic_range(zone->oldest_generation, a, zone->generation, 1 << 8) && - in_cyclic_range(zone->oldest_generation, b, zone->generation, 1 << 8)), - "generation(s) %u, %u are out of range [%u, %u]", - a, b, zone->oldest_generation, zone->generation); + result = VDO_ASSERT((in_cyclic_range(zone->oldest_generation, a, zone->generation, 1 << 8) && + in_cyclic_range(zone->oldest_generation, b, zone->generation, 1 << 8)), + "generation(s) %u, %u are out of range [%u, %u]", + a, b, zone->oldest_generation, zone->generation); if (result != VDO_SUCCESS) { enter_zone_read_only_mode(zone, result); return true; @@ -1456,8 +1456,8 @@ static void release_generation(struct block_map_zone *zone, u8 generation) { int result; - result = ASSERT((zone->dirty_page_counts[generation] > 0), - "dirty page count underflow for generation %u", generation); + result = VDO_ASSERT((zone->dirty_page_counts[generation] > 0), + "dirty page count underflow for generation %u", generation); if (result != VDO_SUCCESS) { enter_zone_read_only_mode(zone, result); return; @@ -1482,8 +1482,8 @@ static void set_generation(struct block_map_zone *zone, struct tree_page *page, page->generation = new_generation; new_count = ++zone->dirty_page_counts[new_generation]; - result = ASSERT((new_count != 0), "dirty page count overflow for generation %u", - new_generation); + result = VDO_ASSERT((new_count != 0), "dirty page count overflow for generation %u", + new_generation); if (result != VDO_SUCCESS) { enter_zone_read_only_mode(zone, result); return; @@ -1698,15 +1698,15 @@ static void release_page_lock(struct data_vio *data_vio, char *what) struct tree_lock *lock_holder; struct tree_lock *lock = &data_vio->tree_lock; - ASSERT_LOG_ONLY(lock->locked, - "release of unlocked block map page %s for key %llu in tree %u", - what, (unsigned long long) lock->key, lock->root_index); + VDO_ASSERT_LOG_ONLY(lock->locked, + "release of unlocked block map page %s for key %llu in tree %u", + what, (unsigned long long) lock->key, lock->root_index); zone = data_vio->logical.zone->block_map_zone; lock_holder = vdo_int_map_remove(zone->loading_pages, lock->key); - ASSERT_LOG_ONLY((lock_holder == lock), - "block map page %s mismatch for key %llu in tree %u", - what, (unsigned long long) lock->key, lock->root_index); + VDO_ASSERT_LOG_ONLY((lock_holder == lock), + "block map page %s mismatch for key %llu in tree %u", + what, (unsigned long long) lock->key, lock->root_index); lock->locked = false; } @@ -2008,8 +2008,8 @@ static void write_expired_elements(struct block_map_zone *zone) list_del_init(&page->entry); - result = ASSERT(!vdo_waiter_is_waiting(&page->waiter), - "Newly expired page not already waiting to write"); + result = VDO_ASSERT(!vdo_waiter_is_waiting(&page->waiter), + "Newly expired page not already waiting to write"); if (result != VDO_SUCCESS) { enter_zone_read_only_mode(zone, result); continue; @@ -2867,8 +2867,8 @@ int vdo_decode_block_map(struct block_map_state_2_0 state, block_count_t logical BUILD_BUG_ON(VDO_BLOCK_MAP_ENTRIES_PER_PAGE != ((VDO_BLOCK_SIZE - sizeof(struct block_map_page)) / sizeof(struct block_map_entry))); - result = 
ASSERT(cache_size > 0, "block map cache size is specified"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(cache_size > 0, "block map cache size is specified"); + if (result != VDO_SUCCESS) return result; result = vdo_allocate_extended(struct block_map, @@ -2937,7 +2937,7 @@ void vdo_initialize_block_map_from_journal(struct block_map *map, for (z = 0; z < map->zone_count; z++) { struct dirty_lists *dirty_lists = map->zones[z].dirty_lists; - ASSERT_LOG_ONLY(dirty_lists->next_period == 0, "current period not set"); + VDO_ASSERT_LOG_ONLY(dirty_lists->next_period == 0, "current period not set"); dirty_lists->oldest_period = map->current_era_point; dirty_lists->next_period = map->current_era_point + 1; dirty_lists->offset = map->current_era_point % dirty_lists->maximum_age; @@ -2971,8 +2971,8 @@ static void initiate_drain(struct admin_state *state) { struct block_map_zone *zone = container_of(state, struct block_map_zone, state); - ASSERT_LOG_ONLY((zone->active_lookups == 0), - "%s() called with no active lookups", __func__); + VDO_ASSERT_LOG_ONLY((zone->active_lookups == 0), + "%s() called with no active lookups", __func__); if (!vdo_is_state_suspending(state)) { while (zone->dirty_lists->oldest_period < zone->dirty_lists->next_period) diff --git a/drivers/md/dm-vdo/completion.c b/drivers/md/dm-vdo/completion.c index 9e2381dc3683..5ad85334632d 100644 --- a/drivers/md/dm-vdo/completion.c +++ b/drivers/md/dm-vdo/completion.c @@ -60,7 +60,7 @@ void vdo_initialize_completion(struct vdo_completion *completion, static inline void assert_incomplete(struct vdo_completion *completion) { - ASSERT_LOG_ONLY(!completion->complete, "completion is not complete"); + VDO_ASSERT_LOG_ONLY(!completion->complete, "completion is not complete"); } /** @@ -111,10 +111,10 @@ void vdo_enqueue_completion(struct vdo_completion *completion, struct vdo *vdo = completion->vdo; thread_id_t thread_id = completion->callback_thread_id; - if (ASSERT(thread_id < vdo->thread_config.thread_count, - "thread_id %u (completion type %d) is less than thread count %u", - thread_id, completion->type, - vdo->thread_config.thread_count) != UDS_SUCCESS) + if (VDO_ASSERT(thread_id < vdo->thread_config.thread_count, + "thread_id %u (completion type %d) is less than thread count %u", + thread_id, completion->type, + vdo->thread_config.thread_count) != VDO_SUCCESS) BUG(); completion->requeue = false; diff --git a/drivers/md/dm-vdo/completion.h b/drivers/md/dm-vdo/completion.h index aa145d73a686..3407f34ce58c 100644 --- a/drivers/md/dm-vdo/completion.h +++ b/drivers/md/dm-vdo/completion.h @@ -85,9 +85,9 @@ static inline void vdo_fail_completion(struct vdo_completion *completion, int re static inline int vdo_assert_completion_type(struct vdo_completion *completion, enum vdo_completion_type expected) { - return ASSERT(expected == completion->type, - "completion type should be %u, not %u", expected, - completion->type); + return VDO_ASSERT(expected == completion->type, + "completion type should be %u, not %u", expected, + completion->type); } static inline void vdo_set_completion_callback(struct vdo_completion *completion, diff --git a/drivers/md/dm-vdo/data-vio.c b/drivers/md/dm-vdo/data-vio.c index 3d5054e61330..51c49fad1b8b 100644 --- a/drivers/md/dm-vdo/data-vio.c +++ b/drivers/md/dm-vdo/data-vio.c @@ -232,8 +232,8 @@ static bool check_for_drain_complete_locked(struct data_vio_pool *pool) if (pool->limiter.busy > 0) return false; - ASSERT_LOG_ONLY((pool->discard_limiter.busy == 0), - "no outstanding discard permits"); + 
VDO_ASSERT_LOG_ONLY((pool->discard_limiter.busy == 0), + "no outstanding discard permits"); return (bio_list_empty(&pool->limiter.new_waiters) && bio_list_empty(&pool->discard_limiter.new_waiters)); @@ -277,9 +277,9 @@ static void acknowledge_data_vio(struct data_vio *data_vio) if (bio == NULL) return; - ASSERT_LOG_ONLY((data_vio->remaining_discard <= - (u32) (VDO_BLOCK_SIZE - data_vio->offset)), - "data_vio to acknowledge is not an incomplete discard"); + VDO_ASSERT_LOG_ONLY((data_vio->remaining_discard <= + (u32) (VDO_BLOCK_SIZE - data_vio->offset)), + "data_vio to acknowledge is not an incomplete discard"); data_vio->user_bio = NULL; vdo_count_bios(&vdo->stats.bios_acknowledged, bio); @@ -443,7 +443,7 @@ static void attempt_logical_block_lock(struct vdo_completion *completion) return; } - result = ASSERT(lock_holder->logical.locked, "logical block lock held"); + result = VDO_ASSERT(lock_holder->logical.locked, "logical block lock held"); if (result != VDO_SUCCESS) { continue_data_vio_with_error(data_vio, result); return; @@ -627,9 +627,9 @@ static void update_limiter(struct limiter *limiter) struct bio_list *waiters = &limiter->waiters; data_vio_count_t available = limiter->limit - limiter->busy; - ASSERT_LOG_ONLY((limiter->release_count <= limiter->busy), - "Release count %u is not more than busy count %u", - limiter->release_count, limiter->busy); + VDO_ASSERT_LOG_ONLY((limiter->release_count <= limiter->busy), + "Release count %u is not more than busy count %u", + limiter->release_count, limiter->busy); get_waiters(limiter); for (; (limiter->release_count > 0) && !bio_list_empty(waiters); limiter->release_count--) @@ -850,8 +850,8 @@ int make_data_vio_pool(struct vdo *vdo, data_vio_count_t pool_size, if (result != VDO_SUCCESS) return result; - ASSERT_LOG_ONLY((discard_limit <= pool_size), - "discard limit does not exceed pool size"); + VDO_ASSERT_LOG_ONLY((discard_limit <= pool_size), + "discard limit does not exceed pool size"); initialize_limiter(&pool->discard_limiter, pool, assign_discard_permit, discard_limit); pool->discard_limiter.permitted_waiters = &pool->permitted_discards; @@ -908,15 +908,15 @@ void free_data_vio_pool(struct data_vio_pool *pool) BUG_ON(atomic_read(&pool->processing)); spin_lock(&pool->lock); - ASSERT_LOG_ONLY((pool->limiter.busy == 0), - "data_vio pool must not have %u busy entries when being freed", - pool->limiter.busy); - ASSERT_LOG_ONLY((bio_list_empty(&pool->limiter.waiters) && - bio_list_empty(&pool->limiter.new_waiters)), - "data_vio pool must not have threads waiting to read or write when being freed"); - ASSERT_LOG_ONLY((bio_list_empty(&pool->discard_limiter.waiters) && - bio_list_empty(&pool->discard_limiter.new_waiters)), - "data_vio pool must not have threads waiting to discard when being freed"); + VDO_ASSERT_LOG_ONLY((pool->limiter.busy == 0), + "data_vio pool must not have %u busy entries when being freed", + pool->limiter.busy); + VDO_ASSERT_LOG_ONLY((bio_list_empty(&pool->limiter.waiters) && + bio_list_empty(&pool->limiter.new_waiters)), + "data_vio pool must not have threads waiting to read or write when being freed"); + VDO_ASSERT_LOG_ONLY((bio_list_empty(&pool->discard_limiter.waiters) && + bio_list_empty(&pool->discard_limiter.new_waiters)), + "data_vio pool must not have threads waiting to discard when being freed"); spin_unlock(&pool->lock); list_for_each_entry_safe(data_vio, tmp, &pool->available, pool_entry) { @@ -961,8 +961,8 @@ void vdo_launch_bio(struct data_vio_pool *pool, struct bio *bio) { struct data_vio *data_vio; - 
ASSERT_LOG_ONLY(!vdo_is_state_quiescent(&pool->state), - "data_vio_pool not quiescent on acquire"); + VDO_ASSERT_LOG_ONLY(!vdo_is_state_quiescent(&pool->state), + "data_vio_pool not quiescent on acquire"); bio->bi_private = (void *) jiffies; spin_lock(&pool->lock); @@ -998,8 +998,8 @@ static void initiate_drain(struct admin_state *state) static void assert_on_vdo_cpu_thread(const struct vdo *vdo, const char *name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == vdo->thread_config.cpu_thread), - "%s called on cpu thread", name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == vdo->thread_config.cpu_thread), + "%s called on cpu thread", name); } /** @@ -1173,17 +1173,17 @@ static void release_lock(struct data_vio *data_vio, struct lbn_lock *lock) /* The lock is not locked, so it had better not be registered in the lock map. */ struct data_vio *lock_holder = vdo_int_map_get(lock_map, lock->lbn); - ASSERT_LOG_ONLY((data_vio != lock_holder), - "no logical block lock held for block %llu", - (unsigned long long) lock->lbn); + VDO_ASSERT_LOG_ONLY((data_vio != lock_holder), + "no logical block lock held for block %llu", + (unsigned long long) lock->lbn); return; } /* Release the lock by removing the lock from the map. */ lock_holder = vdo_int_map_remove(lock_map, lock->lbn); - ASSERT_LOG_ONLY((data_vio == lock_holder), - "logical block lock mismatch for block %llu", - (unsigned long long) lock->lbn); + VDO_ASSERT_LOG_ONLY((data_vio == lock_holder), + "logical block lock mismatch for block %llu", + (unsigned long long) lock->lbn); lock->locked = false; } @@ -1193,7 +1193,7 @@ static void transfer_lock(struct data_vio *data_vio, struct lbn_lock *lock) struct data_vio *lock_holder, *next_lock_holder; int result; - ASSERT_LOG_ONLY(lock->locked, "lbn_lock with waiters is not locked"); + VDO_ASSERT_LOG_ONLY(lock->locked, "lbn_lock with waiters is not locked"); /* Another data_vio is waiting for the lock, transfer it in a single lock map operation. 
*/ next_lock_holder = @@ -1210,9 +1210,9 @@ static void transfer_lock(struct data_vio *data_vio, struct lbn_lock *lock) return; } - ASSERT_LOG_ONLY((lock_holder == data_vio), - "logical block lock mismatch for block %llu", - (unsigned long long) lock->lbn); + VDO_ASSERT_LOG_ONLY((lock_holder == data_vio), + "logical block lock mismatch for block %llu", + (unsigned long long) lock->lbn); lock->locked = false; /* @@ -1275,10 +1275,10 @@ static void finish_cleanup(struct data_vio *data_vio) { struct vdo_completion *completion = &data_vio->vio.completion; - ASSERT_LOG_ONLY(data_vio->allocation.lock == NULL, - "complete data_vio has no allocation lock"); - ASSERT_LOG_ONLY(data_vio->hash_lock == NULL, - "complete data_vio has no hash lock"); + VDO_ASSERT_LOG_ONLY(data_vio->allocation.lock == NULL, + "complete data_vio has no allocation lock"); + VDO_ASSERT_LOG_ONLY(data_vio->hash_lock == NULL, + "complete data_vio has no hash lock"); if ((data_vio->remaining_discard <= VDO_BLOCK_SIZE) || (completion->result != VDO_SUCCESS)) { struct data_vio_pool *pool = completion->vdo->data_vio_pool; @@ -1404,8 +1404,8 @@ void data_vio_allocate_data_block(struct data_vio *data_vio, { struct allocation *allocation = &data_vio->allocation; - ASSERT_LOG_ONLY((allocation->pbn == VDO_ZERO_BLOCK), - "data_vio does not have an allocation"); + VDO_ASSERT_LOG_ONLY((allocation->pbn == VDO_ZERO_BLOCK), + "data_vio does not have an allocation"); allocation->write_lock_type = write_lock_type; allocation->zone = vdo_get_next_allocation_zone(data_vio->logical.zone); allocation->first_allocation_zone = allocation->zone->zone_number; @@ -1796,11 +1796,11 @@ static void compress_data_vio(struct vdo_completion *completion) */ void launch_compress_data_vio(struct data_vio *data_vio) { - ASSERT_LOG_ONLY(!data_vio->is_duplicate, "compressing a non-duplicate block"); - ASSERT_LOG_ONLY(data_vio->hash_lock != NULL, - "data_vio to compress has a hash_lock"); - ASSERT_LOG_ONLY(data_vio_has_allocation(data_vio), - "data_vio to compress has an allocation"); + VDO_ASSERT_LOG_ONLY(!data_vio->is_duplicate, "compressing a non-duplicate block"); + VDO_ASSERT_LOG_ONLY(data_vio->hash_lock != NULL, + "data_vio to compress has a hash_lock"); + VDO_ASSERT_LOG_ONLY(data_vio_has_allocation(data_vio), + "data_vio to compress has an allocation"); /* * There are 4 reasons why a data_vio which has reached this point will not be eligible for @@ -1841,7 +1841,7 @@ static void hash_data_vio(struct vdo_completion *completion) struct data_vio *data_vio = as_data_vio(completion); assert_data_vio_on_cpu_thread(data_vio); - ASSERT_LOG_ONLY(!data_vio->is_zero, "zero blocks should not be hashed"); + VDO_ASSERT_LOG_ONLY(!data_vio->is_zero, "zero blocks should not be hashed"); murmurhash3_128(data_vio->vio.data, VDO_BLOCK_SIZE, 0x62ea60be, &data_vio->record_name); @@ -1856,7 +1856,7 @@ static void hash_data_vio(struct vdo_completion *completion) static void prepare_for_dedupe(struct data_vio *data_vio) { /* We don't care what thread we are on. 
*/ - ASSERT_LOG_ONLY(!data_vio->is_zero, "must not prepare to dedupe zero blocks"); + VDO_ASSERT_LOG_ONLY(!data_vio->is_zero, "must not prepare to dedupe zero blocks"); /* * Before we can dedupe, we need to know the record name, so the first @@ -1929,11 +1929,11 @@ static void acknowledge_write_callback(struct vdo_completion *completion) struct data_vio *data_vio = as_data_vio(completion); struct vdo *vdo = completion->vdo; - ASSERT_LOG_ONLY((!vdo_uses_bio_ack_queue(vdo) || - (vdo_get_callback_thread_id() == vdo->thread_config.bio_ack_thread)), - "%s() called on bio ack queue", __func__); - ASSERT_LOG_ONLY(data_vio_has_flush_generation_lock(data_vio), - "write VIO to be acknowledged has a flush generation lock"); + VDO_ASSERT_LOG_ONLY((!vdo_uses_bio_ack_queue(vdo) || + (vdo_get_callback_thread_id() == vdo->thread_config.bio_ack_thread)), + "%s() called on bio ack queue", __func__); + VDO_ASSERT_LOG_ONLY(data_vio_has_flush_generation_lock(data_vio), + "write VIO to be acknowledged has a flush generation lock"); acknowledge_data_vio(data_vio); if (data_vio->new_mapped.pbn == VDO_ZERO_BLOCK) { /* This is a zero write or discard */ @@ -1998,8 +1998,8 @@ static void handle_allocation_error(struct vdo_completion *completion) static int assert_is_discard(struct data_vio *data_vio) { - int result = ASSERT(data_vio->is_discard, - "data_vio with no block map page is a discard"); + int result = VDO_ASSERT(data_vio->is_discard, + "data_vio with no block map page is a discard"); return ((result == VDO_SUCCESS) ? result : VDO_READ_ONLY); } diff --git a/drivers/md/dm-vdo/data-vio.h b/drivers/md/dm-vdo/data-vio.h index 44fd0d8ccb76..25926b6cd98b 100644 --- a/drivers/md/dm-vdo/data-vio.h +++ b/drivers/md/dm-vdo/data-vio.h @@ -280,7 +280,7 @@ struct data_vio { static inline struct data_vio *vio_as_data_vio(struct vio *vio) { - ASSERT_LOG_ONLY((vio->type == VIO_TYPE_DATA), "vio is a data_vio"); + VDO_ASSERT_LOG_ONLY((vio->type == VIO_TYPE_DATA), "vio is a data_vio"); return container_of(vio, struct data_vio, vio); } @@ -374,9 +374,9 @@ static inline void assert_data_vio_in_hash_zone(struct data_vio *data_vio) * It's odd to use the LBN, but converting the record name to hex is a bit clunky for an * inline, and the LBN better than nothing as an identifier. 
*/ - ASSERT_LOG_ONLY((expected == thread_id), - "data_vio for logical block %llu on thread %u, should be on hash zone thread %u", - (unsigned long long) data_vio->logical.lbn, thread_id, expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "data_vio for logical block %llu on thread %u, should be on hash zone thread %u", + (unsigned long long) data_vio->logical.lbn, thread_id, expected); } static inline void set_data_vio_hash_zone_callback(struct data_vio *data_vio, @@ -402,9 +402,9 @@ static inline void assert_data_vio_in_logical_zone(struct data_vio *data_vio) thread_id_t expected = data_vio->logical.zone->thread_id; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "data_vio for logical block %llu on thread %u, should be on thread %u", - (unsigned long long) data_vio->logical.lbn, thread_id, expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "data_vio for logical block %llu on thread %u, should be on thread %u", + (unsigned long long) data_vio->logical.lbn, thread_id, expected); } static inline void set_data_vio_logical_callback(struct data_vio *data_vio, @@ -430,10 +430,10 @@ static inline void assert_data_vio_in_allocated_zone(struct data_vio *data_vio) thread_id_t expected = data_vio->allocation.zone->thread_id; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "struct data_vio for allocated physical block %llu on thread %u, should be on thread %u", - (unsigned long long) data_vio->allocation.pbn, thread_id, - expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "struct data_vio for allocated physical block %llu on thread %u, should be on thread %u", + (unsigned long long) data_vio->allocation.pbn, thread_id, + expected); } static inline void set_data_vio_allocated_zone_callback(struct data_vio *data_vio, @@ -460,10 +460,10 @@ static inline void assert_data_vio_in_duplicate_zone(struct data_vio *data_vio) thread_id_t expected = data_vio->duplicate.zone->thread_id; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "data_vio for duplicate physical block %llu on thread %u, should be on thread %u", - (unsigned long long) data_vio->duplicate.pbn, thread_id, - expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "data_vio for duplicate physical block %llu on thread %u, should be on thread %u", + (unsigned long long) data_vio->duplicate.pbn, thread_id, + expected); } static inline void set_data_vio_duplicate_zone_callback(struct data_vio *data_vio, @@ -490,9 +490,9 @@ static inline void assert_data_vio_in_mapped_zone(struct data_vio *data_vio) thread_id_t expected = data_vio->mapped.zone->thread_id; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "data_vio for mapped physical block %llu on thread %u, should be on thread %u", - (unsigned long long) data_vio->mapped.pbn, thread_id, expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "data_vio for mapped physical block %llu on thread %u, should be on thread %u", + (unsigned long long) data_vio->mapped.pbn, thread_id, expected); } static inline void set_data_vio_mapped_zone_callback(struct data_vio *data_vio, @@ -507,10 +507,10 @@ static inline void assert_data_vio_in_new_mapped_zone(struct data_vio *data_vio) thread_id_t expected = data_vio->new_mapped.zone->thread_id; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "data_vio for new_mapped physical block %llu on 
thread %u, should be on thread %u", - (unsigned long long) data_vio->new_mapped.pbn, thread_id, - expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "data_vio for new_mapped physical block %llu on thread %u, should be on thread %u", + (unsigned long long) data_vio->new_mapped.pbn, thread_id, + expected); } static inline void set_data_vio_new_mapped_zone_callback(struct data_vio *data_vio, @@ -525,10 +525,10 @@ static inline void assert_data_vio_in_journal_zone(struct data_vio *data_vio) thread_id_t journal_thread = vdo_from_data_vio(data_vio)->thread_config.journal_thread; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((journal_thread == thread_id), - "data_vio for logical block %llu on thread %u, should be on journal thread %u", - (unsigned long long) data_vio->logical.lbn, thread_id, - journal_thread); + VDO_ASSERT_LOG_ONLY((journal_thread == thread_id), + "data_vio for logical block %llu on thread %u, should be on journal thread %u", + (unsigned long long) data_vio->logical.lbn, thread_id, + journal_thread); } static inline void set_data_vio_journal_callback(struct data_vio *data_vio, @@ -555,10 +555,10 @@ static inline void assert_data_vio_in_packer_zone(struct data_vio *data_vio) thread_id_t packer_thread = vdo_from_data_vio(data_vio)->thread_config.packer_thread; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((packer_thread == thread_id), - "data_vio for logical block %llu on thread %u, should be on packer thread %u", - (unsigned long long) data_vio->logical.lbn, thread_id, - packer_thread); + VDO_ASSERT_LOG_ONLY((packer_thread == thread_id), + "data_vio for logical block %llu on thread %u, should be on packer thread %u", + (unsigned long long) data_vio->logical.lbn, thread_id, + packer_thread); } static inline void set_data_vio_packer_callback(struct data_vio *data_vio, @@ -585,10 +585,10 @@ static inline void assert_data_vio_on_cpu_thread(struct data_vio *data_vio) thread_id_t cpu_thread = vdo_from_data_vio(data_vio)->thread_config.cpu_thread; thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((cpu_thread == thread_id), - "data_vio for logical block %llu on thread %u, should be on cpu thread %u", - (unsigned long long) data_vio->logical.lbn, thread_id, - cpu_thread); + VDO_ASSERT_LOG_ONLY((cpu_thread == thread_id), + "data_vio for logical block %llu on thread %u, should be on cpu thread %u", + (unsigned long long) data_vio->logical.lbn, thread_id, + cpu_thread); } static inline void set_data_vio_cpu_callback(struct data_vio *data_vio, diff --git a/drivers/md/dm-vdo/dedupe.c b/drivers/md/dm-vdo/dedupe.c index 7cdbe825116f..52bdf657db64 100644 --- a/drivers/md/dm-vdo/dedupe.c +++ b/drivers/md/dm-vdo/dedupe.c @@ -327,8 +327,8 @@ static inline struct hash_zones *as_hash_zones(struct vdo_completion *completion static inline void assert_in_hash_zone(struct hash_zone *zone, const char *name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == zone->thread_id), - "%s called on hash zone thread", name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == zone->thread_id), + "%s called on hash zone thread", name); } static inline bool change_context_state(struct dedupe_context *context, int old, int new) @@ -404,8 +404,8 @@ static void assert_hash_lock_agent(struct data_vio *data_vio, const char *where) { /* Not safe to access the agent field except from the hash zone. 
*/ assert_data_vio_in_hash_zone(data_vio); - ASSERT_LOG_ONLY(data_vio == data_vio->hash_lock->agent, - "%s must be for the hash lock agent", where); + VDO_ASSERT_LOG_ONLY(data_vio == data_vio->hash_lock->agent, + "%s must be for the hash lock agent", where); } /** @@ -416,9 +416,8 @@ static void assert_hash_lock_agent(struct data_vio *data_vio, const char *where) */ static void set_duplicate_lock(struct hash_lock *hash_lock, struct pbn_lock *pbn_lock) { - ASSERT_LOG_ONLY((hash_lock->duplicate_lock == NULL), - "hash lock must not already hold a duplicate lock"); - + VDO_ASSERT_LOG_ONLY((hash_lock->duplicate_lock == NULL), + "hash lock must not already hold a duplicate lock"); pbn_lock->holder_count += 1; hash_lock->duplicate_lock = pbn_lock; } @@ -446,12 +445,12 @@ static void set_hash_lock(struct data_vio *data_vio, struct hash_lock *new_lock) struct hash_lock *old_lock = data_vio->hash_lock; if (old_lock != NULL) { - ASSERT_LOG_ONLY(data_vio->hash_zone != NULL, - "must have a hash zone when holding a hash lock"); - ASSERT_LOG_ONLY(!list_empty(&data_vio->hash_lock_entry), - "must be on a hash lock ring when holding a hash lock"); - ASSERT_LOG_ONLY(old_lock->reference_count > 0, - "hash lock reference must be counted"); + VDO_ASSERT_LOG_ONLY(data_vio->hash_zone != NULL, + "must have a hash zone when holding a hash lock"); + VDO_ASSERT_LOG_ONLY(!list_empty(&data_vio->hash_lock_entry), + "must be on a hash lock ring when holding a hash lock"); + VDO_ASSERT_LOG_ONLY(old_lock->reference_count > 0, + "hash lock reference must be counted"); if ((old_lock->state != VDO_HASH_LOCK_BYPASSING) && (old_lock->state != VDO_HASH_LOCK_UNLOCKING)) { @@ -459,9 +458,9 @@ static void set_hash_lock(struct data_vio *data_vio, struct hash_lock *new_lock) * If the reference count goes to zero in a non-terminal state, we're most * likely leaking this lock. 
*/ - ASSERT_LOG_ONLY(old_lock->reference_count > 1, - "hash locks should only become unreferenced in a terminal state, not state %s", - get_hash_lock_state_name(old_lock->state)); + VDO_ASSERT_LOG_ONLY(old_lock->reference_count > 1, + "hash locks should only become unreferenced in a terminal state, not state %s", + get_hash_lock_state_name(old_lock->state)); } list_del_init(&data_vio->hash_lock_entry); @@ -641,8 +640,8 @@ static void finish_unlocking(struct vdo_completion *completion) assert_hash_lock_agent(agent, __func__); - ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, - "must have released the duplicate lock for the hash lock"); + VDO_ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, + "must have released the duplicate lock for the hash lock"); if (!lock->verified) { /* @@ -696,8 +695,8 @@ static void unlock_duplicate_pbn(struct vdo_completion *completion) struct hash_lock *lock = agent->hash_lock; assert_data_vio_in_duplicate_zone(agent); - ASSERT_LOG_ONLY(lock->duplicate_lock != NULL, - "must have a duplicate lock to release"); + VDO_ASSERT_LOG_ONLY(lock->duplicate_lock != NULL, + "must have a duplicate lock to release"); vdo_release_physical_zone_pbn_lock(agent->duplicate.zone, agent->duplicate.pbn, vdo_forget(lock->duplicate_lock)); @@ -799,8 +798,8 @@ static void start_updating(struct hash_lock *lock, struct data_vio *agent) { lock->state = VDO_HASH_LOCK_UPDATING; - ASSERT_LOG_ONLY(lock->verified, "new advice should have been verified"); - ASSERT_LOG_ONLY(lock->update_advice, "should only update advice if needed"); + VDO_ASSERT_LOG_ONLY(lock->verified, "new advice should have been verified"); + VDO_ASSERT_LOG_ONLY(lock->update_advice, "should only update advice if needed"); agent->last_async_operation = VIO_ASYNC_OP_UPDATE_DEDUPE_INDEX; set_data_vio_hash_zone_callback(agent, finish_updating); @@ -822,9 +821,9 @@ static void finish_deduping(struct hash_lock *lock, struct data_vio *data_vio) { struct data_vio *agent = data_vio; - ASSERT_LOG_ONLY(lock->agent == NULL, "shouldn't have an agent in DEDUPING"); - ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&lock->waiters), - "shouldn't have any lock waiters in DEDUPING"); + VDO_ASSERT_LOG_ONLY(lock->agent == NULL, "shouldn't have an agent in DEDUPING"); + VDO_ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&lock->waiters), + "shouldn't have any lock waiters in DEDUPING"); /* Just release the lock reference if other data_vios are still deduping. */ if (lock->reference_count > 1) { @@ -879,8 +878,8 @@ static int __must_check acquire_lock(struct hash_zone *zone, * Borrow and prepare a lock from the pool so we don't have to do two int_map accesses * in the common case of no lock contention. */ - result = ASSERT(!list_empty(&zone->lock_pool), - "never need to wait for a free hash lock"); + result = VDO_ASSERT(!list_empty(&zone->lock_pool), + "never need to wait for a free hash lock"); if (result != VDO_SUCCESS) return result; @@ -902,11 +901,11 @@ static int __must_check acquire_lock(struct hash_zone *zone, if (replace_lock != NULL) { /* On mismatch put the old lock back and return a severe error */ - ASSERT_LOG_ONLY(lock == replace_lock, - "old lock must have been in the lock map"); + VDO_ASSERT_LOG_ONLY(lock == replace_lock, + "old lock must have been in the lock map"); /* TODO: Check earlier and bail out? 
*/ - ASSERT_LOG_ONLY(replace_lock->registered, - "old lock must have been marked registered"); + VDO_ASSERT_LOG_ONLY(replace_lock->registered, + "old lock must have been marked registered"); replace_lock->registered = false; } @@ -1018,15 +1017,15 @@ static void start_deduping(struct hash_lock *lock, struct data_vio *agent, * deduplicate against it. */ if (lock->duplicate_lock == NULL) { - ASSERT_LOG_ONLY(!vdo_is_state_compressed(agent->new_mapped.state), - "compression must have shared a lock"); - ASSERT_LOG_ONLY(agent_is_done, - "agent must have written the new duplicate"); + VDO_ASSERT_LOG_ONLY(!vdo_is_state_compressed(agent->new_mapped.state), + "compression must have shared a lock"); + VDO_ASSERT_LOG_ONLY(agent_is_done, + "agent must have written the new duplicate"); transfer_allocation_lock(agent); } - ASSERT_LOG_ONLY(vdo_is_pbn_read_lock(lock->duplicate_lock), - "duplicate_lock must be a PBN read lock"); + VDO_ASSERT_LOG_ONLY(vdo_is_pbn_read_lock(lock->duplicate_lock), + "duplicate_lock must be a PBN read lock"); /* * This state is not like any of the other states. There is no designated agent--the agent @@ -1204,7 +1203,7 @@ static void start_verifying(struct hash_lock *lock, struct data_vio *agent) agent->scratch_block); lock->state = VDO_HASH_LOCK_VERIFYING; - ASSERT_LOG_ONLY(!lock->verified, "hash lock only verifies advice once"); + VDO_ASSERT_LOG_ONLY(!lock->verified, "hash lock only verifies advice once"); agent->last_async_operation = VIO_ASYNC_OP_VERIFY_DUPLICATION; result = vio_reset_bio(vio, buffer, verify_endio, REQ_OP_READ, @@ -1234,8 +1233,8 @@ static void finish_locking(struct vdo_completion *completion) assert_hash_lock_agent(agent, __func__); if (!agent->is_duplicate) { - ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, - "must not hold duplicate_lock if not flagged as a duplicate"); + VDO_ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, + "must not hold duplicate_lock if not flagged as a duplicate"); /* * LOCKING -> WRITING transition: The advice block is being modified or has no * available references, so try to write or compress the data, remembering to @@ -1247,8 +1246,8 @@ static void finish_locking(struct vdo_completion *completion) return; } - ASSERT_LOG_ONLY(lock->duplicate_lock != NULL, - "must hold duplicate_lock if flagged as a duplicate"); + VDO_ASSERT_LOG_ONLY(lock->duplicate_lock != NULL, + "must hold duplicate_lock if flagged as a duplicate"); if (!lock->verified) { /* @@ -1418,8 +1417,8 @@ static void lock_duplicate_pbn(struct vdo_completion *completion) */ static void start_locking(struct hash_lock *lock, struct data_vio *agent) { - ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, - "must not acquire a duplicate lock when already holding it"); + VDO_ASSERT_LOG_ONLY(lock->duplicate_lock == NULL, + "must not acquire a duplicate lock when already holding it"); lock->state = VDO_HASH_LOCK_LOCKING; @@ -1725,8 +1724,8 @@ static void start_querying(struct hash_lock *lock, struct data_vio *data_vio) */ static void report_bogus_lock_state(struct hash_lock *lock, struct data_vio *data_vio) { - ASSERT_LOG_ONLY(false, "hash lock must not be in unimplemented state %s", - get_hash_lock_state_name(lock->state)); + VDO_ASSERT_LOG_ONLY(false, "hash lock must not be in unimplemented state %s", + get_hash_lock_state_name(lock->state)); continue_data_vio_with_error(data_vio, VDO_LOCK_ERROR); } @@ -1748,8 +1747,8 @@ void vdo_continue_hash_lock(struct vdo_completion *completion) switch (lock->state) { case VDO_HASH_LOCK_WRITING: - ASSERT_LOG_ONLY(data_vio == lock->agent, - "only 
the lock agent may continue the lock"); + VDO_ASSERT_LOG_ONLY(data_vio == lock->agent, + "only the lock agent may continue the lock"); finish_writing(lock, data_vio); break; @@ -1815,18 +1814,18 @@ static inline int assert_hash_lock_preconditions(const struct data_vio *data_vio int result; /* FIXME: BUG_ON() and/or enter read-only mode? */ - result = ASSERT(data_vio->hash_lock == NULL, - "must not already hold a hash lock"); + result = VDO_ASSERT(data_vio->hash_lock == NULL, + "must not already hold a hash lock"); if (result != VDO_SUCCESS) return result; - result = ASSERT(list_empty(&data_vio->hash_lock_entry), - "must not already be a member of a hash lock ring"); + result = VDO_ASSERT(list_empty(&data_vio->hash_lock_entry), + "must not already be a member of a hash lock ring"); if (result != VDO_SUCCESS) return result; - return ASSERT(data_vio->recovery_sequence_number == 0, - "must not hold a recovery lock when getting a hash lock"); + return VDO_ASSERT(data_vio->recovery_sequence_number == 0, + "must not hold a recovery lock when getting a hash lock"); } /** @@ -1933,24 +1932,24 @@ void vdo_release_hash_lock(struct data_vio *data_vio) struct hash_lock *removed; removed = vdo_int_map_remove(zone->hash_lock_map, lock_key); - ASSERT_LOG_ONLY(lock == removed, - "hash lock being released must have been mapped"); + VDO_ASSERT_LOG_ONLY(lock == removed, + "hash lock being released must have been mapped"); } else { - ASSERT_LOG_ONLY(lock != vdo_int_map_get(zone->hash_lock_map, lock_key), - "unregistered hash lock must not be in the lock map"); - } - - ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&lock->waiters), - "hash lock returned to zone must have no waiters"); - ASSERT_LOG_ONLY((lock->duplicate_lock == NULL), - "hash lock returned to zone must not reference a PBN lock"); - ASSERT_LOG_ONLY((lock->state == VDO_HASH_LOCK_BYPASSING), - "returned hash lock must not be in use with state %s", - get_hash_lock_state_name(lock->state)); - ASSERT_LOG_ONLY(list_empty(&lock->pool_node), - "hash lock returned to zone must not be in a pool ring"); - ASSERT_LOG_ONLY(list_empty(&lock->duplicate_ring), - "hash lock returned to zone must not reference DataVIOs"); + VDO_ASSERT_LOG_ONLY(lock != vdo_int_map_get(zone->hash_lock_map, lock_key), + "unregistered hash lock must not be in the lock map"); + } + + VDO_ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&lock->waiters), + "hash lock returned to zone must have no waiters"); + VDO_ASSERT_LOG_ONLY((lock->duplicate_lock == NULL), + "hash lock returned to zone must not reference a PBN lock"); + VDO_ASSERT_LOG_ONLY((lock->state == VDO_HASH_LOCK_BYPASSING), + "returned hash lock must not be in use with state %s", + get_hash_lock_state_name(lock->state)); + VDO_ASSERT_LOG_ONLY(list_empty(&lock->pool_node), + "hash lock returned to zone must not be in a pool ring"); + VDO_ASSERT_LOG_ONLY(list_empty(&lock->duplicate_ring), + "hash lock returned to zone must not reference DataVIOs"); return_hash_lock_to_pool(zone, lock); } @@ -1965,13 +1964,13 @@ static void transfer_allocation_lock(struct data_vio *data_vio) struct allocation *allocation = &data_vio->allocation; struct hash_lock *hash_lock = data_vio->hash_lock; - ASSERT_LOG_ONLY(data_vio->new_mapped.pbn == allocation->pbn, - "transferred lock must be for the block written"); + VDO_ASSERT_LOG_ONLY(data_vio->new_mapped.pbn == allocation->pbn, + "transferred lock must be for the block written"); allocation->pbn = VDO_ZERO_BLOCK; - ASSERT_LOG_ONLY(vdo_is_pbn_read_lock(allocation->lock), - "must have downgraded the allocation lock 
before transfer"); + VDO_ASSERT_LOG_ONLY(vdo_is_pbn_read_lock(allocation->lock), + "must have downgraded the allocation lock before transfer"); hash_lock->duplicate = data_vio->new_mapped; data_vio->duplicate = data_vio->new_mapped; @@ -1997,10 +1996,10 @@ void vdo_share_compressed_write_lock(struct data_vio *data_vio, { bool claimed; - ASSERT_LOG_ONLY(vdo_get_duplicate_lock(data_vio) == NULL, - "a duplicate PBN lock should not exist when writing"); - ASSERT_LOG_ONLY(vdo_is_state_compressed(data_vio->new_mapped.state), - "lock transfer must be for a compressed write"); + VDO_ASSERT_LOG_ONLY(vdo_get_duplicate_lock(data_vio) == NULL, + "a duplicate PBN lock should not exist when writing"); + VDO_ASSERT_LOG_ONLY(vdo_is_state_compressed(data_vio->new_mapped.state), + "lock transfer must be for a compressed write"); assert_data_vio_in_new_mapped_zone(data_vio); /* First sharer downgrades the lock. */ @@ -2020,7 +2019,7 @@ void vdo_share_compressed_write_lock(struct data_vio *data_vio, * deduplicating against it before our incRef. */ claimed = vdo_claim_pbn_lock_increment(pbn_lock); - ASSERT_LOG_ONLY(claimed, "impossible to fail to claim an initial increment"); + VDO_ASSERT_LOG_ONLY(claimed, "impossible to fail to claim an initial increment"); } static void dedupe_kobj_release(struct kobject *directory) @@ -2296,8 +2295,8 @@ static void finish_index_operation(struct uds_request *request) */ if (!change_context_state(context, DEDUPE_CONTEXT_TIMED_OUT, DEDUPE_CONTEXT_TIMED_OUT_COMPLETE)) { - ASSERT_LOG_ONLY(false, "uds request was timed out (state %d)", - atomic_read(&context->state)); + VDO_ASSERT_LOG_ONLY(false, "uds request was timed out (state %d)", + atomic_read(&context->state)); } uds_funnel_queue_put(context->zone->timed_out_complete, &context->queue_entry); @@ -2341,7 +2340,7 @@ static void check_for_drain_complete(struct hash_zone *zone) if (recycled > 0) WRITE_ONCE(zone->active, zone->active - recycled); - ASSERT_LOG_ONLY(READ_ONCE(zone->active) == 0, "all contexts inactive"); + VDO_ASSERT_LOG_ONLY(READ_ONCE(zone->active) == 0, "all contexts inactive"); vdo_finish_draining(&zone->state); } diff --git a/drivers/md/dm-vdo/dm-vdo-target.c b/drivers/md/dm-vdo/dm-vdo-target.c index 90ba379f8d3e..e493b2fec90b 100644 --- a/drivers/md/dm-vdo/dm-vdo-target.c +++ b/drivers/md/dm-vdo/dm-vdo-target.c @@ -904,8 +904,8 @@ static int vdo_map_bio(struct dm_target *ti, struct bio *bio) struct vdo_work_queue *current_work_queue; const struct admin_state_code *code = vdo_get_admin_state_code(&vdo->admin.state); - ASSERT_LOG_ONLY(code->normal, "vdo should not receive bios while in state %s", - code->name); + VDO_ASSERT_LOG_ONLY(code->normal, "vdo should not receive bios while in state %s", + code->name); /* Count all incoming bios. */ vdo_count_bios(&vdo->stats.bios_in, bio); @@ -1244,9 +1244,9 @@ static int perform_admin_operation(struct vdo *vdo, u32 starting_phase, /* Assert that we are operating on the correct thread for the current phase. 
*/ static void assert_admin_phase_thread(struct vdo *vdo, const char *what) { - ASSERT_LOG_ONLY(vdo_get_callback_thread_id() == get_thread_id_for_phase(vdo), - "%s on correct thread for %s", what, - ADMIN_PHASE_NAMES[vdo->admin.phase]); + VDO_ASSERT_LOG_ONLY(vdo_get_callback_thread_id() == get_thread_id_for_phase(vdo), + "%s on correct thread for %s", what, + ADMIN_PHASE_NAMES[vdo->admin.phase]); } /** @@ -1424,11 +1424,11 @@ static void release_instance(unsigned int instance) { mutex_lock(&instances_lock); if (instance >= instances.bit_count) { - ASSERT_LOG_ONLY(false, - "instance number %u must be less than bit count %u", - instance, instances.bit_count); + VDO_ASSERT_LOG_ONLY(false, + "instance number %u must be less than bit count %u", + instance, instances.bit_count); } else if (test_bit(instance, instances.words) == 0) { - ASSERT_LOG_ONLY(false, "instance number %u must be allocated", instance); + VDO_ASSERT_LOG_ONLY(false, "instance number %u must be allocated", instance); } else { __clear_bit(instance, instances.words); instances.count -= 1; @@ -1577,9 +1577,9 @@ static int allocate_instance(unsigned int *instance_ptr) if (instance >= instances.bit_count) { /* Nothing free after next, so wrap around to instance zero. */ instance = find_first_zero_bit(instances.words, instances.bit_count); - result = ASSERT(instance < instances.bit_count, - "impossibly, no zero bit found"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(instance < instances.bit_count, + "impossibly, no zero bit found"); + if (result != VDO_SUCCESS) return result; } @@ -1729,8 +1729,8 @@ static int prepare_to_grow_physical(struct vdo *vdo, block_count_t new_physical_ uds_log_info("Preparing to resize physical to %llu", (unsigned long long) new_physical_blocks); - ASSERT_LOG_ONLY((new_physical_blocks > current_physical_blocks), - "New physical size is larger than current physical size"); + VDO_ASSERT_LOG_ONLY((new_physical_blocks > current_physical_blocks), + "New physical size is larger than current physical size"); result = perform_admin_operation(vdo, PREPARE_GROW_PHYSICAL_PHASE_START, check_may_grow_physical, finish_operation_callback, @@ -1829,8 +1829,8 @@ static int prepare_to_modify(struct dm_target *ti, struct device_config *config, uds_log_info("Preparing to resize logical to %llu", (unsigned long long) config->logical_blocks); - ASSERT_LOG_ONLY((config->logical_blocks > logical_blocks), - "New logical size is larger than current size"); + VDO_ASSERT_LOG_ONLY((config->logical_blocks > logical_blocks), + "New logical size is larger than current size"); result = vdo_prepare_to_grow_block_map(vdo->block_map, config->logical_blocks); @@ -2890,9 +2890,9 @@ static void vdo_module_destroy(void) if (dm_registered) dm_unregister_target(&vdo_target_bio); - ASSERT_LOG_ONLY(instances.count == 0, - "should have no instance numbers still in use, but have %u", - instances.count); + VDO_ASSERT_LOG_ONLY(instances.count == 0, + "should have no instance numbers still in use, but have %u", + instances.count); vdo_free(instances.words); memset(&instances, 0, sizeof(struct instance_tracker)); diff --git a/drivers/md/dm-vdo/encodings.c b/drivers/md/dm-vdo/encodings.c index a97771fe0a43..e24c31bc3524 100644 --- a/drivers/md/dm-vdo/encodings.c +++ b/drivers/md/dm-vdo/encodings.c @@ -320,8 +320,8 @@ int __must_check vdo_parse_geometry_block(u8 *block, struct volume_geometry *geo decode_volume_geometry(block, &offset, geometry, header.version.major_version); - result = ASSERT(header.size == offset + sizeof(u32), - "should have 
decoded up to the geometry checksum"); + result = VDO_ASSERT(header.size == offset + sizeof(u32), + "should have decoded up to the geometry checksum"); if (result != VDO_SUCCESS) return result; @@ -380,25 +380,25 @@ static int decode_block_map_state_2_0(u8 *buffer, size_t *offset, initial_offset = *offset; decode_u64_le(buffer, offset, &flat_page_origin); - result = ASSERT(flat_page_origin == VDO_BLOCK_MAP_FLAT_PAGE_ORIGIN, - "Flat page origin must be %u (recorded as %llu)", - VDO_BLOCK_MAP_FLAT_PAGE_ORIGIN, - (unsigned long long) state->flat_page_origin); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(flat_page_origin == VDO_BLOCK_MAP_FLAT_PAGE_ORIGIN, + "Flat page origin must be %u (recorded as %llu)", + VDO_BLOCK_MAP_FLAT_PAGE_ORIGIN, + (unsigned long long) state->flat_page_origin); + if (result != VDO_SUCCESS) return result; decode_u64_le(buffer, offset, &flat_page_count); - result = ASSERT(flat_page_count == 0, - "Flat page count must be 0 (recorded as %llu)", - (unsigned long long) state->flat_page_count); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(flat_page_count == 0, + "Flat page count must be 0 (recorded as %llu)", + (unsigned long long) state->flat_page_count); + if (result != VDO_SUCCESS) return result; decode_u64_le(buffer, offset, &root_origin); decode_u64_le(buffer, offset, &root_count); - result = ASSERT(VDO_BLOCK_MAP_HEADER_2_0.size == *offset - initial_offset, - "decoded block map component size must match header size"); + result = VDO_ASSERT(VDO_BLOCK_MAP_HEADER_2_0.size == *offset - initial_offset, + "decoded block map component size must match header size"); if (result != VDO_SUCCESS) return result; @@ -425,8 +425,8 @@ static void encode_block_map_state_2_0(u8 *buffer, size_t *offset, encode_u64_le(buffer, offset, state.root_origin); encode_u64_le(buffer, offset, state.root_count); - ASSERT_LOG_ONLY(VDO_BLOCK_MAP_HEADER_2_0.size == *offset - initial_offset, - "encoded block map component size must match header size"); + VDO_ASSERT_LOG_ONLY(VDO_BLOCK_MAP_HEADER_2_0.size == *offset - initial_offset, + "encoded block map component size must match header size"); } /** @@ -477,8 +477,8 @@ static void encode_recovery_journal_state_7_0(u8 *buffer, size_t *offset, encode_u64_le(buffer, offset, state.logical_blocks_used); encode_u64_le(buffer, offset, state.block_map_data_blocks); - ASSERT_LOG_ONLY(VDO_RECOVERY_JOURNAL_HEADER_7_0.size == *offset - initial_offset, - "encoded recovery journal component size must match header size"); + VDO_ASSERT_LOG_ONLY(VDO_RECOVERY_JOURNAL_HEADER_7_0.size == *offset - initial_offset, + "encoded recovery journal component size must match header size"); } /** @@ -508,9 +508,9 @@ static int __must_check decode_recovery_journal_state_7_0(u8 *buffer, size_t *of decode_u64_le(buffer, offset, &logical_blocks_used); decode_u64_le(buffer, offset, &block_map_data_blocks); - result = ASSERT(VDO_RECOVERY_JOURNAL_HEADER_7_0.size == *offset - initial_offset, - "decoded recovery journal component size must match header size"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(VDO_RECOVERY_JOURNAL_HEADER_7_0.size == *offset - initial_offset, + "decoded recovery journal component size must match header size"); + if (result != VDO_SUCCESS) return result; *state = (struct recovery_journal_state_7_0) { @@ -566,8 +566,8 @@ static void encode_slab_depot_state_2_0(u8 *buffer, size_t *offset, encode_u64_le(buffer, offset, state.last_block); buffer[(*offset)++] = state.zone_count; - ASSERT_LOG_ONLY(VDO_SLAB_DEPOT_HEADER_2_0.size == *offset - 
initial_offset, - "encoded block map component size must match header size"); + VDO_ASSERT_LOG_ONLY(VDO_SLAB_DEPOT_HEADER_2_0.size == *offset - initial_offset, + "encoded block map component size must match header size"); } /** @@ -618,9 +618,9 @@ static int decode_slab_depot_state_2_0(u8 *buffer, size_t *offset, decode_u64_le(buffer, offset, &last_block); zone_count = buffer[(*offset)++]; - result = ASSERT(VDO_SLAB_DEPOT_HEADER_2_0.size == *offset - initial_offset, - "decoded slab depot component size must match header size"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(VDO_SLAB_DEPOT_HEADER_2_0.size == *offset - initial_offset, + "decoded slab depot component size must match header size"); + if (result != VDO_SUCCESS) return result; *state = (struct slab_depot_state_2_0) { @@ -970,7 +970,7 @@ struct partition *vdo_get_known_partition(struct layout *layout, enum partition_ struct partition *partition; int result = vdo_get_partition(layout, id, &partition); - ASSERT_LOG_ONLY(result == VDO_SUCCESS, "layout has expected partition: %u", id); + VDO_ASSERT_LOG_ONLY(result == VDO_SUCCESS, "layout has expected partition: %u", id); return partition; } @@ -982,8 +982,8 @@ static void encode_layout(u8 *buffer, size_t *offset, const struct layout *layou struct header header = VDO_LAYOUT_HEADER_3_0; BUILD_BUG_ON(sizeof(enum partition_id) != sizeof(u8)); - ASSERT_LOG_ONLY(layout->num_partitions <= U8_MAX, - "layout partition count must fit in a byte"); + VDO_ASSERT_LOG_ONLY(layout->num_partitions <= U8_MAX, + "layout partition count must fit in a byte"); vdo_encode_header(buffer, offset, &header); @@ -992,8 +992,8 @@ static void encode_layout(u8 *buffer, size_t *offset, const struct layout *layou encode_u64_le(buffer, offset, layout->last_free); buffer[(*offset)++] = layout->num_partitions; - ASSERT_LOG_ONLY(sizeof(struct layout_3_0) == *offset - initial_offset, - "encoded size of a layout header must match structure"); + VDO_ASSERT_LOG_ONLY(sizeof(struct layout_3_0) == *offset - initial_offset, + "encoded size of a layout header must match structure"); for (partition = layout->head; partition != NULL; partition = partition->next) { buffer[(*offset)++] = partition->id; @@ -1003,8 +1003,8 @@ static void encode_layout(u8 *buffer, size_t *offset, const struct layout *layou encode_u64_le(buffer, offset, partition->count); } - ASSERT_LOG_ONLY(header.size == *offset - initial_offset, - "encoded size of a layout must match header size"); + VDO_ASSERT_LOG_ONLY(header.size == *offset - initial_offset, + "encoded size of a layout must match header size"); } static int decode_layout(u8 *buffer, size_t *offset, physical_block_number_t start, @@ -1035,8 +1035,8 @@ static int decode_layout(u8 *buffer, size_t *offset, physical_block_number_t sta .partition_count = partition_count, }; - result = ASSERT(sizeof(struct layout_3_0) == *offset - initial_offset, - "decoded size of a layout header must match structure"); + result = VDO_ASSERT(sizeof(struct layout_3_0) == *offset - initial_offset, + "decoded size of a layout header must match structure"); if (result != VDO_SUCCESS) return result; @@ -1208,29 +1208,29 @@ int vdo_validate_config(const struct vdo_config *config, struct slab_config slab_config; int result; - result = ASSERT(config->slab_size > 0, "slab size unspecified"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->slab_size > 0, "slab size unspecified"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(is_power_of_2(config->slab_size), - "slab size must be a power of two"); - 
if (result != UDS_SUCCESS) + result = VDO_ASSERT(is_power_of_2(config->slab_size), + "slab size must be a power of two"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(config->slab_size <= (1 << MAX_VDO_SLAB_BITS), - "slab size must be less than or equal to 2^%d", - MAX_VDO_SLAB_BITS); + result = VDO_ASSERT(config->slab_size <= (1 << MAX_VDO_SLAB_BITS), + "slab size must be less than or equal to 2^%d", + MAX_VDO_SLAB_BITS); if (result != VDO_SUCCESS) return result; - result = ASSERT(config->slab_journal_blocks >= MINIMUM_VDO_SLAB_JOURNAL_BLOCKS, - "slab journal size meets minimum size"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->slab_journal_blocks >= MINIMUM_VDO_SLAB_JOURNAL_BLOCKS, + "slab journal size meets minimum size"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(config->slab_journal_blocks <= config->slab_size, - "slab journal size is within expected bound"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->slab_journal_blocks <= config->slab_size, + "slab journal size is within expected bound"); + if (result != VDO_SUCCESS) return result; result = vdo_configure_slab(config->slab_size, config->slab_journal_blocks, @@ -1238,20 +1238,20 @@ int vdo_validate_config(const struct vdo_config *config, if (result != VDO_SUCCESS) return result; - result = ASSERT((slab_config.data_blocks >= 1), - "slab must be able to hold at least one block"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((slab_config.data_blocks >= 1), + "slab must be able to hold at least one block"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(config->physical_blocks > 0, "physical blocks unspecified"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->physical_blocks > 0, "physical blocks unspecified"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(config->physical_blocks <= MAXIMUM_VDO_PHYSICAL_BLOCKS, - "physical block count %llu exceeds maximum %llu", - (unsigned long long) config->physical_blocks, - (unsigned long long) MAXIMUM_VDO_PHYSICAL_BLOCKS); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->physical_blocks <= MAXIMUM_VDO_PHYSICAL_BLOCKS, + "physical block count %llu exceeds maximum %llu", + (unsigned long long) config->physical_blocks, + (unsigned long long) MAXIMUM_VDO_PHYSICAL_BLOCKS); + if (result != VDO_SUCCESS) return VDO_OUT_OF_RANGE; if (physical_block_count != config->physical_blocks) { @@ -1262,9 +1262,9 @@ int vdo_validate_config(const struct vdo_config *config, } if (logical_block_count > 0) { - result = ASSERT((config->logical_blocks > 0), - "logical blocks unspecified"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((config->logical_blocks > 0), + "logical blocks unspecified"); + if (result != VDO_SUCCESS) return result; if (logical_block_count != config->logical_blocks) { @@ -1275,19 +1275,19 @@ int vdo_validate_config(const struct vdo_config *config, } } - result = ASSERT(config->logical_blocks <= MAXIMUM_VDO_LOGICAL_BLOCKS, - "logical blocks too large"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->logical_blocks <= MAXIMUM_VDO_LOGICAL_BLOCKS, + "logical blocks too large"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(config->recovery_journal_size > 0, - "recovery journal size unspecified"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(config->recovery_journal_size > 0, + "recovery journal size unspecified"); + if (result != VDO_SUCCESS) return result; - result = ASSERT(is_power_of_2(config->recovery_journal_size), - "recovery journal 
size must be a power of two"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(is_power_of_2(config->recovery_journal_size), + "recovery journal size must be a power of two"); + if (result != VDO_SUCCESS) return result; return result; @@ -1341,8 +1341,8 @@ static int __must_check decode_components(u8 *buffer, size_t *offset, if (result != VDO_SUCCESS) return result; - ASSERT_LOG_ONLY(*offset == VDO_COMPONENT_DATA_OFFSET + VDO_COMPONENT_DATA_SIZE, - "All decoded component data was used"); + VDO_ASSERT_LOG_ONLY(*offset == VDO_COMPONENT_DATA_OFFSET + VDO_COMPONENT_DATA_SIZE, + "All decoded component data was used"); return VDO_SUCCESS; } @@ -1416,8 +1416,8 @@ static void vdo_encode_component_states(u8 *buffer, size_t *offset, encode_slab_depot_state_2_0(buffer, offset, states->slab_depot); encode_block_map_state_2_0(buffer, offset, states->block_map); - ASSERT_LOG_ONLY(*offset == VDO_COMPONENT_DATA_OFFSET + VDO_COMPONENT_DATA_SIZE, - "All super block component data was encoded"); + VDO_ASSERT_LOG_ONLY(*offset == VDO_COMPONENT_DATA_OFFSET + VDO_COMPONENT_DATA_SIZE, + "All super block component data was encoded"); } /** @@ -1440,8 +1440,8 @@ void vdo_encode_super_block(u8 *buffer, struct vdo_component_states *states) * Even though the buffer is a full block, to avoid the potential corruption from a torn * write, the entire encoding must fit in the first sector. */ - ASSERT_LOG_ONLY(offset <= VDO_SECTOR_SIZE, - "entire superblock must fit in one sector"); + VDO_ASSERT_LOG_ONLY(offset <= VDO_SECTOR_SIZE, + "entire superblock must fit in one sector"); } /** @@ -1476,8 +1476,8 @@ int vdo_decode_super_block(u8 *buffer) checksum = vdo_crc32(buffer, offset); decode_u32_le(buffer, &offset, &saved_checksum); - result = ASSERT(offset == VDO_SUPER_BLOCK_FIXED_SIZE + VDO_COMPONENT_DATA_SIZE, - "must have decoded entire superblock payload"); + result = VDO_ASSERT(offset == VDO_SUPER_BLOCK_FIXED_SIZE + VDO_COMPONENT_DATA_SIZE, + "must have decoded entire superblock payload"); if (result != VDO_SUCCESS) return result; diff --git a/drivers/md/dm-vdo/errors.c b/drivers/md/dm-vdo/errors.c index df2498553312..3b5fddad8ddf 100644 --- a/drivers/md/dm-vdo/errors.c +++ b/drivers/md/dm-vdo/errors.c @@ -281,8 +281,9 @@ int uds_register_error_block(const char *block_name, int first_error, .infos = infos, }; - result = ASSERT(first_error < next_free_error, "well-defined error block range"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(first_error < next_free_error, + "well-defined error block range"); + if (result != VDO_SUCCESS) return result; if (registered_errors.count == registered_errors.allocated) { diff --git a/drivers/md/dm-vdo/flush.c b/drivers/md/dm-vdo/flush.c index 8d8d9cf4a24c..e03679e4d1ba 100644 --- a/drivers/md/dm-vdo/flush.c +++ b/drivers/md/dm-vdo/flush.c @@ -59,8 +59,8 @@ struct flusher { */ static inline void assert_on_flusher_thread(struct flusher *flusher, const char *caller) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == flusher->thread_id), - "%s() called from flusher thread", caller); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == flusher->thread_id), + "%s() called from flusher thread", caller); } /** @@ -272,8 +272,8 @@ static void flush_vdo(struct vdo_completion *completion) int result; assert_on_flusher_thread(flusher, __func__); - result = ASSERT(vdo_is_state_normal(&flusher->state), - "flusher is in normal operation"); + result = VDO_ASSERT(vdo_is_state_normal(&flusher->state), + "flusher is in normal operation"); if (result != VDO_SUCCESS) { 
vdo_enter_read_only_mode(flusher->vdo, result); vdo_complete_flush(flush); @@ -330,11 +330,11 @@ void vdo_complete_flushes(struct flusher *flusher) if (flush->flush_generation >= oldest_active_generation) return; - ASSERT_LOG_ONLY((flush->flush_generation == - flusher->first_unacknowledged_generation), - "acknowledged next expected flush, %llu, was: %llu", - (unsigned long long) flusher->first_unacknowledged_generation, - (unsigned long long) flush->flush_generation); + VDO_ASSERT_LOG_ONLY((flush->flush_generation == + flusher->first_unacknowledged_generation), + "acknowledged next expected flush, %llu, was: %llu", + (unsigned long long) flusher->first_unacknowledged_generation, + (unsigned long long) flush->flush_generation); vdo_waitq_dequeue_waiter(&flusher->pending_flushes); vdo_complete_flush(flush); flusher->first_unacknowledged_generation++; @@ -400,8 +400,8 @@ void vdo_launch_flush(struct vdo *vdo, struct bio *bio) struct flusher *flusher = vdo->flusher; const struct admin_state_code *code = vdo_get_admin_state_code(&flusher->state); - ASSERT_LOG_ONLY(!code->quiescent, "Flushing not allowed in state %s", - code->name); + VDO_ASSERT_LOG_ONLY(!code->quiescent, "Flushing not allowed in state %s", + code->name); spin_lock(&flusher->lock); diff --git a/drivers/md/dm-vdo/funnel-workqueue.c b/drivers/md/dm-vdo/funnel-workqueue.c index a923432f0a37..03296e7fec12 100644 --- a/drivers/md/dm-vdo/funnel-workqueue.c +++ b/drivers/md/dm-vdo/funnel-workqueue.c @@ -110,14 +110,14 @@ static struct vdo_completion *poll_for_completion(struct simple_work_queue *queu static void enqueue_work_queue_completion(struct simple_work_queue *queue, struct vdo_completion *completion) { - ASSERT_LOG_ONLY(completion->my_queue == NULL, - "completion %px (fn %px) to enqueue (%px) is not already queued (%px)", - completion, completion->callback, queue, completion->my_queue); + VDO_ASSERT_LOG_ONLY(completion->my_queue == NULL, + "completion %px (fn %px) to enqueue (%px) is not already queued (%px)", + completion, completion->callback, queue, completion->my_queue); if (completion->priority == VDO_WORK_Q_DEFAULT_PRIORITY) completion->priority = queue->common.type->default_priority; - if (ASSERT(completion->priority <= queue->common.type->max_priority, - "priority is in range for queue") != VDO_SUCCESS) + if (VDO_ASSERT(completion->priority <= queue->common.type->max_priority, + "priority is in range for queue") != VDO_SUCCESS) completion->priority = 0; completion->my_queue = &queue->common; @@ -222,9 +222,9 @@ static struct vdo_completion *wait_for_next_completion(struct simple_work_queue static void process_completion(struct simple_work_queue *queue, struct vdo_completion *completion) { - if (ASSERT(completion->my_queue == &queue->common, - "completion %px from queue %px marked as being in this queue (%px)", - completion, queue, completion->my_queue) == UDS_SUCCESS) + if (VDO_ASSERT(completion->my_queue == &queue->common, + "completion %px from queue %px marked as being in this queue (%px)", + completion, queue, completion->my_queue) == VDO_SUCCESS) completion->my_queue = NULL; vdo_run_completion(completion); @@ -319,9 +319,9 @@ static int make_simple_work_queue(const char *thread_name_prefix, const char *na struct task_struct *thread = NULL; int result; - ASSERT_LOG_ONLY((type->max_priority <= VDO_WORK_Q_MAX_PRIORITY), - "queue priority count %u within limit %u", type->max_priority, - VDO_WORK_Q_MAX_PRIORITY); + VDO_ASSERT_LOG_ONLY((type->max_priority <= VDO_WORK_Q_MAX_PRIORITY), + "queue priority count %u within 
limit %u", type->max_priority, + VDO_WORK_Q_MAX_PRIORITY); result = vdo_allocate(1, struct simple_work_queue, "simple work queue", &queue); if (result != VDO_SUCCESS) diff --git a/drivers/md/dm-vdo/io-submitter.c b/drivers/md/dm-vdo/io-submitter.c index e82b4a8c6fc4..61bb48068c3a 100644 --- a/drivers/md/dm-vdo/io-submitter.c +++ b/drivers/md/dm-vdo/io-submitter.c @@ -94,7 +94,7 @@ static void count_all_bios(struct vio *vio, struct bio *bio) */ static void assert_in_bio_zone(struct vio *vio) { - ASSERT_LOG_ONLY(!in_interrupt(), "not in interrupt context"); + VDO_ASSERT_LOG_ONLY(!in_interrupt(), "not in interrupt context"); assert_vio_in_bio_zone(vio); } @@ -300,7 +300,7 @@ static bool try_bio_map_merge(struct vio *vio) mutex_unlock(&bio_queue_data->lock); /* We don't care about failure of int_map_put in this case. */ - ASSERT_LOG_ONLY(result == VDO_SUCCESS, "bio map insertion succeeds"); + VDO_ASSERT_LOG_ONLY(result == VDO_SUCCESS, "bio map insertion succeeds"); return merged; } @@ -345,8 +345,8 @@ void __submit_metadata_vio(struct vio *vio, physical_block_number_t physical, const struct admin_state_code *code = vdo_get_admin_state(completion->vdo); - ASSERT_LOG_ONLY(!code->quiescent, "I/O not allowed in state %s", code->name); - ASSERT_LOG_ONLY(vio->bio->bi_next == NULL, "metadata bio has no next bio"); + VDO_ASSERT_LOG_ONLY(!code->quiescent, "I/O not allowed in state %s", code->name); + VDO_ASSERT_LOG_ONLY(vio->bio->bi_next == NULL, "metadata bio has no next bio"); vdo_reset_completion(completion); completion->error_handler = error_handler; diff --git a/drivers/md/dm-vdo/logical-zone.c b/drivers/md/dm-vdo/logical-zone.c index ca5bc3be7978..300f9d2d2d5c 100644 --- a/drivers/md/dm-vdo/logical-zone.c +++ b/drivers/md/dm-vdo/logical-zone.c @@ -142,8 +142,8 @@ void vdo_free_logical_zones(struct logical_zones *zones) static inline void assert_on_zone_thread(struct logical_zone *zone, const char *what) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == zone->thread_id), - "%s() called on correct thread", what); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == zone->thread_id), + "%s() called on correct thread", what); } /** @@ -247,10 +247,10 @@ void vdo_increment_logical_zone_flush_generation(struct logical_zone *zone, sequence_number_t expected_generation) { assert_on_zone_thread(zone, __func__); - ASSERT_LOG_ONLY((zone->flush_generation == expected_generation), - "logical zone %u flush generation %llu should be %llu before increment", - zone->zone_number, (unsigned long long) zone->flush_generation, - (unsigned long long) expected_generation); + VDO_ASSERT_LOG_ONLY((zone->flush_generation == expected_generation), + "logical zone %u flush generation %llu should be %llu before increment", + zone->zone_number, (unsigned long long) zone->flush_generation, + (unsigned long long) expected_generation); zone->flush_generation++; zone->ios_in_flush_generation = 0; @@ -267,7 +267,7 @@ void vdo_acquire_flush_generation_lock(struct data_vio *data_vio) struct logical_zone *zone = data_vio->logical.zone; assert_on_zone_thread(zone, __func__); - ASSERT_LOG_ONLY(vdo_is_state_normal(&zone->state), "vdo state is normal"); + VDO_ASSERT_LOG_ONLY(vdo_is_state_normal(&zone->state), "vdo state is normal"); data_vio->flush_generation = zone->flush_generation; list_add_tail(&data_vio->write_entry, &zone->write_vios); @@ -332,10 +332,10 @@ void vdo_release_flush_generation_lock(struct data_vio *data_vio) return; list_del_init(&data_vio->write_entry); - ASSERT_LOG_ONLY((zone->oldest_active_generation <= 
data_vio->flush_generation), - "data_vio releasing lock on generation %llu is not older than oldest active generation %llu", - (unsigned long long) data_vio->flush_generation, - (unsigned long long) zone->oldest_active_generation); + VDO_ASSERT_LOG_ONLY((zone->oldest_active_generation <= data_vio->flush_generation), + "data_vio releasing lock on generation %llu is not older than oldest active generation %llu", + (unsigned long long) data_vio->flush_generation, + (unsigned long long) zone->oldest_active_generation); if (!update_oldest_active_generation(zone) || zone->notifying) return; diff --git a/drivers/md/dm-vdo/memory-alloc.c b/drivers/md/dm-vdo/memory-alloc.c index dd5acc582fb3..62bb717c4c50 100644 --- a/drivers/md/dm-vdo/memory-alloc.c +++ b/drivers/md/dm-vdo/memory-alloc.c @@ -385,12 +385,12 @@ void vdo_memory_init(void) void vdo_memory_exit(void) { - ASSERT_LOG_ONLY(memory_stats.kmalloc_bytes == 0, - "kmalloc memory used (%zd bytes in %zd blocks) is returned to the kernel", - memory_stats.kmalloc_bytes, memory_stats.kmalloc_blocks); - ASSERT_LOG_ONLY(memory_stats.vmalloc_bytes == 0, - "vmalloc memory used (%zd bytes in %zd blocks) is returned to the kernel", - memory_stats.vmalloc_bytes, memory_stats.vmalloc_blocks); + VDO_ASSERT_LOG_ONLY(memory_stats.kmalloc_bytes == 0, + "kmalloc memory used (%zd bytes in %zd blocks) is returned to the kernel", + memory_stats.kmalloc_bytes, memory_stats.kmalloc_blocks); + VDO_ASSERT_LOG_ONLY(memory_stats.vmalloc_bytes == 0, + "vmalloc memory used (%zd bytes in %zd blocks) is returned to the kernel", + memory_stats.vmalloc_bytes, memory_stats.vmalloc_blocks); uds_log_debug("peak usage %zd bytes", memory_stats.peak_bytes); } diff --git a/drivers/md/dm-vdo/packer.c b/drivers/md/dm-vdo/packer.c index 5774d8fd5c5a..4d45243161a6 100644 --- a/drivers/md/dm-vdo/packer.c +++ b/drivers/md/dm-vdo/packer.c @@ -86,8 +86,8 @@ int vdo_get_compressed_block_fragment(enum block_mapping_state mapping_state, */ static inline void assert_on_packer_thread(struct packer *packer, const char *caller) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == packer->thread_id), - "%s() called from packer thread", caller); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == packer->thread_id), + "%s() called from packer thread", caller); } /** @@ -569,9 +569,9 @@ void vdo_attempt_packing(struct data_vio *data_vio) assert_on_packer_thread(packer, __func__); - result = ASSERT((status.stage == DATA_VIO_COMPRESSING), - "attempt to pack data_vio not ready for packing, stage: %u", - status.stage); + result = VDO_ASSERT((status.stage == DATA_VIO_COMPRESSING), + "attempt to pack data_vio not ready for packing, stage: %u", + status.stage); if (result != VDO_SUCCESS) return; @@ -671,7 +671,7 @@ void vdo_remove_lock_holder_from_packer(struct vdo_completion *completion) lock_holder = vdo_forget(data_vio->compression.lock_holder); bin = lock_holder->compression.bin; - ASSERT_LOG_ONLY((bin != NULL), "data_vio in packer has a bin"); + VDO_ASSERT_LOG_ONLY((bin != NULL), "data_vio in packer has a bin"); slot = lock_holder->compression.slot; bin->slots_used--; diff --git a/drivers/md/dm-vdo/permassert.h b/drivers/md/dm-vdo/permassert.h index ee978bc115ec..8774dde7927a 100644 --- a/drivers/md/dm-vdo/permassert.h +++ b/drivers/md/dm-vdo/permassert.h @@ -13,7 +13,6 @@ /* Utilities for asserting that certain conditions are met */ #define STRINGIFY(X) #X -#define STRINGIFY_VALUE(X) STRINGIFY(X) /* * A hack to apply the "warn if unused" attribute to an integral expression. 
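
The next hunk carries the macro definitions themselves. The conventions being standardized are worth spelling out: VDO_ASSERT() evaluates to a status that the caller is obliged to check (hence the __must_check wrapper), while VDO_ASSERT_LOG_ONLY() logs a failure and deliberately discards the status. A self-contained sketch of both calling patterns as they appear throughout this patch; the TOY_* stubs are simplified stand-ins, not the real permassert.h macros:

#include <stdio.h>

#define TOY_VDO_SUCCESS 0
#define TOY_ASSERTION_FAILED (-1)	/* stand-in error code */

static int toy_assertion_failed(const char *expr, const char *msg)
{
	fprintf(stderr, "assertion \"%s\" failed: %s\n", expr, msg);
	return TOY_ASSERTION_FAILED;
}

#define TOY_VDO_ASSERT(expr, msg) \
	((expr) ? TOY_VDO_SUCCESS : toy_assertion_failed(#expr, msg))

#define TOY_VDO_ASSERT_LOG_ONLY(expr, msg) \
	((void) TOY_VDO_ASSERT(expr, msg))

/* Recoverable check: propagate the status, as in vdo_validate_config(). */
static int toy_validate(int slab_size)
{
	int result = TOY_VDO_ASSERT(slab_size > 0, "slab size unspecified");

	if (result != TOY_VDO_SUCCESS)
		return result;

	/* Invariant check: log only, no control-flow change. */
	TOY_VDO_ASSERT_LOG_ONLY((slab_size & (slab_size - 1)) == 0,
				"slab size is a power of two");
	return TOY_VDO_SUCCESS;
}
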
@@ -23,19 +22,23 @@ * expression. With optimization enabled, this function contributes no additional instructions, but * the warn_unused_result attribute still applies to the code calling it. */ -static inline int __must_check uds_must_use(int value) +static inline int __must_check vdo_must_use(int value) { return value; } /* Assert that an expression is true and return an error if it is not. */ -#define ASSERT(expr, ...) uds_must_use(__UDS_ASSERT(expr, __VA_ARGS__)) +#define VDO_ASSERT(expr, ...) vdo_must_use(__VDO_ASSERT(expr, __VA_ARGS__)) /* Log a message if the expression is not true. */ -#define ASSERT_LOG_ONLY(expr, ...) __UDS_ASSERT(expr, __VA_ARGS__) +#define VDO_ASSERT_LOG_ONLY(expr, ...) __VDO_ASSERT(expr, __VA_ARGS__) -#define __UDS_ASSERT(expr, ...) \ - (likely(expr) ? UDS_SUCCESS \ +/* For use by UDS */ +#define ASSERT(expr, ...) VDO_ASSERT(expr, __VA_ARGS__) +#define ASSERT_LOG_ONLY(expr, ...) __VDO_ASSERT(expr, __VA_ARGS__) + +#define __VDO_ASSERT(expr, ...) \ + (likely(expr) ? VDO_SUCCESS \ : uds_assertion_failed(STRINGIFY(expr), __FILE__, __LINE__, __VA_ARGS__)) /* Log an assertion failure message. */ diff --git a/drivers/md/dm-vdo/physical-zone.c b/drivers/md/dm-vdo/physical-zone.c index fadcea23288e..6678f472fb44 100644 --- a/drivers/md/dm-vdo/physical-zone.c +++ b/drivers/md/dm-vdo/physical-zone.c @@ -80,13 +80,13 @@ static inline void set_pbn_lock_type(struct pbn_lock *lock, enum pbn_lock_type t */ void vdo_downgrade_pbn_write_lock(struct pbn_lock *lock, bool compressed_write) { - ASSERT_LOG_ONLY(!vdo_is_pbn_read_lock(lock), - "PBN lock must not already have been downgraded"); - ASSERT_LOG_ONLY(!has_lock_type(lock, VIO_BLOCK_MAP_WRITE_LOCK), - "must not downgrade block map write locks"); - ASSERT_LOG_ONLY(lock->holder_count == 1, - "PBN write lock should have one holder but has %u", - lock->holder_count); + VDO_ASSERT_LOG_ONLY(!vdo_is_pbn_read_lock(lock), + "PBN lock must not already have been downgraded"); + VDO_ASSERT_LOG_ONLY(!has_lock_type(lock, VIO_BLOCK_MAP_WRITE_LOCK), + "must not downgrade block map write locks"); + VDO_ASSERT_LOG_ONLY(lock->holder_count == 1, + "PBN write lock should have one holder but has %u", + lock->holder_count); /* * data_vio write locks are downgraded in place--the writer retains the hold on the lock. 
* If this was a compressed write, the holder has not yet journaled its own inc ref, @@ -128,8 +128,8 @@ bool vdo_claim_pbn_lock_increment(struct pbn_lock *lock) */ void vdo_assign_pbn_lock_provisional_reference(struct pbn_lock *lock) { - ASSERT_LOG_ONLY(!lock->has_provisional_reference, - "lock does not have a provisional reference"); + VDO_ASSERT_LOG_ONLY(!lock->has_provisional_reference, + "lock does not have a provisional reference"); lock->has_provisional_reference = true; } @@ -221,7 +221,7 @@ static void return_pbn_lock_to_pool(struct pbn_lock_pool *pool, struct pbn_lock INIT_LIST_HEAD(&idle->entry); list_add_tail(&idle->entry, &pool->idle_list); - ASSERT_LOG_ONLY(pool->borrowed > 0, "shouldn't return more than borrowed"); + VDO_ASSERT_LOG_ONLY(pool->borrowed > 0, "shouldn't return more than borrowed"); pool->borrowed -= 1; } @@ -267,9 +267,9 @@ static void free_pbn_lock_pool(struct pbn_lock_pool *pool) if (pool == NULL) return; - ASSERT_LOG_ONLY(pool->borrowed == 0, - "All PBN locks must be returned to the pool before it is freed, but %zu locks are still on loan", - pool->borrowed); + VDO_ASSERT_LOG_ONLY(pool->borrowed == 0, + "All PBN locks must be returned to the pool before it is freed, but %zu locks are still on loan", + pool->borrowed); vdo_free(pool); } @@ -298,8 +298,8 @@ static int __must_check borrow_pbn_lock_from_pool(struct pbn_lock_pool *pool, "no free PBN locks left to borrow"); pool->borrowed += 1; - result = ASSERT(!list_empty(&pool->idle_list), - "idle list should not be empty if pool not at capacity"); + result = VDO_ASSERT(!list_empty(&pool->idle_list), + "idle list should not be empty if pool not at capacity"); if (result != VDO_SUCCESS) return result; @@ -447,7 +447,7 @@ int vdo_attempt_physical_zone_pbn_lock(struct physical_zone *zone, result = borrow_pbn_lock_from_pool(zone->lock_pool, type, &new_lock); if (result != VDO_SUCCESS) { - ASSERT_LOG_ONLY(false, "must always be able to borrow a PBN lock"); + VDO_ASSERT_LOG_ONLY(false, "must always be able to borrow a PBN lock"); return result; } @@ -461,8 +461,8 @@ int vdo_attempt_physical_zone_pbn_lock(struct physical_zone *zone, if (lock != NULL) { /* The lock is already held, so we don't need the borrowed one. 
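
The pool assertions above ("shouldn't return more than borrowed", "All PBN locks must be returned to the pool before it is freed", "idle list should not be empty if pool not at capacity") all defend one counting invariant. A toy model of that invariant, with an array-backed in_use flag standing in for the real idle ring; a hypothetical simplification, not the dm-vdo pbn_lock_pool:

#include <assert.h>
#include <stddef.h>

#define TOY_POOL_SIZE 4

struct toy_lock {
	int in_use;
};

struct toy_pool {
	struct toy_lock locks[TOY_POOL_SIZE];
	size_t borrowed;
};

static struct toy_lock *toy_borrow(struct toy_pool *pool)
{
	size_t i;

	if (pool->borrowed == TOY_POOL_SIZE)
		return NULL;	/* callers treat exhaustion as a hard error */

	pool->borrowed++;
	for (i = 0; i < TOY_POOL_SIZE; i++) {
		if (!pool->locks[i].in_use) {
			pool->locks[i].in_use = 1;
			return &pool->locks[i];
		}
	}

	/* Pool is not at capacity, so a free lock must exist. */
	assert(0);
	return NULL;
}

static void toy_return(struct toy_pool *pool, struct toy_lock *lock)
{
	/* Never return more locks than were borrowed. */
	assert(pool->borrowed > 0);
	lock->in_use = 0;
	pool->borrowed--;
}

static void toy_free_pool(struct toy_pool *pool)
{
	/* Every lock must be back in the pool before it is freed. */
	assert(pool->borrowed == 0);
}
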
*/ return_pbn_lock_to_pool(zone->lock_pool, vdo_forget(new_lock)); - result = ASSERT(lock->holder_count > 0, "physical block %llu lock held", - (unsigned long long) pbn); + result = VDO_ASSERT(lock->holder_count > 0, "physical block %llu lock held", + (unsigned long long) pbn); if (result != VDO_SUCCESS) return result; *lock_ptr = lock; @@ -485,8 +485,8 @@ static int allocate_and_lock_block(struct allocation *allocation) int result; struct pbn_lock *lock; - ASSERT_LOG_ONLY(allocation->lock == NULL, - "must not allocate a block while already holding a lock on one"); + VDO_ASSERT_LOG_ONLY(allocation->lock == NULL, + "must not allocate a block while already holding a lock on one"); result = vdo_allocate_block(allocation->zone->allocator, &allocation->pbn); if (result != VDO_SUCCESS) @@ -617,8 +617,8 @@ void vdo_release_physical_zone_pbn_lock(struct physical_zone *zone, if (lock == NULL) return; - ASSERT_LOG_ONLY(lock->holder_count > 0, - "should not be releasing a lock that is not held"); + VDO_ASSERT_LOG_ONLY(lock->holder_count > 0, + "should not be releasing a lock that is not held"); lock->holder_count -= 1; if (lock->holder_count > 0) { @@ -627,8 +627,8 @@ void vdo_release_physical_zone_pbn_lock(struct physical_zone *zone, } holder = vdo_int_map_remove(zone->pbn_operations, locked_pbn); - ASSERT_LOG_ONLY((lock == holder), "physical block lock mismatch for block %llu", - (unsigned long long) locked_pbn); + VDO_ASSERT_LOG_ONLY((lock == holder), "physical block lock mismatch for block %llu", + (unsigned long long) locked_pbn); release_pbn_lock_provisional_reference(lock, locked_pbn, zone->allocator); return_pbn_lock_to_pool(zone->lock_pool, lock); diff --git a/drivers/md/dm-vdo/priority-table.c b/drivers/md/dm-vdo/priority-table.c index fc99268d2437..42d3d8d0e4b5 100644 --- a/drivers/md/dm-vdo/priority-table.c +++ b/drivers/md/dm-vdo/priority-table.c @@ -127,8 +127,8 @@ void vdo_reset_priority_table(struct priority_table *table) void vdo_priority_table_enqueue(struct priority_table *table, unsigned int priority, struct list_head *entry) { - ASSERT_LOG_ONLY((priority <= table->max_priority), - "entry priority must be valid for the table"); + VDO_ASSERT_LOG_ONLY((priority <= table->max_priority), + "entry priority must be valid for the table"); /* Append the entry to the queue in the specified bucket. 
*/ list_move_tail(entry, &table->buckets[priority].queue); diff --git a/drivers/md/dm-vdo/recovery-journal.c b/drivers/md/dm-vdo/recovery-journal.c index 615755697e60..6df373b88042 100644 --- a/drivers/md/dm-vdo/recovery-journal.c +++ b/drivers/md/dm-vdo/recovery-journal.c @@ -119,8 +119,8 @@ static bool is_journal_zone_locked(struct recovery_journal *journal, /* Pairs with barrier in vdo_release_journal_entry_lock() */ smp_rmb(); - ASSERT_LOG_ONLY((decrements <= journal_value), - "journal zone lock counter must not underflow"); + VDO_ASSERT_LOG_ONLY((decrements <= journal_value), + "journal zone lock counter must not underflow"); return (journal_value != decrements); } @@ -150,8 +150,8 @@ void vdo_release_recovery_journal_block_reference(struct recovery_journal *journ lock_number = vdo_get_recovery_journal_block_number(journal, sequence_number); current_value = get_counter(journal, lock_number, zone_type, zone_id); - ASSERT_LOG_ONLY((*current_value >= 1), - "decrement of lock counter must not underflow"); + VDO_ASSERT_LOG_ONLY((*current_value >= 1), + "decrement of lock counter must not underflow"); *current_value -= 1; if (zone_type == VDO_ZONE_TYPE_JOURNAL) { @@ -254,8 +254,8 @@ static inline bool __must_check is_block_full(const struct recovery_journal_bloc static void assert_on_journal_thread(struct recovery_journal *journal, const char *function_name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == journal->thread_id), - "%s() called on journal thread", function_name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == journal->thread_id), + "%s() called on journal thread", function_name); } /** @@ -353,14 +353,14 @@ static void check_for_drain_complete(struct recovery_journal *journal) if (vdo_is_state_saving(&journal->state)) { if (journal->active_block != NULL) { - ASSERT_LOG_ONLY(((result == VDO_READ_ONLY) || - !is_block_dirty(journal->active_block)), - "journal being saved has clean active block"); + VDO_ASSERT_LOG_ONLY(((result == VDO_READ_ONLY) || + !is_block_dirty(journal->active_block)), + "journal being saved has clean active block"); recycle_journal_block(journal->active_block); } - ASSERT_LOG_ONLY(list_empty(&journal->active_tail_blocks), - "all blocks in a journal being saved must be inactive"); + VDO_ASSERT_LOG_ONLY(list_empty(&journal->active_tail_blocks), + "all blocks in a journal being saved must be inactive"); } vdo_finish_draining_with_result(&journal->state, result); @@ -800,8 +800,8 @@ void vdo_free_recovery_journal(struct recovery_journal *journal) * requires opening before use. 
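
The underflow and overflow checks above guard the recovery journal's per-block lock counts: each non-journal zone takes u16 references on a journal block, and the block can only be reaped once every reference is released. A sketch of that accounting, reduced to a single zone with no atomics; the toy_* names are hypothetical:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_lock_counter {
	uint16_t count;
};

static void toy_acquire_block_reference(struct toy_lock_counter *c)
{
	/* Mirrors "increment of lock counter must not overflow". */
	assert(c->count < UINT16_MAX);
	c->count++;
}

static bool toy_release_block_reference(struct toy_lock_counter *c)
{
	/* Mirrors "decrement of lock counter must not underflow". */
	assert(c->count >= 1);
	c->count--;
	return (c->count == 0);	/* true once the block may be reaped */
}
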
*/ if (!vdo_is_state_quiescent(&journal->state)) { - ASSERT_LOG_ONLY(list_empty(&journal->active_tail_blocks), - "journal being freed has no active tail blocks"); + VDO_ASSERT_LOG_ONLY(list_empty(&journal->active_tail_blocks), + "journal being freed has no active tail blocks"); } else if (!vdo_is_state_saved(&journal->state) && !list_empty(&journal->active_tail_blocks)) { uds_log_warning("journal being freed has uncommitted entries"); @@ -989,8 +989,8 @@ static void initialize_lock_count(struct recovery_journal *journal) atomic_t *decrement_counter = get_decrement_counter(journal, lock_number); journal_value = get_counter(journal, lock_number, VDO_ZONE_TYPE_JOURNAL, 0); - ASSERT_LOG_ONLY((*journal_value == atomic_read(decrement_counter)), - "count to be initialized not in use"); + VDO_ASSERT_LOG_ONLY((*journal_value == atomic_read(decrement_counter)), + "count to be initialized not in use"); *journal_value = journal->entries_per_block + 1; atomic_set(decrement_counter, 0); } @@ -1175,13 +1175,13 @@ static void continue_committed_waiter(struct vdo_waiter *waiter, void *context) int result = (is_read_only(journal) ? VDO_READ_ONLY : VDO_SUCCESS); bool has_decrement; - ASSERT_LOG_ONLY(vdo_before_journal_point(&journal->commit_point, - &data_vio->recovery_journal_point), - "DataVIOs released from recovery journal in order. Recovery journal point is (%llu, %u), but commit waiter point is (%llu, %u)", - (unsigned long long) journal->commit_point.sequence_number, - journal->commit_point.entry_count, - (unsigned long long) data_vio->recovery_journal_point.sequence_number, - data_vio->recovery_journal_point.entry_count); + VDO_ASSERT_LOG_ONLY(vdo_before_journal_point(&journal->commit_point, + &data_vio->recovery_journal_point), + "DataVIOs released from recovery journal in order. 
Recovery journal point is (%llu, %u), but commit waiter point is (%llu, %u)", + (unsigned long long) journal->commit_point.sequence_number, + journal->commit_point.entry_count, + (unsigned long long) data_vio->recovery_journal_point.sequence_number, + data_vio->recovery_journal_point.entry_count); journal->commit_point = data_vio->recovery_journal_point; data_vio->last_async_operation = VIO_ASYNC_OP_UPDATE_REFERENCE_COUNTS; @@ -1281,8 +1281,8 @@ static void complete_write(struct vdo_completion *completion) journal->last_write_acknowledged = block->sequence_number; last_active_block = get_journal_block(&journal->active_tail_blocks); - ASSERT_LOG_ONLY((block->sequence_number >= last_active_block->sequence_number), - "completed journal write is still active"); + VDO_ASSERT_LOG_ONLY((block->sequence_number >= last_active_block->sequence_number), + "completed journal write is still active"); notify_commit_waiters(journal); @@ -1456,8 +1456,8 @@ void vdo_add_recovery_journal_entry(struct recovery_journal *journal, return; } - ASSERT_LOG_ONLY(data_vio->recovery_sequence_number == 0, - "journal lock not held for new entry"); + VDO_ASSERT_LOG_ONLY(data_vio->recovery_sequence_number == 0, + "journal lock not held for new entry"); vdo_advance_journal_point(&journal->append_point, journal->entries_per_block); vdo_waitq_enqueue_waiter(&journal->entry_waiters, &data_vio->waiter); @@ -1564,13 +1564,13 @@ void vdo_acquire_recovery_journal_block_reference(struct recovery_journal *journ if (sequence_number == 0) return; - ASSERT_LOG_ONLY((zone_type != VDO_ZONE_TYPE_JOURNAL), - "invalid lock count increment from journal zone"); + VDO_ASSERT_LOG_ONLY((zone_type != VDO_ZONE_TYPE_JOURNAL), + "invalid lock count increment from journal zone"); lock_number = vdo_get_recovery_journal_block_number(journal, sequence_number); current_value = get_counter(journal, lock_number, zone_type, zone_id); - ASSERT_LOG_ONLY(*current_value < U16_MAX, - "increment of lock counter must not overflow"); + VDO_ASSERT_LOG_ONLY(*current_value < U16_MAX, + "increment of lock counter must not overflow"); if (*current_value == 0) { /* diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c index bfcdedeedb86..c7abb8078336 100644 --- a/drivers/md/dm-vdo/repair.c +++ b/drivers/md/dm-vdo/repair.c @@ -976,8 +976,8 @@ find_entry_starting_next_page(struct repair_completion *repair, if (needs_sort) { struct numbered_block_mapping *just_sorted_entry = sort_next_heap_element(repair); - ASSERT_LOG_ONLY(just_sorted_entry < current_entry, - "heap is returning elements in an unexpected order"); + VDO_ASSERT_LOG_ONLY(just_sorted_entry < current_entry, + "heap is returning elements in an unexpected order"); } current_entry--; @@ -1129,8 +1129,8 @@ static void recover_block_map(struct vdo_completion *completion) repair->current_entry = &repair->entries[repair->block_map_entry_count - 1]; first_sorted_entry = sort_next_heap_element(repair); - ASSERT_LOG_ONLY(first_sorted_entry == repair->current_entry, - "heap is returning elements in an unexpected order"); + VDO_ASSERT_LOG_ONLY(first_sorted_entry == repair->current_entry, + "heap is returning elements in an unexpected order"); /* Prevent any page from being processed until all pages have been launched. 
*/ repair->launching = true; @@ -1489,8 +1489,8 @@ static int extract_new_mappings(struct repair_completion *repair) repair->block_map_entry_count++; } - result = ASSERT((repair->block_map_entry_count <= repair->entry_count), - "approximate entry count is an upper bound"); + result = VDO_ASSERT((repair->block_map_entry_count <= repair->entry_count), + "approximate entry count is an upper bound"); if (result != VDO_SUCCESS) vdo_enter_read_only_mode(vdo, result); diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c index 97208c9e0062..00746de09c12 100644 --- a/drivers/md/dm-vdo/slab-depot.c +++ b/drivers/md/dm-vdo/slab-depot.c @@ -149,7 +149,7 @@ static void mark_slab_journal_dirty(struct slab_journal *journal, sequence_numbe struct slab_journal *dirty_journal; struct list_head *dirty_list = &journal->slab->allocator->dirty_slab_journals; - ASSERT_LOG_ONLY(journal->recovery_lock == 0, "slab journal was clean"); + VDO_ASSERT_LOG_ONLY(journal->recovery_lock == 0, "slab journal was clean"); journal->recovery_lock = lock; list_for_each_entry_reverse(dirty_journal, dirty_list, dirty_entry) { @@ -216,7 +216,7 @@ static u8 __must_check compute_fullness_hint(struct slab_depot *depot, { block_count_t hint; - ASSERT_LOG_ONLY((free_blocks < (1 << 23)), "free blocks must be less than 2^23"); + VDO_ASSERT_LOG_ONLY((free_blocks < (1 << 23)), "free blocks must be less than 2^23"); if (free_blocks == 0) return 0; @@ -532,13 +532,13 @@ static void adjust_slab_journal_block_reference(struct slab_journal *journal, return; } - ASSERT_LOG_ONLY((adjustment != 0), "adjustment must be non-zero"); + VDO_ASSERT_LOG_ONLY((adjustment != 0), "adjustment must be non-zero"); lock = get_lock(journal, sequence_number); if (adjustment < 0) { - ASSERT_LOG_ONLY((-adjustment <= lock->count), - "adjustment %d of lock count %u for slab journal block %llu must not underflow", - adjustment, lock->count, - (unsigned long long) sequence_number); + VDO_ASSERT_LOG_ONLY((-adjustment <= lock->count), + "adjustment %d of lock count %u for slab journal block %llu must not underflow", + adjustment, lock->count, + (unsigned long long) sequence_number); } lock->count += adjustment; @@ -661,16 +661,16 @@ static void reopen_slab_journal(struct vdo_slab *slab) struct slab_journal *journal = &slab->journal; sequence_number_t block; - ASSERT_LOG_ONLY(journal->tail_header.entry_count == 0, - "vdo_slab journal's active block empty before reopening"); + VDO_ASSERT_LOG_ONLY(journal->tail_header.entry_count == 0, + "vdo_slab journal's active block empty before reopening"); journal->head = journal->tail; initialize_journal_state(journal); /* Ensure no locks are spuriously held on an empty journal. */ for (block = 1; block <= journal->size; block++) { - ASSERT_LOG_ONLY((get_lock(journal, block)->count == 0), - "Scrubbed journal's block %llu is not locked", - (unsigned long long) block); + VDO_ASSERT_LOG_ONLY((get_lock(journal, block)->count == 0), + "Scrubbed journal's block %llu is not locked", + (unsigned long long) block); } add_entries(journal); @@ -757,7 +757,7 @@ static void write_slab_journal_block(struct vdo_waiter *waiter, void *context) /* Copy the tail block into the vio. 
*/ memcpy(pooled->vio.data, journal->block, VDO_BLOCK_SIZE); - ASSERT_LOG_ONLY(unused_entries >= 0, "vdo_slab journal block is not overfull"); + VDO_ASSERT_LOG_ONLY(unused_entries >= 0, "vdo_slab journal block is not overfull"); if (unused_entries > 0) { /* * Release the per-entry locks for any unused entries in the block we are about to @@ -907,22 +907,22 @@ static void add_entry(struct slab_journal *journal, physical_block_number_t pbn, struct packed_slab_journal_block *block = journal->block; int result; - result = ASSERT(vdo_before_journal_point(&journal->tail_header.recovery_point, - &recovery_point), - "recovery journal point is monotonically increasing, recovery point: %llu.%u, block recovery point: %llu.%u", - (unsigned long long) recovery_point.sequence_number, - recovery_point.entry_count, - (unsigned long long) journal->tail_header.recovery_point.sequence_number, - journal->tail_header.recovery_point.entry_count); + result = VDO_ASSERT(vdo_before_journal_point(&journal->tail_header.recovery_point, + &recovery_point), + "recovery journal point is monotonically increasing, recovery point: %llu.%u, block recovery point: %llu.%u", + (unsigned long long) recovery_point.sequence_number, + recovery_point.entry_count, + (unsigned long long) journal->tail_header.recovery_point.sequence_number, + journal->tail_header.recovery_point.entry_count); if (result != VDO_SUCCESS) { vdo_enter_read_only_mode(journal->slab->allocator->depot->vdo, result); return; } if (operation == VDO_JOURNAL_BLOCK_MAP_REMAPPING) { - result = ASSERT((journal->tail_header.entry_count < - journal->full_entries_per_block), - "block has room for full entries"); + result = VDO_ASSERT((journal->tail_header.entry_count < + journal->full_entries_per_block), + "block has room for full entries"); if (result != VDO_SUCCESS) { vdo_enter_read_only_mode(journal->slab->allocator->depot->vdo, result); @@ -1371,8 +1371,8 @@ static unsigned int calculate_slab_priority(struct vdo_slab *slab) */ static void prioritize_slab(struct vdo_slab *slab) { - ASSERT_LOG_ONLY(list_empty(&slab->allocq_entry), - "a slab must not already be on a ring when prioritizing"); + VDO_ASSERT_LOG_ONLY(list_empty(&slab->allocq_entry), + "a slab must not already be on a ring when prioritizing"); slab->priority = calculate_slab_priority(slab); vdo_priority_table_enqueue(slab->allocator->prioritized_slabs, slab->priority, &slab->allocq_entry); @@ -1655,8 +1655,8 @@ static int __must_check adjust_reference_count(struct vdo_slab *slab, * the last time it was clean. We must release the per-entry slab journal lock for * the entry associated with the update we are now doing. */ - result = ASSERT(is_valid_journal_point(slab_journal_point), - "Reference count adjustments need slab journal points."); + result = VDO_ASSERT(is_valid_journal_point(slab_journal_point), + "Reference count adjustments need slab journal points."); if (result != VDO_SUCCESS) return result; @@ -1825,16 +1825,16 @@ static void add_entries(struct slab_journal *journal) * scrubbing thresholds, this should never happen. */ if (lock->count > 0) { - ASSERT_LOG_ONLY((journal->head + journal->size) == journal->tail, - "New block has locks, but journal is not full"); + VDO_ASSERT_LOG_ONLY((journal->head + journal->size) == journal->tail, + "New block has locks, but journal is not full"); /* * The blocking threshold must let the journal fill up if the new * block has locks; if the blocking threshold is smaller than the * journal size, the new block cannot possibly have locks already. 
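
The monotonicity assertion in add_entry() above depends on the lexicographic order of journal points: a point (sequence number, entry count) precedes another if its sequence number is smaller, or if the sequence numbers match and its entry count is smaller. A standalone version of that comparison, analogous to vdo_before_journal_point(); the toy_* struct is hypothetical:

#include <stdbool.h>
#include <stdint.h>

struct toy_journal_point {
	uint64_t sequence_number;
	uint16_t entry_count;
};

static bool toy_before_journal_point(const struct toy_journal_point *a,
				     const struct toy_journal_point *b)
{
	return ((a->sequence_number < b->sequence_number) ||
		((a->sequence_number == b->sequence_number) &&
		 (a->entry_count < b->entry_count)));
}
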
*/ - ASSERT_LOG_ONLY((journal->blocking_threshold >= journal->size), - "New block can have locks already iff blocking threshold is at the end of the journal"); + VDO_ASSERT_LOG_ONLY((journal->blocking_threshold >= journal->size), + "New block can have locks already iff blocking threshold is at the end of the journal"); WRITE_ONCE(journal->events->disk_full_count, journal->events->disk_full_count + 1); @@ -2361,9 +2361,9 @@ static int allocate_slab_counters(struct vdo_slab *slab) int result; size_t index, bytes; - result = ASSERT(slab->reference_blocks == NULL, - "vdo_slab %u doesn't allocate refcounts twice", - slab->slab_number); + result = VDO_ASSERT(slab->reference_blocks == NULL, + "vdo_slab %u doesn't allocate refcounts twice", + slab->slab_number); if (result != VDO_SUCCESS) return result; @@ -2503,9 +2503,9 @@ static void load_slab_journal(struct vdo_slab *slab) * 1. This is impossible, due to the scrubbing threshold, on a real system, so * don't bother reading the (bogus) data off disk. */ - ASSERT_LOG_ONLY(((journal->size < 16) || - (journal->scrubbing_threshold < (journal->size - 1))), - "Scrubbing threshold protects against reads of unwritten slab journal blocks"); + VDO_ASSERT_LOG_ONLY(((journal->size < 16) || + (journal->scrubbing_threshold < (journal->size - 1))), + "Scrubbing threshold protects against reads of unwritten slab journal blocks"); vdo_finish_loading_with_result(&slab->state, allocate_counters_if_clean(slab)); return; @@ -2519,8 +2519,8 @@ static void register_slab_for_scrubbing(struct vdo_slab *slab, bool high_priorit { struct slab_scrubber *scrubber = &slab->allocator->scrubber; - ASSERT_LOG_ONLY((slab->status != VDO_SLAB_REBUILT), - "slab to be scrubbed is unrecovered"); + VDO_ASSERT_LOG_ONLY((slab->status != VDO_SLAB_REBUILT), + "slab to be scrubbed is unrecovered"); if (slab->status != VDO_SLAB_REQUIRES_SCRUBBING) return; @@ -2547,17 +2547,17 @@ static void queue_slab(struct vdo_slab *slab) block_count_t free_blocks; int result; - ASSERT_LOG_ONLY(list_empty(&slab->allocq_entry), + VDO_ASSERT_LOG_ONLY(list_empty(&slab->allocq_entry), "a requeued slab must not already be on a ring"); if (vdo_is_read_only(allocator->depot->vdo)) return; free_blocks = slab->free_blocks; - result = ASSERT((free_blocks <= allocator->depot->slab_config.data_blocks), - "rebuilt slab %u must have a valid free block count (has %llu, expected maximum %llu)", - slab->slab_number, (unsigned long long) free_blocks, - (unsigned long long) allocator->depot->slab_config.data_blocks); + result = VDO_ASSERT((free_blocks <= allocator->depot->slab_config.data_blocks), + "rebuilt slab %u must have a valid free block count (has %llu, expected maximum %llu)", + slab->slab_number, (unsigned long long) free_blocks, + (unsigned long long) allocator->depot->slab_config.data_blocks); if (result != VDO_SUCCESS) { vdo_enter_read_only_mode(allocator->depot->vdo, result); return; @@ -2880,9 +2880,9 @@ static void apply_journal_entries(struct vdo_completion *completion) * At the end of rebuild, the reference counters should be accurate to the end of the * journal we just applied. 
*/ - result = ASSERT(!vdo_before_journal_point(&last_entry_applied, - &ref_counts_point), - "Refcounts are not more accurate than the slab journal"); + result = VDO_ASSERT(!vdo_before_journal_point(&last_entry_applied, + &ref_counts_point), + "Refcounts are not more accurate than the slab journal"); if (result != VDO_SUCCESS) { abort_scrubbing(scrubber, result); return; @@ -2993,8 +2993,8 @@ static void scrub_slabs(struct block_allocator *allocator, struct vdo_completion static inline void assert_on_allocator_thread(thread_id_t thread_id, const char *function_name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == thread_id), - "%s called on correct thread", function_name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == thread_id), + "%s called on correct thread", function_name); } static void register_slab_with_allocator(struct block_allocator *allocator, @@ -3142,8 +3142,8 @@ static int __must_check allocate_slab_block(struct vdo_slab *slab, if (!search_reference_blocks(slab, &free_index)) return VDO_NO_SPACE; - ASSERT_LOG_ONLY((slab->counters[free_index] == EMPTY_REFERENCE_COUNT), - "free block must have ref count of zero"); + VDO_ASSERT_LOG_ONLY((slab->counters[free_index] == EMPTY_REFERENCE_COUNT), + "free block must have ref count of zero"); make_provisional_reference(slab, free_index); adjust_free_block_count(slab, false); @@ -3850,8 +3850,8 @@ static bool __must_check release_recovery_journal_lock(struct slab_journal *jour sequence_number_t recovery_lock) { if (recovery_lock > journal->recovery_lock) { - ASSERT_LOG_ONLY((recovery_lock < journal->recovery_lock), - "slab journal recovery lock is not older than the recovery journal head"); + VDO_ASSERT_LOG_ONLY((recovery_lock < journal->recovery_lock), + "slab journal recovery lock is not older than the recovery journal head"); return false; } @@ -4665,8 +4665,8 @@ int vdo_prepare_to_grow_slab_depot(struct slab_depot *depot, return VDO_INCREMENT_TOO_SMALL; /* Generate the depot configuration for the new block count. 
*/ - ASSERT_LOG_ONLY(depot->first_block == partition->offset, - "New slab depot partition doesn't change origin"); + VDO_ASSERT_LOG_ONLY(depot->first_block == partition->offset, + "New slab depot partition doesn't change origin"); result = vdo_configure_slab_depot(partition, depot->slab_config, depot->zone_count, &new_state); if (result != VDO_SUCCESS) @@ -4740,7 +4740,7 @@ static void register_new_slabs(void *context, zone_count_t zone_number, */ void vdo_use_new_slabs(struct slab_depot *depot, struct vdo_completion *parent) { - ASSERT_LOG_ONLY(depot->new_slabs != NULL, "Must have new slabs to use"); + VDO_ASSERT_LOG_ONLY(depot->new_slabs != NULL, "Must have new slabs to use"); vdo_schedule_operation(depot->action_manager, VDO_ADMIN_STATE_SUSPENDED_OPERATION, NULL, register_new_slabs, @@ -4796,8 +4796,8 @@ static void do_drain_step(struct vdo_completion *completion) return; case VDO_DRAIN_ALLOCATOR_STEP_FINISHED: - ASSERT_LOG_ONLY(!is_vio_pool_busy(allocator->vio_pool), - "vio pool not busy"); + VDO_ASSERT_LOG_ONLY(!is_vio_pool_busy(allocator->vio_pool), + "vio pool not busy"); vdo_finish_draining_with_result(&allocator->state, completion->result); return; diff --git a/drivers/md/dm-vdo/thread-registry.c b/drivers/md/dm-vdo/thread-registry.c index 03e2f45e8e78..d4a077d58c60 100644 --- a/drivers/md/dm-vdo/thread-registry.c +++ b/drivers/md/dm-vdo/thread-registry.c @@ -44,7 +44,7 @@ void vdo_register_thread(struct thread_registry *registry, list_add_tail_rcu(&new_thread->links, ®istry->links); spin_unlock(®istry->lock); - ASSERT_LOG_ONLY(!found_it, "new thread not already in registry"); + VDO_ASSERT_LOG_ONLY(!found_it, "new thread not already in registry"); if (found_it) { /* Ensure no RCU iterators see it before re-initializing. */ synchronize_rcu(); @@ -67,7 +67,7 @@ void vdo_unregister_thread(struct thread_registry *registry) } spin_unlock(®istry->lock); - ASSERT_LOG_ONLY(found_it, "thread found in registry"); + VDO_ASSERT_LOG_ONLY(found_it, "thread found in registry"); if (found_it) { /* Ensure no RCU iterators see it before re-initializing. 
*/ synchronize_rcu(); diff --git a/drivers/md/dm-vdo/vdo.c b/drivers/md/dm-vdo/vdo.c index b4dd0634a5cb..11be2ab17e29 100644 --- a/drivers/md/dm-vdo/vdo.c +++ b/drivers/md/dm-vdo/vdo.c @@ -425,9 +425,9 @@ int vdo_make_thread(struct vdo *vdo, thread_id_t thread_id, type = &default_queue_type; if (thread->queue != NULL) { - return ASSERT(vdo_work_queue_type_is(thread->queue, type), - "already constructed vdo thread %u is of the correct type", - thread_id); + return VDO_ASSERT(vdo_work_queue_type_is(thread->queue, type), + "already constructed vdo thread %u is of the correct type", + thread_id); } thread->vdo = vdo; @@ -448,8 +448,8 @@ static int register_vdo(struct vdo *vdo) int result; write_lock(®istry.lock); - result = ASSERT(filter_vdos_locked(vdo_is_equal, vdo) == NULL, - "VDO not already registered"); + result = VDO_ASSERT(filter_vdos_locked(vdo_is_equal, vdo) == NULL, + "VDO not already registered"); if (result == VDO_SUCCESS) { INIT_LIST_HEAD(&vdo->registration); list_add_tail(&vdo->registration, ®istry.links); @@ -1050,8 +1050,8 @@ int vdo_register_read_only_listener(struct vdo *vdo, void *listener, struct read_only_listener *read_only_listener; int result; - result = ASSERT(thread_id != vdo->thread_config.dedupe_thread, - "read only listener not registered on dedupe thread"); + result = VDO_ASSERT(thread_id != vdo->thread_config.dedupe_thread, + "read only listener not registered on dedupe thread"); if (result != VDO_SUCCESS) return result; @@ -1704,8 +1704,8 @@ void vdo_dump_status(const struct vdo *vdo) */ void vdo_assert_on_admin_thread(const struct vdo *vdo, const char *name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == vdo->thread_config.admin_thread), - "%s called on admin thread", name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == vdo->thread_config.admin_thread), + "%s called on admin thread", name); } /** @@ -1718,9 +1718,9 @@ void vdo_assert_on_admin_thread(const struct vdo *vdo, const char *name) void vdo_assert_on_logical_zone_thread(const struct vdo *vdo, zone_count_t logical_zone, const char *name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == - vdo->thread_config.logical_threads[logical_zone]), - "%s called on logical thread", name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == + vdo->thread_config.logical_threads[logical_zone]), + "%s called on logical thread", name); } /** @@ -1733,9 +1733,9 @@ void vdo_assert_on_logical_zone_thread(const struct vdo *vdo, zone_count_t logic void vdo_assert_on_physical_zone_thread(const struct vdo *vdo, zone_count_t physical_zone, const char *name) { - ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == - vdo->thread_config.physical_threads[physical_zone]), - "%s called on physical thread", name); + VDO_ASSERT_LOG_ONLY((vdo_get_callback_thread_id() == + vdo->thread_config.physical_threads[physical_zone]), + "%s called on physical thread", name); } /** @@ -1773,7 +1773,7 @@ int vdo_get_physical_zone(const struct vdo *vdo, physical_block_number_t pbn, /* With the PBN already checked, we should always succeed in finding a slab. 
*/ slab = vdo_get_slab(vdo->depot, pbn); - result = ASSERT(slab != NULL, "vdo_get_slab must succeed on all valid PBNs"); + result = VDO_ASSERT(slab != NULL, "vdo_get_slab must succeed on all valid PBNs"); if (result != VDO_SUCCESS) return result; diff --git a/drivers/md/dm-vdo/vio.c b/drivers/md/dm-vdo/vio.c index 83c36f7590de..b1e4e604c2c3 100644 --- a/drivers/md/dm-vdo/vio.c +++ b/drivers/md/dm-vdo/vio.c @@ -82,14 +82,14 @@ int allocate_vio_components(struct vdo *vdo, enum vio_type vio_type, struct bio *bio; int result; - result = ASSERT(block_count <= MAX_BLOCKS_PER_VIO, - "block count %u does not exceed maximum %u", block_count, - MAX_BLOCKS_PER_VIO); + result = VDO_ASSERT(block_count <= MAX_BLOCKS_PER_VIO, + "block count %u does not exceed maximum %u", block_count, + MAX_BLOCKS_PER_VIO); if (result != VDO_SUCCESS) return result; - result = ASSERT(((vio_type != VIO_TYPE_UNINITIALIZED) && (vio_type != VIO_TYPE_DATA)), - "%d is a metadata type", vio_type); + result = VDO_ASSERT(((vio_type != VIO_TYPE_UNINITIALIZED) && (vio_type != VIO_TYPE_DATA)), + "%d is a metadata type", vio_type); if (result != VDO_SUCCESS) return result; @@ -363,13 +363,13 @@ void free_vio_pool(struct vio_pool *pool) return; /* Remove all available vios from the object pool. */ - ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&pool->waiting), - "VIO pool must not have any waiters when being freed"); - ASSERT_LOG_ONLY((pool->busy_count == 0), - "VIO pool must not have %zu busy entries when being freed", - pool->busy_count); - ASSERT_LOG_ONLY(list_empty(&pool->busy), - "VIO pool must not have busy entries when being freed"); + VDO_ASSERT_LOG_ONLY(!vdo_waitq_has_waiters(&pool->waiting), + "VIO pool must not have any waiters when being freed"); + VDO_ASSERT_LOG_ONLY((pool->busy_count == 0), + "VIO pool must not have %zu busy entries when being freed", + pool->busy_count); + VDO_ASSERT_LOG_ONLY(list_empty(&pool->busy), + "VIO pool must not have busy entries when being freed"); list_for_each_entry_safe(pooled, tmp, &pool->available, pool_entry) { list_del(&pooled->pool_entry); @@ -377,8 +377,8 @@ void free_vio_pool(struct vio_pool *pool) pool->size--; } - ASSERT_LOG_ONLY(pool->size == 0, - "VIO pool must not have missing entries when being freed"); + VDO_ASSERT_LOG_ONLY(pool->size == 0, + "VIO pool must not have missing entries when being freed"); vdo_free(vdo_forget(pool->buffer)); vdo_free(pool); @@ -403,8 +403,8 @@ void acquire_vio_from_pool(struct vio_pool *pool, struct vdo_waiter *waiter) { struct pooled_vio *pooled; - ASSERT_LOG_ONLY((pool->thread_id == vdo_get_callback_thread_id()), - "acquire from active vio_pool called from correct thread"); + VDO_ASSERT_LOG_ONLY((pool->thread_id == vdo_get_callback_thread_id()), + "acquire from active vio_pool called from correct thread"); if (list_empty(&pool->available)) { vdo_waitq_enqueue_waiter(&pool->waiting, waiter); @@ -424,8 +424,8 @@ void acquire_vio_from_pool(struct vio_pool *pool, struct vdo_waiter *waiter) */ void return_vio_to_pool(struct vio_pool *pool, struct pooled_vio *vio) { - ASSERT_LOG_ONLY((pool->thread_id == vdo_get_callback_thread_id()), - "vio pool entry returned on same thread as it was acquired"); + VDO_ASSERT_LOG_ONLY((pool->thread_id == vdo_get_callback_thread_id()), + "vio pool entry returned on same thread as it was acquired"); vio->vio.completion.error_handler = NULL; vio->vio.completion.parent = NULL; @@ -465,8 +465,8 @@ void vdo_count_bios(struct atomic_bio_stats *bio_stats, struct bio *bio) * shouldn't exist. 
*/ default: - ASSERT_LOG_ONLY(0, "Bio operation %d not a write, read, discard, or empty flush", - bio_op(bio)); + VDO_ASSERT_LOG_ONLY(0, "Bio operation %d not a write, read, discard, or empty flush", + bio_op(bio)); } if ((bio->bi_opf & REQ_PREFLUSH) != 0) diff --git a/drivers/md/dm-vdo/vio.h b/drivers/md/dm-vdo/vio.h index fbfee5e3415d..3490e9f59b04 100644 --- a/drivers/md/dm-vdo/vio.h +++ b/drivers/md/dm-vdo/vio.h @@ -67,10 +67,10 @@ static inline void assert_vio_in_bio_zone(struct vio *vio) thread_id_t expected = get_vio_bio_zone_thread_id(vio); thread_id_t thread_id = vdo_get_callback_thread_id(); - ASSERT_LOG_ONLY((expected == thread_id), - "vio I/O for physical block %llu on thread %u, should be on bio zone thread %u", - (unsigned long long) pbn_from_vio_bio(vio->bio), thread_id, - expected); + VDO_ASSERT_LOG_ONLY((expected == thread_id), + "vio I/O for physical block %llu on thread %u, should be on bio zone thread %u", + (unsigned long long) pbn_from_vio_bio(vio->bio), thread_id, + expected); } int vdo_create_bio(struct bio **bio_ptr);
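The conversion in the hunks above is mechanical, but the calling convention it preserves is worth spelling out: VDO_ASSERT evaluates its expression and yields VDO_SUCCESS or an error code, and callers that cannot tolerate a violated invariant push the device into read-only mode rather than crashing. A minimal sketch of that pattern (check_slab_invariant() and the specific bound are hypothetical, not from these patches; VDO_ASSERT, VDO_SUCCESS, and vdo_enter_read_only_mode() are the real interfaces exercised above):

static int check_slab_invariant(struct vdo_slab *slab, struct vdo *vdo)
{
	int result;

	/* Logs and returns an error code when the expression is false. */
	result = VDO_ASSERT((slab->free_blocks <= slab->allocator->depot->slab_config.data_blocks),
			    "slab %u free block count %llu within data block limit",
			    slab->slab_number,
			    (unsigned long long) slab->free_blocks);
	if (result != VDO_SUCCESS) {
		/* The metadata is inconsistent; stop accepting writes. */
		vdo_enter_read_only_mode(vdo, result);
		return result;
	}

	return VDO_SUCCESS;
}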
From patchwork Fri Mar 1 04:38:51 2024 X-Patchwork-Submitter: Matthew Sakai X-Patchwork-Id: 13578009 X-Patchwork-Delegate: snitzer@redhat.com From: Matthew Sakai To: dm-devel@lists.linux.dev Cc: Mike Snitzer , Matthew Sakai Subject: [PATCH 08/10] dm vdo encodings: update some stale comments Date: Thu, 29 Feb 2024 23:38:51 -0500 Message-ID: <91132d19ad852107a04b72237fde5d43d68fe9c6.1709267597.git.msakai@redhat.com> From: Mike Snitzer Signed-off-by: Mike Snitzer Signed-off-by: Matthew Sakai --- drivers/md/dm-vdo/encodings.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/md/dm-vdo/encodings.c b/drivers/md/dm-vdo/encodings.c index e24c31bc3524..ebb0a4edd109 100644 --- a/drivers/md/dm-vdo/encodings.c +++ b/drivers/md/dm-vdo/encodings.c @@ -544,8 +544,6 @@ const char *vdo_get_journal_operation_name(enum journal_operation operation) /** * encode_slab_depot_state_2_0() - Encode the state of a slab depot into a buffer. - * - * Return: UDS_SUCCESS or an error. */ static void encode_slab_depot_state_2_0(u8 *buffer, size_t *offset, struct slab_depot_state_2_0 state) @@ -573,7 +571,7 @@ static void encode_slab_depot_state_2_0(u8 *buffer, size_t *offset, /** * decode_slab_depot_state_2_0() - Decode slab depot component state version 2.0 from a buffer. * - * Return: UDS_SUCCESS or an error code. + * Return: VDO_SUCCESS or an error code.
*/ static int decode_slab_depot_state_2_0(u8 *buffer, size_t *offset, struct slab_depot_state_2_0 *state)
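The stale Return: tag removed above existed because encode_slab_depot_state_2_0() is void: the encode_*_le() helpers just append fixed-width little-endian fields and cannot fail, while the matching decoder still returns an int so decoded fields can be validated. A sketch of the offset-cursor convention those helpers follow (the struct and function here are hypothetical, for illustration; encode_u32_le() and encode_u64_le() are the real helpers seen in these hunks):

struct example_header {		/* hypothetical */
	u32 version;
	u64 nonce;
};

static void encode_example_header(u8 *buffer, size_t *offset,
				  const struct example_header *header)
{
	/* Each helper writes little-endian bytes at *offset and advances it. */
	encode_u32_le(buffer, offset, header->version);
	encode_u64_le(buffer, offset, header->nonce);
	/* *offset has now advanced by sizeof(u32) + sizeof(u64) bytes. */
}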
From patchwork Fri Mar 1 04:38:52 2024 X-Patchwork-Submitter: Matthew Sakai X-Patchwork-Id: 13578008 X-Patchwork-Delegate: snitzer@redhat.com From: Matthew Sakai To: dm-devel@lists.linux.dev Cc: Matthew Sakai Subject: [PATCH 09/10] dm vdo indexer: update ASSERT and ASSERT_LOG_ONLY usage Date: Thu, 29 Feb 2024 23:38:52 -0500 Message-ID: <8ffbd67687fc963507c2c14ec3e0409785ee4e9c.1709267597.git.msakai@redhat.com> Update indexer uses of ASSERT and ASSERT_LOG_ONLY to VDO_ASSERT and VDO_ASSERT_LOG_ONLY, respectively. Remove ASSERT and ASSERT_LOG_ONLY. Also rename uds_assertion_failed to vdo_assertion_failed. Signed-off-by: Matthew Sakai --- drivers/md/dm-vdo/indexer/chapter-index.c | 16 ++-- drivers/md/dm-vdo/indexer/config.c | 16 ++-- drivers/md/dm-vdo/indexer/delta-index.c | 90 +++++++++++------------ drivers/md/dm-vdo/indexer/index-layout.c | 5 +- drivers/md/dm-vdo/indexer/index-session.c | 12 +-- drivers/md/dm-vdo/indexer/index.c | 4 +- drivers/md/dm-vdo/indexer/volume-index.c | 32 ++++---- drivers/md/dm-vdo/indexer/volume.c | 32 ++++---- drivers/md/dm-vdo/permassert.c | 2 +- drivers/md/dm-vdo/permassert.h | 8 +- 10 files changed, 107 insertions(+), 110 deletions(-) diff --git a/drivers/md/dm-vdo/indexer/chapter-index.c b/drivers/md/dm-vdo/indexer/chapter-index.c index 68d86028dbb7..47e4ed234242 100644 --- a/drivers/md/dm-vdo/indexer/chapter-index.c +++ b/drivers/md/dm-vdo/indexer/chapter-index.c @@ -83,10 +83,10 @@ int uds_put_open_chapter_index_record(struct open_chapter_index *chapter_index, u64 chapter_number = chapter_index->virtual_chapter_number; u32 record_pages = geometry->record_pages_per_chapter; - result = ASSERT(page_number < record_pages, - "Page number within chapter (%u) exceeds the maximum value %u", - page_number, record_pages); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(page_number < record_pages, + "Page number within chapter (%u) exceeds the maximum value %u", + page_number, record_pages); + if (result != VDO_SUCCESS) return UDS_INVALID_ARGUMENT; address = uds_hash_to_chapter_delta_address(name, geometry); @@ -97,10 +97,10 @@ int uds_put_open_chapter_index_record(struct open_chapter_index *chapter_index, return result; found = was_entry_found(&entry, address); - result = ASSERT(!(found && entry.is_collision), - "Chunk appears more than once in chapter %llu", - (unsigned long long) chapter_number); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(!(found && entry.is_collision), + "Chunk appears more than once in chapter %llu", + (unsigned long long) chapter_number); + if (result != VDO_SUCCESS) return UDS_BAD_STATE; found_name = (found ?
name->name : NULL); diff --git a/drivers/md/dm-vdo/indexer/config.c b/drivers/md/dm-vdo/indexer/config.c index 5da39043b9ae..69bf27a9d61b 100644 --- a/drivers/md/dm-vdo/indexer/config.c +++ b/drivers/md/dm-vdo/indexer/config.c @@ -134,10 +134,10 @@ int uds_validate_config_contents(struct buffered_reader *reader, decode_u32_le(buffer, &offset, &config.sparse_sample_rate); decode_u64_le(buffer, &offset, &config.nonce); - result = ASSERT(offset == sizeof(struct uds_configuration_6_02), - "%zu bytes read but not decoded", - sizeof(struct uds_configuration_6_02) - offset); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(struct uds_configuration_6_02), + "%zu bytes read but not decoded", + sizeof(struct uds_configuration_6_02) - offset); + if (result != VDO_SUCCESS) return UDS_CORRUPT_DATA; if (is_version(INDEX_CONFIG_VERSION_6_02, version_buffer)) { @@ -210,10 +210,10 @@ int uds_write_config_contents(struct buffered_writer *writer, encode_u32_le(buffer, &offset, config->sparse_sample_rate); encode_u64_le(buffer, &offset, config->nonce); - result = ASSERT(offset == sizeof(struct uds_configuration_6_02), - "%zu bytes encoded, of %zu expected", offset, - sizeof(struct uds_configuration_6_02)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(struct uds_configuration_6_02), + "%zu bytes encoded, of %zu expected", offset, + sizeof(struct uds_configuration_6_02)); + if (result != VDO_SUCCESS) return result; if (version >= 4) { diff --git a/drivers/md/dm-vdo/indexer/delta-index.c b/drivers/md/dm-vdo/indexer/delta-index.c index 5bba9a48c5a0..b49066554248 100644 --- a/drivers/md/dm-vdo/indexer/delta-index.c +++ b/drivers/md/dm-vdo/indexer/delta-index.c @@ -856,10 +856,10 @@ int uds_start_restoring_delta_index(struct delta_index *delta_index, decode_u64_le(buffer, &offset, &header.record_count); decode_u64_le(buffer, &offset, &header.collision_count); - result = ASSERT(offset == sizeof(struct delta_index_header), - "%zu bytes decoded of %zu expected", offset, - sizeof(struct delta_index_header)); - if (result != UDS_SUCCESS) { + result = VDO_ASSERT(offset == sizeof(struct delta_index_header), + "%zu bytes decoded of %zu expected", offset, + sizeof(struct delta_index_header)); + if (result != VDO_SUCCESS) { return uds_log_warning_strerror(result, "failed to read delta index header"); } @@ -1136,10 +1136,10 @@ int uds_start_saving_delta_index(const struct delta_index *delta_index, encode_u64_le(buffer, &offset, delta_zone->record_count); encode_u64_le(buffer, &offset, delta_zone->collision_count); - result = ASSERT(offset == sizeof(struct delta_index_header), - "%zu bytes encoded of %zu expected", offset, - sizeof(struct delta_index_header)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(struct delta_index_header), + "%zu bytes encoded of %zu expected", offset, + sizeof(struct delta_index_header)); + if (result != VDO_SUCCESS) return result; result = uds_write_to_buffered_writer(buffered_writer, buffer, offset); @@ -1212,9 +1212,9 @@ size_t uds_compute_delta_index_save_bytes(u32 list_count, size_t memory_size) static int assert_not_at_end(const struct delta_index_entry *delta_entry) { - int result = ASSERT(!delta_entry->at_end, - "operation is invalid because the list entry is at the end of the delta list"); - if (result != UDS_SUCCESS) + int result = VDO_ASSERT(!delta_entry->at_end, + "operation is invalid because the list entry is at the end of the delta list"); + if (result != VDO_SUCCESS) result = UDS_BAD_STATE; return result; @@ -1236,19 
+1236,19 @@ int uds_start_delta_index_search(const struct delta_index *delta_index, u32 list struct delta_zone *delta_zone; struct delta_list *delta_list; - result = ASSERT((list_number < delta_index->list_count), - "Delta list number (%u) is out of range (%u)", list_number, - delta_index->list_count); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((list_number < delta_index->list_count), + "Delta list number (%u) is out of range (%u)", list_number, + delta_index->list_count); + if (result != VDO_SUCCESS) return UDS_CORRUPT_DATA; zone_number = list_number / delta_index->lists_per_zone; delta_zone = &delta_index->delta_zones[zone_number]; list_number -= delta_zone->first_list; - result = ASSERT((list_number < delta_zone->list_count), - "Delta list number (%u) is out of range (%u) for zone (%u)", - list_number, delta_zone->list_count, zone_number); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((list_number < delta_zone->list_count), + "Delta list number (%u) is out of range (%u) for zone (%u)", + list_number, delta_zone->list_count, zone_number); + if (result != VDO_SUCCESS) return UDS_CORRUPT_DATA; if (delta_index->mutable) { @@ -1362,9 +1362,9 @@ noinline int uds_next_delta_index_entry(struct delta_index_entry *delta_entry) delta_entry->at_end = true; delta_entry->delta = 0; delta_entry->is_collision = false; - result = ASSERT((delta_entry->offset == size), - "next offset past end of delta list"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((delta_entry->offset == size), + "next offset past end of delta list"); + if (result != VDO_SUCCESS) result = UDS_CORRUPT_DATA; return result; @@ -1390,8 +1390,8 @@ int uds_remember_delta_index_offset(const struct delta_index_entry *delta_entry) int result; struct delta_list *delta_list = delta_entry->delta_list; - result = ASSERT(!delta_entry->is_collision, "entry is not a collision"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(!delta_entry->is_collision, "entry is not a collision"); + if (result != VDO_SUCCESS) return result; delta_list->save_key = delta_entry->key - delta_entry->delta; @@ -1489,9 +1489,9 @@ int uds_get_delta_entry_collision(const struct delta_index_entry *delta_entry, u if (result != UDS_SUCCESS) return result; - result = ASSERT(delta_entry->is_collision, - "Cannot get full block name from a non-collision delta index entry"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(delta_entry->is_collision, + "Cannot get full block name from a non-collision delta index entry"); + if (result != VDO_SUCCESS) return UDS_BAD_STATE; get_collision_name(delta_entry, name); @@ -1506,9 +1506,9 @@ u32 uds_get_delta_entry_value(const struct delta_index_entry *delta_entry) static int assert_mutable_entry(const struct delta_index_entry *delta_entry) { - int result = ASSERT((delta_entry->delta_list != &delta_entry->temp_delta_list), - "delta index is mutable"); - if (result != UDS_SUCCESS) + int result = VDO_ASSERT((delta_entry->delta_list != &delta_entry->temp_delta_list), + "delta index is mutable"); + if (result != VDO_SUCCESS) result = UDS_BAD_STATE; return result; @@ -1527,10 +1527,10 @@ int uds_set_delta_entry_value(const struct delta_index_entry *delta_entry, u32 v if (result != UDS_SUCCESS) return result; - result = ASSERT((value & value_mask) == value, - "Value (%u) being set in a delta index is too large (must fit in %u bits)", - value, delta_entry->value_bits); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((value & value_mask) == value, + "Value (%u) being set in a delta index is too large (must fit in %u bits)", 
+ value, delta_entry->value_bits); + if (result != VDO_SUCCESS) return UDS_INVALID_ARGUMENT; set_field(value, delta_entry->delta_zone->memory, @@ -1730,9 +1730,9 @@ int uds_put_delta_index_entry(struct delta_index_entry *delta_entry, u32 key, u3 if (result != UDS_SUCCESS) return result; - result = ASSERT((key == delta_entry->key), - "incorrect key for collision entry"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((key == delta_entry->key), + "incorrect key for collision entry"); + if (result != VDO_SUCCESS) return result; delta_entry->offset += delta_entry->entry_bits; @@ -1742,8 +1742,8 @@ int uds_put_delta_index_entry(struct delta_index_entry *delta_entry, u32 key, u3 result = insert_bits(delta_entry, delta_entry->entry_bits); } else if (delta_entry->at_end) { /* Insert a new entry at the end of the delta list. */ - result = ASSERT((key >= delta_entry->key), "key past end of list"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((key >= delta_entry->key), "key past end of list"); + if (result != VDO_SUCCESS) return result; set_delta(delta_entry, key - delta_entry->key); @@ -1760,14 +1760,14 @@ int uds_put_delta_index_entry(struct delta_index_entry *delta_entry, u32 key, u3 * Insert a new entry which requires the delta in the following entry to be * updated. */ - result = ASSERT((key < delta_entry->key), - "key precedes following entry"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((key < delta_entry->key), + "key precedes following entry"); + if (result != VDO_SUCCESS) return result; - result = ASSERT((key >= delta_entry->key - delta_entry->delta), - "key effects following entry's delta"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((key >= delta_entry->key - delta_entry->delta), + "key effects following entry's delta"); + if (result != VDO_SUCCESS) return result; old_entry_size = delta_entry->entry_bits; diff --git a/drivers/md/dm-vdo/indexer/index-layout.c b/drivers/md/dm-vdo/indexer/index-layout.c index 01e0db4184aa..349b3022f1e1 100644 --- a/drivers/md/dm-vdo/indexer/index-layout.c +++ b/drivers/md/dm-vdo/indexer/index-layout.c @@ -837,8 +837,9 @@ static u64 generate_index_save_nonce(u64 volume_nonce, struct index_save_layout encode_u32_le(buffer, &offset, isl->save_data.version); encode_u32_le(buffer, &offset, 0U); encode_u64_le(buffer, &offset, isl->index_save.start_block); - ASSERT_LOG_ONLY(offset == sizeof(nonce_data), - "%zu bytes encoded of %zu expected", offset, sizeof(nonce_data)); + VDO_ASSERT_LOG_ONLY(offset == sizeof(nonce_data), + "%zu bytes encoded of %zu expected", + offset, sizeof(nonce_data)); return generate_secondary_nonce(volume_nonce, buffer, sizeof(buffer)); } diff --git a/drivers/md/dm-vdo/indexer/index-session.c b/drivers/md/dm-vdo/indexer/index-session.c index 22445dcb3fe0..9eae00548095 100644 --- a/drivers/md/dm-vdo/indexer/index-session.c +++ b/drivers/md/dm-vdo/indexer/index-session.c @@ -199,8 +199,8 @@ static void update_session_stats(struct uds_request *request) break; default: - request->status = ASSERT(false, "unknown request type: %d", - request->type); + request->status = VDO_ASSERT(false, "unknown request type: %d", + request->type); } } @@ -402,8 +402,8 @@ static void suspend_rebuild(struct uds_index_session *session) case INDEX_FREEING: default: /* These cases should not happen. 
*/ - ASSERT_LOG_ONLY(false, "Bad load context state %u", - session->load_context.status); + VDO_ASSERT_LOG_ONLY(false, "Bad load context state %u", + session->load_context.status); break; } mutex_unlock(&session->load_context.mutex); @@ -531,8 +531,8 @@ int uds_resume_index_session(struct uds_index_session *session, case INDEX_FREEING: default: /* These cases should not happen; do nothing. */ - ASSERT_LOG_ONLY(false, "Bad load context state %u", - session->load_context.status); + VDO_ASSERT_LOG_ONLY(false, "Bad load context state %u", + session->load_context.status); break; } mutex_unlock(&session->load_context.mutex); diff --git a/drivers/md/dm-vdo/indexer/index.c b/drivers/md/dm-vdo/indexer/index.c index 226713221105..221af95ca2a4 100644 --- a/drivers/md/dm-vdo/indexer/index.c +++ b/drivers/md/dm-vdo/indexer/index.c @@ -112,7 +112,7 @@ static void enqueue_barrier_messages(struct uds_index *index, u64 virtual_chapte for (zone = 0; zone < index->zone_count; zone++) { int result = launch_zone_message(message, zone, index); - ASSERT_LOG_ONLY((result == UDS_SUCCESS), "barrier message allocation"); + VDO_ASSERT_LOG_ONLY((result == UDS_SUCCESS), "barrier message allocation"); } } @@ -1380,7 +1380,7 @@ void uds_enqueue_request(struct uds_request *request, enum request_stage stage) break; default: - ASSERT_LOG_ONLY(false, "invalid index stage: %d", stage); + VDO_ASSERT_LOG_ONLY(false, "invalid index stage: %d", stage); return; } diff --git a/drivers/md/dm-vdo/indexer/volume-index.c b/drivers/md/dm-vdo/indexer/volume-index.c index 1cc9ac4fe510..e2b0600d82b9 100644 --- a/drivers/md/dm-vdo/indexer/volume-index.c +++ b/drivers/md/dm-vdo/indexer/volume-index.c @@ -832,10 +832,10 @@ static int start_restoring_volume_sub_index(struct volume_sub_index *sub_index, decode_u32_le(buffer, &offset, &header.first_list); decode_u32_le(buffer, &offset, &header.list_count); - result = ASSERT(offset == sizeof(buffer), - "%zu bytes decoded of %zu expected", offset, - sizeof(buffer)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(buffer), + "%zu bytes decoded of %zu expected", offset, + sizeof(buffer)); + if (result != VDO_SUCCESS) result = UDS_CORRUPT_DATA; if (memcmp(header.magic, MAGIC_START_5, MAGIC_SIZE) != 0) { @@ -924,10 +924,10 @@ static int start_restoring_volume_index(struct volume_index *volume_index, offset += MAGIC_SIZE; decode_u32_le(buffer, &offset, &header.sparse_sample_rate); - result = ASSERT(offset == sizeof(buffer), - "%zu bytes decoded of %zu expected", offset, - sizeof(buffer)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(buffer), + "%zu bytes decoded of %zu expected", offset, + sizeof(buffer)); + if (result != VDO_SUCCESS) result = UDS_CORRUPT_DATA; if (memcmp(header.magic, MAGIC_START_6, MAGIC_SIZE) != 0) @@ -1023,10 +1023,10 @@ static int start_saving_volume_sub_index(const struct volume_sub_index *sub_inde encode_u32_le(buffer, &offset, first_list); encode_u32_le(buffer, &offset, list_count); - result = ASSERT(offset == sizeof(struct sub_index_data), - "%zu bytes of config written, of %zu expected", offset, - sizeof(struct sub_index_data)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(struct sub_index_data), + "%zu bytes of config written, of %zu expected", offset, + sizeof(struct sub_index_data)); + if (result != VDO_SUCCESS) return result; result = uds_write_to_buffered_writer(buffered_writer, buffer, offset); @@ -1066,10 +1066,10 @@ static int start_saving_volume_index(const struct volume_index *volume_index, 
memcpy(buffer, MAGIC_START_6, MAGIC_SIZE); offset += MAGIC_SIZE; encode_u32_le(buffer, &offset, volume_index->sparse_sample_rate); - result = ASSERT(offset == sizeof(struct volume_index_data), - "%zu bytes of header written, of %zu expected", offset, - sizeof(struct volume_index_data)); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(offset == sizeof(struct volume_index_data), + "%zu bytes of header written, of %zu expected", offset, + sizeof(struct volume_index_data)); + if (result != VDO_SUCCESS) return result; result = uds_write_to_buffered_writer(writer, buffer, offset); diff --git a/drivers/md/dm-vdo/indexer/volume.c b/drivers/md/dm-vdo/indexer/volume.c index 0a4beef8ac8d..701f2220d803 100644 --- a/drivers/md/dm-vdo/indexer/volume.c +++ b/drivers/md/dm-vdo/indexer/volume.c @@ -135,8 +135,8 @@ static void begin_pending_search(struct page_cache *cache, u32 physical_page, invalidate_counter.page = physical_page; invalidate_counter.counter++; set_invalidate_counter(cache, zone_number, invalidate_counter); - ASSERT_LOG_ONLY(search_pending(invalidate_counter), - "Search is pending for zone %u", zone_number); + VDO_ASSERT_LOG_ONLY(search_pending(invalidate_counter), + "Search is pending for zone %u", zone_number); /* * This memory barrier ensures that the write to the invalidate counter is seen by other * threads before this thread accesses the cached page. The corresponding read memory @@ -158,8 +158,8 @@ static void end_pending_search(struct page_cache *cache, unsigned int zone_numbe smp_mb(); invalidate_counter = get_invalidate_counter(cache, zone_number); - ASSERT_LOG_ONLY(search_pending(invalidate_counter), - "Search is pending for zone %u", zone_number); + VDO_ASSERT_LOG_ONLY(search_pending(invalidate_counter), + "Search is pending for zone %u", zone_number); invalidate_counter.counter++; set_invalidate_counter(cache, zone_number, invalidate_counter); } @@ -259,8 +259,8 @@ static int put_page_in_cache(struct page_cache *cache, u32 physical_page, int result; /* We hold the read_threads_mutex. */ - result = ASSERT((page->read_pending), "page to install has a pending read"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((page->read_pending), "page to install has a pending read"); + if (result != VDO_SUCCESS) return result; page->physical_page = physical_page; @@ -285,8 +285,8 @@ static void cancel_page_in_cache(struct page_cache *cache, u32 physical_page, int result; /* We hold the read_threads_mutex. 
*/ - result = ASSERT((page->read_pending), "page to install has a pending read"); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((page->read_pending), "page to install has a pending read"); + if (result != VDO_SUCCESS) return; clear_cache_page(cache, page); @@ -889,10 +889,10 @@ int uds_search_cached_record_page(struct volume *volume, struct uds_request *req if (record_page_number == NO_CHAPTER_INDEX_ENTRY) return UDS_SUCCESS; - result = ASSERT(record_page_number < geometry->record_pages_per_chapter, - "0 <= %d < %u", record_page_number, - geometry->record_pages_per_chapter); - if (result != UDS_SUCCESS) + result = VDO_ASSERT(record_page_number < geometry->record_pages_per_chapter, + "0 <= %d < %u", record_page_number, + geometry->record_pages_per_chapter); + if (result != VDO_SUCCESS) return result; page_number = geometry->index_pages_per_chapter + record_page_number; @@ -1501,10 +1501,10 @@ static int __must_check initialize_page_cache(struct page_cache *cache, cache->zone_count = zone_count; atomic64_set(&cache->clock, 1); - result = ASSERT((cache->cache_slots <= VOLUME_CACHE_MAX_ENTRIES), - "requested cache size, %u, within limit %u", - cache->cache_slots, VOLUME_CACHE_MAX_ENTRIES); - if (result != UDS_SUCCESS) + result = VDO_ASSERT((cache->cache_slots <= VOLUME_CACHE_MAX_ENTRIES), + "requested cache size, %u, within limit %u", + cache->cache_slots, VOLUME_CACHE_MAX_ENTRIES); + if (result != VDO_SUCCESS) return result; result = vdo_allocate(VOLUME_CACHE_MAX_QUEUED_READS, struct queued_read, diff --git a/drivers/md/dm-vdo/permassert.c b/drivers/md/dm-vdo/permassert.c index 3fa752ba0061..6fe49c4b7e51 100644 --- a/drivers/md/dm-vdo/permassert.c +++ b/drivers/md/dm-vdo/permassert.c @@ -8,7 +8,7 @@ #include "errors.h" #include "logger.h" -int uds_assertion_failed(const char *expression_string, const char *file_name, +int vdo_assertion_failed(const char *expression_string, const char *file_name, int line_number, const char *format, ...) { va_list args; diff --git a/drivers/md/dm-vdo/permassert.h b/drivers/md/dm-vdo/permassert.h index 8774dde7927a..c34f2ba650e1 100644 --- a/drivers/md/dm-vdo/permassert.h +++ b/drivers/md/dm-vdo/permassert.h @@ -33,16 +33,12 @@ static inline int __must_check vdo_must_use(int value) /* Log a message if the expression is not true. */ #define VDO_ASSERT_LOG_ONLY(expr, ...) __VDO_ASSERT(expr, __VA_ARGS__) -/* For use by UDS */ -#define ASSERT(expr, ...) VDO_ASSERT(expr, __VA_ARGS__) -#define ASSERT_LOG_ONLY(expr, ...) __VDO_ASSERT(expr, __VA_ARGS__) - #define __VDO_ASSERT(expr, ...) \ (likely(expr) ? VDO_SUCCESS \ - : uds_assertion_failed(STRINGIFY(expr), __FILE__, __LINE__, __VA_ARGS__)) + : vdo_assertion_failed(STRINGIFY(expr), __FILE__, __LINE__, __VA_ARGS__)) /* Log an assertion failure message. */ -int uds_assertion_failed(const char *expression_string, const char *file_name, +int vdo_assertion_failed(const char *expression_string, const char *file_name, int line_number, const char *format, ...) 
__printf(4, 5);
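For readers tracking the macro change in permassert.h: after this patch, a use such as

	result = VDO_ASSERT((key < delta_entry->key), "key precedes following entry");

expands, conceptually, to

	result = (likely(key < delta_entry->key)
		  ? VDO_SUCCESS
		  : vdo_assertion_failed("key < delta_entry->key", __FILE__, __LINE__,
					 "key precedes following entry"));

so indexer callers now compare the result against VDO_SUCCESS and then translate to a UDS status (UDS_BAD_STATE, UDS_CORRUPT_DATA, and so on) where their contracts require it, exactly as the hunks above do.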
From patchwork Fri Mar 1 04:38:53 2024 X-Patchwork-Submitter: Matthew Sakai X-Patchwork-Id: 13578007 X-Patchwork-Delegate: snitzer@redhat.com From: Matthew Sakai To: dm-devel@lists.linux.dev Cc: Mike Snitzer , Matthew Sakai Subject: [PATCH 10/10] dm vdo target: eliminate inappropriate uses of UDS_SUCCESS Date: Thu, 29 Feb 2024 23:38:53 -0500 Message-ID: <0c9af9a15a122f7e2035a0dd7693b7f593dfa693.1709267598.git.msakai@redhat.com> From: Mike Snitzer Most should be VDO_SUCCESS. But comparing the return from kstrtouint() with UDS_SUCCESS (which happens to be 0) makes no sense. Signed-off-by: Mike Snitzer Signed-off-by: Matthew Sakai --- drivers/md/dm-vdo/dm-vdo-target.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/md/dm-vdo/dm-vdo-target.c b/drivers/md/dm-vdo/dm-vdo-target.c index e493b2fec90b..5f607dbb16e9 100644 --- a/drivers/md/dm-vdo/dm-vdo-target.c +++ b/drivers/md/dm-vdo/dm-vdo-target.c @@ -318,7 +318,7 @@ static int split_string(const char *string, char separator, char ***substring_ar current_substring++; /* substrings[current_substring] is NULL already */ *substring_array_ptr = substrings; - return UDS_SUCCESS; + return VDO_SUCCESS; } /* @@ -356,7 +356,7 @@ static int join_strings(char **substring_array, size_t array_length, char separa *(current_position - 1) = '\0'; *string_ptr = output; - return UDS_SUCCESS; + return VDO_SUCCESS; } /** @@ -484,7 +484,7 @@ static int parse_one_thread_config_spec(const char *spec, int result; result = split_string(spec, '=', &fields); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; if ((fields[0] == NULL) || (fields[1] == NULL) || (fields[2] != NULL)) { @@ -495,7 +495,7 @@ static int parse_one_thread_config_spec(const char *spec, } result = kstrtouint(fields[1], 10, &count); - if (result != UDS_SUCCESS) { + if (result) { uds_log_error("thread config string error: integer value needed, found \"%s\"", fields[1]); free_string_array(fields); @@ -537,7 +537,7 @@ static int parse_thread_config_string(const char *string, unsigned int i; result = split_string(string, ',', &specs); - if (result != UDS_SUCCESS) + if (result != VDO_SUCCESS) return result; for (i = 0; specs[i] != NULL; i++) { @@ -607,7 +607,7 @@ static int parse_one_key_value_pair(const char *key, const char *value, /* The remaining arguments must have integral values. */ result = kstrtouint(value, 10, &count); - if (result != UDS_SUCCESS) { + if (result) { uds_log_error("optional config string error: integer value needed, found \"%s\"", value); return result; @@ -2913,7 +2913,7 @@ static int __init vdo_init(void) /* Add VDO errors to the set of errors registered by the indexer. */ result = vdo_register_status_codes(); - if (result != VDO_SUCCESS) { uds_log_error("vdo_register_status_codes failed %d", result); vdo_module_destroy(); return result;
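The kstrtouint() change deserves emphasis: it is a kernel library routine that returns 0 on success or a negative errno such as -EINVAL or -ERANGE, not a UDS or VDO status code, so plain truth-testing is the idiomatic check. A sketch of the resulting convention (parse_count() is hypothetical; kstrtouint() and uds_log_error() are the real interfaces used in the hunks above):

static int parse_count(const char *value, unsigned int *count)
{
	/* Returns 0 on success or a negative errno; never a VDO status code. */
	int result = kstrtouint(value, 10, count);

	if (result) {
		uds_log_error("integer value needed, found \"%s\"", value);
		return result;
	}

	return VDO_SUCCESS;
}

Returning the negative errno directly matches what the patch does in parse_one_key_value_pair(), since the caller only ever tests the result against VDO_SUCCESS.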