From patchwork Tue Dec 12 09:47:14 2023
X-Patchwork-Id: 13488827
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall
, Stefano Stabellini, Wei Liu, Alejandro Vallejo
Subject: [PATCH v4 01/12] xen/spinlock: reduce lock profile ifdefs
Date: Tue, 12 Dec 2023 10:47:14 +0100
Message-Id: <20231212094725.22184-2-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

With some small adjustments to the LOCK_PROFILE_* macros some #ifdefs
can be dropped from spinlock.c.

Signed-off-by: Juergen Gross
Reviewed-by: Alejandro Vallejo
Acked-by: Julien Grall
---
V2:
- new patch
V3:
- add variable name to macros parameter (Jan Beulich)
V4:
- fix coding style issue (Alejandro Vallejo)
---
 xen/common/spinlock.c | 49 +++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 28 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index d5fa400b78..09028af864 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -267,25 +267,28 @@ void spin_debug_disable(void)
         lock->profile->time_hold += NOW() - lock->profile->time_locked;   \
         lock->profile->lock_cnt++;                                        \
     }
-#define LOCK_PROFILE_VAR    s_time_t block = 0
-#define LOCK_PROFILE_BLOCK  block = block ? : NOW();
-#define LOCK_PROFILE_GOT                                                  \
+#define LOCK_PROFILE_VAR(var, val)    s_time_t var = (val)
+#define LOCK_PROFILE_BLOCK(var)       var = var ? : NOW()
+#define LOCK_PROFILE_BLKACC(tst, val)                                     \
+    if ( tst )                                                            \
+    {                                                                     \
+        lock->profile->time_block += lock->profile->time_locked - (val);  \
+        lock->profile->block_cnt++;                                       \
+    }
+#define LOCK_PROFILE_GOT(val)                                             \
     if ( lock->profile )                                                  \
     {                                                                     \
         lock->profile->time_locked = NOW();                               \
-        if ( block )                                                      \
-        {                                                                 \
-            lock->profile->time_block += lock->profile->time_locked - block; \
-            lock->profile->block_cnt++;                                   \
-        }                                                                 \
+        LOCK_PROFILE_BLKACC(val, val);                                    \
     }
 
 #else
 
 #define LOCK_PROFILE_REL
-#define LOCK_PROFILE_VAR
-#define LOCK_PROFILE_BLOCK
-#define LOCK_PROFILE_GOT
+#define LOCK_PROFILE_VAR(var, val)
+#define LOCK_PROFILE_BLOCK(var)
+#define LOCK_PROFILE_BLKACC(tst, val)
+#define LOCK_PROFILE_GOT(val)
 
 #endif
 
@@ -308,7 +311,7 @@ static void always_inline spin_lock_common(spinlock_t *lock,
                                            void (*cb)(void *data), void *data)
 {
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
-    LOCK_PROFILE_VAR;
+    LOCK_PROFILE_VAR(block, 0);
 
     check_lock(&lock->debug, false);
     preempt_disable();
@@ -316,14 +319,14 @@ static void always_inline spin_lock_common(spinlock_t *lock,
                                    tickets.head_tail);
     while ( tickets.tail != observe_head(&lock->tickets) )
     {
-        LOCK_PROFILE_BLOCK;
+        LOCK_PROFILE_BLOCK(block);
        if ( cb )
             cb(data);
         arch_lock_relax();
     }
     arch_lock_acquire_barrier();
     got_lock(&lock->debug);
-    LOCK_PROFILE_GOT;
+    LOCK_PROFILE_GOT(block);
 }
 
 void _spin_lock(spinlock_t *lock)
@@ -411,19 +414,15 @@ int _spin_trylock(spinlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     got_lock(&lock->debug);
-#ifdef CONFIG_DEBUG_LOCK_PROFILE
-    if ( lock->profile )
-        lock->profile->time_locked = NOW();
-#endif
+    LOCK_PROFILE_GOT(0);
+
     return 1;
 }
 
 void _spin_barrier(spinlock_t *lock)
 {
     spinlock_tickets_t sample;
-#ifdef CONFIG_DEBUG_LOCK_PROFILE
-    s_time_t block = NOW();
-#endif
+    LOCK_PROFILE_VAR(block, NOW());
 
     check_barrier(&lock->debug);
     smp_mb();
@@ -432,13 +431,7 @@ void _spin_barrier(spinlock_t *lock)
     {
         while ( observe_head(&lock->tickets) == sample.head )
             arch_lock_relax();
-#ifdef CONFIG_DEBUG_LOCK_PROFILE
-        if ( lock->profile )
-        {
-            lock->profile->time_block += NOW() - block;
-            lock->profile->block_cnt++;
-        }
-#endif
+        LOCK_PROFILE_BLKACC(lock->profile, block);
     }
     smp_mb();
 }
From patchwork Tue Dec 12 09:47:15 2023
X-Patchwork-Id: 13488828
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 02/12] xen/spinlock: make spinlock initializers more readable
Date: Tue, 12 Dec 2023 10:47:15 +0100
Message-Id: <20231212094725.22184-3-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>
Use named member initializers instead of positional ones for the macros
used to initialize structures.

Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
V2:
- new patch
---
 xen/include/xen/spinlock.h | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index c44e7d4929..1cd9120eac 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,7 +21,7 @@ union lock_debug {
         bool unseen:1;
     };
 };
-#define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+#define _LOCK_DEBUG { .val = LOCK_DEBUG_INITVAL }
 void check_lock(union lock_debug *debug, bool try);
 void lock_enter(const union lock_debug *debug);
 void lock_exit(const union lock_debug *debug);
@@ -94,12 +94,16 @@ struct lock_profile_qhead {
     int32_t   idx;       /* index for printout */
 };
 
-#define _LOCK_PROFILE(name) { NULL, #name, &name, 0, 0, 0, 0, 0 }
+#define _LOCK_PROFILE(lockname) { .name = #lockname, .lock = &lockname, }
 #define _LOCK_PROFILE_PTR(name)                                               \
     static struct lock_profile * const __lock_profile_##name                  \
     __used_section(".lockprofile.data") =                                     \
     &__lock_profile_data_##name
-#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG, x }
+#define _SPIN_LOCK_UNLOCKED(x) {                                              \
+    .recurse_cpu = SPINLOCK_NO_CPU,                                           \
+    .debug =_LOCK_DEBUG,                                                      \
+    .profile = x,                                                             \
+}
 #define SPIN_LOCK_UNLOCKED _SPIN_LOCK_UNLOCKED(NULL)
 #define DEFINE_SPINLOCK(l)                                                    \
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                 \
@@ -142,7 +146,10 @@ extern void cf_check spinlock_profile_reset(unsigned char key);
 
 struct lock_profile_qhead { };
 
-#define SPIN_LOCK_UNLOCKED { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG }
+#define SPIN_LOCK_UNLOCKED {                                                  \
+    .recurse_cpu = SPINLOCK_NO_CPU,                                           \
+    .debug =_LOCK_DEBUG,                                                      \
+}
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
From patchwork Tue Dec 12 09:47:16 2023
X-Patchwork-Id: 13488829
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini, Paul Durrant
Subject: [PATCH v4 03/12] xen/spinlock: introduce new type for recursive spinlocks
Date: Tue, 12 Dec 2023 10:47:16 +0100
Message-Id: <20231212094725.22184-4-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

Introduce a new type "rspinlock_t" to be used for recursive spinlocks.

For now it is only an alias of spinlock_t, so both types can still be
used for recursive spinlocks. This will be changed later, though.

Switch all recursive spinlocks to the new type.

Define the initializer helpers and use them where appropriate.
Signed-off-by: Juergen Gross
Acked-by: Julien Grall
---
V2:
- carved out from V1 patch
---
 xen/arch/x86/include/asm/mm.h |  2 +-
 xen/arch/x86/mm/mm-locks.h    |  2 +-
 xen/common/domain.c           |  4 ++--
 xen/common/ioreq.c            |  2 +-
 xen/drivers/char/console.c    |  4 ++--
 xen/drivers/passthrough/pci.c |  2 +-
 xen/include/xen/sched.h       |  6 +++---
 xen/include/xen/spinlock.h    | 19 +++++++++++++++----
 8 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index 05dfe35502..8a6e0c283f 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -596,7 +596,7 @@ unsigned long domain_get_maximum_gpfn(struct domain *d);
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
 typedef struct mm_lock {
-    spinlock_t         lock;
+    rspinlock_t        lock;
     int                unlock_level;
     int                locker;          /* processor which holds the lock */
     const char        *locker_function; /* func that took it */
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 00b1bc402d..b05cad1752 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -20,7 +20,7 @@ DECLARE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 static inline void mm_lock_init(mm_lock_t *l)
 {
-    spin_lock_init(&l->lock);
+    rspin_lock_init(&l->lock);
     l->locker = -1;
     l->locker_function = "nobody";
     l->unlock_level = 0;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index c5954cdb1a..dc97755391 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -627,8 +627,8 @@ struct domain *domain_create(domid_t domid,
 
     atomic_set(&d->refcnt, 1);
     RCU_READ_LOCK_INIT(&d->rcu_lock);
-    spin_lock_init_prof(d, domain_lock);
-    spin_lock_init_prof(d, page_alloc_lock);
+    rspin_lock_init_prof(d, domain_lock);
+    rspin_lock_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 62b907f4c4..652c18a9b5 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1331,7 +1331,7 @@ unsigned int ioreq_broadcast(ioreq_t *p, bool buffered)
 
 void ioreq_domain_init(struct domain *d)
 {
-    spin_lock_init(&d->ioreq_server.lock);
+    rspin_lock_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 0666564ec9..76e455bacd 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -120,7 +120,7 @@ static int __read_mostly sercon_handle = -1;
 int8_t __read_mostly opt_console_xen; /* console=xen */
 #endif
 
-static DEFINE_SPINLOCK(console_lock);
+static DEFINE_RSPINLOCK(console_lock);
 
 /*
  * To control the amount of printing, thresholds are added.
@@ -1178,7 +1178,7 @@ void console_force_unlock(void)
 {
     watchdog_disable();
     spin_debug_disable();
-    spin_lock_init(&console_lock);
+    rspin_lock_init(&console_lock);
     serial_force_unlock(sercon_handle);
     console_locks_busted = 1;
     console_start_sync();
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 28ed8ea817..d604ed5634 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -50,7 +50,7 @@ struct pci_seg {
     } bus2bridge[MAX_BUSES];
 };
 
-static spinlock_t _pcidevs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_RSPINLOCK(_pcidevs_lock);
 
 void pcidevs_lock(void)
 {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3609ef88c4..c6604aef78 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -376,9 +376,9 @@ struct domain
 
     rcu_read_lock_t  rcu_lock;
 
-    spinlock_t       domain_lock;
+    rspinlock_t      domain_lock;
 
-    spinlock_t       page_alloc_lock; /* protects all the following fields */
+    rspinlock_t      page_alloc_lock; /* protects all the following fields */
     struct page_list_head page_list;   /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
@@ -597,7 +597,7 @@ struct domain
 #ifdef CONFIG_IOREQ_SERVER
     /* Lock protects all other values in the sub-struct */
     struct {
-        spinlock_t              lock;
+        rspinlock_t             lock;
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 1cd9120eac..20d15f34dd 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -45,7 +45,7 @@ union lock_debug { };
   lock profiling on:
 
   Global locks which should be subject to profiling must be declared via
-  DEFINE_SPINLOCK.
+  DEFINE_[R]SPINLOCK.
 
   For locks in structures further measures are necessary:
   - the structure definition must include a profile_head with exactly this
@@ -56,7 +56,7 @@ union lock_debug { };
 
   - the single locks which are subject to profiling have to be initialized
     via
-    spin_lock_init_prof(ptr, lock);
+    [r]spin_lock_init_prof(ptr, lock);
 
     with ptr being the main structure pointer and lock the spinlock field
@@ -109,12 +109,16 @@ struct lock_profile_qhead {
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                 \
     static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);    \
     _LOCK_PROFILE_PTR(l)
+#define DEFINE_RSPINLOCK(l)                                                   \
+    rspinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                \
+    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);    \
+    _LOCK_PROFILE_PTR(l)
 
-#define spin_lock_init_prof(s, l)                                             \
+#define __spin_lock_init_prof(s, l, locktype)                                 \
     do {                                                                      \
         struct lock_profile *prof;                                            \
         prof = xzalloc(struct lock_profile);                                  \
-        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
+        (s)->l = (locktype)_SPIN_LOCK_UNLOCKED(prof);                         \
         if ( !prof )                                                          \
         {                                                                     \
             printk(XENLOG_WARNING                                             \
@@ -128,6 +132,9 @@ struct lock_profile_qhead {
         (s)->profile_head.elem_q = prof;                                      \
     } while( 0 )
 
+#define spin_lock_init_prof(s, l)  __spin_lock_init_prof(s, l, spinlock_t)
+#define rspin_lock_init_prof(s, l) __spin_lock_init_prof(s, l, rspinlock_t)
+
 void _lock_profile_register_struct(
     int32_t type, struct lock_profile_qhead *qhead, int32_t idx);
 void _lock_profile_deregister_struct(int32_t type,
@@ -151,8 +158,10 @@ struct lock_profile_qhead { };
     .debug =_LOCK_DEBUG,                                                      \
 }
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
+#define DEFINE_RSPINLOCK(l) rspinlock_t l = SPIN_LOCK_UNLOCKED
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
+#define rspin_lock_init_prof(s, l) rspin_lock_init(&((s)->l))
 #define lock_profile_register_struct(type, ptr, idx)
 #define lock_profile_deregister_struct(type, ptr)
 #define spinlock_profile_printall(key)
@@ -182,8 +191,10 @@ typedef struct spinlock {
 #endif
 } spinlock_t;
 
+typedef spinlock_t rspinlock_t;
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
+#define rspin_lock_init(l) (*(l) = (rspinlock_t)SPIN_LOCK_UNLOCKED)
 
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data);
From patchwork Tue Dec 12 09:47:17 2023
X-Patchwork-Id: 13488830
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné, Tamas K Lengyel, Paul Durrant
Subject: [PATCH v4 04/12] xen/spinlock: rename recursive lock functions
Date: Tue, 12 Dec 2023 10:47:17 +0100
Message-Id: <20231212094725.22184-5-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>
DKIM_SIGNED(0.00)[suse.com:s=susede1]; WHITELIST_DMARC(-7.00)[suse.com:D:+]; RCPT_COUNT_TWELVE(0.00)[14]; MID_CONTAINS_FROM(1.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:dkim,suse.com:email]; FUZZY_BLOCKED(0.00)[rspamd.com]; RCVD_TLS_ALL(0.00)[] X-Rspamd-Queue-Id: C9D441F74C Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.com header.s=susede1 header.b="n/D7lrSI"; dmarc=pass (policy=quarantine) header.from=suse.com; spf=fail (smtp-out2.suse.de: domain of jgross@suse.com does not designate 2a07:de40:b281:104:10:150:64:98 as permitted sender) smtp.mailfrom=jgross@suse.com X-Spamd-Bar: +++++++ Rename the recursive spin_lock() functions by replacing the trailing "_recursive" with a leading "r". Switch the parameter to be a pointer to rspinlock_t. Remove the indirection through a macro, as it is adding only complexity without any gain. Suggested-by: Jan Beulich Signed-off-by: Juergen Gross Acked-by: Julien Grall Acked-by: Jan Beulich --- V2: - new patch --- xen/arch/arm/domain.c | 4 +-- xen/arch/x86/domain.c | 8 +++--- xen/arch/x86/mm/mem_sharing.c | 8 +++--- xen/arch/x86/mm/mm-locks.h | 4 +-- xen/common/ioreq.c | 52 +++++++++++++++++------------------ xen/common/page_alloc.c | 12 ++++---- xen/common/spinlock.c | 6 ++-- xen/drivers/char/console.c | 12 ++++---- xen/drivers/passthrough/pci.c | 4 +-- xen/include/xen/sched.h | 4 +-- xen/include/xen/spinlock.h | 24 +++++++--------- 11 files changed, 67 insertions(+), 71 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 5e7a7f3e7e..f38cb5e04c 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -987,7 +987,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list) int ret = 0; /* Use a recursive lock, as we may enter 'free_domheap_page'. 
*/ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); page_list_for_each_safe( page, tmp, list ) { @@ -1014,7 +1014,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list) } out: - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); return ret; } diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index 3712e36df9..69ce1fd5cf 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -1321,7 +1321,7 @@ int arch_set_info_guest( { bool done = false; - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); for ( i = 0; ; ) { @@ -1342,7 +1342,7 @@ int arch_set_info_guest( break; } - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); if ( !done ) return -ERESTART; @@ -2181,7 +2181,7 @@ static int relinquish_memory( int ret = 0; /* Use a recursive lock, as we may enter 'free_domheap_page'. */ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); while ( (page = page_list_remove_head(list)) ) { @@ -2322,7 +2322,7 @@ static int relinquish_memory( page_list_move(list, &d->arch.relmem_list); out: - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); return ret; } diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c index 4f810706a3..1720079fd9 100644 --- a/xen/arch/x86/mm/mem_sharing.c +++ b/xen/arch/x86/mm/mem_sharing.c @@ -688,7 +688,7 @@ static int page_make_sharable(struct domain *d, int rc = 0; bool drop_dom_ref = false; - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); if ( d->is_dying ) { @@ -731,7 +731,7 @@ static int page_make_sharable(struct domain *d, } out: - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); if ( drop_dom_ref ) put_domain(d); @@ -1942,7 +1942,7 @@ int mem_sharing_fork_reset(struct domain *d, bool reset_state, goto state; /* need recursive lock because we will free 
pages */ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); page_list_for_each_safe(page, tmp, &d->page_list) { shr_handle_t sh; @@ -1971,7 +1971,7 @@ int mem_sharing_fork_reset(struct domain *d, bool reset_state, put_page_alloc_ref(page); put_page_and_type(page); } - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); state: if ( reset_state ) diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h index b05cad1752..c867ad7d53 100644 --- a/xen/arch/x86/mm/mm-locks.h +++ b/xen/arch/x86/mm/mm-locks.h @@ -79,7 +79,7 @@ static inline void _mm_lock(const struct domain *d, mm_lock_t *l, { if ( !((mm_locked_by_me(l)) && rec) ) _check_lock_level(d, level); - spin_lock_recursive(&l->lock); + rspin_lock(&l->lock); if ( l->lock.recurse_cnt == 1 ) { l->locker_function = func; @@ -200,7 +200,7 @@ static inline void mm_unlock(mm_lock_t *l) l->locker_function = "nobody"; _set_lock_level(l->unlock_level); } - spin_unlock_recursive(&l->lock); + rspin_unlock(&l->lock); } static inline void mm_enforce_order_unlock(int unlock_level, diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 652c18a9b5..1257a3d972 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -329,7 +329,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) unsigned int id; bool found = false; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -340,7 +340,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) } } - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return found; } @@ -658,7 +658,7 @@ static int ioreq_server_create(struct domain *d, int bufioreq_handling, return -ENOMEM; domain_pause(d); - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ ) { @@ -686,13 +686,13 @@ static int 
ioreq_server_create(struct domain *d, int bufioreq_handling, if ( id ) *id = i; - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); domain_unpause(d); return 0; fail: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); domain_unpause(d); xfree(s); @@ -704,7 +704,7 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id) struct ioreq_server *s; int rc; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -736,7 +736,7 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id) rc = 0; out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -749,7 +749,7 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id, struct ioreq_server *s; int rc; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -783,7 +783,7 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id, rc = 0; out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -796,7 +796,7 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id, ASSERT(is_hvm_domain(d)); - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -834,7 +834,7 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id, } out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -850,7 +850,7 @@ static int ioreq_server_map_io_range(struct domain *d, ioservid_t id, if ( start > end ) return -EINVAL; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -886,7 +886,7 @@ static int ioreq_server_map_io_range(struct domain *d, ioservid_t id, rc = rangeset_add_range(r, start, end); out: - spin_unlock_recursive(&d->ioreq_server.lock); 
+ rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -902,7 +902,7 @@ static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id, if ( start > end ) return -EINVAL; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -938,7 +938,7 @@ static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id, rc = rangeset_remove_range(r, start, end); out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -963,7 +963,7 @@ int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) return -EINVAL; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -978,7 +978,7 @@ int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, rc = arch_ioreq_server_map_mem_type(d, s, flags); out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); if ( rc == 0 ) arch_ioreq_server_map_mem_type_completed(d, s, flags); @@ -992,7 +992,7 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id, struct ioreq_server *s; int rc; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -1016,7 +1016,7 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id, rc = 0; out: - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -1026,7 +1026,7 @@ int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v) unsigned int id; int rc; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -1035,7 +1035,7 @@ int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v) goto fail; } - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return 0; @@ -1050,7 +1050,7 @@ int 
ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v) ioreq_server_remove_vcpu(s, v); } - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); return rc; } @@ -1060,12 +1060,12 @@ void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v) struct ioreq_server *s; unsigned int id; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) ioreq_server_remove_vcpu(s, v); - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); } void ioreq_server_destroy_all(struct domain *d) @@ -1076,7 +1076,7 @@ void ioreq_server_destroy_all(struct domain *d) if ( !arch_ioreq_server_destroy_all(d) ) return; - spin_lock_recursive(&d->ioreq_server.lock); + rspin_lock(&d->ioreq_server.lock); /* No need to domain_pause() as the domain is being torn down */ @@ -1094,7 +1094,7 @@ void ioreq_server_destroy_all(struct domain *d) xfree(s); } - spin_unlock_recursive(&d->ioreq_server.lock); + rspin_unlock(&d->ioreq_server.lock); } struct ioreq_server *ioreq_server_select(struct domain *d, diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index 9b5df74fdd..8c6a3d9274 100644 --- a/xen/common/page_alloc.c +++ b/xen/common/page_alloc.c @@ -2497,7 +2497,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order) if ( unlikely(is_xen_heap_page(pg)) ) { /* NB. May recursively lock from relinquish_memory(). */ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); for ( i = 0; i < (1 << order); i++ ) arch_free_heap_page(d, &pg[i]); @@ -2505,7 +2505,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order) d->xenheap_pages -= 1 << order; drop_dom_ref = (d->xenheap_pages == 0); - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); } else { @@ -2514,7 +2514,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order) if ( likely(d) && likely(d != dom_cow) ) { /* NB. 
May recursively lock from relinquish_memory(). */ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); for ( i = 0; i < (1 << order); i++ ) { @@ -2537,7 +2537,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order) drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order)); - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); /* * Normally we expect a domain to clear pages before freeing them, @@ -2753,7 +2753,7 @@ void free_domstatic_page(struct page_info *page) ASSERT_ALLOC_CONTEXT(); /* NB. May recursively lock from relinquish_memory(). */ - spin_lock_recursive(&d->page_alloc_lock); + rspin_lock(&d->page_alloc_lock); arch_free_heap_page(d, page); @@ -2764,7 +2764,7 @@ void free_domstatic_page(struct page_info *page) /* Add page on the resv_page_list *after* it has been freed. */ page_list_add_tail(page, &d->resv_page_list); - spin_unlock_recursive(&d->page_alloc_lock); + rspin_unlock(&d->page_alloc_lock); if ( drop_dom_ref ) put_domain(d); diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c index 09028af864..422a7fb1db 100644 --- a/xen/common/spinlock.c +++ b/xen/common/spinlock.c @@ -436,7 +436,7 @@ void _spin_barrier(spinlock_t *lock) smp_mb(); } -int _spin_trylock_recursive(spinlock_t *lock) +int rspin_trylock(rspinlock_t *lock) { unsigned int cpu = smp_processor_id(); @@ -460,7 +460,7 @@ int _spin_trylock_recursive(spinlock_t *lock) return 1; } -void _spin_lock_recursive(spinlock_t *lock) +void rspin_lock(rspinlock_t *lock) { unsigned int cpu = smp_processor_id(); @@ -475,7 +475,7 @@ void _spin_lock_recursive(spinlock_t *lock) lock->recurse_cnt++; } -void _spin_unlock_recursive(spinlock_t *lock) +void rspin_unlock(rspinlock_t *lock) { if ( likely(--lock->recurse_cnt == 0) ) { diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c index 76e455bacd..f6f61dc5a1 100644 --- a/xen/drivers/char/console.c +++ b/xen/drivers/char/console.c @@ -920,7 +920,7 @@ static void 
vprintk_common(const char *prefix, const char *fmt, va_list args) /* console_lock can be acquired recursively from __printk_ratelimit(). */ local_irq_save(flags); - spin_lock_recursive(&console_lock); + rspin_lock(&console_lock); state = &this_cpu(state); (void)vsnprintf(buf, sizeof(buf), fmt, args); @@ -956,7 +956,7 @@ static void vprintk_common(const char *prefix, const char *fmt, va_list args) state->continued = 1; } - spin_unlock_recursive(&console_lock); + rspin_unlock(&console_lock); local_irq_restore(flags); } @@ -1163,14 +1163,14 @@ unsigned long console_lock_recursive_irqsave(void) unsigned long flags; local_irq_save(flags); - spin_lock_recursive(&console_lock); + rspin_lock(&console_lock); return flags; } void console_unlock_recursive_irqrestore(unsigned long flags) { - spin_unlock_recursive(&console_lock); + rspin_unlock(&console_lock); local_irq_restore(flags); } @@ -1231,12 +1231,12 @@ int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst) char lost_str[8]; snprintf(lost_str, sizeof(lost_str), "%d", lost); /* console_lock may already be acquired by printk(). 
*/ - spin_lock_recursive(&console_lock); + rspin_lock(&console_lock); printk_start_of_line("(XEN) "); __putstr("printk: "); __putstr(lost_str); __putstr(" messages suppressed.\n"); - spin_unlock_recursive(&console_lock); + rspin_unlock(&console_lock); } local_irq_restore(flags); return 1; diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index d604ed5634..41444f8e2e 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pci.c @@ -54,12 +54,12 @@ static DEFINE_RSPINLOCK(_pcidevs_lock); void pcidevs_lock(void) { - spin_lock_recursive(&_pcidevs_lock); + rspin_lock(&_pcidevs_lock); } void pcidevs_unlock(void) { - spin_unlock_recursive(&_pcidevs_lock); + rspin_unlock(&_pcidevs_lock); } bool pcidevs_locked(void) diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index c6604aef78..8cf751ad0c 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -358,8 +358,8 @@ struct sched_unit { (v) = (v)->next_in_list ) /* Per-domain lock can be recursively acquired in fault handlers. */ -#define domain_lock(d) spin_lock_recursive(&(d)->domain_lock) -#define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock) +#define domain_lock(d) rspin_lock(&(d)->domain_lock) +#define domain_unlock(d) rspin_unlock(&(d)->domain_lock) struct evtchn_port_ops; diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h index 20d15f34dd..ee536c302c 100644 --- a/xen/include/xen/spinlock.h +++ b/xen/include/xen/spinlock.h @@ -209,9 +209,16 @@ int _spin_is_locked(const spinlock_t *lock); int _spin_trylock(spinlock_t *lock); void _spin_barrier(spinlock_t *lock); -int _spin_trylock_recursive(spinlock_t *lock); -void _spin_lock_recursive(spinlock_t *lock); -void _spin_unlock_recursive(spinlock_t *lock); +/* + * rspin_[un]lock(): Use these forms when the lock can (safely!) be + * reentered recursively on the same CPU. All critical regions that may form + * part of a recursively-nested set must be protected by these forms. 
If there + * are any critical regions that cannot form part of such a set, they can use + * standard spin_[un]lock(). + */ +int rspin_trylock(rspinlock_t *lock); +void rspin_lock(rspinlock_t *lock); +void rspin_unlock(rspinlock_t *lock); #define spin_lock(l) _spin_lock(l) #define spin_lock_cb(l, c, d) _spin_lock_cb(l, c, d) @@ -241,15 +248,4 @@ void _spin_unlock_recursive(spinlock_t *lock); /* Ensure a lock is quiescent between two critical operations. */ #define spin_barrier(l) _spin_barrier(l) -/* - * spin_[un]lock_recursive(): Use these forms when the lock can (safely!) be - * reentered recursively on the same CPU. All critical regions that may form - * part of a recursively-nested set must be protected by these forms. If there - * are any critical regions that cannot form part of such a set, they can use - * standard spin_[un]lock(). - */ -#define spin_trylock_recursive(l) _spin_trylock_recursive(l) -#define spin_lock_recursive(l) _spin_lock_recursive(l) -#define spin_unlock_recursive(l) _spin_unlock_recursive(l) - #endif /* __SPINLOCK_H__ */ From patchwork Tue Dec 12 09:47:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?SsO8cmdlbiBHcm/Dnw==?= X-Patchwork-Id: 13488831 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DC8F4C4167B for ; Tue, 12 Dec 2023 09:48:09 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.652850.1018937 (Exim 4.92) (envelope-from ) id 1rCzMu-0000FC-4j; Tue, 12 Dec 2023 09:48:00 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 652850.1018937; Tue, 
12 Dec 2023 09:48:00 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rCzMu-0000F5-1e; Tue, 12 Dec 2023 09:48:00 +0000 Received: by outflank-mailman (input) for mailman id 652850; Tue, 12 Dec 2023 09:47:58 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rCzMs-0006i7-HX for xen-devel@lists.xenproject.org; Tue, 12 Dec 2023 09:47:58 +0000 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 87085716-98d3-11ee-9b0f-b553b5be7939; Tue, 12 Dec 2023 10:47:56 +0100 (CET) Received: from imap2.dmz-prg2.suse.org (imap2.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:98]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 61B1E224B1; Tue, 12 Dec 2023 09:47:56 +0000 (UTC) Received: from imap2.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap2.dmz-prg2.suse.org (Postfix) with ESMTPS id 23376139E9; Tue, 12 Dec 2023 09:47:56 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap2.dmz-prg2.suse.org with ESMTPSA id /fxYB0wseGXQfgAAn2gu4w (envelope-from ); Tue, 12 Dec 2023 09:47:56 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 87085716-98d3-11ee-9b0f-b553b5be7939 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; 
s=susede1; t=1702374476; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GvkQ+1+sIR2iv06NXkv6xKF/IP+0QdiM6YcayZ0slMo=; b=nVwSG+sNZUZIocD1eWzm7CMBxEElSnhxaVyrV7Y2bz/5OfUJdegD2HkVhXBsQT8r5jtbUl 93TS1a9n6qg7Qk1htapA0uaRQaFeVfTBps5x4VzbMApTw8AExrhWJvk0XpTPHPgD0ndE6J /6aw1YiRtcthPl4s39ZAgTtQxFU7mmw= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374476; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GvkQ+1+sIR2iv06NXkv6xKF/IP+0QdiM6YcayZ0slMo=; b=nVwSG+sNZUZIocD1eWzm7CMBxEElSnhxaVyrV7Y2bz/5OfUJdegD2HkVhXBsQT8r5jtbUl 93TS1a9n6qg7Qk1htapA0uaRQaFeVfTBps5x4VzbMApTw8AExrhWJvk0XpTPHPgD0ndE6J /6aw1YiRtcthPl4s39ZAgTtQxFU7mmw= From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Wei Liu , George Dunlap , Julien Grall , Stefano Stabellini Subject: [PATCH v4 05/12] xen/spinlock: add rspin_[un]lock_irq[save|restore]() Date: Tue, 12 Dec 2023 10:47:18 +0100 Message-Id: <20231212094725.22184-6-jgross@suse.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20231212094725.22184-1-jgross@suse.com> References: <20231212094725.22184-1-jgross@suse.com> MIME-Version: 1.0 X-Rspamd-Server: rspamd1 X-Spamd-Result: default: False [7.29 / 50.00]; RCVD_VIA_SMTP_AUTH(0.00)[]; BAYES_SPAM(5.10)[100.00%]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:98:from]; TO_DN_SOME(0.00)[]; R_MISSING_CHARSET(2.50)[]; BROKEN_CONTENT_TYPE(1.50)[]; RCVD_COUNT_THREE(0.00)[3]; DKIM_TRACE(0.00)[suse.com:+]; MX_GOOD(-0.01)[]; RCPT_COUNT_SEVEN(0.00)[9]; DMARC_POLICY_ALLOW(0.00)[suse.com,quarantine]; DMARC_POLICY_ALLOW_WITH_FAILURES(-0.50)[]; FROM_EQ_ENVFROM(0.00)[]; MIME_TRACE(0.00)[0:+]; ARC_NA(0.00)[]; 
R_SPF_FAIL(0.00)[-all]; R_DKIM_ALLOW(-0.20)[suse.com:s=susede1]; SPAM_FLAG(5.00)[]; FROM_HAS_DN(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_GOOD(-0.10)[text/plain]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; WHITELIST_DMARC(-7.00)[suse.com:D:+]; MID_CONTAINS_FROM(1.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:dkim,suse.com:email]; FUZZY_BLOCKED(0.00)[rspamd.com]; RCVD_TLS_ALL(0.00)[] X-Rspamd-Queue-Id: 61B1E224B1 Authentication-Results: smtp-out1.suse.de; dkim=pass header.d=suse.com header.s=susede1 header.b=nVwSG+sN; dmarc=pass (policy=quarantine) header.from=suse.com; spf=fail (smtp-out1.suse.de: domain of jgross@suse.com does not designate 2a07:de40:b281:104:10:150:64:98 as permitted sender) smtp.mailfrom=jgross@suse.com X-Spamd-Bar: +++++++ Instead of special casing rspin_lock_irqsave() and rspin_unlock_irqrestore() for the console lock, add those functions to spinlock handling and use them where needed. Signed-off-by: Juergen Gross --- V2: - new patch --- xen/arch/x86/traps.c | 14 ++++++++------ xen/common/spinlock.c | 16 ++++++++++++++++ xen/drivers/char/console.c | 18 +----------------- xen/include/xen/console.h | 5 +++-- xen/include/xen/spinlock.h | 7 +++++++ 5 files changed, 35 insertions(+), 25 deletions(-) diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c index 7724306116..21227877b3 100644 --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -647,13 +647,15 @@ void show_stack_overflow(unsigned int cpu, const struct cpu_user_regs *regs) void show_execution_state(const struct cpu_user_regs *regs) { /* Prevent interleaving of output. 
*/ - unsigned long flags = console_lock_recursive_irqsave(); + unsigned long flags; + + rspin_lock_irqsave(&console_lock, flags); show_registers(regs); show_code(regs); show_stack(regs); - console_unlock_recursive_irqrestore(flags); + rspin_unlock_irqrestore(&console_lock, flags); } void cf_check show_execution_state_nonconst(struct cpu_user_regs *regs) @@ -663,7 +665,7 @@ void cf_check show_execution_state_nonconst(struct cpu_user_regs *regs) void vcpu_show_execution_state(struct vcpu *v) { - unsigned long flags = 0; + unsigned long flags; if ( test_bit(_VPF_down, &v->pause_flags) ) { @@ -698,7 +700,7 @@ void vcpu_show_execution_state(struct vcpu *v) #endif /* Prevent interleaving of output. */ - flags = console_lock_recursive_irqsave(); + rspin_lock_irqsave(&console_lock, flags); vcpu_show_registers(v); @@ -708,7 +710,7 @@ void vcpu_show_execution_state(struct vcpu *v) * Stop interleaving prevention: The necessary P2M lookups involve * locking, which has to occur with IRQs enabled. */ - console_unlock_recursive_irqrestore(flags); + rspin_unlock_irqrestore(&console_lock, flags); show_hvm_stack(v, &v->arch.user_regs); } @@ -717,7 +719,7 @@ void vcpu_show_execution_state(struct vcpu *v) if ( guest_kernel_mode(v, &v->arch.user_regs) ) show_guest_stack(v, &v->arch.user_regs); - console_unlock_recursive_irqrestore(flags); + rspin_unlock_irqrestore(&console_lock, flags); } #ifdef CONFIG_HVM diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c index 422a7fb1db..c1a9ba1304 100644 --- a/xen/common/spinlock.c +++ b/xen/common/spinlock.c @@ -475,6 +475,16 @@ void rspin_lock(rspinlock_t *lock) lock->recurse_cnt++; } +unsigned long __rspin_lock_irqsave(rspinlock_t *lock) +{ + unsigned long flags; + + local_irq_save(flags); + rspin_lock(lock); + + return flags; +} + void rspin_unlock(rspinlock_t *lock) { if ( likely(--lock->recurse_cnt == 0) ) @@ -484,6 +494,12 @@ void rspin_unlock(rspinlock_t *lock) } } +void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long 
flags) +{ + rspin_unlock(lock); + local_irq_restore(flags); +} + #ifdef CONFIG_DEBUG_LOCK_PROFILE struct lock_profile_anc { diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c index f6f61dc5a1..1db2bbdb6a 100644 --- a/xen/drivers/char/console.c +++ b/xen/drivers/char/console.c @@ -120,7 +120,7 @@ static int __read_mostly sercon_handle = -1; int8_t __read_mostly opt_console_xen; /* console=xen */ #endif -static DEFINE_RSPINLOCK(console_lock); +DEFINE_RSPINLOCK(console_lock); /* * To control the amount of printing, thresholds are added. @@ -1158,22 +1158,6 @@ void console_end_log_everything(void) atomic_dec(&print_everything); } -unsigned long console_lock_recursive_irqsave(void) -{ - unsigned long flags; - - local_irq_save(flags); - rspin_lock(&console_lock); - - return flags; -} - -void console_unlock_recursive_irqrestore(unsigned long flags) -{ - rspin_unlock(&console_lock); - local_irq_restore(flags); -} - void console_force_unlock(void) { watchdog_disable(); diff --git a/xen/include/xen/console.h b/xen/include/xen/console.h index 68759862e8..583c38f064 100644 --- a/xen/include/xen/console.h +++ b/xen/include/xen/console.h @@ -8,8 +8,11 @@ #define __CONSOLE_H__ #include +#include #include +extern rspinlock_t console_lock; + struct xen_sysctl_readconsole; long read_console_ring(struct xen_sysctl_readconsole *op); @@ -20,8 +23,6 @@ void console_init_postirq(void); void console_endboot(void); int console_has(const char *device); -unsigned long console_lock_recursive_irqsave(void); -void console_unlock_recursive_irqrestore(unsigned long flags); void console_force_unlock(void); void console_start_sync(void); diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h index ee536c302c..05b97c1e03 100644 --- a/xen/include/xen/spinlock.h +++ b/xen/include/xen/spinlock.h @@ -218,7 +218,14 @@ void _spin_barrier(spinlock_t *lock); */ int rspin_trylock(rspinlock_t *lock); void rspin_lock(rspinlock_t *lock); +#define rspin_lock_irqsave(l, f) \ + ({ \ 
+ BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long)); \ + ((f) = __rspin_lock_irqsave(l)); \ + }) +unsigned long __rspin_lock_irqsave(rspinlock_t *lock); void rspin_unlock(rspinlock_t *lock); +void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags); #define spin_lock(l) _spin_lock(l) #define spin_lock_cb(l, c, d) _spin_lock_cb(l, c, d) From patchwork Tue Dec 12 09:47:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?SsO8cmdlbiBHcm/Dnw==?= X-Patchwork-Id: 13488832 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8EA1BC4332F for ; Tue, 12 Dec 2023 09:48:14 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.652854.1018946 (Exim 4.92) (envelope-from ) id 1rCzMz-0000lk-Es; Tue, 12 Dec 2023 09:48:05 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 652854.1018946; Tue, 12 Dec 2023 09:48:05 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rCzMz-0000lX-Ar; Tue, 12 Dec 2023 09:48:05 +0000 Received: by outflank-mailman (input) for mailman id 652854; Tue, 12 Dec 2023 09:48:04 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rCzMx-0006i7-VO for xen-devel@lists.xenproject.org; Tue, 12 Dec 2023 09:48:03 +0000 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 8a5709ba-98d3-11ee-9b0f-b553b5be7939; Tue, 12 
Dec 2023 10:48:02 +0100 (CET) Received: from imap2.dmz-prg2.suse.org (imap2.dmz-prg2.suse.org [10.150.64.98]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id F229E1F74C; Tue, 12 Dec 2023 09:48:01 +0000 (UTC) Received: from imap2.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap2.dmz-prg2.suse.org (Postfix) with ESMTPS id B58C8139E9; Tue, 12 Dec 2023 09:48:01 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap2.dmz-prg2.suse.org with ESMTPSA id DcYlK1EseGXSfgAAn2gu4w (envelope-from ); Tue, 12 Dec 2023 09:48:01 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 8a5709ba-98d3-11ee-9b0f-b553b5be7939 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374482; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ImgxXwya7OlVCWVkU9MisuGH03hg4qJ0KBDJY6FOy/8=; b=egqNoIMtSq3uUlnt7OwPH2OSCEPrReq3lJDG0WY8H8IIlp3ftCWMOwz5zGGQK7qUUz2Wg/ e5ZlVA1M/bd8vHbQRuM7vKgxOcVpwwo4nt33c7fyvU78uIppYm1WfEQj3SBmXnVKBqQxde KR9cvPuPUQiZGDDEuOldP7QE6bOnNUg= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374482; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu, Alejandro Vallejo
Subject: [PATCH v4 06/12] xen/spinlock: make struct lock_profile rspinlock_t aware
Date: Tue, 12 Dec 2023 10:47:19 +0100
Message-Id: <20231212094725.22184-7-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

Struct lock_profile contains a pointer to the spinlock it is associated
with. Prepare support for differing spinlock_t and rspinlock_t types by
adding a type indicator for the pointer. Use the highest bit of the
block_cnt member for this indicator in order not to grow the struct,
penalizing only the slow path with slightly less performant code.
Signed-off-by: Juergen Gross
Acked-by: Alejandro Vallejo
Acked-by: Julien Grall
---
V2:
- new patch
---
 xen/common/spinlock.c      | 26 +++++++++++++++++++-------
 xen/include/xen/spinlock.h | 10 ++++++++--
 2 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index c1a9ba1304..7d611d3d7d 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -538,19 +538,31 @@ static void spinlock_profile_iterate(lock_profile_subfunc *sub, void *par)
 static void cf_check spinlock_profile_print_elem(struct lock_profile *data,
     int32_t type, int32_t idx, void *par)
 {
-    struct spinlock *lock = data->lock;
+    unsigned int cpu;
+    uint32_t lockval;
+
+    if ( data->is_rlock )
+    {
+        cpu = data->rlock->debug.cpu;
+        lockval = data->rlock->tickets.head_tail;
+    }
+    else
+    {
+        cpu = data->lock->debug.cpu;
+        lockval = data->lock->tickets.head_tail;
+    }
 
     printk("%s ", lock_profile_ancs[type].name);
     if ( type != LOCKPROF_TYPE_GLOBAL )
         printk("%d ", idx);
-    printk("%s: addr=%p, lockval=%08x, ", data->name, lock,
-           lock->tickets.head_tail);
-    if ( lock->debug.cpu == SPINLOCK_NO_CPU )
+    printk("%s: addr=%p, lockval=%08x, ", data->name, data->lock, lockval);
+    if ( cpu == SPINLOCK_NO_CPU )
         printk("not locked\n");
     else
-        printk("cpu=%d\n", lock->debug.cpu);
-    printk("  lock:%" PRId64 "(%" PRI_stime "), block:%" PRId64 "(%" PRI_stime ")\n",
-           data->lock_cnt, data->time_hold, data->block_cnt, data->time_block);
+        printk("cpu=%u\n", cpu);
+    printk("  lock:%" PRIu64 "(%" PRI_stime "), block:%" PRIu64 "(%" PRI_stime ")\n",
+           data->lock_cnt, data->time_hold, (uint64_t)data->block_cnt,
+           data->time_block);
 }
 
 void cf_check spinlock_profile_printall(unsigned char key)
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 05b97c1e03..ac3bef267a 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -76,13 +76,19 @@ union lock_debug { };
  */
 
 struct spinlock;
+/* Temporary hack until a dedicated struct rspinlock is existing. */
+#define rspinlock spinlock
 
 struct lock_profile {
     struct lock_profile *next;       /* forward link */
     const char          *name;       /* lock name */
-    struct spinlock     *lock;       /* the lock itself */
+    union {
+        struct spinlock  *lock;      /* the lock itself */
+        struct rspinlock *rlock;     /* the recursive lock itself */
+    };
     uint64_t            lock_cnt;    /* # of complete locking ops */
-    uint64_t            block_cnt;   /* # of complete wait for lock */
+    uint64_t            block_cnt:63; /* # of complete wait for lock */
+    uint64_t            is_rlock:1;  /* use rlock pointer */
     s_time_t            time_hold;   /* cumulated lock time */
     s_time_t            time_block;  /* cumulated wait time */
     s_time_t            time_locked; /* system time of last locking */

From patchwork Tue Dec 12 09:47:20 2023
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Michal Orzel, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
 Jan Beulich, Wei Liu, Roger Pau Monné, Tamas K Lengyel,
 Lukasz Hawrylko, "Daniel P. Smith", Mateusz Mówka
Subject: [PATCH v4 07/12] xen/spinlock: add explicit non-recursive locking functions
Date: Tue, 12 Dec 2023 10:47:20 +0100
Message-Id: <20231212094725.22184-8-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

In order to prepare a type-safe recursive spinlock structure, add
explicitly non-recursive locking functions to be used for non-recursive
locking of spinlocks which are used recursively, too.

Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
V2:
- rename functions (Jan Beulich)
- get rid of !! in pcidevs_locked() (Jan Beulich)
---
 xen/arch/arm/mm.c             |  4 ++--
 xen/arch/x86/domain.c         | 12 ++++++------
 xen/arch/x86/mm.c             | 12 ++++++------
 xen/arch/x86/mm/mem_sharing.c |  8 ++++----
 xen/arch/x86/mm/p2m-pod.c     |  4 ++--
 xen/arch/x86/mm/p2m.c         |  4 ++--
 xen/arch/x86/tboot.c          |  4 ++--
 xen/common/domctl.c           |  4 ++--
 xen/common/grant_table.c      | 10 +++++-----
 xen/common/memory.c           |  4 ++--
 xen/common/numa.c             |  4 ++--
 xen/common/page_alloc.c       | 16 ++++++++--------
 xen/drivers/char/console.c    | 16 ++++++++--------
 xen/include/xen/spinlock.h    | 24 +++++++++++++++++++-----
 14 files changed, 70 insertions(+), 56 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index eeb65ca6bb..7466d12b0c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -105,7 +105,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     if ( page_get_owner(page) == d )
         return;
 
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
    /*
     * The incremented type count pins as writable or read-only.
@@ -136,7 +136,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 }
 
 int xenmem_add_to_physmap_one(
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 69ce1fd5cf..998cb53a58 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -212,7 +212,7 @@ void dump_pageframe_info(struct domain *d)
     {
         unsigned long total[MASK_EXTR(PGT_type_mask, PGT_type_mask) + 1] = {};
 
-        spin_lock(&d->page_alloc_lock);
+        nrspin_lock(&d->page_alloc_lock);
         page_list_for_each ( page, &d->page_list )
         {
             unsigned int index = MASK_EXTR(page->u.inuse.type_info,
@@ -231,13 +231,13 @@ void dump_pageframe_info(struct domain *d)
                    _p(mfn_x(page_to_mfn(page))),
                    page->count_info, page->u.inuse.type_info);
         }
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
     }
 
     if ( is_hvm_domain(d) )
         p2m_pod_dump_data(d);
 
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
     page_list_for_each ( page, &d->xenpage_list )
     {
@@ -253,7 +253,7 @@ void dump_pageframe_info(struct domain *d)
                page->count_info, page->u.inuse.type_info);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 }
 
 void update_guest_memory_policy(struct vcpu *v,
@@ -2446,10 +2446,10 @@ int domain_relinquish_resources(struct domain *d)
             d->arch.auto_unmask = 0;
         }
 
-        spin_lock(&d->page_alloc_lock);
+        nrspin_lock(&d->page_alloc_lock);
         page_list_splice(&d->arch.relmem_list, &d->page_list);
         INIT_PAGE_LIST_HEAD(&d->arch.relmem_list);
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
 
     PROGRESS(xen):
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 0a66db10b9..c35a68fbd5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -482,7 +482,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
 
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
     /* The incremented type count pins as writable or read-only. */
     page->u.inuse.type_info =
@@ -502,7 +502,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 }
 
 void make_cr3(struct vcpu *v, mfn_t mfn)
@@ -3584,11 +3584,11 @@ long do_mmuext_op(
             {
                 bool drop_ref;
 
-                spin_lock(&pg_owner->page_alloc_lock);
+                nrspin_lock(&pg_owner->page_alloc_lock);
                 drop_ref = (pg_owner->is_dying &&
                             test_and_clear_bit(_PGT_pinned,
                                                &page->u.inuse.type_info));
-                spin_unlock(&pg_owner->page_alloc_lock);
+                nrspin_unlock(&pg_owner->page_alloc_lock);
                 if ( drop_ref )
                 {
         pin_drop:
@@ -4411,7 +4411,7 @@ int steal_page(
      * that it might be upon return from alloc_domheap_pages with
      * MEMF_no_owner set.
      */
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
     BUG_ON(page->u.inuse.type_info & (PGT_count_mask | PGT_locked |
                                       PGT_pinned));
@@ -4423,7 +4423,7 @@ int steal_page(
     if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
         drop_dom_ref = true;
 
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 
     if ( unlikely(drop_dom_ref) )
         put_domain(d);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 1720079fd9..fa4e56a4df 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -746,11 +746,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( !get_page(page, dom_cow) )
         return -EINVAL;
 
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
     if ( d->is_dying )
     {
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
         put_page(page);
         return -EBUSY;
     }
@@ -758,7 +758,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
         put_page(page);
         return -EEXIST;
     }
@@ -775,7 +775,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( domain_adjust_tot_pages(d, 1) == 1 )
         get_knownalive_domain(d);
     page_list_add_tail(page, &d->page_list);
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 
     put_page(page);
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 9e5ad68df2..61a91f5a94 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -27,7 +27,7 @@
 static inline void lock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_pre_lock(p2m->domain);
-    spin_lock(&(p2m->domain->page_alloc_lock));
+    nrspin_lock(&(p2m->domain->page_alloc_lock));
     page_alloc_mm_post_lock(p2m->domain,
                             p2m->domain->arch.page_alloc_unlock_level);
 }
@@ -35,7 +35,7 @@ static inline void lock_page_alloc(struct p2m_domain *p2m)
 static inline void unlock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_unlock(p2m->domain->arch.page_alloc_unlock_level);
-    spin_unlock(&(p2m->domain->page_alloc_lock));
+    nrspin_unlock(&(p2m->domain->page_alloc_lock));
 }
 
 /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6eb446e437..f188f09b8e 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2226,7 +2226,7 @@ void audit_p2m(struct domain *d,
 
     /* Audit part two: walk the domain's page allocation list, checking
      * the m2p entries. */
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
@@ -2278,7 +2278,7 @@ void audit_p2m(struct domain *d,
             P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
                        mfn, gfn, mfn_x(p2mfn));
     }
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
     pod_unlock(p2m);
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 86c4c22cac..5b33a1bf9d 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -205,14 +205,14 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
             continue;
         printk("MACing Domain %u\n", d->domain_id);
 
-        spin_lock(&d->page_alloc_lock);
+        nrspin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
             void *pg = __map_domain_page(page);
             vmac_update(pg, PAGE_SIZE, &ctx);
             unmap_domain_page(pg);
         }
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
 
         if ( is_iommu_enabled(d) && is_vtd )
         {
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index f5a71ee5f7..cb62b18a9d 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -621,14 +621,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
         uint64_t new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT - 10);
 
-        spin_lock(&d->page_alloc_lock);
+        nrspin_lock(&d->page_alloc_lock);
 
         /*
          * NB. We removed a check that new_max >= current tot_pages; this means
          * that the domain will now be allowed to "ratchet" down to new_max. In
          * the meantime, while tot > max, all new allocations are disallowed.
          */
         d->max_pages = min(new_max, (uint64_t)(typeof(d->max_pages))-1);
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
         break;
     }
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5721eab225..54163d51ea 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2376,7 +2376,7 @@ gnttab_transfer(
             mfn = page_to_mfn(page);
         }
 
-        spin_lock(&e->page_alloc_lock);
+        nrspin_lock(&e->page_alloc_lock);
 
         /*
          * Check that 'e' will accept the page and has reservation
@@ -2387,7 +2387,7 @@ gnttab_transfer(
              unlikely(domain_tot_pages(e) >= e->max_pages) ||
              unlikely(!(e->tot_pages + 1)) )
         {
-            spin_unlock(&e->page_alloc_lock);
+            nrspin_unlock(&e->page_alloc_lock);
 
             if ( e->is_dying )
                 gdprintk(XENLOG_INFO, "Transferee d%d is dying\n",
@@ -2411,7 +2411,7 @@ gnttab_transfer(
          * safely drop the lock and re-aquire it later to add page to the
          * pagelist.
          */
-        spin_unlock(&e->page_alloc_lock);
+        nrspin_unlock(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
 
         /*
@@ -2427,9 +2427,9 @@ gnttab_transfer(
                  * Need to grab this again to safely free our "reserved"
                  * page in the page total
                  */
-                spin_lock(&e->page_alloc_lock);
+                nrspin_lock(&e->page_alloc_lock);
                 drop_dom_ref = !domain_adjust_tot_pages(e, -1);
-                spin_unlock(&e->page_alloc_lock);
+                nrspin_unlock(&e->page_alloc_lock);
 
                 if ( okay /* i.e. e->is_dying due to the surrounding if() */ )
                     gdprintk(XENLOG_INFO, "Transferee d%d is now dying\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b3b05c2ec0..b4593f5f45 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -770,10 +770,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                              (1UL << in_chunk_order)) -
                             (j * (1UL << exch.out.extent_order)));
 
-            spin_lock(&d->page_alloc_lock);
+            nrspin_lock(&d->page_alloc_lock);
             drop_dom_ref = (dec_count &&
                             !domain_adjust_tot_pages(d, -dec_count));
-            spin_unlock(&d->page_alloc_lock);
+            nrspin_unlock(&d->page_alloc_lock);
 
             if ( drop_dom_ref )
                 put_domain(d);
diff --git a/xen/common/numa.c b/xen/common/numa.c
index f454c4d894..47b1d0b5a8 100644
--- a/xen/common/numa.c
+++ b/xen/common/numa.c
@@ -718,13 +718,13 @@ static void cf_check dump_numa(unsigned char key)
 
         memset(page_num_node, 0, sizeof(page_num_node));
 
-        spin_lock(&d->page_alloc_lock);
+        nrspin_lock(&d->page_alloc_lock);
         page_list_for_each ( page, &d->page_list )
         {
             i = page_to_nid(page);
            page_num_node[i]++;
         }
-        spin_unlock(&d->page_alloc_lock);
+        nrspin_unlock(&d->page_alloc_lock);
 
         for_each_online_node ( i )
             printk("    Node %u: %u\n", i, page_num_node[i]);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 8c6a3d9274..a25c00a7d4 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -515,7 +515,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
      * must always take the global heap_lock rather than only in the much
      * rarer case that d->outstanding_pages is non-zero
      */
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
     spin_lock(&heap_lock);
 
     /* pages==0 means "unset" the claim. */
@@ -561,7 +561,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
 
 out:
     spin_unlock(&heap_lock);
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 
     return ret;
 }
@@ -2343,7 +2343,7 @@ int assign_pages(
     int rc = 0;
     unsigned int i;
 
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
 
     if ( unlikely(d->is_dying) )
     {
@@ -2425,7 +2425,7 @@ int assign_pages(
     }
 
 out:
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
 
     return rc;
 }
@@ -2906,9 +2906,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
     ASSERT_ALLOC_CONTEXT();
 
     /* Acquire a page from reserved page list(resv_page_list). */
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
     page = page_list_remove_head(&d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
     if ( unlikely(!page) )
         return INVALID_MFN;
 
@@ -2927,9 +2927,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
      */
     unprepare_staticmem_pages(page, 1, false);
 fail:
-    spin_lock(&d->page_alloc_lock);
+    nrspin_lock(&d->page_alloc_lock);
     page_list_add_tail(page, &d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    nrspin_unlock(&d->page_alloc_lock);
     return INVALID_MFN;
 }
 #endif
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 1db2bbdb6a..8d05c57f69 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -369,9 +369,9 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 
     if ( op->clear )
     {
-        spin_lock_irq(&console_lock);
+        nrspin_lock_irq(&console_lock);
         conringc = p - c > conring_size ? p - conring_size : c;
-        spin_unlock_irq(&console_lock);
+        nrspin_unlock_irq(&console_lock);
     }
 
     op->count = sofar;
@@ -639,7 +639,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
         if ( is_hardware_domain(cd) )
         {
             /* Use direct console output as it could be interactive */
-            spin_lock_irq(&console_lock);
+            nrspin_lock_irq(&console_lock);
 
             console_serial_puts(kbuf, kcount);
             video_puts(kbuf, kcount);
@@ -660,7 +660,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
                 tasklet_schedule(&notify_dom0_con_ring_tasklet);
             }
 
-            spin_unlock_irq(&console_lock);
+            nrspin_unlock_irq(&console_lock);
         }
         else
         {
@@ -1027,9 +1027,9 @@ void __init console_init_preirq(void)
         pv_console_set_rx_handler(serial_rx);
 
     /* HELLO WORLD --- start-of-day banner text. */
-    spin_lock(&console_lock);
+    nrspin_lock(&console_lock);
     __putstr(xen_banner());
-    spin_unlock(&console_lock);
+    nrspin_unlock(&console_lock);
     printk("Xen version %d.%d%s (%s@%s) (%s) %s %s\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
            xen_compile_by(), xen_compile_domain(), xen_compiler(),
@@ -1066,13 +1066,13 @@ void __init console_init_ring(void)
     }
     opt_conring_size = PAGE_SIZE << order;
 
-    spin_lock_irqsave(&console_lock, flags);
+    nrspin_lock_irqsave(&console_lock, flags);
     for ( i = conringc ; i != conringp; i++ )
         ring[i & (opt_conring_size - 1)] = conring[i & (conring_size - 1)];
     conring = ring;
     smp_wmb(); /* Allow users of console_force_unlock() to see larger buffer. */
     conring_size = opt_conring_size;
-    spin_unlock_irqrestore(&console_lock, flags);
+    nrspin_unlock_irqrestore(&console_lock, flags);
 
     printk("Allocated console ring of %u KiB.\n", opt_conring_size >> 10);
 }
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ac3bef267a..82ef99d3b6 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -101,6 +101,8 @@ struct lock_profile_qhead {
 };
 
 #define _LOCK_PROFILE(lockname) { .name = #lockname, .lock = &lockname, }
+#define _RLOCK_PROFILE(lockname) { .name = #lockname, .rlock = &lockname,     \
+                                   .is_rlock = 1, }
 #define _LOCK_PROFILE_PTR(name)                                               \
     static struct lock_profile * const __lock_profile_##name                  \
     __used_section(".lockprofile.data") =                                     \
@@ -117,10 +119,10 @@ struct lock_profile_qhead {
     _LOCK_PROFILE_PTR(l)
 #define DEFINE_RSPINLOCK(l)                                                   \
     rspinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                \
-    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);    \
+    static struct lock_profile __lock_profile_data_##l = _RLOCK_PROFILE(l);   \
     _LOCK_PROFILE_PTR(l)
 
-#define __spin_lock_init_prof(s, l, locktype)                                 \
+#define __spin_lock_init_prof(s, l, lockptr, locktype, isr)                   \
     do {                                                                      \
         struct lock_profile *prof;                                            \
         prof = xzalloc(struct lock_profile);                                  \
@@ -133,13 +135,16 @@ struct lock_profile_qhead {
             break;                                                            \
         }                                                                     \
         prof->name = #l;                                                      \
-        prof->lock = &(s)->l;                                                 \
+        prof->lockptr = &(s)->l;                                              \
+        prof->is_rlock = isr;                                                 \
         prof->next = (s)->profile_head.elem_q;                                \
         (s)->profile_head.elem_q = prof;                                      \
     } while( 0 )
 
-#define spin_lock_init_prof(s, l) __spin_lock_init_prof(s, l, spinlock_t)
-#define rspin_lock_init_prof(s, l) __spin_lock_init_prof(s, l, rspinlock_t)
+#define spin_lock_init_prof(s, l)                                             \
+    __spin_lock_init_prof(s, l, lock, spinlock_t, 0)
+#define rspin_lock_init_prof(s, l)                                            \
+    __spin_lock_init_prof(s, l, rlock, rspinlock_t, 1)
 
 void _lock_profile_register_struct(
     int32_t type, struct lock_profile_qhead *qhead, int32_t idx);
@@ -174,6 +179,7 @@ struct lock_profile_qhead { };
 
 #endif
 
+
 typedef union {
     uint32_t head_tail;
     struct {
@@ -261,4 +267,12 @@ void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
 /* Ensure a lock is quiescent between two critical operations. */
 #define spin_barrier(l)       _spin_barrier(l)
 
+#define nrspin_trylock(l)    spin_trylock(l)
+#define nrspin_lock(l)       spin_lock(l)
+#define nrspin_unlock(l)     spin_unlock(l)
+#define nrspin_lock_irq(l)   spin_lock_irq(l)
+#define nrspin_unlock_irq(l) spin_unlock_irq(l)
+#define nrspin_lock_irqsave(l, f)      spin_lock_irqsave(l, f)
+#define nrspin_unlock_irqrestore(l, f) spin_unlock_irqrestore(l, f)
+
 #endif /* __SPINLOCK_H__ */

From patchwork Tue Dec 12 09:47:21 2023
v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374493; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JqufYzI2VSdAkZ5M5j8hnnU6lqQ01enR77XYZZhm6gU=; b=RS0mBIQIWvWD3yz04dS6i8xt3FH+xEsh06cP6VAMODVwwM0A4buG9E7aP1qiXJoN1HU4Vh Bgq4m3asfdOoKtk6krfmEdmvaf9YtTElB3baiW9CckHIm70JRFDKNGA7/UUTPF7/YKRF5G R1oPE4RzeogH0pnPLtPMpsYC6EevDs8= From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v4 08/12] xen/spinlock: add another function level Date: Tue, 12 Dec 2023 10:47:21 +0100 Message-Id: <20231212094725.22184-9-jgross@suse.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20231212094725.22184-1-jgross@suse.com> References: <20231212094725.22184-1-jgross@suse.com> MIME-Version: 1.0 X-Spamd-Bar: ++++++ Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.com header.s=susede1 header.b=RS0mBIQI; dmarc=pass (policy=quarantine) header.from=suse.com; spf=fail (smtp-out2.suse.de: domain of jgross@suse.com does not designate 2a07:de40:b281:104:10:150:64:98 as permitted sender) smtp.mailfrom=jgross@suse.com X-Rspamd-Server: rspamd2 X-Spamd-Result: default: False [6.09 / 50.00]; RCVD_VIA_SMTP_AUTH(0.00)[]; BAYES_SPAM(5.10)[99.99%]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:98:from]; TO_DN_SOME(0.00)[]; R_MISSING_CHARSET(2.50)[]; DWL_DNSWL_BLOCKED(0.00)[suse.com:dkim]; BROKEN_CONTENT_TYPE(1.50)[]; RCVD_COUNT_THREE(0.00)[3]; DKIM_TRACE(0.00)[suse.com:+]; MX_GOOD(-0.01)[]; RCPT_COUNT_SEVEN(0.00)[8]; NEURAL_HAM_SHORT(-0.20)[-1.000]; DMARC_POLICY_ALLOW(0.00)[suse.com,quarantine]; DMARC_POLICY_ALLOW_WITH_FAILURES(-0.50)[]; FROM_EQ_ENVFROM(0.00)[]; MIME_TRACE(0.00)[0:+]; ARC_NA(0.00)[]; R_SPF_FAIL(0.00)[-all]; R_DKIM_ALLOW(-0.20)[suse.com:s=susede1]; SPAM_FLAG(5.00)[]; 
Add another function level in spinlock.c hiding the spinlock_t layout
from the low-level locking code.

This is done in preparation for introducing rspinlock_t for recursive
locks without having to duplicate all of the locking code.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/spinlock.c      | 104 +++++++++++++++++++++++--------------
 xen/include/xen/spinlock.h |   1 +
 2 files changed, 65 insertions(+), 40 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 7d611d3d7d..31d12b1006 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -261,29 +261,31 @@ void spin_debug_disable(void)

 #ifdef CONFIG_DEBUG_LOCK_PROFILE

+#define LOCK_PROFILE_PAR lock->profile
 #define LOCK_PROFILE_REL                                                     \
-    if ( lock->profile )                                                     \
+    if ( profile )                                                           \
     {                                                                        \
-        lock->profile->time_hold += NOW() - lock->profile->time_locked;      \
-        lock->profile->lock_cnt++;                                           \
+        profile->time_hold += NOW() - profile->time_locked;                  \
+        profile->lock_cnt++;                                                 \
     }
 #define LOCK_PROFILE_VAR(var, val)    s_time_t var = (val)
 #define LOCK_PROFILE_BLOCK(var)       var = var ? : NOW()
 #define LOCK_PROFILE_BLKACC(tst, val)                                        \
     if ( tst )                                                               \
     {                                                                        \
-        lock->profile->time_block += lock->profile->time_locked - (val);     \
-        lock->profile->block_cnt++;                                          \
+        profile->time_block += profile->time_locked - (val);                 \
+        profile->block_cnt++;                                                \
     }
 #define LOCK_PROFILE_GOT(val)                                                \
-    if ( lock->profile )                                                     \
+    if ( profile )                                                           \
     {                                                                        \
-        lock->profile->time_locked = NOW();                                  \
+        profile->time_locked = NOW();                                        \
         LOCK_PROFILE_BLKACC(val, val);                                       \
     }

 #else

+#define LOCK_PROFILE_PAR NULL
 #define LOCK_PROFILE_REL
 #define LOCK_PROFILE_VAR(var, val)
 #define LOCK_PROFILE_BLOCK(var)
@@ -307,17 +309,18 @@ static always_inline uint16_t observe_head(const spinlock_tickets_t *t)
     return read_atomic(&t->head);
 }

-static void always_inline spin_lock_common(spinlock_t *lock,
+static void always_inline spin_lock_common(spinlock_tickets_t *t,
+                                           union lock_debug *debug,
+                                           struct lock_profile *profile,
                                            void (*cb)(void *data), void *data)
 {
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR(block, 0);

-    check_lock(&lock->debug, false);
+    check_lock(debug, false);
     preempt_disable();
-    tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
-                                           tickets.head_tail);
-    while ( tickets.tail != observe_head(&lock->tickets) )
+    tickets.head_tail = arch_fetch_and_add(&t->head_tail, tickets.head_tail);
+    while ( tickets.tail != observe_head(t) )
     {
         LOCK_PROFILE_BLOCK(block);
         if ( cb )
@@ -325,18 +328,19 @@ static void always_inline spin_lock_common(spinlock_t *lock,
         arch_lock_relax();
     }
     arch_lock_acquire_barrier();
-    got_lock(&lock->debug);
+    got_lock(debug);
     LOCK_PROFILE_GOT(block);
 }

 void _spin_lock(spinlock_t *lock)
 {
-    spin_lock_common(lock, NULL, NULL);
+    spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, NULL,
+                     NULL);
 }

 void _spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data)
 {
-    spin_lock_common(lock, cb, data);
+    spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, cb, data);
 }

 void _spin_lock_irq(spinlock_t *lock)
@@ -355,16 +359,23 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
     return flags;
 }

-void _spin_unlock(spinlock_t *lock)
+static void always_inline spin_unlock_common(spinlock_tickets_t *t,
+                                             union lock_debug *debug,
+                                             struct lock_profile *profile)
 {
     LOCK_PROFILE_REL;
-    rel_lock(&lock->debug);
+    rel_lock(debug);
     arch_lock_release_barrier();
-    add_sized(&lock->tickets.head, 1);
+    add_sized(&t->head, 1);
     arch_lock_signal();
     preempt_enable();
 }

+void _spin_unlock(spinlock_t *lock)
+{
+    spin_unlock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
 void _spin_unlock_irq(spinlock_t *lock)
 {
     _spin_unlock(lock);
@@ -377,25 +388,25 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }

+static int always_inline spin_is_locked_common(const spinlock_tickets_t *t)
+{
+    return t->head != t->tail;
+}
+
 int _spin_is_locked(const spinlock_t *lock)
 {
-    /*
-     * Recursive locks may be locked by another CPU, yet we return
-     * "false" here, making this function suitable only for use in
-     * ASSERT()s and alike.
-     */
-    return lock->recurse_cpu == SPINLOCK_NO_CPU
-           ? lock->tickets.head != lock->tickets.tail
-           : lock->recurse_cpu == smp_processor_id();
+    return spin_is_locked_common(&lock->tickets);
 }

-int _spin_trylock(spinlock_t *lock)
+static int always_inline spin_trylock_common(spinlock_tickets_t *t,
+                                             union lock_debug *debug,
+                                             struct lock_profile *profile)
 {
     spinlock_tickets_t old, new;

     preempt_disable();
-    check_lock(&lock->debug, true);
-    old = observe_lock(&lock->tickets);
+    check_lock(debug, true);
+    old = observe_lock(t);
     if ( old.head != old.tail )
     {
         preempt_enable();
@@ -403,8 +414,7 @@ int _spin_trylock(spinlock_t *lock)
     }
     new = old;
     new.tail++;
-    if ( cmpxchg(&lock->tickets.head_tail,
-                 old.head_tail, new.head_tail) != old.head_tail )
+    if ( cmpxchg(&t->head_tail, old.head_tail, new.head_tail) != old.head_tail )
     {
         preempt_enable();
         return 0;
@@ -413,29 +423,41 @@ int _spin_trylock(spinlock_t *lock)
      * cmpxchg() is a full barrier so no need for an
      * arch_lock_acquire_barrier().
      */
-    got_lock(&lock->debug);
+    got_lock(debug);
     LOCK_PROFILE_GOT(0);

     return 1;
 }

-void _spin_barrier(spinlock_t *lock)
+int _spin_trylock(spinlock_t *lock)
+{
+    return spin_trylock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
+static void always_inline spin_barrier_common(spinlock_tickets_t *t,
+                                              union lock_debug *debug,
+                                              struct lock_profile *profile)
 {
     spinlock_tickets_t sample;
     LOCK_PROFILE_VAR(block, NOW());

-    check_barrier(&lock->debug);
+    check_barrier(debug);
     smp_mb();
-    sample = observe_lock(&lock->tickets);
+    sample = observe_lock(t);
     if ( sample.head != sample.tail )
     {
-        while ( observe_head(&lock->tickets) == sample.head )
+        while ( observe_head(t) == sample.head )
             arch_lock_relax();
-        LOCK_PROFILE_BLKACC(lock->profile, block);
+        LOCK_PROFILE_BLKACC(profile, block);
     }
     smp_mb();
 }

+void _spin_barrier(spinlock_t *lock)
+{
+    spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
 int rspin_trylock(rspinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
@@ -448,7 +470,8 @@ int rspin_trylock(rspinlock_t *lock)

     if ( likely(lock->recurse_cpu != cpu) )
     {
-        if ( !spin_trylock(lock) )
+        if ( !spin_trylock_common(&lock->tickets, &lock->debug,
+                                  LOCK_PROFILE_PAR) )
             return 0;
         lock->recurse_cpu = cpu;
     }
@@ -466,7 +489,8 @@ void rspin_lock(rspinlock_t *lock)

     if ( likely(lock->recurse_cpu != cpu) )
     {
-        _spin_lock(lock);
+        spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, NULL,
+                         NULL);
         lock->recurse_cpu = cpu;
     }

@@ -490,7 +514,7 @@ void rspin_unlock(rspinlock_t *lock)
     if ( likely(--lock->recurse_cnt == 0) )
     {
         lock->recurse_cpu = SPINLOCK_NO_CPU;
-        spin_unlock(lock);
+        spin_unlock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
     }
 }

diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 82ef99d3b6..d6f4b66613 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -163,6 +163,7 @@ extern void cf_check spinlock_profile_reset(unsigned char key);
 #else

 struct lock_profile_qhead { };
+struct lock_profile { };

 #define SPIN_LOCK_UNLOCKED {                                                  \
     .recurse_cpu = SPINLOCK_NO_CPU,                                           \
From patchwork Tue Dec 12 09:47:22 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, George Dunlap, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Paul Durrant
Subject: [PATCH v4 09/12] xen/spinlock: add missing rspin_is_locked() and rspin_barrier()
Date: Tue, 12 Dec 2023 10:47:22 +0100
Message-Id: <20231212094725.22184-10-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

Add rspin_is_locked() and rspin_barrier() in order to prepare for
differing spinlock_t and rspinlock_t types.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- partially carved out from V1 patch, partially new
---
 xen/arch/x86/mm/p2m-pod.c     |  2 +-
 xen/common/domain.c           |  2 +-
 xen/common/page_alloc.c       |  2 +-
 xen/common/spinlock.c         | 17 +++++++++++++++++
 xen/drivers/char/console.c    |  4 ++--
 xen/drivers/passthrough/pci.c |  2 +-
 xen/include/xen/spinlock.h    |  2 ++
 7 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 61a91f5a94..40d3b25d25 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -385,7 +385,7 @@ int p2m_pod_empty_cache(struct domain *d)

     /* After this barrier no new PoD activities can happen. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&p2m->pod.lock.lock);
+    rspin_barrier(&p2m->pod.lock.lock);

     lock_page_alloc(p2m);

diff --git a/xen/common/domain.c b/xen/common/domain.c
index dc97755391..198cb36878 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -982,7 +982,7 @@ int domain_kill(struct domain *d)
     case DOMDYING_alive:
         domain_pause(d);
         d->is_dying = DOMDYING_dying;
-        spin_barrier(&d->domain_lock);
+        rspin_barrier(&d->domain_lock);
         argo_destroy(d);
         vnuma_destroy(d->vnuma);
         domain_set_outstanding_pages(d, 0);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index a25c00a7d4..14010b6fa5 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -476,7 +476,7 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;

-    ASSERT(spin_is_locked(&d->page_alloc_lock));
+    ASSERT(rspin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;

     /*
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 31d12b1006..91e325f3fe 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -458,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
     spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
 }

+int rspin_is_locked(const rspinlock_t *lock)
+{
+    /*
+     * Recursive locks may be locked by another CPU, yet we return
+     * "false" here, making this function suitable only for use in
+     * ASSERT()s and alike.
+     */
+    return lock->recurse_cpu == SPINLOCK_NO_CPU
+           ? spin_is_locked_common(&lock->tickets)
+           : lock->recurse_cpu == smp_processor_id();
+}
+
+void rspin_barrier(rspinlock_t *lock)
+{
+    spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
 int rspin_trylock(rspinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 8d05c57f69..e6502641eb 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -328,7 +328,7 @@ static void cf_check do_dec_thresh(unsigned char key, struct cpu_user_regs *regs

 static void conring_puts(const char *str, size_t len)
 {
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(rspin_is_locked(&console_lock));

     while ( len-- )
         conring[CONRING_IDX_MASK(conringp++)] = *str++;
@@ -766,7 +766,7 @@ static void __putstr(const char *str)
 {
     size_t len = strlen(str);

-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(rspin_is_locked(&console_lock));

     console_serial_puts(str, len);
     video_puts(str, len);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 41444f8e2e..94f52b7acc 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -64,7 +64,7 @@ void pcidevs_unlock(void)

 bool pcidevs_locked(void)
 {
-    return !!spin_is_locked(&_pcidevs_lock);
+    return rspin_is_locked(&_pcidevs_lock);
 }

 static struct radix_tree_root pci_segments;
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index d6f4b66613..e63db4eb4c 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -239,6 +239,8 @@ void rspin_lock(rspinlock_t *lock);
 unsigned long __rspin_lock_irqsave(rspinlock_t *lock);
 void rspin_unlock(rspinlock_t *lock);
 void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
+int rspin_is_locked(const rspinlock_t *lock);
+void rspin_barrier(rspinlock_t *lock);

 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
From patchwork Tue Dec 12 09:47:23 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 10/12] xen/spinlock: split recursive spinlocks from normal ones
Date: Tue, 12 Dec 2023 10:47:23 +0100
Message-Id: <20231212094725.22184-11-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

Recursive and normal spinlocks are sharing the same data structure for
representation of the lock. This has two major disadvantages:

- it is not clear from the definition of a lock whether it is intended
  to be used recursively or not, while a mixture of both usage variants
  needs to be supported

- in production builds (builds without CONFIG_DEBUG_LOCKS) the needed
  data size of an ordinary spinlock is 8 bytes instead of 4, due to the
  additional recursion data needed (associated with that, the rwlock
  data is using 12 instead of only 8 bytes)

Fix that by introducing a struct spinlock_recursive for recursive
spinlocks only, and switch recursive spinlock functions to require
pointers to this new struct. This allows the correct usage to be
checked at build time.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use shorter names (Jan Beulich)
- don't embed spinlock_t in rspinlock_t (Jan Beulich)
---
 xen/common/spinlock.c      | 49 ++++++++++++++++++++++++++++++++
 xen/include/xen/spinlock.h | 58 +++++++++++++++++++++++++-------------
 2 files changed, 88 insertions(+), 19 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 91e325f3fe..d0f8393504 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -541,6 +541,55 @@ void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }

+int nrspin_trylock(rspinlock_t *lock)
+{
+    check_lock(&lock->debug, true);
+
+    if ( unlikely(lock->recurse_cpu != SPINLOCK_NO_CPU) )
+        return 0;
+
+    return spin_trylock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
+void nrspin_lock(rspinlock_t *lock)
+{
+    spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, NULL,
+                     NULL);
+}
+
+void nrspin_unlock(rspinlock_t *lock)
+{
+    spin_unlock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
+void nrspin_lock_irq(rspinlock_t *lock)
+{
+    ASSERT(local_irq_is_enabled());
+    local_irq_disable();
+    nrspin_lock(lock);
+}
+
+void nrspin_unlock_irq(rspinlock_t *lock)
+{
+    nrspin_unlock(lock);
+    local_irq_enable();
+}
+
+unsigned long __nrspin_lock_irqsave(rspinlock_t *lock)
+{
+    unsigned long flags;
+
+    local_irq_save(flags);
+    nrspin_lock(lock);
+
+    return flags;
+}
+
+void nrspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags)
+{
+    nrspin_unlock(lock);
+    local_irq_restore(flags);
+}
+
 #ifdef CONFIG_DEBUG_LOCK_PROFILE

 struct lock_profile_anc {
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index e63db4eb4c..ca18b9250a 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -76,8 +76,6 @@ union lock_debug { };
  */

 struct spinlock;
-/* Temporary hack until a dedicated struct rspinlock is existing. */
-#define rspinlock spinlock

 struct lock_profile {
     struct lock_profile *next;       /* forward link */
@@ -108,6 +106,10 @@ struct lock_profile_qhead {
         __used_section(".lockprofile.data") =                                 \
         &__lock_profile_data_##name
 #define _SPIN_LOCK_UNLOCKED(x) {                                              \
+    .debug =_LOCK_DEBUG,                                                      \
+    .profile = x,                                                             \
+}
+#define _RSPIN_LOCK_UNLOCKED(x) {                                             \
     .recurse_cpu = SPINLOCK_NO_CPU,                                           \
     .debug =_LOCK_DEBUG,                                                      \
     .profile = x,                                                             \
@@ -117,8 +119,9 @@ struct lock_profile_qhead {
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                 \
     static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);    \
     _LOCK_PROFILE_PTR(l)
+#define RSPIN_LOCK_UNLOCKED _RSPIN_LOCK_UNLOCKED(NULL)
 #define DEFINE_RSPINLOCK(l)                                                   \
-    rspinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                \
+    rspinlock_t l = _RSPIN_LOCK_UNLOCKED(NULL);                               \
     static struct lock_profile __lock_profile_data_##l = _RLOCK_PROFILE(l);   \
     _LOCK_PROFILE_PTR(l)

@@ -143,8 +146,11 @@ struct lock_profile_qhead {
 #define spin_lock_init_prof(s, l)                                             \
     __spin_lock_init_prof(s, l, lock, spinlock_t, 0)
-#define rspin_lock_init_prof(s, l)                                            \
-    __spin_lock_init_prof(s, l, rlock, rspinlock_t, 1)
+#define rspin_lock_init_prof(s, l) do {                                       \
+        __spin_lock_init_prof(s, l, rlock, rspinlock_t, 1);                   \
+        (s)->l.recurse_cpu = SPINLOCK_NO_CPU;                                 \
+        (s)->l.recurse_cnt = 0;                                               \
+    } while (0)

 void _lock_profile_register_struct(
     int32_t type, struct lock_profile_qhead *qhead, int32_t idx);
@@ -166,11 +172,15 @@ struct lock_profile_qhead { };
 struct lock_profile { };

 #define SPIN_LOCK_UNLOCKED {                                                  \
+    .debug =_LOCK_DEBUG,                                                      \
+}
+#define RSPIN_LOCK_UNLOCKED {                                                 \
+    .debug =_LOCK_DEBUG,                                                      \
     .recurse_cpu = SPINLOCK_NO_CPU,                                           \
     .debug =_LOCK_DEBUG,                                                      \
 }
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
-#define DEFINE_RSPINLOCK(l) rspinlock_t l = SPIN_LOCK_UNLOCKED
+#define DEFINE_RSPINLOCK(l) rspinlock_t l = RSPIN_LOCK_UNLOCKED

 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
 #define rspin_lock_init_prof(s, l) rspin_lock_init(&((s)->l))
@@ -180,7 +190,6 @@ struct lock_profile { };

 #endif

-
 typedef union {
     uint32_t head_tail;
     struct {
@@ -192,6 +201,14 @@ typedef union {
 #define SPINLOCK_TICKET_INC { .head_tail = 0x10000, }

 typedef struct spinlock {
+    spinlock_tickets_t tickets;
+    union lock_debug debug;
+#ifdef CONFIG_DEBUG_LOCK_PROFILE
+    struct lock_profile *profile;
+#endif
+} spinlock_t;
+
+typedef struct rspinlock {
     spinlock_tickets_t tickets;
     uint16_t recurse_cpu:SPINLOCK_CPU_BITS;
 #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
@@ -202,12 +219,10 @@ typedef struct spinlock {
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     struct lock_profile *profile;
 #endif
-} spinlock_t;
-
-typedef spinlock_t rspinlock_t;
+} rspinlock_t;

 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
-#define rspin_lock_init(l) (*(l) = (rspinlock_t)SPIN_LOCK_UNLOCKED)
+#define rspin_lock_init(l) (*(l) = (rspinlock_t)RSPIN_LOCK_UNLOCKED)

 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data);
@@ -242,6 +257,19 @@ void rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
 int rspin_is_locked(const rspinlock_t *lock);
 void rspin_barrier(rspinlock_t *lock);

+int nrspin_trylock(rspinlock_t *lock);
+void nrspin_lock(rspinlock_t *lock);
+void nrspin_unlock(rspinlock_t *lock);
+void nrspin_lock_irq(rspinlock_t *lock);
+void nrspin_unlock_irq(rspinlock_t *lock);
+#define nrspin_lock_irqsave(l, f)                                             \
+    ({                                                                        \
+        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long));                     \
+        ((f) = __nrspin_lock_irqsave(l));                                     \
+    })
+unsigned long __nrspin_lock_irqsave(rspinlock_t *lock);
+void nrspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
+
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
 #define spin_lock_irq(l)              _spin_lock_irq(l)
@@ -270,12 +298,4 @@ void rspin_barrier(rspinlock_t *lock);
 /* Ensure a lock is quiescent between two critical operations. */
 #define spin_barrier(l)               _spin_barrier(l)

-#define nrspin_trylock(l)    spin_trylock(l)
-#define nrspin_lock(l)       spin_lock(l)
-#define nrspin_unlock(l)     spin_unlock(l)
-#define nrspin_lock_irq(l)   spin_lock_irq(l)
-#define nrspin_unlock_irq(l) spin_unlock_irq(l)
-#define nrspin_lock_irqsave(l, f)      spin_lock_irqsave(l, f)
-#define nrspin_unlock_irqrestore(l, f) spin_unlock_irqrestore(l, f)
-
 #endif /* __SPINLOCK_H__ */
(Halon) with ESMTPS id 9b26cd42-98d3-11ee-9b0f-b553b5be7939; Tue, 12 Dec 2023 10:48:30 +0100 (CET) Received: from imap2.dmz-prg2.suse.org (imap2.dmz-prg2.suse.org [10.150.64.98]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 2B6511F74C; Tue, 12 Dec 2023 09:48:30 +0000 (UTC) Received: from imap2.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap2.dmz-prg2.suse.org (Postfix) with ESMTPS id E5F36139E9; Tue, 12 Dec 2023 09:48:29 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap2.dmz-prg2.suse.org with ESMTPSA id GlrjNm0seGX+fgAAn2gu4w (envelope-from ); Tue, 12 Dec 2023 09:48:29 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9b26cd42-98d3-11ee-9b0f-b553b5be7939 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374510; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=X9mtlOEEzRw0edEwEif+vRv2D1U88AlwXf4iThb3cgE=; b=j16YnGSd8HmcxmMNaY9AbaLdu2gwwGtPOeiAo3vlqhr2TN6EJVeWVMfG6CfxOooQ47VS1a 1/37ZRLk5Hi/YZxZO2ykAAE9sOTX8JXldRsMBiaqeMXpGO4ldLURJQTqXkpFGtoPFIHa5n 5g5tfAHahqu+Ig6LjgfAQ2Ukb+uyVoQ= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1702374510; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: 
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 11/12] xen/spinlock: remove indirection through macros for spin_*() functions
Date: Tue, 12 Dec 2023 10:47:24 +0100
Message-Id: <20231212094725.22184-12-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

In reality all spin_*() functions are macros which just forward to a
related real function. Remove this macro layer, as it adds complexity
without any gain.
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
V2:
- new patch
---
 xen/common/spinlock.c      | 28 +++++++++---------
 xen/include/xen/spinlock.h | 58 +++++++++++++++-----------------------
 2 files changed, 36 insertions(+), 50 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index d0f8393504..296bcf33e6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -332,30 +332,30 @@ static void always_inline spin_lock_common(spinlock_tickets_t *t,
     LOCK_PROFILE_GOT(block);
 }
 
-void _spin_lock(spinlock_t *lock)
+void spin_lock(spinlock_t *lock)
 {
     spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, NULL,
                      NULL);
 }
 
-void _spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data)
+void spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data)
 {
     spin_lock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR, cb,
                      data);
 }
 
-void _spin_lock_irq(spinlock_t *lock)
+void spin_lock_irq(spinlock_t *lock)
 {
     ASSERT(local_irq_is_enabled());
     local_irq_disable();
-    _spin_lock(lock);
+    spin_lock(lock);
 }
 
-unsigned long _spin_lock_irqsave(spinlock_t *lock)
+unsigned long __spin_lock_irqsave(spinlock_t *lock)
 {
     unsigned long flags;
 
     local_irq_save(flags);
-    _spin_lock(lock);
+    spin_lock(lock);
 
     return flags;
 }
 
@@ -371,20 +371,20 @@ static void always_inline spin_unlock_common(spinlock_tickets_t *t,
     preempt_enable();
 }
 
-void _spin_unlock(spinlock_t *lock)
+void spin_unlock(spinlock_t *lock)
 {
     spin_unlock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
 }
 
-void _spin_unlock_irq(spinlock_t *lock)
+void spin_unlock_irq(spinlock_t *lock)
 {
-    _spin_unlock(lock);
+    spin_unlock(lock);
     local_irq_enable();
 }
 
-void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
+void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 {
-    _spin_unlock(lock);
+    spin_unlock(lock);
     local_irq_restore(flags);
 }
 
@@ -393,7 +393,7 @@ static int always_inline spin_is_locked_common(const spinlock_tickets_t *t)
     return t->head != t->tail;
 }
 
-int _spin_is_locked(const spinlock_t *lock)
+int spin_is_locked(const spinlock_t *lock)
 {
     return spin_is_locked_common(&lock->tickets);
 }
@@ -429,7 +429,7 @@ static int always_inline spin_trylock_common(spinlock_tickets_t *t,
     return 1;
 }
 
-int _spin_trylock(spinlock_t *lock)
+int spin_trylock(spinlock_t *lock)
 {
     return spin_trylock_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
 }
@@ -453,7 +453,7 @@ static void always_inline spin_barrier_common(spinlock_tickets_t *t,
     smp_mb();
 }
 
-void _spin_barrier(spinlock_t *lock)
+void spin_barrier(spinlock_t *lock)
 {
     spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
 }
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca18b9250a..87946965b2 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -224,18 +224,30 @@ typedef struct rspinlock {
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
 #define rspin_lock_init(l) (*(l) = (rspinlock_t)RSPIN_LOCK_UNLOCKED)
 
-void _spin_lock(spinlock_t *lock);
-void _spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data);
-void _spin_lock_irq(spinlock_t *lock);
-unsigned long _spin_lock_irqsave(spinlock_t *lock);
+void spin_lock(spinlock_t *lock);
+void spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data);
+void spin_lock_irq(spinlock_t *lock);
+#define spin_lock_irqsave(l, f)                             \
+    ({                                                      \
+        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long));   \
+        ((f) = __spin_lock_irqsave(l));                     \
+    })
+unsigned long __spin_lock_irqsave(spinlock_t *lock);
 
-void _spin_unlock(spinlock_t *lock);
-void _spin_unlock_irq(spinlock_t *lock);
-void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags);
+void spin_unlock(spinlock_t *lock);
+void spin_unlock_irq(spinlock_t *lock);
+void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags);
 
-int _spin_is_locked(const spinlock_t *lock);
-int _spin_trylock(spinlock_t *lock);
-void _spin_barrier(spinlock_t *lock);
+int spin_is_locked(const spinlock_t *lock);
+int spin_trylock(spinlock_t *lock);
+#define spin_trylock_irqsave(lock, flags)               \
+({                                                      \
+    local_irq_save(flags);                              \
+    spin_trylock(lock) ?                                \
+    1 : ({ local_irq_restore(flags); 0; });             \
+})
+/* Ensure a lock is quiescent between two critical operations. */
+void spin_barrier(spinlock_t *lock);
 
 /*
  * rspin_[un]lock(): Use these forms when the lock can (safely!) be
@@ -270,32 +282,6 @@ void nrspin_unlock_irq(rspinlock_t *lock);
 unsigned long __nrspin_lock_irqsave(rspinlock_t *lock);
 void nrspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
 
-#define spin_lock(l) _spin_lock(l)
-#define spin_lock_cb(l, c, d) _spin_lock_cb(l, c, d)
-#define spin_lock_irq(l) _spin_lock_irq(l)
-#define spin_lock_irqsave(l, f)                             \
-    ({                                                      \
-        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long));   \
-        ((f) = _spin_lock_irqsave(l));                      \
-    })
-
-#define spin_unlock(l) _spin_unlock(l)
-#define spin_unlock_irq(l) _spin_unlock_irq(l)
-#define spin_unlock_irqrestore(l, f) _spin_unlock_irqrestore(l, f)
-
-#define spin_is_locked(l) _spin_is_locked(l)
-#define spin_trylock(l) _spin_trylock(l)
-
-#define spin_trylock_irqsave(lock, flags)               \
-({                                                      \
-    local_irq_save(flags);                              \
-    spin_trylock(lock) ?                                \
-    1 : ({ local_irq_restore(flags); 0; });             \
-})
-
 #define spin_lock_kick(l) arch_lock_signal_wmb()
 
-/* Ensure a lock is quiescent between two critical operations. */
-#define spin_barrier(l) _spin_barrier(l)
-
 #endif /* __SPINLOCK_H__ */

From patchwork Tue Dec 12 09:47:25 2023
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 12/12] xen/spinlock: support higher number of cpus
Date: Tue, 12 Dec 2023 10:47:25 +0100
Message-Id: <20231212094725.22184-13-jgross@suse.com>
In-Reply-To: <20231212094725.22184-1-jgross@suse.com>
References: <20231212094725.22184-1-jgross@suse.com>

Allow 16 bits per cpu number, which is the limit imposed by
spinlock_tickets_t.
This will allow up to 65535 cpus, while increasing only the size of
recursive spinlocks in debug builds from 8 to 12 bytes.

Signed-off-by: Juergen Gross
---
 xen/common/spinlock.c      |  1 +
 xen/include/xen/spinlock.h | 18 +++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 296bcf33e6..ae7c7c2086 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -481,6 +481,7 @@ int rspin_trylock(rspinlock_t *lock)
 
     /* Don't allow overflow of recurse_cpu field. */
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
+    BUILD_BUG_ON(SPINLOCK_CPU_BITS > sizeof(lock->recurse_cpu) * 8);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
     check_lock(&lock->debug, true);
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 87946965b2..d720778cc1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -7,16 +7,16 @@
 #include
 #include
 
-#define SPINLOCK_CPU_BITS  12
+#define SPINLOCK_CPU_BITS  16
 
 #ifdef CONFIG_DEBUG_LOCKS
 union lock_debug {
-    uint16_t val;
-#define LOCK_DEBUG_INITVAL 0xffff
+    uint32_t val;
+#define LOCK_DEBUG_INITVAL 0xffffffff
     struct {
-        uint16_t cpu:SPINLOCK_CPU_BITS;
-#define LOCK_DEBUG_PAD_BITS (14 - SPINLOCK_CPU_BITS)
-        uint16_t :LOCK_DEBUG_PAD_BITS;
+        uint32_t cpu:SPINLOCK_CPU_BITS;
+#define LOCK_DEBUG_PAD_BITS (30 - SPINLOCK_CPU_BITS)
+        uint32_t :LOCK_DEBUG_PAD_BITS;
         bool irq_safe:1;
         bool unseen:1;
     };
@@ -210,10 +210,10 @@ typedef struct spinlock {
 
 typedef struct rspinlock {
     spinlock_tickets_t tickets;
-    uint16_t recurse_cpu:SPINLOCK_CPU_BITS;
+    uint16_t recurse_cpu;
 #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
-    uint16_t recurse_cnt:SPINLOCK_RECURSE_BITS;
+#define SPINLOCK_RECURSE_BITS  8
+    uint8_t recurse_cnt;
 #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
     union lock_debug debug;
 #ifdef CONFIG_DEBUG_LOCK_PROFILE