From patchwork Fri Dec  2 18:36:37 2022
X-Patchwork-Submitter: Kristen Carlson Accardi
X-Patchwork-Id: 13063096
From: Kristen Carlson Accardi
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
 linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
 cgroups@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Cc: zhiquan1.li@intel.com, Kristen Carlson Accardi, Sean Christopherson
Subject: [PATCH v2 01/18] x86/sgx: Call cond_resched() at the end of
 sgx_reclaim_pages()
Date: Fri, 2 Dec 2022 10:36:37 -0800
Message-Id: <20221202183655.3767674-2-kristen@linux.intel.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221202183655.3767674-1-kristen@linux.intel.com>
References: <20221202183655.3767674-1-kristen@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

From: Sean Christopherson

In order to avoid repetition of cond_resched() in ksgxd() and
sgx_alloc_epc_page(), move the invocation of post-reclaim cond_resched()
inside sgx_reclaim_pages(). Except in the case of sgx_reclaim_direct(),
sgx_reclaim_pages() is always called in a loop and is always followed
by a call to cond_resched(). This will hold true for the EPC cgroup as
well, which adds even more calls to sgx_reclaim_pages() and thus
cond_resched(). Calls to sgx_reclaim_direct() may be performance
sensitive.
Allow sgx_reclaim_direct() to avoid the cond_resched() call by moving
the original sgx_reclaim_pages() call to __sgx_reclaim_pages() and then
have sgx_reclaim_pages() become a wrapper around that call with a
cond_resched().

Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Cc: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/main.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 160c8dbee0ab..ffce6fc70a1f 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -287,7 +287,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
  * problematic as it would increase the lock contention too much, which would
  * halt forward progress.
  */
-static void sgx_reclaim_pages(void)
+static void __sgx_reclaim_pages(void)
 {
 	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
 	struct sgx_backing backing[SGX_NR_TO_SCAN];
@@ -369,6 +369,12 @@ static void sgx_reclaim_pages(void)
 	}
 }
 
+static void sgx_reclaim_pages(void)
+{
+	__sgx_reclaim_pages();
+	cond_resched();
+}
+
 static bool sgx_should_reclaim(unsigned long watermark)
 {
 	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
@@ -378,12 +384,14 @@ static bool sgx_should_reclaim(unsigned long watermark)
 /*
  * sgx_reclaim_direct() should be called (without enclave's mutex held)
  * in locations where SGX memory resources might be low and might be
- * needed in order to make forward progress.
+ * needed in order to make forward progress. This call to
+ * __sgx_reclaim_pages() avoids the cond_resched() in sgx_reclaim_pages()
+ * to improve performance.
  */
 void sgx_reclaim_direct(void)
 {
 	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
-		sgx_reclaim_pages();
+		__sgx_reclaim_pages();
 }
 
 static int ksgxd(void *p)
@@ -410,8 +418,6 @@ static int ksgxd(void *p)
 
 		if (sgx_should_reclaim(SGX_NR_HIGH_PAGES))
 			sgx_reclaim_pages();
-
-		cond_resched();
 	}
 
 	return 0;
@@ -582,7 +588,6 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 	}
 
 	sgx_reclaim_pages();
-	cond_resched();
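
For readers less familiar with the pattern, below is a minimal standalone
sketch of the refactoring this patch performs. All names here
(reclaim_one_batch, yield_cpu, reclaim_pages, reclaim_direct) are
hypothetical stand-ins for illustration only; they are not the kernel
functions, and the real cond_resched() semantics are stubbed out.

	/*
	 * Illustrative sketch of the wrapper pattern used by this patch.
	 * reclaim_one_batch() stands in for __sgx_reclaim_pages(), and
	 * yield_cpu() stands in for cond_resched(). Not kernel code.
	 */
	#include <stdio.h>

	/* The real reclaim work; was the body of sgx_reclaim_pages(). */
	static void reclaim_one_batch(void)
	{
		puts("reclaimed a batch of EPC pages");
	}

	/* Stand-in for cond_resched(): give the scheduler a chance to run. */
	static void yield_cpu(void)
	{
		puts("yielded to the scheduler");
	}

	/*
	 * Wrapper: looping callers (the ksgxd() and sgx_alloc_epc_page()
	 * analogues) call this and pick up the post-reclaim yield
	 * automatically, so the yield no longer has to be repeated at
	 * every call site.
	 */
	static void reclaim_pages(void)
	{
		reclaim_one_batch();
		yield_cpu();
	}

	/*
	 * Performance-sensitive caller (the sgx_reclaim_direct() analogue)
	 * bypasses the wrapper and skips the yield.
	 */
	static void reclaim_direct(void)
	{
		reclaim_one_batch();
	}

	int main(void)
	{
		reclaim_pages();   /* batch + yield, as in the reclaim loops */
		reclaim_direct();  /* batch only, on the hot path */
		return 0;
	}

The design point the sketch captures: factoring the yield into a wrapper
removes duplicated cond_resched() calls from every looping call site,
while keeping a separate entry point for the one caller that must not pay
the rescheduling cost.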