From patchwork Tue Oct 22 15:58:00 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11204753
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org,
	LKML <linux-kernel@vger.kernel.org>,
	Uladzislau Rezki <urezki@gmail.com>,
	Hillf Danton <hdanton@sina.com>,
	Michal Hocko <mhocko@suse.com>,
	Matthew Wilcox <willy@infradead.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: [PATCH 1/1] mm/vmalloc: rework vmap_area_lock
Date: Tue, 22 Oct 2019 17:58:00 +0200
Message-Id: <20191022155800.20468-1-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1

With the new allocation approach introduced in the 5.2 kernel it
becomes possible to get rid of one global spinlock and thereby improve
KVA allocation performance further. Because the allocator now keeps two
separate entities, the "free" data structures and the "busy" data
structures, they can be protected by two independent locks: one for the
allocation path and one for the deallocation path. Allocation and
deallocation running simultaneously on different CPUs can still
interfere with each other, so a dependency remains, but with two locks
it becomes much weaker.

Summarizing:
  - it reduces the high lock contention;
  - it allows operations on the "free" and "busy" trees to run in
    parallel on different CPUs.

Please note this does not solve the scalability issue.
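To make the split easier to picture, below is a minimal userspace
sketch of the two-lock idea. It is an illustration only: it uses plain
C with pthread mutexes and simple linked lists instead of the kernel's
spinlocks and augmented rb-trees, and the identifiers free_lock,
busy_lock, alloc_area() and free_area() are invented for the sketch;
they are not the mm/vmalloc.c names.

#include <pthread.h>
#include <stddef.h>

struct area {
	unsigned long start;
	unsigned long size;
	struct area *next;
};

static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;	/* protects the "free" set */
static pthread_mutex_t busy_lock = PTHREAD_MUTEX_INITIALIZER;	/* protects the "busy" set */
static struct area *free_list;
static struct area *busy_list;

/* Allocation path: take a region from the "free" set, publish it in the "busy" set. */
static struct area *alloc_area(void)
{
	struct area *a;

	pthread_mutex_lock(&free_lock);
	a = free_list;
	if (a)
		free_list = a->next;
	pthread_mutex_unlock(&free_lock);

	if (!a)
		return NULL;

	pthread_mutex_lock(&busy_lock);
	a->next = busy_list;
	busy_list = a;
	pthread_mutex_unlock(&busy_lock);

	return a;
}

/* Free path: unlink from the "busy" set, then give the region back to the "free" set. */
static void free_area(struct area *a)
{
	struct area **p;

	pthread_mutex_lock(&busy_lock);
	for (p = &busy_list; *p; p = &(*p)->next) {
		if (*p == a) {
			*p = a->next;
			break;
		}
	}
	pthread_mutex_unlock(&busy_lock);

	pthread_mutex_lock(&free_lock);
	a->next = free_list;
	free_list = a;
	pthread_mutex_unlock(&free_lock);
}

In the patch itself the "busy" tree/list keeps the existing
vmap_area_lock while the "free" tree/list gets the new
free_vmap_area_lock, so an allocation and a free only contend for the
short window where both touch the same tree.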
Test results:

To evaluate this patch, we run the "vmalloc test driver" to see how
many CPU cycles it takes to complete all test cases running
sequentially. All online CPUs run it, so it causes high lock
contention.

HiKey 960, ARM64, 8xCPUs, big.LITTLE:

sudo ./test_vmalloc.sh sequential_test_order=1

default:
[  390.950557] All test took CPU0=457126382 cycles
[  391.046690] All test took CPU1=454763452 cycles
[  391.128586] All test took CPU2=454539334 cycles
[  391.222669] All test took CPU3=455649517 cycles
[  391.313946] All test took CPU4=388272196 cycles
[  391.410425] All test took CPU5=384036264 cycles
[  391.492219] All test took CPU6=387432964 cycles
[  391.578433] All test took CPU7=387201996 cycles

patched:
[  304.721224] All test took CPU0=391521310 cycles
[  304.821219] All test took CPU1=393533002 cycles
[  304.917120] All test took CPU2=392243032 cycles
[  305.008986] All test took CPU3=392353853 cycles
[  305.108944] All test took CPU4=297630721 cycles
[  305.196406] All test took CPU5=297548736 cycles
[  305.288602] All test took CPU6=297092392 cycles
[  305.381088] All test took CPU7=297293597 cycles

The patched variant is ~14%-23% faster.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 80 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 30 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2005acd612af..f48f64c8d200 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -331,6 +331,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 
 static DEFINE_SPINLOCK(vmap_area_lock);
+static DEFINE_SPINLOCK(free_vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
 static LLIST_HEAD(vmap_purge_list);
@@ -1114,7 +1115,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 */
 	pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
-	spin_lock(&vmap_area_lock);
+	spin_lock(&free_vmap_area_lock);
 
 	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
 		kmem_cache_free(vmap_area_cachep, pva);
@@ -1124,14 +1125,17 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * returned. Therefore trigger the overflow path.
 	 */
 	addr = __alloc_vmap_area(size, align, vstart, vend);
+	spin_unlock(&free_vmap_area_lock);
+
 	if (unlikely(addr == vend))
 		goto overflow;
 
 	va->va_start = addr;
 	va->va_end = addr + size;
 	va->vm = NULL;
-	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
 
+	spin_lock(&vmap_area_lock);
+	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
 	spin_unlock(&vmap_area_lock);
 
 	BUG_ON(!IS_ALIGNED(va->va_start, align));
@@ -1141,7 +1145,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	return va;
 
 overflow:
-	spin_unlock(&vmap_area_lock);
 	if (!purged) {
 		purge_vmap_area_lazy();
 		purged = 1;
@@ -1177,28 +1180,25 @@ int unregister_vmap_purge_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
 
-static void __free_vmap_area(struct vmap_area *va)
+/*
+ * Free a region of KVA allocated by alloc_vmap_area
+ */
+static void free_vmap_area(struct vmap_area *va)
 {
 	/*
 	 * Remove from the busy tree/list.
 	 */
+	spin_lock(&vmap_area_lock);
 	unlink_va(va, &vmap_area_root);
+	spin_unlock(&vmap_area_lock);
 
 	/*
-	 * Merge VA with its neighbors, otherwise just add it.
+	 * Insert/Merge it back to the free tree/list.
 	 */
+	spin_lock(&free_vmap_area_lock);
 	merge_or_add_vmap_area(va,
 		&free_vmap_area_root, &free_vmap_area_list);
-}
-
-/*
- * Free a region of KVA allocated by alloc_vmap_area
- */
-static void free_vmap_area(struct vmap_area *va)
-{
-	spin_lock(&vmap_area_lock);
-	__free_vmap_area(va);
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&free_vmap_area_lock);
 }
 
 /*
@@ -1291,7 +1291,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	flush_tlb_kernel_range(start, end);
 	resched_threshold = lazy_max_pages() << 1;
 
-	spin_lock(&vmap_area_lock);
+	spin_lock(&free_vmap_area_lock);
 	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
 
@@ -1306,9 +1306,9 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 		atomic_long_sub(nr, &vmap_lazy_nr);
 
 		if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
-			cond_resched_lock(&vmap_area_lock);
+			cond_resched_lock(&free_vmap_area_lock);
 	}
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&free_vmap_area_lock);
 	return true;
 }
@@ -2030,15 +2030,21 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(map_vm_area);
 
-static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-			      unsigned long flags, const void *caller)
+static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
+	struct vmap_area *va, unsigned long flags, const void *caller)
 {
-	spin_lock(&vmap_area_lock);
 	vm->flags = flags;
 	vm->addr = (void *)va->va_start;
 	vm->size = va->va_end - va->va_start;
 	vm->caller = caller;
 	va->vm = vm;
+}
+
+static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
+			      unsigned long flags, const void *caller)
+{
+	spin_lock(&vmap_area_lock);
+	setup_vmalloc_vm_locked(vm, va, flags, caller);
 	spin_unlock(&vmap_area_lock);
 }
 
@@ -3278,7 +3284,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			goto err_free;
 	}
 retry:
-	spin_lock(&vmap_area_lock);
+	spin_lock(&free_vmap_area_lock);
 
 	/* start scanning - we scan from the top, begin with the last area */
 	area = term_area = last_area;
@@ -3360,29 +3366,38 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 		va = vas[area];
 		va->va_start = start;
 		va->va_end = start + size;
-
-		insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
 	}
 
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&free_vmap_area_lock);
 
 	/* insert all vm's */
-	for (area = 0; area < nr_vms; area++)
-		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
+	spin_lock(&vmap_area_lock);
+	for (area = 0; area < nr_vms; area++) {
+		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);
+
+		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
 				 pcpu_get_vm_areas);
+	}
+	spin_unlock(&vmap_area_lock);
 
 	kfree(vas);
 	return vms;
 
recovery:
-	/* Remove previously inserted areas. */
+	/*
+	 * Remove previously allocated areas. There is no
+	 * need in removing these areas from the busy tree,
+	 * because they are inserted only on the final step
+	 * and when pcpu_get_vm_areas() is success.
+	 */
 	while (area--) {
-		__free_vmap_area(vas[area]);
+		merge_or_add_vmap_area(vas[area],
+			&free_vmap_area_root, &free_vmap_area_list);
 		vas[area] = NULL;
 	}
 
overflow:
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&free_vmap_area_lock);
 	if (!purged) {
 		purge_vmap_area_lazy();
 		purged = true;
@@ -3433,9 +3448,12 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 
 #ifdef CONFIG_PROC_FS
 static void *s_start(struct seq_file *m, loff_t *pos)
+	__acquires(&vmap_purge_lock)
 	__acquires(&vmap_area_lock)
 {
+	mutex_lock(&vmap_purge_lock);
 	spin_lock(&vmap_area_lock);
+
 	return seq_list_start(&vmap_area_list, *pos);
 }
 
@@ -3445,8 +3463,10 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
 }
 
 static void s_stop(struct seq_file *m, void *p)
+	__releases(&vmap_purge_lock)
 	__releases(&vmap_area_lock)
 {
+	mutex_unlock(&vmap_purge_lock);
 	spin_unlock(&vmap_area_lock);
 }