From patchwork Wed Jan 9 16:40:11 2019
X-Patchwork-Submitter: Roman Penyaev
X-Patchwork-Id: 10754505
From: Roman Penyaev
To: 
Cc: Roman Penyaev, Andrew Morton, Michal Hocko, Andrey Ryabinin, Joe Perches,
    "Luis R. Rodriguez", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 01/15] mm/vmalloc: add new 'alignment' field for vm_struct structure
Date: Wed, 9 Jan 2019 17:40:11 +0100
Message-Id: <20190109164025.24554-2-rpenyaev@suse.de>
In-Reply-To: <20190109164025.24554-1-rpenyaev@suse.de>
References: <20190109164025.24554-1-rpenyaev@suse.de>

A new 'alignment' field is needed for a vm area in order to reallocate a
previously allocated area with the same alignment.  A patch for the new
vrealloc() call will follow, and I want to keep that call as simple as
possible, i.e. not to provide dozens of variants, like vrealloc_user(),
which would have to care about alignment.  The current change is just
preparation.

Worth mentioning: on arches where unsigned long is 64 bit the new field
does not bloat vm_struct, because there was originally padding between
nr_pages and phys_addr.
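To illustrate the padding point, a rough field-layout sketch (assuming the
current vm_struct layout and an 8-byte phys_addr_t; offsets are illustrative,
not taken from pahole output):

	struct vm_struct {
		struct vm_struct	*next;		/* offset  0 */
		void			*addr;		/* offset  8 */
		unsigned long		size;		/* offset 16 */
		unsigned long		flags;		/* offset 24 */
		struct page		**pages;	/* offset 32 */
		unsigned int		nr_pages;	/* offset 40 */
		unsigned int		alignment;	/* offset 44: fills the former 4-byte hole */
		phys_addr_t		phys_addr;	/* offset 48: 8-byte aligned */
		const void		*caller;	/* offset 56 */
	};					/* total size stays 64 bytes */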
Signed-off-by: Roman Penyaev
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Andrey Ryabinin
Cc: Joe Perches
Cc: "Luis R. Rodriguez"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/vmalloc.h |  1 +
 mm/vmalloc.c            | 10 ++++++----
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..78210aa0bb43 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -38,6 +38,7 @@ struct vm_struct {
 	unsigned long		flags;
 	struct page		**pages;
 	unsigned int		nr_pages;
+	unsigned int		alignment;
 	phys_addr_t		phys_addr;
 	const void		*caller;
 };
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e83961767dc1..4851b4a67f55 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1347,12 +1347,14 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
 EXPORT_SYMBOL_GPL(map_vm_area);
 
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-			      unsigned long flags, const void *caller)
+			      unsigned int align, unsigned long flags,
+			      const void *caller)
 {
 	spin_lock(&vmap_area_lock);
 	vm->flags = flags;
 	vm->addr = (void *)va->va_start;
 	vm->size = va->va_end - va->va_start;
+	vm->alignment = align;
 	vm->caller = caller;
 	va->vm = vm;
 	va->flags |= VM_VM_AREA;
@@ -1399,7 +1401,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 		return NULL;
 	}
 
-	setup_vmalloc_vm(area, va, flags, caller);
+	setup_vmalloc_vm(area, va, align, flags, caller);
 
 	return area;
 }
@@ -2601,8 +2603,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 	/* insert all vm's */
 	for (area = 0; area < nr_vms; area++)
-		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
-				 pcpu_get_vm_areas);
+		setup_vmalloc_vm(vms[area], vas[area], align,
+				 VM_ALLOC, pcpu_get_vm_areas);
 
 	kfree(vas);
 	return vms;

From patchwork Wed Jan 9 16:40:12 2019
X-Patchwork-Submitter: Roman Penyaev
X-Patchwork-Id: 10754497
From: Roman Penyaev
To: 
Cc: Roman Penyaev, Andrew Morton, Michal Hocko, Andrey Ryabinin, Joe Perches,
    "Luis R. Rodriguez", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 02/15] mm/vmalloc: move common logic from __vmalloc_area_node to a separate func
Date: Wed, 9 Jan 2019 17:40:12 +0100
Message-Id: <20190109164025.24554-3-rpenyaev@suse.de>
In-Reply-To: <20190109164025.24554-1-rpenyaev@suse.de>
References: <20190109164025.24554-1-rpenyaev@suse.de>

This patch moves the logic related to allocating the pages array out of
__vmalloc_area_node() into a separate helper, alloc_vm_area_array(),
which will also be used by the vrealloc() call whose implementation
follows in a later patch.
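For clarity, the contract of the new helper as implemented below, in
condensed form (a sketch summarizing the diff, not additional code):

	/*
	 * alloc_vm_area_array() - allocate area->pages[] for an existing vm area.
	 *
	 * Returns 0 on success, -EINVAL if the area already has a pages array
	 * or has zero size, -ENOMEM if the array itself cannot be allocated.
	 * On success area->nr_pages and area->pages are set; the pages
	 * themselves are still allocated and mapped by the caller.
	 */
	static int alloc_vm_area_array(struct vm_struct *area, gfp_t gfp_mask, int node);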
Signed-off-by: Roman Penyaev
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Andrey Ryabinin
Cc: Joe Perches
Cc: "Luis R. Rodriguez"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/vmalloc.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4851b4a67f55..ad6cd807f6db 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1662,21 +1662,26 @@ EXPORT_SYMBOL(vmap);
 static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    gfp_t gfp_mask, pgprot_t prot,
 			    int node, const void *caller);
-static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
-				 pgprot_t prot, int node)
+
+static int alloc_vm_area_array(struct vm_struct *area, gfp_t gfp_mask, int node)
 {
+	unsigned int nr_pages, array_size;
 	struct page **pages;
-	unsigned int nr_pages, array_size, i;
+
 	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
-	const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
 	const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
 					0 :
 					__GFP_HIGHMEM;
 
+	if (WARN_ON(area->pages))
+		return -EINVAL;
+
 	nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
+	if (!nr_pages)
+		return -EINVAL;
+
 	array_size = (nr_pages * sizeof(struct page *));
 
-	area->nr_pages = nr_pages;
 	/* Please note that the recursion is strictly bounded. */
 	if (array_size > PAGE_SIZE) {
 		pages = __vmalloc_node(array_size, 1, nested_gfp|highmem_mask,
@@ -1684,8 +1689,25 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	} else {
 		pages = kmalloc_node(array_size, nested_gfp, node);
 	}
+	if (!pages)
+		return -ENOMEM;
+
+	area->nr_pages = nr_pages;
 	area->pages = pages;
-	if (!area->pages) {
+
+	return 0;
+}
+
+static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
+				 pgprot_t prot, int node)
+{
+	const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
+	const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
+					0 :
+					__GFP_HIGHMEM;
+	unsigned int i;
+
+	if (alloc_vm_area_array(area, gfp_mask, node)) {
 		remove_vm_area(area->addr);
 		kfree(area);
 		return NULL;
@@ -1709,7 +1731,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		cond_resched();
 	}
 
-	if (map_vm_area(area, prot, pages))
+	if (map_vm_area(area, prot, area->pages))
 		goto fail;
 
 	return area->addr;

From patchwork Wed Jan 9 16:40:13 2019
X-Patchwork-Submitter: Roman Penyaev
X-Patchwork-Id: 10754499
From: Roman Penyaev
To: 
Cc: Roman Penyaev, Andrew Morton, Michal Hocko, Andrey Ryabinin, Joe Perches,
    "Luis R. Rodriguez", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 03/15] mm/vmalloc: introduce new vrealloc() call and its subsidiary reach analog
Date: Wed, 9 Jan 2019 17:40:13 +0100
Message-Id: <20190109164025.24554-4-rpenyaev@suse.de>
In-Reply-To: <20190109164025.24554-1-rpenyaev@suse.de>
References: <20190109164025.24554-1-rpenyaev@suse.de>

The new function changes the size of a virtually contiguous memory area
previously allocated by vmalloc().  Under the hood vrealloc() does the
following:

  1. allocates a new vm area based on the alignment of the old one,
  2. allocates a pages array for the new vm area,
  3. fills in the ->pages array, taking pages from the old area and
     increasing their reference counts.

If the virtual size grows (old_size < new_size), the extra pages for the
new area are allocated using the gfp mask passed in by the caller.

Basically vrealloc() repeats glibc realloc(), with one big difference:
the old area is not freed, i.e. the caller is responsible for calling
vfree() after a successful reallocation.

Why is vfree() not called for the old area directly from vrealloc()?
Because sometimes it is better to have a transaction-like reallocation of
several pointers, i.e. to reallocate all of them at once:

	new_p1 = vrealloc(p1, new_len);
	new_p2 = vrealloc(p2, new_len);
	if (!new_p1 || !new_p2) {
		vfree(new_p1);
		vfree(new_p2);
		return -ENOMEM;
	}
	vfree(p1);
	vfree(p2);
	p1 = new_p1;
	p2 = new_p2;
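For a single pointer the expected pattern would look roughly like the
sketch below (grow_buffer() is a hypothetical caller, not part of this
series; it relies on the behaviour described above, in particular that
the old pointer is returned when the size does not change, so vfree()
must only be called when a new area was actually created):

	static void *grow_buffer(void *buf, unsigned long new_size)
	{
		void *new_buf;

		new_buf = vrealloc(buf, new_size);
		if (!new_buf)
			return NULL;		/* old buf is still intact */

		if (new_buf != buf)
			vfree(buf);		/* old data stays visible through
						 * new_buf: the old pages are re-used,
						 * so no memcpy() is needed */
		return new_buf;
	}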
Signed-off-by: Roman Penyaev
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Andrey Ryabinin
Cc: Joe Perches
Cc: "Luis R. Rodriguez"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/vmalloc.h |   3 ++
 mm/vmalloc.c            | 106 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 109 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 78210aa0bb43..2902faf26c4f 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -72,6 +72,7 @@ static inline void vmalloc_init(void)
 
 extern void *vmalloc(unsigned long size);
 extern void *vzalloc(unsigned long size);
+extern void *vrealloc(void *old_addr, unsigned long size);
 extern void *vmalloc_user(unsigned long size);
 extern void *vmalloc_node(unsigned long size, int node);
 extern void *vzalloc_node(unsigned long size, int node);
@@ -83,6 +84,8 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
 			const void *caller);
+extern void *__vrealloc_node(void *old_addr, unsigned long size, gfp_t gfp_mask,
+			     pgprot_t prot, int node, const void *caller);
 #ifndef CONFIG_MMU
 extern void *__vmalloc_node_flags(unsigned long size, int node, gfp_t flags);
 static inline void *__vmalloc_node_flags_caller(unsigned long size, int node,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ad6cd807f6db..94cc99e780c7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1889,6 +1889,112 @@ void *vzalloc(unsigned long size)
 }
 EXPORT_SYMBOL(vzalloc);
 
+void *__vrealloc_node(void *old_addr, unsigned long size, gfp_t gfp_mask,
+		      pgprot_t prot, int node, const void *caller)
+{
+	const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
+	const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
+					0 :
+					__GFP_HIGHMEM;
+	struct vm_struct *old_area, *area;
+	struct page *page;
+
+	unsigned int i;
+
+	old_area = find_vm_area(old_addr);
+	if (!old_area)
+		return NULL;
+
+	if (!(old_area->flags & VM_ALLOC))
+		return NULL;
+
+	size = PAGE_ALIGN(size);
+	if (!size || (size >> PAGE_SHIFT) > totalram_pages())
+		return NULL;
+
+	if (get_vm_area_size(old_area) == size)
+		return old_addr;
+
+	area = __get_vm_area_node(size, old_area->alignment, VM_UNINITIALIZED |
+				  old_area->flags, VMALLOC_START, VMALLOC_END,
+				  node, gfp_mask, caller);
+	if (!area)
+		return NULL;
+
+	if (alloc_vm_area_array(area, gfp_mask, node)) {
+		__vunmap(area->addr, 0);
+		return NULL;
+	}
+
+	for (i = 0; i < area->nr_pages; i++) {
+		if (i < old_area->nr_pages) {
+			/* Take a page from old area and increase a ref */
+
+			page = old_area->pages[i];
+			area->pages[i] = page;
+			get_page(page);
+		} else {
+			/* Allocate more pages in case of grow */
+
+			page = alloc_page(alloc_mask|highmem_mask);
+			if (unlikely(!page)) {
+				/*
+				 * Successfully allocated i pages, free
+				 * them in __vunmap()
+				 */
+				area->nr_pages = i;
+				goto fail;
+			}
+
+			area->pages[i] = page;
+			if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
+				cond_resched();
+		}
+	}
+	if (map_vm_area(area, prot, area->pages))
+		goto fail;
+
+	/* New area is fully ready */
+	clear_vm_uninitialized_flag(area);
+	kmemleak_vmalloc(area, size, gfp_mask);
+
+	return area->addr;
+
+fail:
+	warn_alloc(gfp_mask, NULL, "vrealloc: allocation failure");
+	__vfree(area->addr);
+
+	return NULL;
+}
+EXPORT_SYMBOL(__vrealloc_node);
+
+/**
+ * vrealloc - reallocate virtually contiguous memory with zero fill
+ * @old_addr: old virtual address
+ * @size: new size
+ *
+ * Allocate additional pages to cover new @size from the page level
+ * allocator if memory grows.  Then pages are mapped into a new
+ * contiguous kernel virtual space, previous area is NOT freed.
+ *
+ * Do not forget to call vfree() passing old address.  But careful,
+ * calling vfree() from interrupt will cause vfree_deferred() call,
+ * which in its turn uses freed address as a temporal pointer for a
+ * llist element, i.e. memory will be corrupted.
+ *
+ * If new size is equal to the old size - old pointer is returned.
+ * I.e. appropriate check should be made before calling vfree().
+ *
+ * For tight control over page level allocator and protection flags
+ * use __vrealloc_node() instead.
+ */
+void *vrealloc(void *old_addr, unsigned long size)
+{
+	return __vrealloc_node(old_addr, size, GFP_KERNEL | __GFP_ZERO,
+			       PAGE_KERNEL, NUMA_NO_NODE,
+			       __builtin_return_address(0));
+}
+EXPORT_SYMBOL(vrealloc);
+
 /**
  * vmalloc_user - allocate zeroed virtually contiguous memory for userspace
  * @size: allocation size