From patchwork Mon May  4 14:58:05 2020
X-Patchwork-Submitter: Alexandre Chartre
X-Patchwork-Id: 11526469
From: Alexandre Chartre <alexandre.chartre@oracle.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    x86@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, konrad.wilk@oracle.com, jan.setjeeilers@oracle.com,
    liran.alon@oracle.com, junaids@google.com, graf@amazon.de,
    rppt@linux.vnet.ibm.com, kuzuno@gmail.com, mgross@linux.intel.com,
    alexandre.chartre@oracle.com
Subject: [RFC v4][PATCH part-2 08/13] mm/dpt: Keep track of VA ranges mapped
 in a decorated page-table
Date: Mon, 4 May 2020 16:58:05 +0200
Message-Id: <20200504145810.11882-9-alexandre.chartre@oracle.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20200504145810.11882-1-alexandre.chartre@oracle.com>
References: <20200504145810.11882-1-alexandre.chartre@oracle.com>

Add functions
to keep track of VA ranges mapped in a decorated page-table. This will
be used when unmapping to ensure the same range is unmapped, at the same
page-table level. This will also be used to handle mapping and unmapping
of overlapping VA ranges.

Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
---
 arch/x86/include/asm/dpt.h | 12 ++++++++
 arch/x86/mm/dpt.c          | 60 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/arch/x86/include/asm/dpt.h b/arch/x86/include/asm/dpt.h
index 5a38d97a70a8..0d74afb10141 100644
--- a/arch/x86/include/asm/dpt.h
+++ b/arch/x86/include/asm/dpt.h
@@ -16,6 +16,17 @@ enum page_table_level {
 	PGT_LEVEL_PGD
 };
 
+/*
+ * Structure to keep track of address ranges mapped into a decorated
+ * page-table.
+ */
+struct dpt_range_mapping {
+	struct list_head list;
+	void *ptr;			/* range start address */
+	size_t size;			/* range size */
+	enum page_table_level level;	/* mapping level */
+};
+
 /*
  * A decorated page-table (dpt) encapsulates a native page-table (e.g.
  * a PGD) and maintain additional attributes related to this page-table.
@@ -24,6 +35,7 @@ struct dpt {
 	spinlock_t		lock;		/* protect all attributes */
 	pgd_t			*pagetable;	/* the actual page-table */
 	unsigned int		alignment;	/* page-table alignment */
+	struct list_head	mapping_list;	/* list of VA range mapping */
 
 	/*
 	 * A page-table can have direct references to another page-table,
diff --git a/arch/x86/mm/dpt.c b/arch/x86/mm/dpt.c
index 0e725344b921..12eb0d794d84 100644
--- a/arch/x86/mm/dpt.c
+++ b/arch/x86/mm/dpt.c
@@ -59,6 +59,24 @@ static int dpt_add_backend_page(struct dpt *dpt, void *addr,
 	return 0;
 }
 
+/*
+ * Return the range mapping starting at the specified address, or NULL if
+ * no such range is found.
+ */
+static struct dpt_range_mapping *dpt_get_range_mapping(struct dpt *dpt,
+						       void *ptr)
+{
+	struct dpt_range_mapping *range;
+
+	lockdep_assert_held(&dpt->lock);
+	list_for_each_entry(range, &dpt->mapping_list, list) {
+		if (range->ptr == ptr)
+			return range;
+	}
+
+	return NULL;
+}
+
 /*
  * Check if an offset in the page-table is valid, i.e. check that the
  * offset is on a page effectively belonging to the page-table.
@@ -563,6 +581,7 @@ static int dpt_copy_pgd_range(struct dpt *dpt,
 int dpt_map_range(struct dpt *dpt, void *ptr, size_t size,
 		  enum page_table_level level)
 {
+	struct dpt_range_mapping *range_mapping;
 	unsigned long addr = (unsigned long)ptr;
 	unsigned long end = addr + ((unsigned long)size);
 	unsigned long flags;
@@ -571,8 +590,36 @@ int dpt_map_range(struct dpt *dpt, void *ptr, size_t size,
 	pr_debug("DPT %p: MAP %px/%lx/%d\n", dpt, ptr, size, level);
 
 	spin_lock_irqsave(&dpt->lock, flags);
+
+	/* check if the range is already mapped */
+	range_mapping = dpt_get_range_mapping(dpt, ptr);
+	if (range_mapping) {
+		pr_debug("DPT %p: MAP %px/%lx/%d already mapped\n",
+			 dpt, ptr, size, level);
+		err = -EBUSY;
+		goto done;
+	}
+
+	/* map new range */
+	range_mapping = kmalloc(sizeof(*range_mapping), GFP_KERNEL);
+	if (!range_mapping) {
+		err = -ENOMEM;
+		goto done;
+	}
+
 	err = dpt_copy_pgd_range(dpt, dpt->pagetable, current->mm->pgd,
 				 addr, end, level);
+	if (err) {
+		kfree(range_mapping);
+		goto done;
+	}
+
+	INIT_LIST_HEAD(&range_mapping->list);
+	range_mapping->ptr = ptr;
+	range_mapping->size = size;
+	range_mapping->level = level;
+	list_add(&range_mapping->list, &dpt->mapping_list);
+done:
 	spin_unlock_irqrestore(&dpt->lock, flags);
 
 	return err;
@@ -611,6 +658,8 @@ struct dpt *dpt_create(unsigned int pgt_alignment)
 	if (!dpt)
 		return NULL;
 
+	INIT_LIST_HEAD(&dpt->mapping_list);
+
 	pagetable = (unsigned long)__get_free_pages(GFP_KERNEL_ACCOUNT |
 						    __GFP_ZERO,
 						    alloc_order);
@@ -632,6 +681,7 @@ void dpt_destroy(struct dpt *dpt)
 {
 	unsigned int pgt_alignment;
 	unsigned int alloc_order;
+	struct dpt_range_mapping *range, *range_next;
 	unsigned long index;
 	void *entry;
 
@@ -643,6 +693,16 @@ void dpt_destroy(struct dpt *dpt)
 			free_page((unsigned long)DPT_BACKEND_PAGE_ADDR(entry));
 	}
 
+	list_for_each_entry_safe(range, range_next, &dpt->mapping_list, list) {
+		list_del(&range->list);
+		kfree(range);
+	}
+
+	if (dpt->backend_pages_count) {
+		xa_for_each(&dpt->backend_pages, index, entry)
+			free_page((unsigned long)DPT_BACKEND_PAGE_ADDR(entry));
+	}
+
 	if (dpt->pagetable) {
 		pgt_alignment = dpt->alignment;
 		alloc_order = round_up(PAGE_SIZE + pgt_alignment,
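[The patch excerpt above is truncated mid-hunk. To make the bookkeeping it
introduces easier to follow, here is a user-space sketch of the same idea:
a linked list of (start address, size, level) records that is consulted
before mapping a range and drained on destroy. The list helpers and the
error values (-1/-2 standing in for -EBUSY/-ENOMEM), and all the names
below, are simplified stand-ins for the kernel's list_head API, not the
patch's actual code.]

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for the kernel's circular doubly-linked list_head. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_front(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del_entry(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Mirrors struct dpt_range_mapping: start address, size, mapping level. */
struct range_mapping {
	struct list_head list;	/* must stay first so casts below are valid */
	void *ptr;
	size_t size;
	int level;
};

static struct list_head mapping_list;

/* Lookup by range start address, as dpt_get_range_mapping() does. */
static struct range_mapping *get_range_mapping(void *ptr)
{
	struct list_head *p;

	for (p = mapping_list.next; p != &mapping_list; p = p->next) {
		struct range_mapping *r = (struct range_mapping *)p;

		if (r->ptr == ptr)
			return r;
	}
	return NULL;
}

/* Record a new range; refuse an already-recorded start address,
 * like dpt_map_range() returning -EBUSY. */
static int map_range(void *ptr, size_t size, int level)
{
	struct range_mapping *r;

	if (get_range_mapping(ptr))
		return -1;	/* already mapped */
	r = malloc(sizeof(*r));
	if (!r)
		return -2;	/* out of memory */
	r->ptr = ptr;
	r->size = size;
	r->level = level;
	list_add_front(&r->list, &mapping_list);
	return 0;
}

/* Drop every recorded range, as the dpt_destroy() hunk does with
 * list_for_each_entry_safe(): grab the successor before freeing. */
static void destroy_all_ranges(void)
{
	struct list_head *p = mapping_list.next;

	while (p != &mapping_list) {
		struct list_head *next = p->next;

		list_del_entry(p);
		free((struct range_mapping *)p);
		p = next;
	}
}
```

This is why dpt_unmap_range() (later in the series) can unmap exactly what
was mapped: the recorded size and level are looked up by start address
instead of being passed in again by the caller.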