From patchwork Fri Jan 15 12:37:20 2021
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header
Date: Fri, 15 Jan 2021 13:37:20 +0100
Message-Id: <20210115123730.381612-2-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Bring x86 in line with the other architectures and include the generic
header at asm-generic/page.h. This provides the macros PAGE_SHIFT,
PAGE_SIZE, PAGE_MASK, virt_to_pfn, and pfn_to_virt.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 lib/x86/asm/page.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
index 1359eb7..2cf8881 100644
--- a/lib/x86/asm/page.h
+++ b/lib/x86/asm/page.h
@@ -13,9 +13,7 @@
 typedef unsigned long pteval_t;
 typedef unsigned long pgd_t;
 
-#define PAGE_SHIFT	12
-#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
-#define PAGE_MASK	(~(PAGE_SIZE-1))
+#include <asm-generic/page.h>
 
 #ifndef __ASSEMBLY__
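For reference, the macros the generic header provides fit together roughly
as follows. This is an illustrative sketch based on the names in the commit
message, not the verbatim contents of lib/asm-generic/page.h:

/* Sketch: page constants and PFN conversion helpers (assumed shapes). */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)	/* 4096 bytes */
#define PAGE_MASK	(~(PAGE_SIZE - 1))		/* clears the in-page offset bits */

/* Identity-mapped conversions between virtual addresses and frame numbers */
#define virt_to_pfn(virt)	((unsigned long)(virt) >> PAGE_SHIFT)
#define pfn_to_virt(pfn)	((void *)((unsigned long)(pfn) << PAGE_SHIFT))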
From patchwork Fri Jan 15 12:37:21 2021

From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail
Date: Fri, 15 Jan 2021 13:37:21 +0100
Message-Id: <20210115123730.381612-3-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Add a list_add_tail wrapper function to allow adding elements to the end
of a list.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 lib/list.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/list.h b/lib/list.h
index 18d9516..7f9717e 100644
--- a/lib/list.h
+++ b/lib/list.h
@@ -50,4 +50,13 @@ static inline void list_add(struct linked_list *head, struct linked_list *li)
 	head->next = li;
 }
 
+/*
+ * Add the given element before the given list head.
+ */
+static inline void list_add_tail(struct linked_list *head, struct linked_list *li)
+{
+	assert(head);
+	list_add(head->prev, li);
+}
+
 #endif
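Since lib/list.h lists are circular and doubly linked, head->prev is the
current tail, so inserting after it appends the element. A minimal usage
sketch with a hypothetical element type:

/* Sketch: appending elements to a queue built on struct linked_list. */
struct item {
	struct linked_list link;	/* embedded list node */
	int value;
};

/* a list head is initialized to point at itself in both directions */
static struct linked_list queue = { .prev = &queue, .next = &queue };

static void enqueue(struct item *it)
{
	/* equivalent to list_add(queue.prev, &it->link): insert at the end */
	list_add_tail(&queue, &it->link);
}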
From patchwork Fri Jan 15 12:37:22 2021

From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements
Date: Fri, 15 Jan 2021 13:37:22 +0100
Message-Id: <20210115123730.381612-4-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Add some asserts to make sure the state is consistent, and simplify and
improve the readability of vm_free. If a NULL pointer is freed, no
operation is performed.
Fixes: 3f6fee0d4da4 ("lib/vmalloc: vmalloc support for handling allocation metadata")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
---
 lib/vmalloc.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 986a34c..6b52790 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -162,13 +162,16 @@ static void *vm_memalign(size_t alignment, size_t size)
 static void vm_free(void *mem)
 {
 	struct metadata *m;
-	uintptr_t ptr, end;
+	uintptr_t ptr, page, i;
+
+	if (!mem)
+		return;
 
 	/* the pointer is not page-aligned, it was a single-page allocation */
 	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
 		assert(GET_MAGIC(mem) == VM_MAGIC);
-		ptr = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
-		free_page(phys_to_virt(ptr));
+		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
 		return;
 	}
 
@@ -176,13 +179,14 @@ static void vm_free(void *mem)
 	m = GET_METADATA(mem);
 	assert(m->magic == VM_MAGIC);
 	assert(m->npages > 0);
+	assert(m->npages < BIT_ULL(BITS_PER_LONG - PAGE_SHIFT));
 	/* free all the pages including the metadata page */
-	ptr = (uintptr_t)mem - PAGE_SIZE;
-	end = ptr + m->npages * PAGE_SIZE;
-	for ( ; ptr < end; ptr += PAGE_SIZE)
-		free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
-	/* free the last one separately to avoid overflow issues */
-	free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
+	ptr = (uintptr_t)m & PAGE_MASK;
+	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
+		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
+	}
 }
 
 static struct alloc_ops vmalloc_ops = {
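The two cases that vm_free() distinguishes can be pictured as follows; this
is a sketch inferred from the code above, not text from the patch:

/*
 * Not page-aligned: a single-page allocation; the magic value sits
 * immediately before the returned pointer, inside the same page.
 *
 *     page:  | ... | VM_MAGIC | user data ...              |
 *                             ^ mem
 *
 * Page-aligned: a multi-page allocation; the metadata (magic, npages)
 * sits just before mem, so the loop walks and frees npages + 1 pages
 * starting from the page that holds the metadata.
 *
 *     | metadata page ... magic, npages | page 0 | ... | page npages-1 |
 *                                       ^ mem
 */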
From patchwork Fri Jan 15 12:37:23 2021

From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
Date: Fri, 15 Jan 2021 13:37:23 +0100
Message-Id: <20210115123730.381612-5-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Fix the definitions of the memory areas. Bring the headers in line with
the rest of the asm headers by having the appropriate #ifdef _ASM$ARCH_
guarding the headers.
Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/asm-generic/memory_areas.h |  9 ++++-----
 lib/arm/asm/memory_areas.h     | 11 +++--------
 lib/arm64/asm/memory_areas.h   | 11 +++--------
 lib/powerpc/asm/memory_areas.h | 11 +++--------
 lib/ppc64/asm/memory_areas.h   | 11 +++--------
 lib/s390x/asm/memory_areas.h   | 13 ++++++-------
 lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
 lib/alloc_page.h               |  3 +++
 lib/alloc_page.c               |  4 +---
 9 files changed, 42 insertions(+), 58 deletions(-)

diff --git a/lib/asm-generic/memory_areas.h b/lib/asm-generic/memory_areas.h
index 927baa7..3074afe 100644
--- a/lib/asm-generic/memory_areas.h
+++ b/lib/asm-generic/memory_areas.h
@@ -1,11 +1,10 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
+#define __ASM_GENERIC_MEMORY_AREAS_H__
 
 #define AREA_NORMAL_PFN 0
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 1
 
 #endif

diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
index 927baa7..c723310 100644
--- a/lib/arm/asm/memory_areas.h
+++ b/lib/arm/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM_MEMORY_AREAS_H_
+#define _ASMARM_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/arm64/asm/memory_areas.h b/lib/arm64/asm/memory_areas.h
index 927baa7..18e8ca8 100644
--- a/lib/arm64/asm/memory_areas.h
+++ b/lib/arm64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM64_MEMORY_AREAS_H_
+#define _ASMARM64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/powerpc/asm/memory_areas.h b/lib/powerpc/asm/memory_areas.h
index 927baa7..76d1738 100644
--- a/lib/powerpc/asm/memory_areas.h
+++ b/lib/powerpc/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
+#define _ASMPOWERPC_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/ppc64/asm/memory_areas.h b/lib/ppc64/asm/memory_areas.h
index 927baa7..b9fd46b 100644
--- a/lib/ppc64/asm/memory_areas.h
+++ b/lib/ppc64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPPC64_MEMORY_AREAS_H_
+#define _ASMPPC64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/s390x/asm/memory_areas.h b/lib/s390x/asm/memory_areas.h
index 4856a27..827bfb3 100644
--- a/lib/s390x/asm/memory_areas.h
+++ b/lib/s390x/asm/memory_areas.h
@@ -1,16 +1,15 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMS390X_MEMORY_AREAS_H_
+#define _ASMS390X_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN BIT(31-12)
+#define AREA_NORMAL_PFN (1 << 19)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
 #define AREA_LOW_PFN 0
 #define AREA_LOW_NUMBER 1
-#define AREA_LOW 2
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 2
 
 #define AREA_DMA31 AREA_LOW

diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
index 952f5bd..e84016f 100644
--- a/lib/x86/asm/memory_areas.h
+++ b/lib/x86/asm/memory_areas.h
@@ -1,21 +1,26 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASM_X86_MEMORY_AREAS_H_
+#define _ASM_X86_MEMORY_AREAS_H_
 
 #define AREA_NORMAL_PFN BIT(36-12)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_PAE_HIGH_PFN BIT(32-12)
-#define AREA_PAE_HIGH_NUMBER 1
-#define AREA_PAE_HIGH 2
+#define AREA_HIGH_PFN BIT(32-12)
+#define AREA_HIGH_NUMBER 1
+#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
 
-#define AREA_LOW_PFN 0
+#define AREA_LOW_PFN BIT(24-12)
 #define AREA_LOW_NUMBER 2
-#define AREA_LOW 4
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_PAE (AREA_PAE | AREA_LOW)
+#define AREA_LOWEST_PFN 0
+#define AREA_LOWEST_NUMBER 3
+#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 4
+
+#define AREA_DMA24 AREA_LOWEST
+#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
+#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)
 
 #endif

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 816ff5d..b6aace5 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -10,6 +10,9 @@
 
 #include <asm/memory_areas.h>
 
+#define AREA_ANY -1
+#define AREA_ANY_NUMBER 0xff
+
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 685ab1e..ed0ff02 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -19,8 +19,6 @@
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
-#define MAX_AREAS 6
-
 #define ORDER_MASK 0x3f
 #define ALLOC_MASK 0x40
 #define SPECIAL_MASK 0x80
@@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
 		return;
 	}
 #ifdef AREA_HIGH_PFN
-	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN), base_pfn, &top_pfn);
+	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN, base_pfn, &top_pfn);
 #endif
 	__page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN, base_pfn, &top_pfn);
 #ifdef AREA_LOW_PFN
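With these definitions, each AREA_* selector is a single bit derived from
the corresponding area number, so selectors compose with bitwise OR and can
be tested with a shift. An illustrative sketch (the helper names are made
up, not part of the patch):

/* Sketch: area selectors as bit masks. */
#include <stdbool.h>

static inline unsigned int area_to_mask(unsigned int area_number)
{
	return 1U << area_number;	/* e.g. AREA_LOW == 1 << AREA_LOW_NUMBER */
}

static inline bool area_selected(unsigned int requested, unsigned int area_number)
{
	return (requested & area_to_mask(area_number)) != 0;
}

/*
 * Composite selectors are plain ORs of these bits, as in the x86 header
 * above: AREA_DMA32 == AREA_LOWEST | AREA_LOW, i.e. any area whose pages
 * lie below 4 GiB.
 */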
From patchwork Fri Jan 15 12:37:24 2021

From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 05/11] lib/alloc_page: fix and improve the page allocator
Date: Fri, 15 Jan 2021 13:37:24 +0100
Message-Id: <20210115123730.381612-6-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>
This patch introduces some improvements to the code: mostly readability
improvements, but also some semantic details and improvements in the
documentation.

* introduce and use pfn_t to semantically tag parameters as PFNs
* remove the PFN macro, use virt_to_pfn instead
* rename area_or_metadata_contains and area_contains to area_contains_pfn
  and usable_area_contains_pfn respectively
* fix/improve comments in lib/alloc_page.h
* move some wrapper functions to the header

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Fixes: 34c950651861 ("lib/alloc_page: allow reserving arbitrary memory ranges")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.h |  52 ++++++++++-----
 lib/alloc_page.c | 165 +++++++++++++++++++++++------------------------
 2 files changed, 118 insertions(+), 99 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index b6aace5..6fd2ff0 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -8,6 +8,7 @@
 #ifndef ALLOC_PAGE_H
 #define ALLOC_PAGE_H 1
 
+#include <stdbool.h>
 #include <asm/memory_areas.h>
 
 #define AREA_ANY -1
@@ -23,7 +24,7 @@ bool page_alloc_initialized(void);
  * top_pfn is the physical frame number of the first page immediately after
  * the end of the area to initialize
  */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn);
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn);
 
 /* Enables the page allocator. At least one area must have been initialized */
 void page_alloc_ops_enable(void);
@@ -37,9 +38,12 @@ void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
 
 /*
  * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(~0, alignment, size).
+ * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
  */
-void *memalign_pages(size_t alignment, size_t size);
+static inline void *memalign_pages(size_t alignment, size_t size)
+{
+	return memalign_pages_area(AREA_ANY, alignment, size);
+}
 
 /*
  * Allocate naturally aligned memory from the specified areas.
@@ -48,16 +52,23 @@ void *memalign_pages(size_t alignment, size_t size);
 void *alloc_pages_area(unsigned int areas, unsigned int order);
 
 /*
- * Allocate one page from any area.
- * Equivalent to alloc_pages(0);
+ * Allocate naturally aligned pages from any area; the number of allocated
+ * pages is 1 << order.
+ * Equivalent to alloc_pages_area(AREA_ANY, order);
  */
-void *alloc_page(void);
+static inline void *alloc_pages(unsigned int order)
+{
+	return alloc_pages_area(AREA_ANY, order);
+}
 
 /*
- * Allocate naturally aligned memory from any area.
- * Equivalent to alloc_pages_area(~0, order);
+ * Allocate one page from any area.
+ * Equivalent to alloc_pages(0);
  */
-void *alloc_pages(unsigned int order);
+static inline void *alloc_page(void)
+{
+	return alloc_pages(0);
+}
 
 /*
  * Frees a memory block allocated with any of the memalign_pages* or
@@ -66,31 +77,40 @@ void *alloc_pages(unsigned int order);
  */
 void free_pages(void *mem);
 
-/* For backwards compatibility */
+/*
+ * Free one page.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_page(void *mem)
 {
	return free_pages(mem);
 }
 
-/* For backwards compatibility */
+/*
+ * Free pages by order.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_pages_by_order(void *mem, unsigned int order)
 {
	free_pages(mem);
 }
 
 /*
- * Allocates and reserves the specified memory range if possible.
- * Returns NULL in case of failure.
+ * Reserves the specified physical memory range if possible.
+ * If the specified range cannot be reserved in its entirety, no action is
+ * performed and -1 is returned.
+ *
+ * Returns 0 in case of success, -1 otherwise.
  */
-void *alloc_pages_special(uintptr_t addr, size_t npages);
+int reserve_pages(phys_addr_t addr, size_t npages);
 
 /*
  * Frees a reserved memory range that had been reserved with
- * alloc_pages_special.
+ * reserve_pages.
  * The memory range does not need to match a previous allocation
  * exactly, it can also be a subset, in which case only the specified
  * pages will be freed and unreserved.
  */
-void free_pages_special(uintptr_t addr, size_t npages);
+void unreserve_pages(phys_addr_t addr, size_t npages);
 
 #endif

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index ed0ff02..337a4e0 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -17,25 +17,29 @@
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
-#define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
 #define ORDER_MASK 0x3f
 #define ALLOC_MASK 0x40
 #define SPECIAL_MASK 0x80
 
+typedef phys_addr_t pfn_t;
+
 struct mem_area {
 	/* Physical frame number of the first usable frame in the area */
-	uintptr_t base;
+	pfn_t base;
 	/* Physical frame number of the first frame outside the area */
-	uintptr_t top;
-	/* Combination of SPECIAL_MASK, ALLOC_MASK, and order */
+	pfn_t top;
+	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
 
+/* Descriptors for each possible area */
 static struct mem_area areas[MAX_AREAS];
+/* Mask of initialized areas */
 static unsigned int areas_mask;
+/* Protects areas and areas mask */
 static struct spinlock lock;
 
 bool page_alloc_initialized(void)
@@ -43,12 +47,24 @@ bool page_alloc_initialized(void)
 	return areas_mask != 0;
 }
 
-static inline bool area_or_metadata_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls anywhere within the
+ * memory area, including the metadata area.
+ */
+static inline bool area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
-	return (pfn >= PFN(a->page_states)) && (pfn < a->top);
+	return (pfn >= virt_to_pfn(a->page_states)) && (pfn < a->top);
 }
 
-static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls in the usable range of
+ * the given memory area.
+ */
+static inline bool usable_area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
 	return (pfn >= a->base) && (pfn < a->top);
 }
@@ -69,21 +85,19 @@ static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
  */
 static void split(struct mem_area *a, void *addr)
 {
-	uintptr_t pfn = PFN(addr);
-	struct linked_list *p;
-	uintptr_t i, idx;
+	pfn_t pfn = virt_to_pfn(addr);
+	pfn_t i, idx;
 	u8 order;
 
-	assert(a && area_contains(a, pfn));
+	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
 	order = a->page_states[idx];
 	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	/* Remove the block from its free list */
-	p = list_remove(addr);
-	assert(p);
+	list_remove(addr);
 
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
@@ -92,9 +106,9 @@ static void split(struct mem_area *a, void *addr)
 	}
 	order--;
 	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, p);
+	list_add(a->freelists + order, addr);
 	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, (void *)((pfn + BIT(order)) * PAGE_SIZE));
+	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
 }
 
@@ -105,7 +119,7 @@ static void split(struct mem_area *a, void *addr)
  */
 static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 {
-	struct linked_list *p, *res = NULL;
+	struct linked_list *p;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
@@ -130,17 +144,17 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	for (; order > sz; order--)
 		split(a, p);
 
-	res = list_remove(p);
-	memset(a->page_states + (PFN(res) - a->base), ALLOC_MASK | order, BIT(order));
-	return res;
+	list_remove(p);
+	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	return p;
 }
 
-static struct mem_area *get_area(uintptr_t pfn)
+static struct mem_area *get_area(pfn_t pfn)
 {
 	uintptr_t i;
 
 	for (i = 0; i < MAX_AREAS; i++)
-		if ((areas_mask & BIT(i)) && area_contains(areas + i, pfn))
+		if ((areas_mask & BIT(i)) && usable_area_contains_pfn(areas + i, pfn))
 			return areas + i;
 	return NULL;
 }
@@ -160,17 +174,16 @@ static struct mem_area *get_area(uintptr_t pfn)
  * - all of the pages of the two blocks must have the same block size
  * - the function is called with the lock held
  */
-static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2)
+static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 {
-	uintptr_t first, second, i;
-	struct linked_list *li;
+	pfn_t first, second, i;
 
 	assert(IS_ALIGNED_ORDER(pfn, order) && IS_ALIGNED_ORDER(pfn2, order));
 	assert(pfn2 == pfn + BIT(order));
 	assert(a);
 
 	/* attempting to coalesce two blocks that belong to different areas */
-	if (!area_contains(a, pfn) || !area_contains(a, pfn2 + BIT(order) - 1))
+	if (!usable_area_contains_pfn(a, pfn) || !usable_area_contains_pfn(a, pfn2 + BIT(order) - 1))
 		return false;
 	first = pfn - a->base;
 	second = pfn2 - a->base;
@@ -179,17 +192,15 @@ static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2
 		return false;
 
 	/* we can coalesce, remove both blocks from their freelists */
-	li = list_remove((void *)(pfn2 << PAGE_SHIFT));
-	assert(li);
-	li = list_remove((void *)(pfn << PAGE_SHIFT));
-	assert(li);
+	list_remove(pfn_to_virt(pfn2));
+	list_remove(pfn_to_virt(pfn));
 	/* check the metadata entries and update with the new size */
 	for (i = 0; i < (2ull << order); i++) {
 		assert(a->page_states[first + i] == order);
 		a->page_states[first + i] = order + 1;
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
-	list_add(a->freelists + order + 1, li);
+	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
 	return true;
 }
 
@@ -209,7 +220,7 @@ static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2
  */
 static void _free_pages(void *mem)
 {
-	uintptr_t pfn2, pfn = PFN(mem);
+	pfn_t pfn2, pfn = virt_to_pfn(mem);
 	struct mem_area *a = NULL;
 	uintptr_t i, p;
 	u8 order;
@@ -232,7 +243,7 @@ static void _free_pages(void *mem)
 	/* ensure that the block is aligned properly for its size */
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	/* ensure that the area can contain the whole block */
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
@@ -268,63 +279,68 @@ void free_pages(void *mem)
 	spin_unlock(&lock);
 }
 
-static void *_alloc_page_special(uintptr_t addr)
+static int _reserve_one_page(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t mask, i;
+	pfn_t mask, i;
 
-	a = get_area(PFN(addr));
-	assert(a);
-	i = PFN(addr) - a->base;
+	a = get_area(pfn);
+	if (!a)
+		return -1;
+	i = pfn - a->base;
 	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
-		return NULL;
+		return -1;
 	while (a->page_states[i]) {
-		mask = GENMASK_ULL(63, PAGE_SHIFT + a->page_states[i]);
-		split(a, (void *)(addr & mask));
+		mask = GENMASK_ULL(63, a->page_states[i]);
+		split(a, pfn_to_virt(pfn & mask));
 	}
 	a->page_states[i] = SPECIAL_MASK;
-	return (void *)addr;
+	return 0;
 }
 
-static void _free_page_special(uintptr_t addr)
+static void _unreserve_one_page(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t i;
+	pfn_t i;
 
-	a = get_area(PFN(addr));
+	a = get_area(pfn);
 	assert(a);
-	i = PFN(addr) - a->base;
+	i = pfn - a->base;
 	assert(a->page_states[i] == SPECIAL_MASK);
 	a->page_states[i] = ALLOC_MASK;
-	_free_pages((void *)addr);
+	_free_pages(pfn_to_virt(pfn));
 }
 
-void *alloc_pages_special(uintptr_t addr, size_t n)
+int reserve_pages(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		if (!_alloc_page_special(addr + i * PAGE_SIZE))
+		if (_reserve_one_page(pfn + i))
 			break;
 	if (i < n) {
 		for (n = 0 ; n < i; n++)
-			_free_page_special(addr + n * PAGE_SIZE);
-		addr = 0;
+			_unreserve_one_page(pfn + n);
		n = 0;
 	}
 	spin_unlock(&lock);
-	return (void *)addr;
+	return -!n;
 }
 
-void free_pages_special(uintptr_t addr, size_t n)
+void unreserve_pages(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		_free_page_special(addr + i * PAGE_SIZE);
+		_unreserve_one_page(pfn + i);
 	spin_unlock(&lock);
 }
 
@@ -351,11 +367,6 @@ void *alloc_pages_area(unsigned int area, unsigned int order)
 	return page_memalign_order_area(area, order, order);
 }
 
-void *alloc_pages(unsigned int order)
-{
-	return alloc_pages_area(AREA_ANY, order);
-}
-
 /*
  * Allocates (1 << order) physically contiguous aligned pages.
  * Returns NULL if the allocation was not possible.
@@ -370,18 +381,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 	return page_memalign_order_area(area, size, alignment);
 }
 
-void *memalign_pages(size_t alignment, size_t size)
-{
-	return memalign_pages_area(AREA_ANY, alignment, size);
-}
-
-/*
- * Allocates one page
- */
-void *alloc_page()
-{
-	return alloc_pages(0);
-}
 
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
@@ -416,7 +415,7 @@ void page_alloc_ops_enable(void)
  * - the memory area to add does not overlap with existing areas
  * - the memory area to add has at least 5 pages available
  */
-static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
+static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 {
 	size_t table_size, npages, i;
 	struct mem_area *a;
@@ -437,7 +436,7 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 
 	/* fill in the values of the new area */
 	a = areas + n;
-	a->page_states = (void *)(start_pfn << PAGE_SHIFT);
+	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
 	npages = top_pfn - a->base;
@@ -447,14 +446,14 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 	for (i = 0; i < MAX_AREAS; i++) {
 		if (!(areas_mask & BIT(i)))
 			continue;
-		assert(!area_or_metadata_contains(areas + i, start_pfn));
-		assert(!area_or_metadata_contains(areas + i, top_pfn - 1));
-		assert(!area_or_metadata_contains(a, PFN(areas[i].page_states)));
-		assert(!area_or_metadata_contains(a, areas[i].top - 1));
+		assert(!area_contains_pfn(areas + i, start_pfn));
+		assert(!area_contains_pfn(areas + i, top_pfn - 1));
+		assert(!area_contains_pfn(a, virt_to_pfn(areas[i].page_states)));
+		assert(!area_contains_pfn(a, areas[i].top - 1));
 	}
 	/* initialize all freelists for the new area */
 	for (i = 0; i < NLISTS; i++)
-		a->freelists[i].next = a->freelists[i].prev = a->freelists + i;
+		a->freelists[i].prev = a->freelists[i].next = a->freelists + i;
 
 	/* initialize the metadata for the available memory */
 	for (i = a->base; i < a->top; i += 1ull << order) {
@@ -473,13 +472,13 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
-		list_add(a->freelists + order, (void *)(i << PAGE_SHIFT));
+		list_add(a->freelists + order, pfn_to_virt(i));
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
 }
 
-static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, uintptr_t *top_pfn)
+static void __page_alloc_init_area(u8 n, pfn_t cutoff, pfn_t base_pfn, pfn_t *top_pfn)
 {
 	if (*top_pfn > cutoff) {
 		spin_lock(&lock);
@@ -500,7 +499,7 @@ static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, u
  * Prerequisites:
  * see _page_alloc_init_area
  */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn)
 {
 	if (n != AREA_ANY_NUMBER) {
 		__page_alloc_init_area(n, 0, base_pfn, &top_pfn);
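A usage sketch for the renamed reservation interface; the address, size,
and function names are hypothetical, and error handling is reduced to an
assert (assuming the usual kvm-unit-tests environment):

/* Sketch: reserve a specific physical range, use it, release it. */
#include <libcflat.h>	/* phys_addr_t, assert */
#include <alloc_page.h>

static void use_fixed_range(void)
{
	phys_addr_t base = 0x100000;	/* hypothetical: 16 pages at 1 MiB */
	size_t npages = 16;

	/* returns 0 on success, -1 if any page cannot be reserved */
	assert(reserve_pages(base, npages) == 0);

	/* ... the pages are now marked reserved and won't be handed out ... */

	unreserve_pages(base, npages);
}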
From patchwork Fri Jan 15 12:37:25 2021
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 06/11] lib/alloc.h: remove align_min from struct alloc_ops
Date: Fri, 15 Jan 2021 13:37:25 +0100
Message-Id: <20210115123730.381612-7-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Remove align_min from struct alloc_ops, since it is no longer used.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc.h      | 1 -
 lib/alloc_page.c | 1 -
 lib/alloc_phys.c | 9 +++++----
 lib/vmalloc.c    | 1 -
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/lib/alloc.h b/lib/alloc.h
index 9b4b634..db90b01 100644
--- a/lib/alloc.h
+++ b/lib/alloc.h
@@ -25,7 +25,6 @@
 struct alloc_ops {
 	void *(*memalign)(size_t alignment, size_t size);
 	void (*free)(void *ptr);
-	size_t align_min;
 };
 
 extern struct alloc_ops *alloc_ops;

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 337a4e0..7d1fa85 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -385,7 +385,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
 	.free = free_pages,
-	.align_min = PAGE_SIZE,
 };
 
 /*

diff --git a/lib/alloc_phys.c b/lib/alloc_phys.c
index 72e20f7..a4d2bf2 100644
--- a/lib/alloc_phys.c
+++ b/lib/alloc_phys.c
@@ -29,8 +29,8 @@ static phys_addr_t base, top;
 static void *early_memalign(size_t alignment, size_t size);
 static struct alloc_ops early_alloc_ops = {
 	.memalign = early_memalign,
-	.align_min = DEFAULT_MINIMUM_ALIGNMENT
 };
+static size_t align_min;
 
 struct alloc_ops *alloc_ops = &early_alloc_ops;
 
@@ -39,8 +39,7 @@ void phys_alloc_show(void)
 	int i;
 
 	spin_lock(&lock);
-	printf("phys_alloc minimum alignment: %#" PRIx64 "\n",
-	       (u64)early_alloc_ops.align_min);
+	printf("phys_alloc minimum alignment: %#" PRIx64 "\n", (u64)align_min);
 	for (i = 0; i < nr_regions; ++i)
 		printf("%016" PRIx64 "-%016" PRIx64 " [%s]\n",
 		       (u64)regions[i].base,
@@ -64,7 +63,7 @@ void phys_alloc_set_minimum_alignment(phys_addr_t align)
 {
 	assert(align && !(align & (align - 1)));
 	spin_lock(&lock);
-	early_alloc_ops.align_min = align;
+	align_min = align;
 	spin_unlock(&lock);
 }
 
@@ -83,6 +82,8 @@ static phys_addr_t phys_alloc_aligned_safe(phys_addr_t size,
 		top_safe = MIN(top_safe, 1ULL << 32);
 
 	assert(base < top_safe);
+	if (align < align_min)
+		align = align_min;
 	addr = ALIGN(base, align);
 	size += addr - base;

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 6b52790..aa7cc41 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -192,7 +192,6 @@ static void vm_free(void *mem)
 static struct alloc_ops vmalloc_ops = {
 	.memalign = vm_memalign,
 	.free = vm_free,
-	.align_min = PAGE_SIZE,
 };
 
 void __attribute__((__weak__)) find_highmem(void)
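The minimum alignment is now a private detail of the physical allocator
instead of a field advertised through struct alloc_ops; in sketch form
(simplified from the hunks above, the function name is made up):

/* Sketch: the early allocator clamps the requested alignment itself. */
static size_t align_min;	/* set by phys_alloc_set_minimum_alignment() */

static phys_addr_t aligned_start(phys_addr_t base, phys_addr_t align)
{
	if (align < align_min)		/* honor the configured minimum */
		align = align_min;
	return ALIGN(base, align);	/* round up to the effective alignment */
}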
From patchwork Fri Jan 15 12:37:26 2021
X-Patchwork-Submitter: Claudio Imbrenda
X-Patchwork-Id: 12022555
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com, pbonzini@redhat.com,
 cohuck@redhat.com, lvivier@redhat.com, nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 07/11] lib/alloc_page: Optimization to skip known empty freelists
Date: Fri, 15 Jan 2021 13:37:26 +0100
Message-Id: <20210115123730.381612-8-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Keep track of the largest block order available in each area, and do
not search past it when looking for free memory. This will avoid
needlessly scanning the freelists for the largest block orders, which
will be empty in most cases.

Signed-off-by: Claudio Imbrenda
Reviewed-by: Krish Sadhukhan
---
 lib/alloc_page.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 7d1fa85..37f28ce 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -31,6 +31,8 @@ struct mem_area {
 	pfn_t top;
 	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
+	/* Highest block order available in this area */
+	u8 max_order;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
@@ -104,6 +106,8 @@ static void split(struct mem_area *a, void *addr)
 		assert(a->page_states[idx + i] == order);
 		a->page_states[idx + i] = order - 1;
 	}
+	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
+		a->max_order--;
 	order--;
 	/* add the first half block to the appropriate free list */
 	list_add(a->freelists + order, addr);
@@ -127,13 +131,13 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	order = sz > al ? sz : al;
 
 	/* search all free lists for some memory */
-	for ( ; order < NLISTS; order++) {
+	for ( ; order <= a->max_order; order++) {
 		p = a->freelists[order].next;
 		if (!is_list_empty(p))
 			break;
 	}
 
 	/* out of memory */
-	if (order >= NLISTS)
+	if (order > a->max_order)
 		return NULL;
 
 	/*
@@ -201,6 +205,8 @@ static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
 	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
+	if (order + 1 > a->max_order)
+		a->max_order = order + 1;
 	return true;
 }
@@ -438,6 +444,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
+	a->max_order = 0;
 	npages = top_pfn - a->base;
 	assert((a->base - start_pfn) * PAGE_SIZE >= npages);
@@ -472,6 +479,8 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
+		if (order > a->max_order)
+			a->max_order = order;
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
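To see why tracking max_order helps: with 4 KiB pages on a 64-bit target,
NLISTS is 52, but most areas only ever hold far smaller blocks. A
self-contained sketch of the bounded search (simplified stand-ins for the
real structures, not the library code):

    #include <stdio.h>

    #define NLISTS 52	/* stand-in for (BITS_PER_LONG - PAGE_SHIFT) */

    struct area {
    	const char *freelists[NLISTS];	/* NULL stands for an empty list */
    	unsigned int max_order;	/* highest possibly non-empty order */
    };

    /* Bounded search in the style of page_memalign_order after this
     * patch: stop at max_order instead of walking all NLISTS lists. */
    static int find_free_order(const struct area *a, unsigned int min_order)
    {
    	unsigned int order;

    	for (order = min_order; order <= a->max_order; order++)
    		if (a->freelists[order])
    			return (int)order;
    	return -1;	/* out of memory */
    }

    int main(void)
    {
    	struct area a = { .max_order = 9 };

    	a.freelists[7] = "some block";
    	printf("%d\n", find_free_order(&a, 3));	/* 7; orders 10..51 never probed */
    	printf("%d\n", find_free_order(&a, 8));	/* -1 after probing only 8 and 9 */
    	return 0;
    }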
From patchwork Fri Jan 15 12:37:27 2021
X-Patchwork-Submitter: Claudio Imbrenda
X-Patchwork-Id: 12022547
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com, pbonzini@redhat.com,
 cohuck@redhat.com, lvivier@redhat.com, nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 08/11] lib/alloc_page: rework metadata format
Date: Fri, 15 Jan 2021 13:37:27 +0100
Message-Id: <20210115123730.381612-9-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Change the metadata format so that each page's metadata is now a 2-bit
status field instead of two separate flags. This allows four different
states for memory:

STATUS_FRESH: the memory is free and has not been touched at all since
    boot (not even read from!)
STATUS_FREE: the memory is free, but it is probably not fresh any more
STATUS_ALLOCATED: the memory has been allocated and is in use
STATUS_SPECIAL: the memory has been removed from the pool of allocated
    memory for some kind of special purpose, according to the needs of
    the caller

Some macros are also introduced to test the status of a specific
metadata item.
Signed-off-by: Claudio Imbrenda
Reviewed-by: Krish Sadhukhan
---
 lib/alloc_page.c | 49 +++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 37f28ce..d8b2758 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -18,9 +18,20 @@
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 
-#define ORDER_MASK 0x3f
-#define ALLOC_MASK 0x40
-#define SPECIAL_MASK 0x80
+#define ORDER_MASK	0x3f
+#define STATUS_MASK	0xc0
+
+#define STATUS_FRESH	0x00
+#define STATUS_FREE	0x40
+#define STATUS_ALLOCATED	0x80
+#define STATUS_SPECIAL	0xc0
+
+#define IS_FRESH(x)	(((x) & STATUS_MASK) == STATUS_FRESH)
+#define IS_FREE(x)	(((x) & STATUS_MASK) == STATUS_FREE)
+#define IS_ALLOCATED(x)	(((x) & STATUS_MASK) == STATUS_ALLOCATED)
+#define IS_SPECIAL(x)	(((x) & STATUS_MASK) == STATUS_SPECIAL)
+
+#define IS_USABLE(x)	(IS_FREE(x) || IS_FRESH(x))
 
 typedef phys_addr_t pfn_t;
@@ -87,14 +98,14 @@
 static void split(struct mem_area *a, void *addr)
 {
-	pfn_t pfn = virt_to_pfn(addr);
-	pfn_t i, idx;
-	u8 order;
+	pfn_t i, idx, pfn = virt_to_pfn(addr);
+	u8 metadata, order;
 
 	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
-	order = a->page_states[idx];
-	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
+	metadata = a->page_states[idx];
+	order = metadata & ORDER_MASK;
+	assert(IS_USABLE(metadata) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
@@ -103,8 +114,8 @@ static void split(struct mem_area *a, void *addr)
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
-		assert(a->page_states[idx + i] == order);
-		a->page_states[idx + i] = order - 1;
+		assert(a->page_states[idx + i] == metadata);
+		a->page_states[idx + i] = metadata - 1;
 	}
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
@@ -149,7 +160,7 @@
 	split(a, p);
 
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
@@ -243,7 +254,7 @@ static void _free_pages(void *mem)
 	order = a->page_states[p] & ORDER_MASK;
 
 	/* ensure that the first page is allocated and not special */
-	assert(a->page_states[p] == (order | ALLOC_MASK));
+	assert(IS_ALLOCATED(a->page_states[p]));
 	/* ensure that the order has a sane value */
 	assert(order < NLISTS);
 	/* ensure that the block is aligned properly for its size */
@@ -253,9 +264,9 @@ static void _free_pages(void *mem)
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
-		assert(a->page_states[p + i] == (ALLOC_MASK | order));
+		assert(a->page_states[p + i] == (STATUS_ALLOCATED | order));
 		/* set the page as free */
-		a->page_states[p + i] &= ~ALLOC_MASK;
+		a->page_states[p + i] = STATUS_FREE | order;
 	}
 	/* provisionally add the block to the appropriate free list */
 	list_add(a->freelists + order, mem);
@@ -294,13 +305,13 @@ static int _reserve_one_page(pfn_t pfn)
 	if (!a)
 		return -1;
 	i = pfn - a->base;
-	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
+	if (!IS_USABLE(a->page_states[i]))
 		return -1;
 	while (a->page_states[i]) {
 		mask = GENMASK_ULL(63, a->page_states[i]);
 		split(a, pfn_to_virt(pfn & mask));
 	}
-	a->page_states[i] = SPECIAL_MASK;
+	a->page_states[i] = STATUS_SPECIAL;
 	return 0;
 }
@@ -312,8 +323,8 @@ static void _unreserve_one_page(pfn_t pfn)
 	a = get_area(pfn);
 	assert(a);
 	i = pfn - a->base;
-	assert(a->page_states[i] == SPECIAL_MASK);
-	a->page_states[i] = ALLOC_MASK;
+	assert(a->page_states[i] == STATUS_SPECIAL);
+	a->page_states[i] = STATUS_ALLOCATED;
 	_free_pages(pfn_to_virt(pfn));
 }
@@ -477,7 +488,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 		order++;
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
-		memset(a->page_states + (i - a->base), order, BIT(order));
+		memset(a->page_states + (i - a->base), STATUS_FRESH | order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
 		if (order > a->max_order)
 			a->max_order = order;
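Each page's metadata is now a single byte, with the status in the top two
bits and the block order in the low six, so the macros can be exercised in
isolation. A small demonstration (the macro definitions are copied from the
patch; main is just scaffolding):

    #include <stdint.h>
    #include <stdio.h>

    /* Copied from the patch above. */
    #define ORDER_MASK		0x3f
    #define STATUS_MASK		0xc0
    #define STATUS_FRESH	0x00
    #define STATUS_FREE		0x40
    #define STATUS_ALLOCATED	0x80
    #define STATUS_SPECIAL	0xc0
    #define IS_FRESH(x)	(((x) & STATUS_MASK) == STATUS_FRESH)
    #define IS_FREE(x)	(((x) & STATUS_MASK) == STATUS_FREE)
    #define IS_ALLOCATED(x)	(((x) & STATUS_MASK) == STATUS_ALLOCATED)
    #define IS_USABLE(x)	(IS_FREE(x) || IS_FRESH(x))

    int main(void)
    {
    	/* Status in bits 7-6, block order in bits 5-0. */
    	uint8_t state = STATUS_ALLOCATED | 3;	/* allocated, order-3 block */

    	printf("order=%u allocated=%d usable=%d\n",
    	       state & ORDER_MASK, IS_ALLOCATED(state), IS_USABLE(state));

    	/* What _free_pages now does: keep the order, flip the status. */
    	state = STATUS_FREE | (state & ORDER_MASK);
    	printf("order=%u allocated=%d usable=%d\n",
    	       state & ORDER_MASK, IS_ALLOCATED(state), IS_USABLE(state));
    	return 0;
    }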
From patchwork Fri Jan 15 12:37:28 2021
X-Patchwork-Submitter: Claudio Imbrenda
X-Patchwork-Id: 12022545
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com, pbonzini@redhat.com,
 cohuck@redhat.com, lvivier@redhat.com, nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 09/11] lib/alloc: replace areas with more generic flags
Date: Fri, 15 Jan 2021 13:37:28 +0100
Message-Id: <20210115123730.381612-10-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Replace the areas parameter with a more generic flags parameter. This
allows for up to 16 allocation areas and 16 allocation flags. This
patch introduces the flags and changes the names of the functions;
subsequent patches will actually wire up the flags to do something.

The first two flags introduced are:
- FLAG_DONTZERO, to ask that the allocated memory not be zeroed
- FLAG_FRESH, to indicate that the allocated memory should not have
  been touched (read from or written to) in any way since boot

This patch also fixes the order of arguments so that alignment
consistently comes first and size second, thereby fixing a bug where
the two values would get swapped.
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Signed-off-by: Claudio Imbrenda
Reviewed-by: Krish Sadhukhan
---
 lib/alloc_page.h | 39 ++++++++++++++++++++++-----------------
 lib/alloc_page.c | 16 ++++++++--------
 lib/s390x/smp.c  |  2 +-
 3 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 6fd2ff0..1af1419 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -11,8 +11,13 @@
 #include
 #include
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define AREA_ANY_NUMBER	0xff
+
+#define AREA_ANY	0x00000
+#define AREA_MASK	0x0ffff
+
+#define FLAG_DONTZERO	0x10000
+#define FLAG_FRESH	0x20000
 
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);
@@ -30,39 +35,39 @@ void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn);
 void page_alloc_ops_enable(void);
 
 /*
- * Allocate aligned memory from the specified areas.
- * areas is a bitmap of allowed areas
+ * Allocate aligned memory with the specified flags.
+ * flags is a bitmap of allowed areas and flags.
  * alignment must be a power of 2
  */
-void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags);
 
 /*
- * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
+ * Allocate aligned memory from any area and with default flags.
+ * Equivalent to memalign_pages_flags(alignment, size, AREA_ANY).
  */
 static inline void *memalign_pages(size_t alignment, size_t size)
 {
-	return memalign_pages_area(AREA_ANY, alignment, size);
+	return memalign_pages_flags(alignment, size, AREA_ANY);
 }
 
 /*
- * Allocate naturally aligned memory from the specified areas.
- * Equivalent to memalign_pages_area(areas, 1ull << order, 1ull << order).
+ * Allocate 1ull << order naturally aligned pages with the specified flags.
+ * Equivalent to memalign_pages_flags(1ull << order, 1ull << order, flags).
  */
-void *alloc_pages_area(unsigned int areas, unsigned int order);
+void *alloc_pages_flags(unsigned int order, unsigned int flags);
 
 /*
- * Allocate naturally aligned pages from any area; the number of allocated
- * pages is 1 << order.
- * Equivalent to alloc_pages_area(AREA_ANY, order);
+ * Allocate 1ull << order naturally aligned pages from any area and with
+ * default flags.
+ * Equivalent to alloc_pages_flags(order, AREA_ANY);
  */
 static inline void *alloc_pages(unsigned int order)
 {
-	return alloc_pages_area(AREA_ANY, order);
+	return alloc_pages_flags(order, AREA_ANY);
 }
 
 /*
- * Allocate one page from any area.
+ * Allocate one page from any area and with default flags.
  * Equivalent to alloc_pages(0);
  */
 static inline void *alloc_page(void)
@@ -83,7 +88,7 @@ void free_pages(void *mem);
 static inline void free_page(void *mem)
 {
-	return free_pages(mem);
+	free_pages(mem);
 }
 
 /*
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index d8b2758..47e2981 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -361,16 +361,16 @@ void unreserve_pages(phys_addr_t addr, size_t n)
 	spin_unlock(&lock);
 }
 
-static void *page_memalign_order_area(unsigned area, u8 ord, u8 al)
+static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 {
 	void *res = NULL;
-	int i;
+	int i, area;
 
 	spin_lock(&lock);
-	area &= areas_mask;
+	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
-			res = page_memalign_order(areas + i, ord, al);
+			res = page_memalign_order(areas + i, al, ord);
 	spin_unlock(&lock);
 	return res;
 }
@@ -379,23 +379,23 @@
 * Allocates (1 << order) physically contiguous and naturally aligned pages.
 * Returns NULL if the allocation was not possible.
 */
-void *alloc_pages_area(unsigned int area, unsigned int order)
+void *alloc_pages_flags(unsigned int order, unsigned int flags)
 {
-	return page_memalign_order_area(area, order, order);
+	return page_memalign_order_flags(order, order, flags);
 }
 
 /*
 * Allocates (1 << order) physically contiguous aligned pages.
 * Returns NULL if the allocation was not possible.
 */
-void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags)
 {
 	assert(is_power_of_2(alignment));
 	alignment = get_order(PAGE_ALIGN(alignment) >> PAGE_SHIFT);
 	size = get_order(PAGE_ALIGN(size) >> PAGE_SHIFT);
 	assert(alignment < NLISTS);
 	assert(size < NLISTS);
-	return page_memalign_order_area(area, size, alignment);
+	return page_memalign_order_flags(alignment, size, flags);
 }
diff --git a/lib/s390x/smp.c b/lib/s390x/smp.c
index 77d80ca..44b2eb4 100644
--- a/lib/s390x/smp.c
+++ b/lib/s390x/smp.c
@@ -190,7 +190,7 @@ int smp_cpu_setup(uint16_t addr, struct psw psw)
 
 	sigp_retry(cpu->addr, SIGP_INITIAL_CPU_RESET, 0, NULL);
 
-	lc = alloc_pages_area(AREA_DMA31, 1);
+	lc = alloc_pages_flags(1, AREA_DMA31);
 	cpu->lowcore = lc;
 	memset(lc, 0, PAGE_SIZE * 2);
 	sigp_retry(cpu->addr, SIGP_SET_PREFIX, (unsigned long )lc, NULL);
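With the new interface, the area selector and the behaviour flags travel in a
single bitmap, and alignment consistently precedes size. A usage sketch under
those assumptions (AREA_DMA31 is taken from the s390x hunk above; the include
and the PAGE_SIZE availability are illustrative, not prescribed by the patch):

    #include <alloc_page.h>	/* the patched header above */

    static void allocation_examples(void)
    {
    	/* Two naturally aligned pages from any area, default flags. */
    	void *any = alloc_pages(1);

    	/* Two pages from the area below 2 GiB, as in smp_cpu_setup. */
    	void *low = alloc_pages_flags(1, AREA_DMA31);

    	/* 8 KiB aligned to 16 KiB: alignment first, then size. */
    	void *aligned = memalign_pages_flags(4 * PAGE_SIZE, 2 * PAGE_SIZE, AREA_ANY);

    	free_pages(aligned);
    	free_pages(low);
    	free_pages(any);
    }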
From patchwork Fri Jan 15 12:37:29 2021
X-Patchwork-Submitter: Claudio Imbrenda
X-Patchwork-Id: 12022557
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com, pbonzini@redhat.com,
 cohuck@redhat.com, lvivier@redhat.com, nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 10/11] lib/alloc_page: Wire up FLAG_DONTZERO
Date: Fri, 15 Jan 2021 13:37:29 +0100
Message-Id: <20210115123730.381612-11-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Memory allocated without FLAG_DONTZERO will now be zeroed before being
returned to the caller. This means that by default all allocated memory
is now zeroed, restoring the default behaviour that had been
accidentally removed by a previous commit.
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Reported-by: Nadav Amit
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 47e2981..95d957b 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -372,6 +372,8 @@ static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 		if (area & BIT(i))
 			res = page_memalign_order(areas + i, al, ord);
 	spin_unlock(&lock);
+	if (res && !(flags & FLAG_DONTZERO))
+		memset(res, 0, BIT(ord) * PAGE_SIZE);
 	return res;
 }
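A test can rely on the restored default. A sketch of what that looks like
(illustrative test body and include, not part of the series):

    #include <alloc_page.h>

    static void check_default_zeroing(void)
    {
    	unsigned long *p = alloc_page();	/* zeroed: FLAG_DONTZERO not set */
    	unsigned long i;

    	for (i = 0; i < PAGE_SIZE / sizeof(*p); i++)
    		assert(p[i] == 0);
    	free_page(p);

    	/* Opting out: the contents are whatever the pool last held. */
    	p = alloc_pages_flags(0, FLAG_DONTZERO);
    	free_page(p);
    }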
From patchwork Fri Jan 15 12:37:30 2021
X-Patchwork-Submitter: Claudio Imbrenda
X-Patchwork-Id: 12022549
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com, pbonzini@redhat.com,
 cohuck@redhat.com, lvivier@redhat.com, nadav.amit@gmail.com, krish.sadhukhan@oracle.com
Subject: [kvm-unit-tests PATCH v2 11/11] lib/alloc_page: Properly handle requests for fresh blocks
Date: Fri, 15 Jan 2021 13:37:30 +0100
Message-Id: <20210115123730.381612-12-imbrenda@linux.ibm.com>
In-Reply-To: <20210115123730.381612-1-imbrenda@linux.ibm.com>
References: <20210115123730.381612-1-imbrenda@linux.ibm.com>

Upon initialization, all memory in an area is marked as fresh. Once
memory is used and freed, the freed memory is marked as free. Free
memory is always appended to the front of the freelist, so fresh
memory stays on the tail.

When a block of fresh memory is split, the two halves are put on the
tail of the appropriate freelist, so they can still be found when
needed.

When a fresh block is requested, a fresh block one order bigger is
taken, the first half is put back in the free pool (on the tail), and
the second half is returned. The reason behind this is that the first
page of every block always contains the pointers of the freelist;
since the first page of a fresh block is therefore not actually fresh,
it cannot be returned when a fresh allocation is requested.
Signed-off-by: Claudio Imbrenda
Reviewed-by: Krish Sadhukhan
---
 lib/alloc_page.c | 51 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 11 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 95d957b..84f01e1 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -120,10 +120,17 @@ static void split(struct mem_area *a, void *addr)
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
 	order--;
-	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, addr);
-	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+
+	/* add the two half blocks to the appropriate free list */
+	if (IS_FRESH(metadata)) {
+		/* add to the tail if the blocks are fresh */
+		list_add_tail(a->freelists + order, addr);
+		list_add_tail(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	} else {
+		/* add to the front if the blocks are dirty */
+		list_add(a->freelists + order, addr);
+		list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	}
 }
@@ -132,21 +139,33 @@
  *
  * Both parameters must be not larger than the largest allowed order
  */
-static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
+static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz, bool fresh)
 {
 	struct linked_list *p;
+	pfn_t idx;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
 	/* we need the bigger of the two as starting point */
 	order = sz > al ? sz : al;
+	/*
+	 * we need to go one order up if we want a completely fresh block,
+	 * since the first page contains the freelist pointers, and
+	 * therefore it is always dirty
+	 */
+	order += fresh;
 
 	/* search all free lists for some memory */
 	for ( ; order <= a->max_order; order++) {
-		p = a->freelists[order].next;
-		if (!is_list_empty(p))
-			break;
+		p = fresh ? a->freelists[order].prev : a->freelists[order].next;
+		if (is_list_empty(p))
+			continue;
+		idx = virt_to_pfn(p) - a->base;
+		if (fresh && !IS_FRESH(a->page_states[idx]))
+			continue;
+		break;
 	}
 
 	/* out of memory */
 	if (order > a->max_order)
 		return NULL;
@@ -160,7 +179,16 @@
 	split(a, p);
 
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
+	/* We now have a block twice the size, but the first page is dirty. */
+	if (fresh) {
+		order--;
+		/* Put back the first (partially dirty) half of the block */
+		memset(a->page_states + idx, STATUS_FRESH | order, BIT(order));
+		list_add_tail(a->freelists + order, p);
+		idx += BIT(order);
+		p = pfn_to_virt(a->base + idx);
+	}
+	memset(a->page_states + idx, STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
@@ -364,13 +392,14 @@ void unreserve_pages(phys_addr_t addr, size_t n)
 static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 {
 	void *res = NULL;
-	int i, area;
+	int i, area, fresh;
 
+	fresh = !!(flags & FLAG_FRESH);
 	spin_lock(&lock);
 	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
-			res = page_memalign_order(areas + i, al, ord);
+			res = page_memalign_order(areas + i, al, ord, fresh);
 	spin_unlock(&lock);
 	if (res && !(flags & FLAG_DONTZERO))
 		memset(res, 0, BIT(ord) * PAGE_SIZE);
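Note that a caller who wants memory that is still untouched also needs
FLAG_DONTZERO: as the final hunk shows, the default zeroing pass would
otherwise write to the block and dirty it. A sketch of the intended usage
(hypothetical test body, not part of the series):

    #include <alloc_page.h>

    static void test_fresh_page(void)
    {
    	/*
    	 * Request one page that has never been read or written since
    	 * boot; FLAG_DONTZERO keeps the allocator itself from touching
    	 * the page before returning it.
    	 */
    	void *fresh = alloc_pages_flags(0, FLAG_FRESH | FLAG_DONTZERO);

    	if (!fresh)	/* no suitably sized fresh block was left */
    		return;
    	/* ... inspect the page here ... */
    	free_page(fresh);
    }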