From patchwork Wed Dec 16 20:11:49 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978529
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 01/12] lib/x86: fix page.h to include the generic header
Date: Wed, 16 Dec 2020 21:11:49 +0100
Message-Id: <20201216201200.255172-2-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Bring x86 in line with the other architectures and include the generic
header at asm-generic/page.h. This provides the macros PAGE_SHIFT,
PAGE_SIZE, PAGE_MASK, virt_to_pfn, and pfn_to_virt.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
 lib/x86/asm/page.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
index 1359eb7..2cf8881 100644
--- a/lib/x86/asm/page.h
+++ b/lib/x86/asm/page.h
@@ -13,9 +13,7 @@
 typedef unsigned long pteval_t;
 typedef unsigned long pgd_t;
 
-#define PAGE_SHIFT 12
-#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
-#define PAGE_MASK (~(PAGE_SIZE-1))
+#include <asm-generic/page.h>
 
 #ifndef __ASSEMBLY__
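[A minimal sketch of what the generic header provides, not part of the patch.
It assumes the usual 4k pages and the identity mapping the tests run with;
the helper name page_frame_of is illustrative.]

	#include <asm/page.h>

	/* Illustrative helper: round an address down to its page and
	 * recover the physical frame number. */
	static unsigned long page_frame_of(void *addr)
	{
		/* PAGE_MASK clears the in-page offset bits,
		 * PAGE_SIZE == 1ul << PAGE_SHIFT */
		void *start = (void *)((unsigned long)addr & PAGE_MASK);

		/* virt_to_pfn/pfn_to_virt convert between pointers and
		 * PFNs (a shift by PAGE_SHIFT under the identity map) */
		return virt_to_pfn(start);
	}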
From patchwork Wed Dec 16 20:11:50 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978511
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 02/12] lib/list.h: add list_add_tail
Date: Wed, 16 Dec 2020 21:11:50 +0100
Message-Id: <20201216201200.255172-3-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Add a list_add_tail wrapper function to allow adding elements to the
end of a list.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
 lib/list.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/list.h b/lib/list.h
index 18d9516..7f9717e 100644
--- a/lib/list.h
+++ b/lib/list.h
@@ -50,4 +50,13 @@ static inline void list_add(struct linked_list *head, struct linked_list *li)
 	head->next = li;
 }
 
+/*
+ * Add the given element before the given list head.
+ */
+static inline void list_add_tail(struct linked_list *head, struct linked_list *li)
+{
+	assert(head);
+	list_add(head->prev, li);
+}
+
 #endif
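[A short usage sketch, not part of the patch: a FIFO queue built on the
circular doubly-linked list from lib/list.h. The names "item", "queue" and
"enqueue" are illustrative.]

	#include "list.h"

	struct item {
		struct linked_list link;
		int payload;
	};

	/* an empty list head points at itself in both directions */
	static struct linked_list queue = { .prev = &queue, .next = &queue };

	/* append at the tail: elements come back out of queue.next
	 * in FIFO order */
	static void enqueue(struct item *it)
	{
		list_add_tail(&queue, &it->link);
	}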
From patchwork Wed Dec 16 20:11:51 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978513
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 03/12] lib/vmalloc: add some asserts and improvements
Date: Wed, 16 Dec 2020 21:11:51 +0100
Message-Id: <20201216201200.255172-4-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Add some asserts to make sure the state is consistent. Simplify and
improve the readability of vm_free.
Fixes: 3f6fee0d4da4 ("lib/vmalloc: vmalloc support for handling allocation metadata")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/vmalloc.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 986a34c..7a49adf 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -162,13 +162,14 @@ static void *vm_memalign(size_t alignment, size_t size)
 static void vm_free(void *mem)
 {
 	struct metadata *m;
-	uintptr_t ptr, end;
+	uintptr_t ptr, page, i;
 
 	/* the pointer is not page-aligned, it was a single-page allocation */
 	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
 		assert(GET_MAGIC(mem) == VM_MAGIC);
-		ptr = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
-		free_page(phys_to_virt(ptr));
+		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
 		return;
 	}
 
@@ -176,13 +177,14 @@ static void vm_free(void *mem)
 	m = GET_METADATA(mem);
 	assert(m->magic == VM_MAGIC);
 	assert(m->npages > 0);
+	assert(m->npages < BIT_ULL(BITS_PER_LONG - PAGE_SHIFT));
 	/* free all the pages including the metadata page */
-	ptr = (uintptr_t)mem - PAGE_SIZE;
-	end = ptr + m->npages * PAGE_SIZE;
-	for ( ; ptr < end; ptr += PAGE_SIZE)
-		free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
-	/* free the last one separately to avoid overflow issues */
-	free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
+	ptr = (uintptr_t)m & PAGE_MASK;
+	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
+		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
+	}
 }
 
 static struct alloc_ops vmalloc_ops = {
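[For context when reviewing the new loop, a sketch of the allocation layout
vm_free walks, reconstructed from the code above; GET_METADATA and GET_MAGIC
are the existing lib/vmalloc.c helpers.]

	/*
	 * Multi-page allocation (page-aligned pointer handed to the caller):
	 *
	 *   [ metadata page | page 0 | page 1 | ... | page npages-1 ]
	 *                   ^ mem; GET_METADATA(mem) sits just below it
	 *
	 * The rewritten loop starts at the page holding the metadata,
	 * (uintptr_t)m & PAGE_MASK, and frees m->npages + 1 pages,
	 * asserting that each one is actually mapped.
	 *
	 * Single-page allocation: the returned pointer is NOT page-aligned,
	 * the magic value is read back via GET_MAGIC(mem), and exactly one
	 * page is freed.
	 */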
From patchwork Wed Dec 16 20:11:52 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978517
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 04/12] lib/asm: Fix definitions of memory areas
Date: Wed, 16 Dec 2020 21:11:52 +0100
Message-Id: <20201216201200.255172-5-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Fix the definitions of the memory areas. Bring the headers in line with
the rest of the asm headers by having the appropriate #ifdef _ASM$ARCH_
guards in each of them.
Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan
---
 lib/asm-generic/memory_areas.h |  9 ++++-----
 lib/arm/asm/memory_areas.h     | 11 +++--------
 lib/arm64/asm/memory_areas.h   | 11 +++--------
 lib/powerpc/asm/memory_areas.h | 11 +++--------
 lib/ppc64/asm/memory_areas.h   | 11 +++--------
 lib/s390x/asm/memory_areas.h   | 13 ++++++-------
 lib/x86/asm/memory_areas.h     | 27 ++++++++++++++-----------
 lib/alloc_page.h               |  3 +++
 lib/alloc_page.c               |  4 +---
 9 files changed, 42 insertions(+), 58 deletions(-)

diff --git a/lib/asm-generic/memory_areas.h b/lib/asm-generic/memory_areas.h
index 927baa7..3074afe 100644
--- a/lib/asm-generic/memory_areas.h
+++ b/lib/asm-generic/memory_areas.h
@@ -1,11 +1,10 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
+#define __ASM_GENERIC_MEMORY_AREAS_H__
 
 #define AREA_NORMAL_PFN 0
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 1
 
 #endif

diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
index 927baa7..c723310 100644
--- a/lib/arm/asm/memory_areas.h
+++ b/lib/arm/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM_MEMORY_AREAS_H_
+#define _ASMARM_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/arm64/asm/memory_areas.h b/lib/arm64/asm/memory_areas.h
index 927baa7..18e8ca8 100644
--- a/lib/arm64/asm/memory_areas.h
+++ b/lib/arm64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM64_MEMORY_AREAS_H_
+#define _ASMARM64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/powerpc/asm/memory_areas.h b/lib/powerpc/asm/memory_areas.h
index 927baa7..76d1738 100644
--- a/lib/powerpc/asm/memory_areas.h
+++ b/lib/powerpc/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
+#define _ASMPOWERPC_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/ppc64/asm/memory_areas.h b/lib/ppc64/asm/memory_areas.h
index 927baa7..b9fd46b 100644
--- a/lib/ppc64/asm/memory_areas.h
+++ b/lib/ppc64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPPC64_MEMORY_AREAS_H_
+#define _ASMPPC64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif

diff --git a/lib/s390x/asm/memory_areas.h b/lib/s390x/asm/memory_areas.h
index 4856a27..827bfb3 100644
--- a/lib/s390x/asm/memory_areas.h
+++ b/lib/s390x/asm/memory_areas.h
@@ -1,16 +1,15 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMS390X_MEMORY_AREAS_H_
+#define _ASMS390X_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN BIT(31-12)
+#define AREA_NORMAL_PFN (1 << 19)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
 #define AREA_LOW_PFN 0
 #define AREA_LOW_NUMBER 1
-#define AREA_LOW 2
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 2
 
 #define AREA_DMA31 AREA_LOW

diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
index 952f5bd..e84016f 100644
--- a/lib/x86/asm/memory_areas.h
+++ b/lib/x86/asm/memory_areas.h
@@ -1,21 +1,26 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASM_X86_MEMORY_AREAS_H_
+#define _ASM_X86_MEMORY_AREAS_H_
 
 #define AREA_NORMAL_PFN BIT(36-12)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_PAE_HIGH_PFN BIT(32-12)
-#define AREA_PAE_HIGH_NUMBER 1
-#define AREA_PAE_HIGH 2
+#define AREA_HIGH_PFN BIT(32-12)
+#define AREA_HIGH_NUMBER 1
+#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
 
-#define AREA_LOW_PFN 0
+#define AREA_LOW_PFN BIT(24-12)
 #define AREA_LOW_NUMBER 2
-#define AREA_LOW 4
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_PAE (AREA_PAE | AREA_LOW)
+#define AREA_LOWEST_PFN 0
+#define AREA_LOWEST_NUMBER 3
+#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 4
+
+#define AREA_DMA24 AREA_LOWEST
+#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
+#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)
 
 #endif

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 816ff5d..b6aace5 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -10,6 +10,9 @@
 
 #include <asm/memory_areas.h>
 
+#define AREA_ANY -1
+#define AREA_ANY_NUMBER 0xff
+
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 685ab1e..ed0ff02 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -19,8 +19,6 @@
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
-#define MAX_AREAS 6
-
 #define ORDER_MASK 0x3f
 #define ALLOC_MASK 0x40
 #define SPECIAL_MASK 0x80
@@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
 		return;
 	}
 #ifdef AREA_HIGH_PFN
-	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN), base_pfn, &top_pfn);
+	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN, base_pfn, &top_pfn);
#endif
 	__page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN, base_pfn, &top_pfn);
 #ifdef AREA_LOW_PFN
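[An illustration of how the reworked constants compose, not part of the
patch. Each AREA_* flag is now a single bit derived from its area number,
so callers of the existing alloc_pages_area() API can OR flags together;
the x86 AREA_DMA32 value above is exactly such a combination.]

	#include "alloc_page.h"

	static void area_example(void)
	{
		/* one page from any initialized area (AREA_ANY is -1,
		 * i.e. all bits set) */
		void *p = alloc_pages_area(AREA_ANY, 0);

		/* one page guaranteed below 4 GiB on x86:
		 * AREA_DMA32 == AREA_LOWEST | AREA_LOW */
		void *q = alloc_pages_area(AREA_DMA32, 0);

		free_pages(q);
		free_pages(p);
	}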
From patchwork Wed Dec 16 20:11:53 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978527
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 05/12] lib/alloc_page: fix and improve the page allocator
Date: Wed, 16 Dec 2020 21:11:53 +0100
Message-Id: <20201216201200.255172-6-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>
This patch introduces some improvements to the code: mostly readability
improvements, but also some semantic details and improvements in the
documentation.

* introduce and use pfn_t to semantically tag parameters as PFNs
* remove the PFN macro, use virt_to_pfn instead
* rename area_or_metadata_contains and area_contains to
  area_contains_pfn and usable_area_contains_pfn respectively
* fix/improve comments in lib/alloc_page.h
* move some wrapper functions to the header

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Fixes: 34c950651861 ("lib/alloc_page: allow reserving arbitrary memory ranges")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.h |  49 +++++++++-----
 lib/alloc_page.c | 165 +++++++++++++++++++++++------------------------
 2 files changed, 116 insertions(+), 98 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index b6aace5..d8550c6 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -8,6 +8,7 @@
 #ifndef ALLOC_PAGE_H
 #define ALLOC_PAGE_H 1
 
+#include
 #include <asm/memory_areas.h>
 
 #define AREA_ANY -1
@@ -23,7 +24,7 @@ bool page_alloc_initialized(void);
  * top_pfn is the physical frame number of the first page immediately after
  * the end of the area to initialize
  */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn);
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn);
 
 /* Enables the page allocator. At least one area must have been initialized */
 void page_alloc_ops_enable(void);
@@ -37,9 +38,12 @@ void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
 
 /*
  * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(~0, alignment, size).
+ * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
  */
-void *memalign_pages(size_t alignment, size_t size);
+static inline void *memalign_pages(size_t alignment, size_t size)
+{
+	return memalign_pages_area(AREA_ANY, alignment, size);
+}
 
 /*
  * Allocate naturally aligned memory from the specified areas.
@@ -48,16 +52,22 @@ void *memalign_pages(size_t alignment, size_t size);
 void *alloc_pages_area(unsigned int areas, unsigned int order);
 
 /*
- * Allocate one page from any area.
- * Equivalent to alloc_pages(0);
+ * Allocate naturally aligned memory from any area.
+ * Equivalent to alloc_pages_area(AREA_ANY, order);
  */
-void *alloc_page(void);
+static inline void *alloc_pages(unsigned int order)
+{
+	return alloc_pages_area(AREA_ANY, order);
+}
 
 /*
- * Allocate naturally aligned memory from any area.
- * Equivalent to alloc_pages_area(~0, order);
+ * Allocate one page from any area.
+ * Equivalent to alloc_pages(0);
  */
-void *alloc_pages(unsigned int order);
+static inline void *alloc_page(void)
+{
+	return alloc_pages(0);
+}
 
 /*
  * Frees a memory block allocated with any of the memalign_pages* or
@@ -66,23 +76,32 @@ void *alloc_pages(unsigned int order);
  */
 void free_pages(void *mem);
 
-/* For backwards compatibility */
+/*
+ * Free one page.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_page(void *mem)
 {
 	return free_pages(mem);
 }
 
-/* For backwards compatibility */
+/*
+ * Free pages by order.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_pages_by_order(void *mem, unsigned int order)
 {
 	free_pages(mem);
 }
 
 /*
- * Allocates and reserves the specified memory range if possible.
- * Returns NULL in case of failure.
+ * Allocates and reserves the specified physical memory range if possible.
+ * If the specified range cannot be reserved in its entirety, no action is
+ * performed and false is returned.
+ *
+ * Returns true in case of success, false otherwise.
  */
-void *alloc_pages_special(uintptr_t addr, size_t npages);
+bool alloc_pages_special(phys_addr_t addr, size_t npages);
 
 /*
  * Frees a reserved memory range that had been reserved with
@@ -91,6 +110,6 @@ void *alloc_pages_special(uintptr_t addr, size_t npages);
  * exactly, it can also be a subset, in which case only the specified
  * pages will be freed and unreserved.
  */
-void free_pages_special(uintptr_t addr, size_t npages);
+void free_pages_special(phys_addr_t addr, size_t npages);
 
 #endif

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index ed0ff02..8d2700d 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -17,25 +17,29 @@
 
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
-#define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
 #define ORDER_MASK 0x3f
 #define ALLOC_MASK 0x40
 #define SPECIAL_MASK 0x80
 
+typedef phys_addr_t pfn_t;
+
 struct mem_area {
 	/* Physical frame number of the first usable frame in the area */
-	uintptr_t base;
+	pfn_t base;
 	/* Physical frame number of the first frame outside the area */
-	uintptr_t top;
-	/* Combination of SPECIAL_MASK, ALLOC_MASK, and order */
+	pfn_t top;
+	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
 
+/* Descriptors for each possible area */
 static struct mem_area areas[MAX_AREAS];
+/* Mask of initialized areas */
 static unsigned int areas_mask;
+/* Protects areas and areas mask */
 static struct spinlock lock;
 
 bool page_alloc_initialized(void)
@@ -43,12 +47,24 @@ bool page_alloc_initialized(void)
 	return areas_mask != 0;
 }
 
-static inline bool area_or_metadata_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls anywhere within the
+ * memory area, including the metadata area.
+ */
+static inline bool area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
-	return (pfn >= PFN(a->page_states)) && (pfn < a->top);
+	return (pfn >= virt_to_pfn(a->page_states)) && (pfn < a->top);
 }
 
-static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls in the usable range of
+ * the given memory area.
+ */
+static inline bool usable_area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
 	return (pfn >= a->base) && (pfn < a->top);
 }
@@ -69,21 +85,19 @@ static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
  */
 static void split(struct mem_area *a, void *addr)
 {
-	uintptr_t pfn = PFN(addr);
-	struct linked_list *p;
-	uintptr_t i, idx;
+	pfn_t pfn = virt_to_pfn(addr);
+	pfn_t i, idx;
 	u8 order;
 
-	assert(a && area_contains(a, pfn));
+	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
 	order = a->page_states[idx];
 	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	/* Remove the block from its free list */
-	p = list_remove(addr);
-	assert(p);
+	list_remove(addr);
 
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
@@ -92,9 +106,9 @@ static void split(struct mem_area *a, void *addr)
 	}
 	order--;
 	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, p);
+	list_add(a->freelists + order, addr);
 	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, (void *)((pfn + BIT(order)) * PAGE_SIZE));
+	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
 }
 
 /*
@@ -105,7 +119,7 @@
  */
 static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 {
-	struct linked_list *p, *res = NULL;
+	struct linked_list *p;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
@@ -130,17 +144,17 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	for (; order > sz; order--)
 		split(a, p);
 
-	res = list_remove(p);
-	memset(a->page_states + (PFN(res) - a->base), ALLOC_MASK | order, BIT(order));
-	return res;
+	list_remove(p);
+	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	return p;
 }
 
-static struct mem_area *get_area(uintptr_t pfn)
+static struct mem_area *get_area(pfn_t pfn)
 {
 	uintptr_t i;
 
 	for (i = 0; i < MAX_AREAS; i++)
-		if ((areas_mask & BIT(i)) && area_contains(areas + i, pfn))
+		if ((areas_mask & BIT(i)) && usable_area_contains_pfn(areas + i, pfn))
 			return areas + i;
 	return NULL;
 }
@@ -160,17 +174,16 @@ static struct mem_area *get_area(uintptr_t pfn)
  * - all of the pages of the two blocks must have the same block size
 * - the function is called with the lock held
 */
-static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2)
+static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 {
-	uintptr_t first, second, i;
-	struct linked_list *li;
+	pfn_t first, second, i;
 
 	assert(IS_ALIGNED_ORDER(pfn, order) && IS_ALIGNED_ORDER(pfn2, order));
 	assert(pfn2 == pfn + BIT(order));
 	assert(a);
 
 	/* attempting to coalesce two blocks that belong to different areas */
-	if (!area_contains(a, pfn) || !area_contains(a, pfn2 + BIT(order) - 1))
+	if (!usable_area_contains_pfn(a, pfn) || !usable_area_contains_pfn(a, pfn2 + BIT(order) - 1))
 		return false;
 	first = pfn - a->base;
 	second = pfn2 - a->base;
@@ -179,17 +192,15 @@ static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2
 		return false;
 
 	/* we can coalesce, remove both blocks from their freelists */
-	li = list_remove((void *)(pfn2 << PAGE_SHIFT));
-	assert(li);
-	li = list_remove((void *)(pfn << PAGE_SHIFT));
-	assert(li);
+	list_remove(pfn_to_virt(pfn2));
+	list_remove(pfn_to_virt(pfn));
 	/* check the metadata entries and update with the new size */
 	for (i = 0; i < (2ull << order); i++) {
 		assert(a->page_states[first + i] == order);
 		a->page_states[first + i] = order + 1;
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
-	list_add(a->freelists + order + 1, li);
+	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
 	return true;
 }
 
@@ -209,7 +220,7 @@
  */
 static void _free_pages(void *mem)
 {
-	uintptr_t pfn2, pfn = PFN(mem);
+	pfn_t pfn2, pfn = virt_to_pfn(mem);
 	struct mem_area *a = NULL;
 	uintptr_t i, p;
 	u8 order;
@@ -232,7 +243,7 @@ static void _free_pages(void *mem)
 	/* ensure that the block is aligned properly for its size */
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	/* ensure that the area can contain the whole block */
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
@@ -268,63 +279,68 @@ void free_pages(void *mem)
 	spin_unlock(&lock);
 }
 
-static void *_alloc_page_special(uintptr_t addr)
+static bool _alloc_page_special(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t mask, i;
+	pfn_t mask, i;
 
-	a = get_area(PFN(addr));
-	assert(a);
-	i = PFN(addr) - a->base;
+	a = get_area(pfn);
+	if (!a)
+		return false;
+	i = pfn - a->base;
 	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
-		return NULL;
+		return false;
 	while (a->page_states[i]) {
-		mask = GENMASK_ULL(63, PAGE_SHIFT + a->page_states[i]);
-		split(a, (void *)(addr & mask));
+		mask = GENMASK_ULL(63, a->page_states[i]);
+		split(a, pfn_to_virt(pfn & mask));
 	}
 	a->page_states[i] = SPECIAL_MASK;
-	return (void *)addr;
+	return true;
 }
 
-static void _free_page_special(uintptr_t addr)
+static void _free_page_special(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t i;
+	pfn_t i;
 
-	a = get_area(PFN(addr));
+	a = get_area(pfn);
 	assert(a);
-	i = PFN(addr) - a->base;
+	i = pfn - a->base;
 	assert(a->page_states[i] == SPECIAL_MASK);
 	a->page_states[i] = ALLOC_MASK;
-	_free_pages((void *)addr);
+	_free_pages(pfn_to_virt(pfn));
 }
 
-void *alloc_pages_special(uintptr_t addr, size_t n)
+bool alloc_pages_special(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		if (!_alloc_page_special(addr + i * PAGE_SIZE))
+		if (!_alloc_page_special(pfn + i))
 			break;
 	if (i < n) {
 		for (n = 0 ; n < i; n++)
-			_free_page_special(addr + n * PAGE_SIZE);
-		addr = 0;
+			_free_page_special(pfn + n);
+		n = 0;
 	}
 	spin_unlock(&lock);
-	return (void *)addr;
+	return n;
 }
 
-void free_pages_special(uintptr_t addr, size_t n)
+void free_pages_special(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		_free_page_special(addr + i * PAGE_SIZE);
+		_free_page_special(pfn + i);
 	spin_unlock(&lock);
 }
 
@@ -351,11 +367,6 @@ void *alloc_pages_area(unsigned int area, unsigned int order)
 	return page_memalign_order_area(area, order, order);
 }
 
-void *alloc_pages(unsigned int order)
-{
-	return alloc_pages_area(AREA_ANY, order);
-}
-
 /*
  * Allocates (1 << order) physically contiguous aligned pages.
  * Returns NULL if the allocation was not possible.
@@ -370,18 +381,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 	return page_memalign_order_area(area, size, alignment);
 }
 
-void *memalign_pages(size_t alignment, size_t size)
-{
-	return memalign_pages_area(AREA_ANY, alignment, size);
-}
-
-/*
- * Allocates one page
- */
-void *alloc_page()
-{
-	return alloc_pages(0);
-}
 
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
@@ -416,7 +415,7 @@ void page_alloc_ops_enable(void)
 * - the memory area to add does not overlap with existing areas
 * - the memory area to add has at least 5 pages available
 */
-static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
+static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 {
 	size_t table_size, npages, i;
 	struct mem_area *a;
@@ -437,7 +436,7 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 
 	/* fill in the values of the new area */
 	a = areas + n;
-	a->page_states = (void *)(start_pfn << PAGE_SHIFT);
+	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
 	npages = top_pfn - a->base;
@@ -447,14 +446,14 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 	for (i = 0; i < MAX_AREAS; i++) {
 		if (!(areas_mask & BIT(i)))
 			continue;
-		assert(!area_or_metadata_contains(areas + i, start_pfn));
-		assert(!area_or_metadata_contains(areas + i, top_pfn - 1));
-		assert(!area_or_metadata_contains(a, PFN(areas[i].page_states)));
-		assert(!area_or_metadata_contains(a, areas[i].top - 1));
+		assert(!area_contains_pfn(areas + i, start_pfn));
+		assert(!area_contains_pfn(areas + i, top_pfn - 1));
+		assert(!area_contains_pfn(a, virt_to_pfn(areas[i].page_states)));
+		assert(!area_contains_pfn(a, areas[i].top - 1));
 	}
 	/* initialize all freelists for the new area */
 	for (i = 0; i < NLISTS; i++)
-		a->freelists[i].next = a->freelists[i].prev = a->freelists + i;
+		a->freelists[i].prev = a->freelists[i].next = a->freelists + i;
 
 	/* initialize the metadata for the available memory */
 	for (i = a->base; i < a->top; i += 1ull << order) {
@@ -473,13 +472,13 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
-		list_add(a->freelists + order, (void *)(i << PAGE_SHIFT));
+		list_add(a->freelists + order, pfn_to_virt(i));
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
 }
 
-static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, uintptr_t *top_pfn)
+static void __page_alloc_init_area(u8 n, pfn_t cutoff, pfn_t base_pfn, pfn_t *top_pfn)
 {
 	if (*top_pfn > cutoff) {
 		spin_lock(&lock);
@@ -500,7 +499,7 @@ static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, u
 * Prerequisites:
 * see _page_alloc_init_area
 */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn)
 {
 	if (n != AREA_ANY_NUMBER) {
 		__page_alloc_init_area(n, 0, base_pfn, &top_pfn);
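[A short usage sketch for the changed reservation API, not part of the
patch: alloc_pages_special now reports success as a bool instead of
returning the address. The physical address used here is illustrative.]

	#include "alloc_page.h"

	static void reserve_example(void)
	{
		/* try to reserve 4 specific pages at a fixed (illustrative)
		 * page-aligned physical address */
		phys_addr_t where = 0x100000;

		if (alloc_pages_special(where, 4)) {
			/* [where, where + 4 * PAGE_SIZE) is now reserved;
			 * either all 4 pages were taken or none were */
			free_pages_special(where, 4);
		}
	}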
From patchwork Wed Dec 16 20:11:54 2020
X-Patchwork-Submitter: Claudio Imbrenda <imbrenda@linux.ibm.com>
X-Patchwork-Id: 11978515
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
    pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
    nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 06/12] lib/alloc.h: remove align_min from struct alloc_ops
Date: Wed, 16 Dec 2020 21:11:54 +0100
Message-Id: <20201216201200.255172-7-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>
References: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Remove align_min from struct alloc_ops, since it is no longer used.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan
---
 lib/alloc.h      | 1 -
 lib/alloc_page.c | 1 -
 lib/alloc_phys.c | 9 +++++----
 lib/vmalloc.c    | 1 -
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/lib/alloc.h b/lib/alloc.h
index 9b4b634..db90b01 100644
--- a/lib/alloc.h
+++ b/lib/alloc.h
@@ -25,7 +25,6 @@
 struct alloc_ops {
 	void *(*memalign)(size_t alignment, size_t size);
 	void (*free)(void *ptr);
-	size_t align_min;
 };
 
 extern struct alloc_ops *alloc_ops;

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 8d2700d..b1cdf21 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -385,7 +385,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
 	.free = free_pages,
-	.align_min = PAGE_SIZE,
 };

diff --git a/lib/alloc_phys.c b/lib/alloc_phys.c
index 72e20f7..a4d2bf2 100644
--- a/lib/alloc_phys.c
+++ b/lib/alloc_phys.c
@@ -29,8 +29,8 @@ static phys_addr_t base, top;
 static void *early_memalign(size_t alignment, size_t size);
 static struct alloc_ops early_alloc_ops = {
 	.memalign = early_memalign,
-	.align_min = DEFAULT_MINIMUM_ALIGNMENT
 };
+static size_t align_min;
 
 struct alloc_ops *alloc_ops = &early_alloc_ops;
 
@@ -39,8 +39,7 @@ void phys_alloc_show(void)
 	int i;
 
 	spin_lock(&lock);
-	printf("phys_alloc minimum alignment: %#" PRIx64 "\n",
-	       (u64)early_alloc_ops.align_min);
+	printf("phys_alloc minimum alignment: %#" PRIx64 "\n", (u64)align_min);
 	for (i = 0; i < nr_regions; ++i)
 		printf("%016" PRIx64 "-%016" PRIx64 " [%s]\n",
 			(u64)regions[i].base,
@@ -64,7 +63,7 @@ void phys_alloc_set_minimum_alignment(phys_addr_t align)
 {
 	assert(align && !(align & (align - 1)));
 	spin_lock(&lock);
-	early_alloc_ops.align_min = align;
+	align_min = align;
 	spin_unlock(&lock);
 }
 
@@ -83,6 +82,8 @@ static phys_addr_t phys_alloc_aligned_safe(phys_addr_t size,
 	top_safe = MIN(top_safe, 1ULL << 32);
 	assert(base < top_safe);
 
+	if (align < align_min)
+		align = align_min;
 	addr = ALIGN(base, align);
 	size += addr - base;

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 7a49adf..e146162 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -190,7 +190,6 @@ static void vm_free(void *mem)
 static struct alloc_ops vmalloc_ops = {
 	.memalign = vm_memalign,
 	.free = vm_free,
-	.align_min = PAGE_SIZE,
 };
 
 void __attribute__((__weak__)) find_highmem(void)
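[A sketch of how the minimum alignment now behaves, not part of the patch:
it is private to lib/alloc_phys.c and enforced inside
phys_alloc_aligned_safe rather than advertised through struct alloc_ops.
The numbers are illustrative, and the example assumes the early physical
allocator is the active alloc_ops backend.]

	#include "alloc.h"
	#include "alloc_phys.h"

	static void align_example(void)
	{
		/* every later early (phys) allocation honors at least
		 * 64-byte alignment */
		phys_alloc_set_minimum_alignment(64);

		/* the requested alignment (16) is below align_min, so the
		 * block actually comes back 64-byte aligned */
		void *p = memalign(16, 128);
		(void)p;	/* freeing is outside this sketch */
	}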
From patchwork Wed Dec 16 20:11:55 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 07/12] lib/alloc_page: Optimization to skip known empty freelists
Date: Wed, 16 Dec 2020 21:11:55 +0100
Message-Id: <20201216201200.255172-8-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Keep track of the largest block order available in each area, and do not
search past it when looking for free memory. This avoids needlessly
scanning the freelists for the largest block orders, which are empty in
most cases.

Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index b1cdf21..6a76b45 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -31,6 +31,8 @@ struct mem_area {
 	pfn_t top;
 	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
+	/* Highest block order available in this area */
+	u8 max_order;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
@@ -104,6 +106,8 @@ static void split(struct mem_area *a, void *addr)
 		assert(a->page_states[idx + i] == order);
 		a->page_states[idx + i] = order - 1;
 	}
+	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
+		a->max_order--;
 	order--;
 	/* add the first half block to the appropriate free list */
 	list_add(a->freelists + order, addr);
@@ -127,13 +131,13 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	order = sz > al ? sz : al;
 
 	/* search all free lists for some memory */
-	for ( ; order < NLISTS; order++) {
+	for ( ; order <= a->max_order; order++) {
 		p = a->freelists[order].next;
 		if (!is_list_empty(p))
 			break;
 	}
 	/* out of memory */
-	if (order >= NLISTS)
+	if (order > a->max_order)
 		return NULL;
 
 	/*
@@ -201,6 +205,8 @@ static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
 	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
+	if (order + 1 > a->max_order)
+		a->max_order = order + 1;
 	return true;
 }
@@ -438,6 +444,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
+	a->max_order = 0;
 	npages = top_pfn - a->base;
 	assert((a->base - start_pfn) * PAGE_SIZE >= npages);
@@ -472,6 +479,8 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
+		if (order > a->max_order)
+			a->max_order = order;
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
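[Since max_order only ever needs to be an upper bound on the highest non-empty freelist, the bookkeeping is cheap: grow it when a larger block appears, shrink it opportunistically in split(). A stand-alone sketch of the idea, with plain counters standing in for the real linked lists (the NLISTS value and the names are illustrative):

#include <stdbool.h>
#include <stddef.h>

#define NLISTS 52			/* illustrative number of freelists */

static size_t freelist_len[NLISTS];	/* stand-in for the real lists */
static unsigned int max_order;		/* highest possibly non-empty order */

static void block_free(unsigned int order)
{
	freelist_len[order]++;
	if (order > max_order)		/* what coalesce() does after merging */
		max_order = order;
}

static bool block_alloc(unsigned int order)
{
	/* stop at max_order instead of scanning all NLISTS freelists */
	for (; order <= max_order; order++) {
		if (freelist_len[order]) {
			freelist_len[order]--;
			return true;
		}
	}
	return false;			/* out of memory */
}

int main(void)
{
	block_free(9);
	return block_alloc(3) ? 0 : 1;	/* finds the order-9 block */
}

An over-estimate of max_order only costs a few empty-list checks, which is why split() may leave it slightly stale without affecting correctness.]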
From patchwork Wed Dec 16 20:11:56 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 08/12] lib/alloc_page: rework metadata format
Date: Wed, 16 Dec 2020 21:11:56 +0100
Message-Id: <20201216201200.255172-9-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Change the metadata format so that the page state is a 2-bit status
field instead of two separate flags. This allows for 4 different memory
states:

STATUS_FRESH: the memory is free and has not been touched at all since
	boot (not even read from!)
STATUS_FREE: the memory is free, but it is probably not fresh any more
STATUS_ALLOCATED: the memory has been allocated and is in use
STATUS_SPECIAL: the memory has been removed from the pool of allocated
	memory for some special purpose, according to the needs of the
	caller

Some macros are also introduced to test the status of a specific
metadata item.
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.c | 49 +++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 6a76b45..dfa43d5 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -18,9 +18,20 @@
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 
-#define ORDER_MASK	0x3f
-#define ALLOC_MASK	0x40
-#define SPECIAL_MASK	0x80
+#define ORDER_MASK	0x3f
+#define STATUS_MASK	0xc0
+
+#define STATUS_FRESH	0x00
+#define STATUS_FREE	0x40
+#define STATUS_ALLOCATED	0x80
+#define STATUS_SPECIAL	0xc0
+
+#define IS_FRESH(x)	(((x) & STATUS_MASK) == STATUS_FRESH)
+#define IS_FREE(x)	(((x) & STATUS_MASK) == STATUS_FREE)
+#define IS_ALLOCATED(x)	(((x) & STATUS_MASK) == STATUS_ALLOCATED)
+#define IS_SPECIAL(x)	(((x) & STATUS_MASK) == STATUS_SPECIAL)
+
+#define IS_USABLE(x)	(IS_FREE(x) || IS_FRESH(x))
 
 typedef phys_addr_t pfn_t;
@@ -87,14 +98,14 @@ static inline bool usable_area_contains_pfn(struct mem_area *a, pfn_t pfn)
  */
 static void split(struct mem_area *a, void *addr)
 {
-	pfn_t pfn = virt_to_pfn(addr);
-	pfn_t i, idx;
-	u8 order;
+	pfn_t i, idx, pfn = virt_to_pfn(addr);
+	u8 metadata, order;
 
 	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
-	order = a->page_states[idx];
-	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
+	metadata = a->page_states[idx];
+	order = metadata & ORDER_MASK;
+	assert(IS_USABLE(metadata) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
@@ -103,8 +114,8 @@ static void split(struct mem_area *a, void *addr)
 
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
-		assert(a->page_states[idx + i] == order);
-		a->page_states[idx + i] = order - 1;
+		assert(a->page_states[idx + i] == metadata);
+		a->page_states[idx + i] = metadata - 1;
 	}
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
@@ -149,7 +160,7 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 
 	split(a, p);
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
@@ -243,7 +254,7 @@ static void _free_pages(void *mem)
 	order = a->page_states[p] & ORDER_MASK;
 
 	/* ensure that the first page is allocated and not special */
-	assert(a->page_states[p] == (order | ALLOC_MASK));
+	assert(IS_ALLOCATED(a->page_states[p]));
 	/* ensure that the order has a sane value */
 	assert(order < NLISTS);
 	/* ensure that the block is aligned properly for its size */
@@ -253,9 +264,9 @@ static void _free_pages(void *mem)
 
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
-		assert(a->page_states[p + i] == (ALLOC_MASK | order));
+		assert(a->page_states[p + i] == (STATUS_ALLOCATED | order));
 		/* set the page as free */
-		a->page_states[p + i] &= ~ALLOC_MASK;
+		a->page_states[p + i] = STATUS_FREE | order;
 	}
 	/* provisionally add the block to the appropriate free list */
 	list_add(a->freelists + order, mem);
@@ -294,13 +305,13 @@ static bool _alloc_page_special(pfn_t pfn)
 	if (!a)
 		return false;
 	i = pfn - a->base;
-	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
+	if (!IS_USABLE(a->page_states[i]))
 		return false;
 	while (a->page_states[i]) {
 		mask = GENMASK_ULL(63, a->page_states[i]);
 		split(a, pfn_to_virt(pfn & mask));
 	}
-	a->page_states[i] = SPECIAL_MASK;
+	a->page_states[i] = STATUS_SPECIAL;
 	return true;
 }
@@ -312,8 +323,8 @@ static void _free_page_special(pfn_t pfn)
 	a = get_area(pfn);
 	assert(a);
 	i = pfn - a->base;
-	assert(a->page_states[i] == SPECIAL_MASK);
-	a->page_states[i] = ALLOC_MASK;
+	assert(a->page_states[i] == STATUS_SPECIAL);
+	a->page_states[i] = STATUS_ALLOCATED;
 	_free_pages(pfn_to_virt(pfn));
 }
@@ -477,7 +488,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 		order++;
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
-		memset(a->page_states + (i - a->base), order, BIT(order));
+		memset(a->page_states + (i - a->base), STATUS_FRESH | order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
 		if (order > a->max_order)
 			a->max_order = order;
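[One convenient property of this encoding shows up in split() above: because the order lives in the low bits and the status in the high bits, decrementing the whole metadata byte decrements the order while preserving the status. A self-contained sketch of the encoding, reusing the patch's own mask values (only the test values below are made up):

#include <assert.h>
#include <stdint.h>

#define ORDER_MASK		0x3f
#define STATUS_MASK		0xc0
#define STATUS_FRESH		0x00
#define STATUS_FREE		0x40
#define STATUS_ALLOCATED	0x80

#define IS_FRESH(x)	(((x) & STATUS_MASK) == STATUS_FRESH)
#define IS_FREE(x)	(((x) & STATUS_MASK) == STATUS_FREE)
#define IS_USABLE(x)	(IS_FREE(x) || IS_FRESH(x))

int main(void)
{
	uint8_t meta = STATUS_FREE | 9;		/* free block of order 9 */

	assert(IS_USABLE(meta) && (meta & ORDER_MASK) == 9);

	meta--;					/* what split() does per page */
	assert(IS_FREE(meta) && (meta & ORDER_MASK) == 8);

	meta = STATUS_ALLOCATED | (meta & ORDER_MASK);
	assert(!IS_USABLE(meta));		/* allocated is not usable */
	return 0;
}

The decrement trick is safe because split() asserts that the order is non-zero before halving a block.]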
From patchwork Wed Dec 16 20:11:57 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 09/12] lib/alloc: replace areas with more generic flags
Date: Wed, 16 Dec 2020 21:11:57 +0100
Message-Id: <20201216201200.255172-10-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Replace the areas parameter with a more generic flags parameter. This
allows for up to 16 allocation areas and 16 allocation flags.

This patch introduces the flags and changes the names of the functions;
subsequent patches will actually wire up the flags to do something.

The first two flags introduced are:
- FLAG_ZERO, to ask for the allocated memory to be zeroed
- FLAG_FRESH, to indicate that the allocated memory must not have been
  touched (read from or written to) in any way since boot
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.h | 21 +++++++++++++--------
 lib/alloc_page.c | 14 +++++++-------
 lib/s390x/smp.c  |  2 +-
 3 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index d8550c6..1039814 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -11,8 +11,13 @@
 #include
 #include
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define AREA_ANY_NUMBER 0xff
+
+#define AREA_ANY	0x00000
+#define AREA_MASK	0x0ffff
+
+#define FLAG_ZERO	0x10000
+#define FLAG_FRESH	0x20000
 
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);
@@ -34,22 +39,22 @@
  * areas is a bitmap of allowed areas
  * alignment must be a power of 2
  */
-void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags);
 
 /*
  * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
+ * Equivalent to memalign_pages_flags(alignment, size, AREA_ANY).
  */
 static inline void *memalign_pages(size_t alignment, size_t size)
 {
-	return memalign_pages_area(AREA_ANY, alignment, size);
+	return memalign_pages_flags(alignment, size, AREA_ANY);
 }
 
 /*
  * Allocate naturally aligned memory from the specified areas.
- * Equivalent to memalign_pages_area(areas, 1ull << order, 1ull << order).
+ * Equivalent to memalign_pages_flags(1ull << order, 1ull << order, flags).
  */
-void *alloc_pages_area(unsigned int areas, unsigned int order);
+void *alloc_pages_flags(unsigned int order, unsigned int flags);
 
 /*
  * Allocate naturally aligned memory from any area.
@@ -57,7 +62,7 @@ void *alloc_pages_area(unsigned int areas, unsigned int order);
  */
 static inline void *alloc_pages(unsigned int order)
 {
-	return alloc_pages_area(AREA_ANY, order);
+	return alloc_pages_flags(order, AREA_ANY);
 }
 
 /*
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index dfa43d5..d850b6a 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -361,13 +361,13 @@ void free_pages_special(phys_addr_t addr, size_t n)
 	spin_unlock(&lock);
 }
 
-static void *page_memalign_order_area(unsigned area, u8 ord, u8 al)
+static void *page_memalign_order_flags(u8 ord, u8 al, u32 flags)
 {
 	void *res = NULL;
-	int i;
+	int i, area;
 
 	spin_lock(&lock);
-	area &= areas_mask;
+	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
 			res = page_memalign_order(areas + i, ord, al);
@@ -379,23 +379,23 @@ static void *page_memalign_order_area(unsigned area, u8 ord, u8 al)
  * Allocates (1 << order) physically contiguous and naturally aligned pages.
  * Returns NULL if the allocation was not possible.
  */
-void *alloc_pages_area(unsigned int area, unsigned int order)
+void *alloc_pages_flags(unsigned int order, unsigned int flags)
 {
-	return page_memalign_order_area(area, order, order);
+	return page_memalign_order_flags(order, order, flags);
 }
 
 /*
  * Allocates (1 << order) physically contiguous aligned pages.
  * Returns NULL if the allocation was not possible.
 */
-void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags)
 {
 	assert(is_power_of_2(alignment));
 	alignment = get_order(PAGE_ALIGN(alignment) >> PAGE_SHIFT);
 	size = get_order(PAGE_ALIGN(size) >> PAGE_SHIFT);
 	assert(alignment < NLISTS);
 	assert(size < NLISTS);
-	return page_memalign_order_area(area, size, alignment);
+	return page_memalign_order_flags(size, alignment, flags);
 }
diff --git a/lib/s390x/smp.c b/lib/s390x/smp.c
index 77d80ca..44b2eb4 100644
--- a/lib/s390x/smp.c
+++ b/lib/s390x/smp.c
@@ -190,7 +190,7 @@ int smp_cpu_setup(uint16_t addr, struct psw psw)
 
 	sigp_retry(cpu->addr, SIGP_INITIAL_CPU_RESET, 0, NULL);
 
-	lc = alloc_pages_area(AREA_DMA31, 1);
+	lc = alloc_pages_flags(1, AREA_DMA31);
 	cpu->lowcore = lc;
 	memset(lc, 0, PAGE_SIZE * 2);
 	sigp_retry(cpu->addr, SIGP_SET_PREFIX, (unsigned long )lc, NULL);
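[Since AREA_MASK covers the low 16 bits and the flag bits sit above them, a single unsigned int carries both the area bitmap and the behaviour flags. A small sketch of how a request decomposes, using the macros the patch defines (the chosen area bitmap is made up):

#include <assert.h>

#define AREA_MASK	0x0ffff
#define FLAG_ZERO	0x10000
#define FLAG_FRESH	0x20000

int main(void)
{
	/* hypothetical request: zeroed pages from areas 0 and 2 */
	unsigned int flags = FLAG_ZERO | 0x5;

	assert((flags & AREA_MASK) == 0x5);	/* bitmap of allowed areas */
	assert(flags & FLAG_ZERO);		/* pages must be zeroed */
	assert(!(flags & FLAG_FRESH));		/* freshness not requested */
	/* an area bitmap of 0 means "any area", since AREA_ANY is 0 */
	return 0;
}
]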
From patchwork Wed Dec 16 20:11:58 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 10/12] lib/alloc_page: Wire up FLAG_ZERO
Date: Wed, 16 Dec 2020 21:11:58 +0100
Message-Id: <20201216201200.255172-11-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Memory allocated with FLAG_ZERO is now zeroed before being returned to
the caller.
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index d850b6a..8c79202 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -372,6 +372,8 @@ static void *page_memalign_order_flags(u8 ord, u8 al, u32 flags)
 		if (area & BIT(i))
 			res = page_memalign_order(areas + i, ord, al);
 	spin_unlock(&lock);
+	if (res && (flags & FLAG_ZERO))
+		memset(res, 0, BIT(ord) * PAGE_SIZE);
 	return res;
 }
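[Note that the memset runs after the spinlock is dropped, so other CPUs are not held up while a potentially large block is cleared. Usage then becomes a one-liner; a hypothetical caller, assuming the usual kvm-unit-tests include paths:

#include <alloc_page.h>

static void *get_zeroed_block(void)
{
	/* order 1: two naturally aligned pages, cleared by the allocator */
	return alloc_pages_flags(1, FLAG_ZERO);
}
]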
From patchwork Wed Dec 16 20:11:59 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 11/12] lib/alloc_page: Properly handle requests for fresh blocks
Date: Wed, 16 Dec 2020 21:11:59 +0100
Message-Id: <20201216201200.255172-12-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

Upon initialization, all memory in an area is marked as fresh. Once
memory is used and freed, the freed memory is marked as free. Freed
memory is always added at the front of the freelist, so fresh memory
accumulates at the tail.

When a block of fresh memory is split, the two halves are put on the
tail of the appropriate freelist, so they can still be found when fresh
memory is needed. When a fresh block is requested, a fresh block one
order bigger is taken, the first half is put back in the free pool (on
the tail), and the second half is returned. The reason behind this is
that the first page of every block always contains the freelist
pointers; since the first page of a fresh block is therefore not
actually fresh, it cannot be returned when a fresh allocation is
requested.
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.c | 51 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 11 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 8c79202..4d5722f 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -120,10 +120,17 @@ static void split(struct mem_area *a, void *addr)
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
 	order--;
-	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, addr);
-	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+
+	/* add the two half blocks to the appropriate free list */
+	if (IS_FRESH(metadata)) {
+		/* add to the tail if the blocks are fresh */
+		list_add_tail(a->freelists + order, addr);
+		list_add_tail(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	} else {
+		/* add to the front if the blocks are dirty */
+		list_add(a->freelists + order, addr);
+		list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	}
 }
 
 /*
@@ -132,21 +139,33 @@ static void split(struct mem_area *a, void *addr)
  *
  * Both parameters must be not larger than the largest allowed order
  */
-static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
+static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz, bool fresh)
 {
 	struct linked_list *p;
+	pfn_t idx;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
 	/* we need the bigger of the two as starting point */
 	order = sz > al ? sz : al;
+	/*
+	 * we need to go one order up if we want a completely fresh block,
+	 * since the first page contains the freelist pointers, and
+	 * therefore it is always dirty
+	 */
+	order += fresh;
 
 	/* search all free lists for some memory */
 	for ( ; order <= a->max_order; order++) {
-		p = a->freelists[order].next;
-		if (!is_list_empty(p))
-			break;
+		p = fresh ? a->freelists[order].prev : a->freelists[order].next;
+		if (is_list_empty(p))
+			continue;
+		idx = virt_to_pfn(p) - a->base;
+		if (fresh && !IS_FRESH(a->page_states[idx]))
+			continue;
+		break;
 	}
+
 	/* out of memory */
 	if (order > a->max_order)
 		return NULL;
@@ -160,7 +179,16 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 
 	split(a, p);
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
+	/* We now have a block twice the size, but the first page is dirty. */
+	if (fresh) {
+		order--;
+		/* Put back the first (partially dirty) half of the block */
+		memset(a->page_states + idx, STATUS_FRESH | order, BIT(order));
+		list_add_tail(a->freelists + order, p);
+		idx += BIT(order);
+		p = pfn_to_virt(a->base + idx);
+	}
+	memset(a->page_states + idx, STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
@@ -364,13 +392,14 @@
 static void *page_memalign_order_flags(u8 ord, u8 al, u32 flags)
 {
 	void *res = NULL;
-	int i, area;
+	int i, area, fresh;
 
+	fresh = !!(flags & FLAG_FRESH);
 	spin_lock(&lock);
 	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
-			res = page_memalign_order(areas + i, ord, al);
+			res = page_memalign_order(areas + i, ord, al, fresh);
 	spin_unlock(&lock);
 	if (res && (flags & FLAG_ZERO))
 		memset(res, 0, BIT(ord) * PAGE_SIZE);
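[The front/tail discipline is what the tail-first search above relies on: dirty blocks pile up at the front, fresh ones stay reachable from the tail. A toy version of the two insertion primitives on a circular doubly linked list (types and names are illustrative, not the lib's):

struct node {
	struct node *prev, *next;
};

static struct node head = { &head, &head };	/* empty circular list */

static void list_add(struct node *n)		/* front: for dirty blocks */
{
	n->prev = &head;
	n->next = head.next;
	head.next->prev = n;
	head.next = n;
}

static void list_add_tail(struct node *n)	/* tail: for fresh blocks */
{
	n->next = &head;
	n->prev = head.prev;
	head.prev->next = n;
	head.prev = n;
}

int main(void)
{
	struct node a, b;

	list_add(&a);		/* dirty block: first one seen from the front */
	list_add_tail(&b);	/* fresh block: first one seen from the tail */
	return (head.next == &a && head.prev == &b) ? 0 : 1;
}

A fresh-seeking allocation walks backwards from head.prev and moves on to the next order as soon as it sees metadata that is not fresh, which is exactly the early continue in the new page_memalign_order().]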
From patchwork Wed Dec 16 20:12:00 2020
From: Claudio Imbrenda
To: kvm@vger.kernel.org
Cc: frankja@linux.ibm.com, david@redhat.com, thuth@redhat.com,
	pbonzini@redhat.com, cohuck@redhat.com, lvivier@redhat.com,
	nadav.amit@gmail.com
Subject: [kvm-unit-tests PATCH v1 12/12] lib/alloc_page: default flags and zero pages by default
Date: Wed, 16 Dec 2020 21:12:00 +0100
Message-Id: <20201216201200.255172-13-imbrenda@linux.ibm.com>
In-Reply-To: <20201216201200.255172-1-imbrenda@linux.ibm.com>

The new function page_alloc_set_default_flags can be used to set the
default flags for allocations. The value passed to it is ORed with the
flags argument at each allocation.

The initial value of the default flags is FLAG_ZERO, which means that
by default all allocated memory is now zeroed, restoring the behaviour
that had been accidentally removed by a previous commit.

If needed, a testcase can call page_alloc_set_default_flags(0) in order
to get non-zeroed pages from the allocator; for example, a testcase
that needs fresh memory should remove the zero flag from the default.

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Reported-by: Nadav Amit
Signed-off-by: Claudio Imbrenda
---
 lib/alloc_page.h | 3 +++
 lib/alloc_page.c | 8 ++++++++
 2 files changed, 11 insertions(+)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 1039814..8b53a58 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -22,6 +22,9 @@
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);
 
+/* Sets the default flags for the page allocator, the default is FLAG_ZERO */
+void page_alloc_set_default_flags(unsigned int flags);
+
 /*
  * Initializes a memory area.
  * n is the number of the area to initialize
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 4d5722f..08e0d05 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -54,12 +54,19 @@ static struct mem_area areas[MAX_AREAS];
 static unsigned int areas_mask;
 /* Protects areas and areas mask */
 static struct spinlock lock;
+/* Default behaviour: zero allocated pages */
+static unsigned int default_flags = FLAG_ZERO;
 
 bool page_alloc_initialized(void)
 {
 	return areas_mask != 0;
 }
 
+void page_alloc_set_default_flags(unsigned int flags)
+{
+	default_flags = flags;
+}
+
 /*
  * Each memory area contains an array of metadata entries at the very
  * beginning. The usable memory follows immediately afterwards.
@@ -394,6 +401,7 @@ static void *page_memalign_order_flags(u8 ord, u8 al, u32 flags)
 	void *res = NULL;
 	int i, area, fresh;
 
+	flags |= default_flags;
 	fresh = !!(flags & FLAG_FRESH);
 	spin_lock(&lock);
 	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;