From patchwork Mon Nov 20 14:56:21 2023
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 13461432
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, George Dunlap, Jan Beulich, Stefano Stabellini,
    Wei Liu, Julien Grall, Roger Pau Monné
Subject: [PATCH 1/3] x86/treewide: Switch bool_t to bool
Date: Mon, 20 Nov 2023 14:56:21 +0000
Message-ID: <20231120145623.167383-2-andrew.cooper3@citrix.com>
In-Reply-To: <20231120145623.167383-1-andrew.cooper3@citrix.com>
References: <20231120145623.167383-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

... as part of cleaning up the types used.

Minor style cleanup on some altered lines.

No functional change.

Signed-off-by: Andrew Cooper
Acked-by: Jan Beulich
---
CC: George Dunlap
CC: Jan Beulich
CC: Stefano Stabellini
CC: Wei Liu
CC: Julien Grall
CC: Roger Pau Monné

There's not an obvious way to subdivide this further without getting to a
silly number of patches.
---
 xen/arch/x86/cpu/microcode/core.c        |  4 +--
 xen/arch/x86/cpu/mtrr/generic.c          |  2 +-
 xen/arch/x86/cpu/vpmu.c                  |  2 +-
 xen/arch/x86/cpu/vpmu_amd.c              |  4 +--
 xen/arch/x86/cpu/vpmu_intel.c            |  6 ++--
 xen/arch/x86/hvm/asid.c                  |  4 +--
 xen/arch/x86/hvm/emulate.c               | 28 +++++++++---------
 xen/arch/x86/hvm/hvm.c                   | 36 ++++++++++++------------
 xen/arch/x86/hvm/intercept.c             |  2 +-
 xen/arch/x86/hvm/mtrr.c                  | 12 ++++----
 xen/arch/x86/hvm/nestedhvm.c             |  4 +--
 xen/arch/x86/hvm/stdvga.c                |  2 +-
 xen/arch/x86/hvm/svm/nestedsvm.c         | 16 +++++------
 xen/arch/x86/hvm/svm/svm.c               |  8 +++---
 xen/arch/x86/hvm/vlapic.c                | 31 ++++++++++----------
 xen/arch/x86/hvm/vmx/vmcs.c              | 29 +++++++++----------
 xen/arch/x86/hvm/vmx/vmx.c               |  6 ++--
 xen/arch/x86/hvm/vmx/vvmx.c              | 20 ++++++-------
 xen/arch/x86/include/asm/acpi.h          |  2 +-
 xen/arch/x86/include/asm/apic.h          |  2 +-
 xen/arch/x86/include/asm/domain.h        | 28 +++++++++---------
 xen/arch/x86/include/asm/hardirq.h       |  2 +-
 xen/arch/x86/include/asm/hvm/asid.h      |  2 +-
 xen/arch/x86/include/asm/hvm/emulate.h   |  2 +-
 xen/arch/x86/include/asm/hvm/hvm.h       | 24 ++++++++--------
 xen/arch/x86/include/asm/hvm/io.h        |  6 ++--
 xen/arch/x86/include/asm/hvm/nestedhvm.h |  4 +--
 xen/arch/x86/include/asm/hvm/vcpu.h      | 16 +++++------
 xen/arch/x86/include/asm/hvm/vlapic.h    | 12 ++++----
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h  | 10 +++----
 xen/arch/x86/include/asm/hvm/vmx/vmx.h   |  2 +-
 xen/arch/x86/include/asm/hvm/vmx/vvmx.h  |  2 +-
 xen/arch/x86/include/asm/mtrr.h          | 16 +++++------
 xen/arch/x86/include/asm/p2m.h           | 20 ++++++-------
 xen/arch/x86/include/asm/page.h          |  2 +-
 xen/arch/x86/include/asm/paging.h        |  2 +-
 xen/arch/x86/include/asm/pci.h           |  8 +++---
 xen/arch/x86/include/asm/psr.h           |  2 +-
 xen/arch/x86/include/asm/vpmu.h          | 12 ++++----
 xen/arch/x86/mm/hap/nested_ept.c         | 12 ++++----
 xen/arch/x86/mm/mem_paging.c             |  2 +-
 xen/arch/x86/mm/p2m-ept.c                | 29 ++++++++++---------
 xen/arch/x86/mm/p2m-pod.c                |  2 +-
 xen/arch/x86/mm/p2m-pt.c                 |  6 ++--
 xen/arch/x86/mm/p2m.c                    |  9 +++---
 xen/arch/x86/mm/paging.c                 |  6 ++--
 xen/arch/x86/x86_64/mmconf-fam10h.c      |  2 +-
 xen/arch/x86/x86_64/mmconfig-shared.c    |  8 +++---
 xen/arch/x86/x86_64/mmconfig_64.c        |  6 ++--
 49 files changed, 237 insertions(+), 237 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 65ebeb50deea..95bcb52b222d 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -58,7 +58,7 @@
 static module_t __initdata ucode_mod;
 static signed int __initdata ucode_mod_idx;
-static bool_t __initdata ucode_mod_forced;
+static bool __initdata ucode_mod_forced;
 static unsigned int nr_cores;
 
 /*
@@ -93,7 +93,7 @@ static struct ucode_mod_blob __initdata ucode_blob;
  * By default we will NOT parse the multiboot modules to see if there is
  * cpio image with the microcode images.
  */
-static bool_t __initdata ucode_scan;
+static bool __initdata ucode_scan;
 
 /* By default, ucode loading is done in NMI handler */
 static bool ucode_in_nmi = true;
diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 660ae26c2350..25ae5f5b7d6a 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -120,7 +120,7 @@ void __init get_mtrr_state(void)
     rdmsrl(MSR_MTRRcap, mtrr_state.mtrr_cap);
 }
 
-static bool_t __initdata mtrr_show;
+static bool __initdata mtrr_show;
 boolean_param("mtrr.show", mtrr_show);
 
 static const char *__init mtrr_attrib_to_str(mtrr_type x)
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index a022126f18fd..ed84372b8001 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -369,7 +369,7 @@ void vpmu_save(struct vcpu *v)
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
-int vpmu_load(struct vcpu *v, bool_t from_guest)
+int vpmu_load(struct vcpu *v, bool from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int ret;
diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 18266b9521a9..c28a7e3c4719 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -31,7 +31,7 @@ static unsigned int __read_mostly num_counters;
 static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
+static bool __read_mostly k7_counters_mirrored;
 
 #define F10H_NUM_COUNTERS 4
 #define F15H_NUM_COUNTERS 6
@@ -217,7 +217,7 @@ static int cf_check amd_vpmu_load(struct vcpu *v, bool from_guest)
 
     if ( from_guest )
     {
-        bool_t is_running = 0;
+        bool is_running = false;
         struct xen_pmu_amd_ctxt *guest_ctxt = &vpmu->xenpmu_data->pmu.c.amd;
 
         ASSERT(!has_vlapic(v->domain));
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 6330c89b47be..0a73ae27a4cb 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -52,7 +52,7 @@
 /* Alias registers (0x4c1) for full-width writes to PMCs */
 #define MSR_PMC_ALIAS_MASK    (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
+static bool __read_mostly full_width_write;
 
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
@@ -607,7 +607,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL(0);
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
-            bool_t blocked = 0;
+            bool blocked = false;
             uint64_t umaskevent = msr_content & MSR_IA32_CMT_EVTSEL_UE_MASK;
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
@@ -818,7 +818,7 @@ static int cf_check core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
-    static bool_t ds_warned;
+    static bool ds_warned;
 
     if ( v->domain->arch.cpuid->basic.pmu_version <= 1 ||
          v->domain->arch.cpuid->basic.pmu_version >= 6 )
diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 0faaa24a8f6e..8d27b7dba17b 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -43,7 +43,7 @@ struct hvm_asid_data {
     uint64_t core_asid_generation;
     uint32_t next_asid;
     uint32_t max_asid;
-    bool_t disabled;
+    bool disabled;
 };
 
 static DEFINE_PER_CPU(struct hvm_asid_data, hvm_asid_data);
@@ -100,7 +100,7 @@ void hvm_asid_flush_core(void)
     data->disabled = 1;
 }
 
-bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
+bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 {
     struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 254716c76670..15d9962f3a2c 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -150,8 +150,8 @@ void hvmemul_cancel(struct vcpu *v)
 }
 
 static int hvmemul_do_io(
-    bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
-    uint8_t dir, bool_t df, bool_t data_is_addr, uintptr_t data)
+    bool is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
+    uint8_t dir, bool df, bool data_is_addr, uintptr_t data)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
@@ -363,8 +363,8 @@ static int hvmemul_do_io(
 }
 
 static int hvmemul_do_io_buffer(
-    bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
-    uint8_t dir, bool_t df, void *buffer)
+    bool is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
+    uint8_t dir, bool df, void *buffer)
 {
     int rc;
@@ -421,8 +421,8 @@ static inline void hvmemul_release_page(struct page_info *page)
 }
 
 static int hvmemul_do_io_addr(
-    bool_t is_mmio, paddr_t addr, unsigned long *reps,
-    unsigned int size, uint8_t dir, bool_t df, paddr_t ram_gpa)
+    bool is_mmio, paddr_t addr, unsigned long *reps,
+    unsigned int size, uint8_t dir, bool df, paddr_t ram_gpa)
 {
     struct vcpu *v = current;
     unsigned long ram_gmfn = paddr_to_pfn(ram_gpa);
@@ -510,7 +510,7 @@ static int hvmemul_do_pio_addr(uint16_t port,
                                unsigned long *reps,
                                unsigned int size,
                                uint8_t dir,
-                               bool_t df,
+                               bool df,
                                paddr_t ram_addr)
 {
     return hvmemul_do_io_addr(0, port, reps, size, dir, df, ram_addr);
@@ -534,7 +534,7 @@ static int hvmemul_do_mmio_buffer(paddr_t mmio_gpa,
                                   unsigned long *reps,
                                   unsigned int size,
                                   uint8_t dir,
-                                  bool_t df,
+                                  bool df,
                                   void *buffer)
 {
     return hvmemul_do_io_buffer(1, mmio_gpa, reps, size, dir, df, buffer);
@@ -554,7 +554,7 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
                                 unsigned long *reps,
                                 unsigned int size,
                                 uint8_t dir,
-                                bool_t df,
+                                bool df,
                                 paddr_t ram_gpa)
 {
     return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
@@ -1034,7 +1034,7 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
 }
 
 static void latch_linear_to_phys(struct hvm_vcpu_io *hvio, unsigned long gla,
-                                 unsigned long gpa, bool_t write)
+                                 unsigned long gpa, bool write)
 {
     if ( hvio->mmio_access.gla_valid )
         return;
@@ -1048,7 +1048,7 @@ static void latch_linear_to_phys(struct hvm_vcpu_io *hvio, unsigned long gla,
 
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
-    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
+    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool known_gpfn)
 {
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
@@ -1101,7 +1101,7 @@ static int hvmemul_linear_mmio_access(
 static inline int hvmemul_linear_mmio_read(
     unsigned long gla, unsigned int size, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
-    bool_t translate)
+    bool translate)
 {
     return hvmemul_linear_mmio_access(gla, size, IOREQ_READ, buffer,
                                       pfec, hvmemul_ctxt, translate);
@@ -1110,7 +1110,7 @@ static inline int hvmemul_linear_mmio_read(
 static inline int hvmemul_linear_mmio_write(
     unsigned long gla, unsigned int size, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
-    bool_t translate)
+    bool translate)
 {
     return hvmemul_linear_mmio_access(gla, size, IOREQ_WRITE, buffer,
                                       pfec, hvmemul_ctxt, translate);
@@ -1990,7 +1990,7 @@ static int cf_check hvmemul_rep_stos(
     unsigned long addr;
     paddr_t gpa;
     p2m_type_t p2mt;
-    bool_t df = !!(ctxt->regs->eflags & X86_EFLAGS_DF);
+    bool df = ctxt->regs->eflags & X86_EFLAGS_DF;
     int rc = hvmemul_virtual_to_linear(seg, offset, bytes_per_rep, reps,
                                        hvm_access_write, hvmemul_ctxt, &addr);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 482eebbabf7f..35a30df3b1b4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -69,7 +69,7 @@
 
 #include
 
-bool_t __read_mostly hvm_enabled;
+bool __read_mostly hvm_enabled;
 
 #ifdef DBG_LEVEL_0
 unsigned int opt_hvm_debug_level __read_mostly;
@@ -87,12 +87,12 @@ unsigned long __section(".bss.page_aligned") __aligned(PAGE_SIZE)
     hvm_io_bitmap[HVM_IOBITMAP_SIZE / BYTES_PER_LONG];
 
 /* Xen command-line option to enable HAP */
-static bool_t __initdata opt_hap_enabled = 1;
+static bool __initdata opt_hap_enabled = true;
 boolean_param("hap", opt_hap_enabled);
 
 #ifndef opt_hvm_fep
 /* Permit use of the Forced Emulation Prefix in HVM guests */
-bool_t __read_mostly opt_hvm_fep;
+bool __read_mostly opt_hvm_fep;
 boolean_param("hvm_fep", opt_hvm_fep);
 #endif
 static const char __initconst warning_hvm_fep[] =
@@ -102,7 +102,7 @@ static const char __initconst warning_hvm_fep[] =
     "Please *DO NOT* use this in production.\n";
 
 /* Xen command-line option to enable altp2m */
-static bool_t __initdata opt_altp2m_enabled = 0;
+static bool __initdata opt_altp2m_enabled;
 boolean_param("altp2m", opt_altp2m_enabled);
 
 static int cf_check cpu_callback(
@@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     /* Check access permissions first, then handle faults */
     if ( !mfn_eq(mfn, INVALID_MFN) )
     {
-        bool_t violation;
+        bool violation;
 
         /* If the access is against the permissions, then send to vm_event */
         switch (p2ma)
@@ -1914,7 +1914,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         /* Should #VE be emulated for this fault? */
         if ( p2m_is_altp2m(p2m) && !cpu_has_vmx_virt_exceptions )
         {
-            bool_t sve;
+            bool sve;
 
             p2m->get_entry(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, &sve);
@@ -2125,7 +2125,7 @@ int hvm_set_efer(uint64_t value)
 }
 
 /* Exit UC mode only if all VCPUs agree on MTRR/PAT and are not in no_fill. */
-static bool_t domain_exit_uc_mode(struct vcpu *v)
+static bool domain_exit_uc_mode(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct vcpu *vs;
@@ -2142,7 +2142,7 @@ static bool_t domain_exit_uc_mode(struct vcpu *v)
     return 1;
 }
 
-static void hvm_set_uc_mode(struct vcpu *v, bool_t is_in_uc_mode)
+static void hvm_set_uc_mode(struct vcpu *v, bool is_in_uc_mode)
 {
     v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
     shadow_blow_tables_per_domain(v->domain);
@@ -2705,8 +2705,8 @@ struct hvm_write_map {
 
 /* On non-NULL return, we leave this function holding an additional
  * ref on the underlying mfn, if any */
-static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
-                                  bool_t *writable)
+static void *_hvm_map_guest_frame(unsigned long gfn, bool permanent,
+                                  bool *writable)
 {
     void *map;
     p2m_type_t p2mt;
@@ -2750,19 +2750,19 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
     return map;
 }
 
-void *hvm_map_guest_frame_rw(unsigned long gfn, bool_t permanent,
-                             bool_t *writable)
+void *hvm_map_guest_frame_rw(unsigned long gfn, bool permanent,
+                             bool *writable)
 {
     *writable = 1;
     return _hvm_map_guest_frame(gfn, permanent, writable);
 }
 
-void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent)
+void *hvm_map_guest_frame_ro(unsigned long gfn, bool permanent)
 {
     return _hvm_map_guest_frame(gfn, permanent, NULL);
 }
 
-void hvm_unmap_guest_frame(void *p, bool_t permanent)
+void hvm_unmap_guest_frame(void *p, bool permanent)
 {
     mfn_t mfn;
     struct page_info *page;
@@ -2806,7 +2806,7 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
     spin_unlock(&d->arch.hvm.write_map.lock);
 }
 
-static void *hvm_map_entry(unsigned long va, bool_t *writable)
+static void *hvm_map_entry(unsigned long va, bool *writable)
 {
     unsigned long gfn;
     uint32_t pfec;
@@ -2851,7 +2851,7 @@ static int task_switch_load_seg(
     struct segment_register desctab, segr;
     seg_desc_t *pdesc = NULL, desc;
     u8 dpl, rpl;
-    bool_t writable;
+    bool writable;
     int fault_type = X86_EXC_TS;
     struct vcpu *v = current;
@@ -3030,7 +3030,7 @@ void hvm_task_switch(
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     struct segment_register gdt, tr, prev_tr, segr;
     seg_desc_t *optss_desc = NULL, *nptss_desc = NULL, tss_desc;
-    bool_t otd_writable, ntd_writable;
+    bool otd_writable, ntd_writable;
     unsigned int eflags, new_cpl;
     pagefault_info_t pfinfo;
     int exn_raised, rc;
@@ -4642,7 +4642,7 @@ static int do_altp2m_op(
     case HVMOP_altp2m_set_domain_state:
     {
         struct vcpu *v;
-        bool_t ostate;
+        bool ostate;
 
         if ( nestedhvm_enabled(d) )
         {
diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index 61664c0ad13f..a949419cbebb 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -324,7 +324,7 @@ bool relocate_portio_handler(struct domain *d, unsigned int old_port,
     return false;
 }
 
-bool_t hvm_mmio_internal(paddr_t gpa)
+bool hvm_mmio_internal(paddr_t gpa)
 {
     const struct hvm_io_handler *handler;
     const struct hvm_io_ops *ops;
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 7f486358b1ba..52df34a6de03 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -359,7 +359,7 @@ uint32_t get_pat_flags(struct vcpu *v,
     return pat_type_2_pte_flags(pat_entry_value);
 }
 
-static inline bool_t valid_mtrr_type(uint8_t type)
+static inline bool valid_mtrr_type(uint8_t type)
 {
     switch ( type )
     {
@@ -373,8 +373,8 @@ static inline bool_t valid_mtrr_type(uint8_t type)
     return 0;
 }
 
-bool_t mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m,
-                             uint64_t msr_content)
+bool mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m,
+                           uint64_t msr_content)
 {
     uint8_t def_type = msr_content & 0xff;
     bool fixed_enabled = MASK_EXTR(msr_content, MTRRdefType_FE);
@@ -405,8 +405,8 @@ bool_t mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m,
     return 1;
 }
 
-bool_t mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m,
-                              uint32_t row, uint64_t msr_content)
+bool mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m,
+                            uint32_t row, uint64_t msr_content)
 {
     uint64_t *fixed_range_base = (uint64_t *)m->fixed_ranges;
@@ -428,7 +428,7 @@ bool_t mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m,
     return 1;
 }
 
-bool_t mtrr_var_range_msr_set(
+bool mtrr_var_range_msr_set(
     struct domain *d, struct mtrr_state *m, uint32_t msr, uint64_t msr_content)
 {
     uint32_t index, phys_addr;
diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c
index 64d7eec9a1de..12bf7172b873 100644
--- a/xen/arch/x86/hvm/nestedhvm.c
+++ b/xen/arch/x86/hvm/nestedhvm.c
@@ -16,7 +16,7 @@ static unsigned long *shadow_io_bitmap[3];
 
 /* Nested VCPU */
-bool_t
+bool
 nestedhvm_vcpu_in_guestmode(struct vcpu *v)
 {
     return vcpu_nestedhvm(v).nv_guestmode;
@@ -155,7 +155,7 @@ static int __init cf_check nestedhvm_setup(void)
 __initcall(nestedhvm_setup);
 
 unsigned long *
-nestedhvm_vcpu_iomap_get(bool_t ioport_80, bool_t ioport_ed)
+nestedhvm_vcpu_iomap_get(bool ioport_80, bool ioport_ed)
 {
     int i;
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 8da07ff8a23b..b16c59f77270 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -126,7 +126,7 @@ static void stdvga_cache_disable(struct hvm_hw_stdvga *s)
     s->cache = STDVGA_CACHE_DISABLED;
 }
 
-static bool_t stdvga_cache_is_enabled(const struct hvm_hw_stdvga *s)
+static bool stdvga_cache_is_enabled(const struct hvm_hw_stdvga *s)
 {
     return s->cache == STDVGA_CACHE_ENABLED;
 }
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index a09b6abaaeaf..4073c317ecc2 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -50,7 +50,7 @@ int nestedsvm_vmcb_map(struct vcpu *v, uint64_t vmcbaddr)
 
     if ( !nv->nv_vvmcx )
     {
-        bool_t writable;
+        bool writable;
         void *vvmcx = hvm_map_guest_frame_rw(paddr_to_pfn(vmcbaddr), 1,
                                              &writable);
@@ -346,7 +346,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     return 0;
 }
 
-static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
+static int nsvm_vmrun_permissionmap(struct vcpu *v, bool viopm)
 {
     struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
@@ -357,7 +357,7 @@ static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
     unsigned int i;
     enum hvm_translation_result ret;
     unsigned long *ns_viomap;
-    bool_t ioport_80 = 1, ioport_ed = 1;
+    bool ioport_80 = true, ioport_ed = true;
 
     ns_msrpm_ptr = (unsigned long *)svm->ns_cached_msrpm;
@@ -853,9 +853,9 @@ uint64_t cf_check nsvm_vcpu_hostcr3(struct vcpu *v)
 
 static int
 nsvm_vmcb_guest_intercepts_msr(unsigned long *msr_bitmap,
-                               uint32_t msr, bool_t write)
+                               uint32_t msr, bool write)
 {
-    bool_t enabled;
+    bool enabled;
     unsigned long *msr_bit;
 
     msr_bit = svm_msrbit(msr_bitmap, msr);
@@ -887,7 +887,7 @@ nsvm_vmcb_guest_intercepts_ioio(paddr_t iopm_pa, uint64_t exitinfo1)
     ioio_info_t ioinfo;
     uint16_t port;
     unsigned int size;
-    bool_t enabled;
+    bool enabled;
 
     ioinfo.bytes = exitinfo1;
     port = ioinfo.fields.port;
@@ -926,7 +926,7 @@ nsvm_vmcb_guest_intercepts_ioio(paddr_t iopm_pa, uint64_t exitinfo1)
     return NESTEDHVM_VMEXIT_INJECT;
 }
 
-static bool_t
+static bool
 nsvm_vmcb_guest_intercepts_exitcode(struct vcpu *v,
                                     struct cpu_user_regs *regs, uint64_t exitcode)
 {
@@ -1289,7 +1289,7 @@ enum nestedhvm_vmexits
 nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
                            uint64_t exitcode)
 {
-    bool_t is_intercepted;
+    bool is_intercepted;
 
     ASSERT(vcpu_nestedhvm(v).nv_vmexit_pending == 0);
 
     is_intercepted = nsvm_vmcb_guest_intercepts_exitcode(v, regs, exitcode);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 674e54e04a1c..df4cb3fd335f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -59,7 +59,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
 static DEFINE_PER_CPU(struct vmcb_struct *, host_vmcb_va);
 #endif
 
-static bool_t amd_erratum383_found __read_mostly;
+static bool amd_erratum383_found __read_mostly;
 
 /* OSVW bits */
 static uint64_t osvw_length, osvw_status;
@@ -1014,7 +1014,7 @@ static void noreturn cf_check svm_do_resume(void)
     bool debug_state = (v->domain->debugger_attached ||
                         v->domain->arch.monitor.software_breakpoint_enabled ||
                         v->domain->arch.monitor.debug_exception_enabled);
-    bool_t vcpu_guestmode = 0;
+    bool vcpu_guestmode = false;
     struct vlapic *vlapic = vcpu_vlapic(v);
 
     if ( nestedhvm_enabled(v->domain) && nestedhvm_vcpu_in_guestmode(v) )
@@ -2537,7 +2537,7 @@ static struct hvm_function_table __initdata_cf_clobber svm_function_table = {
 
 const struct hvm_function_table * __init start_svm(void)
 {
-    bool_t printed = 0;
+    bool printed = false;
 
     svm_host_osvw_reset();
@@ -2594,7 +2594,7 @@ void svm_vmexit_handler(void)
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     int insn_len, rc;
     vintr_t intr;
-    bool_t vcpu_guestmode = 0;
+    bool vcpu_guestmode = false;
     struct vlapic *vlapic = vcpu_vlapic(v);
 
     regs->rax = vmcb->rax;
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a8e87c444627..a54010d71ea1 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -191,10 +191,10 @@ uint32_t vlapic_set_ppr(struct vlapic *vlapic)
     return ppr;
 }
 
-static bool_t vlapic_match_logical_addr(const struct vlapic *vlapic,
-                                        uint32_t mda)
+static bool vlapic_match_logical_addr(const struct vlapic *vlapic,
+                                      uint32_t mda)
 {
-    bool_t result = 0;
+    bool result = false;
     uint32_t logical_id = vlapic_get_reg(vlapic, APIC_LDR);
 
     if ( vlapic_x2apic_mode(vlapic) )
@@ -224,9 +224,9 @@ static bool_t vlapic_match_logical_addr(const struct vlapic *vlapic,
     return result;
 }
 
-bool_t vlapic_match_dest(
+bool vlapic_match_dest(
     const struct vlapic *target, const struct vlapic *source,
-    int short_hand, uint32_t dest, bool_t dest_mode)
+    int short_hand, uint32_t dest, bool dest_mode)
 {
     HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "target %p, source %p, dest %#x, "
                 "dest_mode %#x, short_hand %#x",
@@ -264,7 +264,7 @@ static void vlapic_init_sipi_one(struct vcpu *target, uint32_t icr)
     switch ( icr & APIC_MODE_MASK )
     {
     case APIC_DM_INIT: {
-        bool_t fpu_initialised;
+        bool fpu_initialised;
         int rc;
 
         /* No work on INIT de-assert for P4-type APIC. */
@@ -307,7 +307,7 @@ static void cf_check vlapic_init_sipi_action(void *data)
     uint32_t icr = vcpu_vlapic(origin)->init_sipi.icr;
     uint32_t dest = vcpu_vlapic(origin)->init_sipi.dest;
     uint32_t short_hand = icr & APIC_SHORT_MASK;
-    bool_t dest_mode = !!(icr & APIC_DEST_MASK);
+    bool dest_mode = icr & APIC_DEST_MASK;
     struct vcpu *v;
 
     if ( icr == 0 )
@@ -349,7 +349,8 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
     case APIC_DM_NMI:
         if ( !test_and_set_bool(v->arch.nmi_pending) )
         {
-            bool_t wake = 0;
+            bool wake = false;
+
             domain_lock(v->domain);
             if ( v->is_initialised )
                 wake = test_and_clear_bit(_VPF_down, &v->pause_flags);
@@ -373,7 +374,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
 
 struct vlapic *vlapic_lowest_prio(
     struct domain *d, const struct vlapic *source,
-    int short_hand, uint32_t dest, bool_t dest_mode)
+    int short_hand, uint32_t dest, bool dest_mode)
 {
     int old = hvm_domain_irq(d)->round_robin_prev_vcpu;
     uint32_t ppr, target_ppr = UINT_MAX;
@@ -457,8 +458,8 @@ void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
         hvm_dpci_msi_eoi(d, vector);
 }
 
-static bool_t is_multicast_dest(struct vlapic *vlapic, unsigned int short_hand,
-                                uint32_t dest, bool_t dest_mode)
+static bool is_multicast_dest(struct vlapic *vlapic, unsigned int short_hand,
+                              uint32_t dest, bool dest_mode)
 {
     if ( vlapic_domain(vlapic)->max_vcpus <= 2 )
         return 0;
@@ -482,7 +483,7 @@ void vlapic_ipi(
 {
     unsigned int dest;
     unsigned int short_hand = icr_low & APIC_SHORT_MASK;
-    bool_t dest_mode = !!(icr_low & APIC_DEST_MASK);
+    bool dest_mode = icr_low & APIC_DEST_MASK;
 
     HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "icr = 0x%08x:%08x", icr_high, icr_low);
@@ -523,7 +524,7 @@ void vlapic_ipi(
         /* fall through */
     default: {
         struct vcpu *v;
-        bool_t batch = is_multicast_dest(vlapic, short_hand, dest, dest_mode);
+        bool batch = is_multicast_dest(vlapic, short_hand, dest, dest_mode);
 
         if ( batch )
             cpu_raise_softirq_batch_begin();
@@ -1342,7 +1343,7 @@ int vlapic_has_pending_irq(struct vcpu *v)
     return irr;
 }
 
-int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack)
+int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool force_ack)
 {
     struct vlapic *vlapic = vcpu_vlapic(v);
     int isr;
@@ -1377,7 +1378,7 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack)
     return 1;
 }
 
-bool_t is_vlapic_lvtpc_enabled(struct vlapic *vlapic)
+bool is_vlapic_lvtpc_enabled(struct vlapic *vlapic)
 {
     return (vlapic_enabled(vlapic) &&
             !(vlapic_get_reg(vlapic, APIC_LVTPC) & APIC_LVT_MASKED));
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 6cefb88aec29..6711697ff6ea 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -32,13 +32,13 @@
 #include
 #include
 
-static bool_t __read_mostly opt_vpid_enabled = 1;
+static bool __read_mostly opt_vpid_enabled = true;
 boolean_param("vpid", opt_vpid_enabled);
 
-static bool_t __read_mostly opt_unrestricted_guest_enabled = 1;
+static bool __read_mostly opt_unrestricted_guest_enabled = true;
 boolean_param("unrestricted_guest", opt_unrestricted_guest_enabled);
 
-static bool_t __read_mostly opt_apicv_enabled = 1;
+static bool __read_mostly opt_apicv_enabled = true;
 boolean_param("apicv", opt_apicv_enabled);
 
 /*
@@ -168,12 +168,12 @@ u32 vmx_vmexit_control __read_mostly;
 u32 vmx_vmentry_control __read_mostly;
 u64 vmx_ept_vpid_cap __read_mostly;
 u64 vmx_vmfunc __read_mostly;
-bool_t vmx_virt_exception __read_mostly;
+bool vmx_virt_exception __read_mostly;
 static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, vmxon_region);
 static DEFINE_PER_CPU(paddr_t, current_vmcs);
 static DEFINE_PER_CPU(struct list_head, active_vmcs_list);
-DEFINE_PER_CPU(bool_t, vmxon);
+DEFINE_PER_CPU(bool, vmxon);
 
 static u32 vmcs_revision_id __read_mostly;
 u64 __read_mostly vmx_basic_msr;
@@ -209,7 +209,7 @@ static void __init vmx_display_features(void)
 }
 
 static u32 adjust_vmx_controls(
-    const char *name, u32 ctl_min, u32 ctl_opt, u32 msr, bool_t *mismatch)
+    const char *name, u32 ctl_min, u32 ctl_opt, u32 msr, bool *mismatch)
 {
     u32 vmx_msr_low, vmx_msr_high, ctl = ctl_min | ctl_opt;
@@ -229,7 +229,7 @@ static u32 adjust_vmx_controls(
     return ctl;
 }
 
-static bool_t cap_check(const char *name, u32 expected, u32 saw)
+static bool cap_check(const char *name, u32 expected, u32 saw)
 {
     if ( saw != expected )
         printk("VMX %s: saw %#x expected %#x\n", name, saw, expected);
@@ -247,7 +247,7 @@ static int vmx_init_vmcs_config(bool bsp)
     u32 _vmx_vmexit_control;
     u32 _vmx_vmentry_control;
     u64 _vmx_vmfunc = 0;
-    bool_t mismatch = 0;
+    bool mismatch = false;
 
     rdmsr(MSR_IA32_VMX_BASIC, vmx_basic_msr_low, vmx_basic_msr_high);
@@ -802,7 +802,7 @@ struct foreign_vmcs {
 };
 static DEFINE_PER_CPU(struct foreign_vmcs, foreign_vmcs);
 
-bool_t vmx_vmcs_try_enter(struct vcpu *v)
+bool vmx_vmcs_try_enter(struct vcpu *v)
 {
     struct foreign_vmcs *fv;
@@ -840,7 +840,7 @@ bool_t vmx_vmcs_try_enter(struct vcpu *v)
 
 void vmx_vmcs_enter(struct vcpu *v)
 {
-    bool_t okay = vmx_vmcs_try_enter(v);
+    bool okay = vmx_vmcs_try_enter(v);
 
     ASSERT(okay);
 }
@@ -1599,10 +1599,9 @@ void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector)
                  &v->arch.hvm.vmx.eoi_exitmap_changed);
 }
 
-bool_t vmx_vcpu_pml_enabled(const struct vcpu *v)
+bool vmx_vcpu_pml_enabled(const struct vcpu *v)
 {
-    return !!(v->arch.hvm.vmx.secondary_exec_control &
-              SECONDARY_EXEC_ENABLE_PML);
+    return v->arch.hvm.vmx.secondary_exec_control & SECONDARY_EXEC_ENABLE_PML;
 }
 
 int vmx_vcpu_enable_pml(struct vcpu *v)
@@ -1704,7 +1703,7 @@ void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
     vmx_vmcs_exit(v);
 }
 
-bool_t vmx_domain_pml_enabled(const struct domain *d)
+bool vmx_domain_pml_enabled(const struct domain *d)
 {
     return d->arch.hvm.vmx.status & VMX_DOMAIN_PML_ENABLED;
 }
@@ -1872,7 +1871,7 @@ static void vmx_update_debug_state(struct vcpu *v)
 void cf_check vmx_do_resume(void)
 {
     struct vcpu *v = current;
-    bool_t debug_state;
+    bool debug_state;
     unsigned long host_cr4;
 
     if ( v->arch.hvm.vmx.active_cpu == smp_processor_id() )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 1edc7f1e919f..b99770d588fb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -50,7 +50,7 @@
 #include
 #include
 
-static bool_t __initdata opt_force_ept;
+static bool __initdata opt_force_ept;
 boolean_param("force-ept", opt_force_ept);
 
 static void cf_check vmx_ctxt_switch_from(struct vcpu *v);
@@ -2196,7 +2196,7 @@ static void cf_check vmx_process_isr(int isr, struct vcpu *v)
 
 static void __vmx_deliver_posted_interrupt(struct vcpu *v)
 {
-    bool_t running = v->is_running;
+    bool running = v->is_running;
 
     vcpu_unblock(v);
     /*
@@ -4793,7 +4793,7 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
     struct domain *currd = curr->domain;
     u32 new_asid, old_asid;
     struct hvm_vcpu_asid *p_asid;
-    bool_t need_flush;
+    bool need_flush;
 
     ASSERT(hvmemul_cache_disabled(curr));
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b7be424afbca..e2bb71b0ab58 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -811,7 +811,7 @@ static void unmap_io_bitmap(struct vcpu *v, unsigned int idx)
     }
 }
 
-static bool_t __must_check _map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
+static bool __must_check _map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
     unsigned long gpa;
@@ -825,7 +825,7 @@ static bool_t __must_check _map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
     return nvmx->iobitmap[index] != NULL;
 }
 
-static inline bool_t __must_check map_io_bitmap_all(struct vcpu *v)
+static inline bool __must_check map_io_bitmap_all(struct vcpu *v)
 {
     return _map_io_bitmap(v, IO_BITMAP_A) &&
            _map_io_bitmap(v, IO_BITMAP_B);
@@ -1148,7 +1148,7 @@ static uint64_t get_host_eptp(struct vcpu *v)
     return p2m_get_hostp2m(v->domain)->ept.eptp;
 }
 
-static bool_t nvmx_vpid_enabled(const struct vcpu *v)
+static bool nvmx_vpid_enabled(const struct vcpu *v)
 {
     uint32_t second_cntl;
@@ -1591,12 +1591,12 @@ static int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-static bool_t vvmcs_launched(struct list_head *launched_list,
-                             unsigned long vvmcs_mfn)
+static bool vvmcs_launched(struct list_head *launched_list,
+                           unsigned long vvmcs_mfn)
 {
     struct vvmcs_list *vvmcs;
     struct list_head *pos;
-    bool_t launched = 0;
+    bool launched = false;
 
     list_for_each(pos, launched_list)
     {
@@ -1679,7 +1679,7 @@ static enum vmx_insn_errno nvmx_vmresume(struct vcpu *v)
 
 static int nvmx_handle_vmresume(struct cpu_user_regs *regs)
 {
-    bool_t launched;
+    bool launched;
     struct vcpu *v = current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
     unsigned long intr_shadow;
@@ -1715,7 +1715,7 @@ static int nvmx_handle_vmresume(struct cpu_user_regs *regs)
 
 static int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
 {
-    bool_t launched;
+    bool launched;
     struct vcpu *v = current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
     unsigned long intr_shadow;
@@ -1785,7 +1785,7 @@ static int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
 
     if ( !vvmcx_valid(v) )
     {
-        bool_t writable;
+        bool writable;
         void *vvmcx = hvm_map_guest_frame_rw(paddr_to_pfn(gpa), 1, &writable);
 
         if ( vvmcx )
@@ -1894,7 +1894,7 @@ static int nvmx_handle_vmclear(struct cpu_user_regs *regs)
     else
     {
         /* Even if this VMCS isn't the current one, we must clear it.
*/ - bool_t writable; + bool writable; vvmcs = hvm_map_guest_frame_rw(paddr_to_pfn(gpa), 0, &writable); diff --git a/xen/arch/x86/include/asm/acpi.h b/xen/arch/x86/include/asm/acpi.h index 6ce79ce465b4..6d94f822d476 100644 --- a/xen/arch/x86/include/asm/acpi.h +++ b/xen/arch/x86/include/asm/acpi.h @@ -127,7 +127,7 @@ struct acpi_sleep_info { uint32_t sleep_state; uint64_t wakeup_vector; uint32_t vector_width; - bool_t sleep_extended; + bool sleep_extended; }; #define MAX_MADT_ENTRIES MAX(256, 2 * NR_CPUS) diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h index 7625c0ecd66b..8459e43ded57 100644 --- a/xen/arch/x86/include/asm/apic.h +++ b/xen/arch/x86/include/asm/apic.h @@ -139,7 +139,7 @@ static __inline void apic_icr_write(u32 low, u32 dest) } } -static __inline bool_t apic_isr_read(u8 vector) +static __inline bool apic_isr_read(u8 vector) { return (apic_read(APIC_ISR + ((vector & ~0x1f) >> 1)) >> (vector & 0x1f)) & 1; diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h index d033224d2b1a..4b6b7ceab1ed 100644 --- a/xen/arch/x86/include/asm/domain.h +++ b/xen/arch/x86/include/asm/domain.h @@ -106,17 +106,17 @@ struct shadow_domain { /* Shadow hashtable */ struct page_info **hash_table; - bool_t hash_walking; /* Some function is walking the hash table */ + bool hash_walking; /* Some function is walking the hash table */ /* Fast MMIO path heuristic */ bool has_fast_mmio_entries; #ifdef CONFIG_HVM /* OOS */ - bool_t oos_active; + bool oos_active; /* Has this domain ever used HVMOP_pagetable_dying? */ - bool_t pagetable_dying_op; + bool pagetable_dying_op; #endif #ifdef CONFIG_PV @@ -160,7 +160,7 @@ struct shadow_vcpu { unsigned long off[SHADOW_OOS_FIXUPS]; } oos_fixup[SHADOW_OOS_PAGES]; - bool_t pagetable_dying; + bool pagetable_dying; #endif #endif }; @@ -199,7 +199,7 @@ struct paging_domain { /* flags to control paging operation */ u32 mode; /* Has that pool ever run out of memory? 
*/ - bool_t p2m_alloc_failed; + bool p2m_alloc_failed; /* extension for shadow paging support */ struct shadow_domain shadow; /* extension for hardware-assited paging */ @@ -353,7 +353,7 @@ struct arch_domain mm_lock_t nested_p2m_lock; /* altp2m: allow multiple copies of host p2m */ - bool_t altp2m_active; + bool altp2m_active; struct p2m_domain *altp2m_p2m[MAX_ALTP2M]; mm_lock_t altp2m_list_lock; uint64_t *altp2m_eptp; @@ -364,10 +364,10 @@ struct arch_domain struct radix_tree_root irq_pirq; /* Is shared-info page in 32-bit format? */ - bool_t has_32bit_shinfo; + bool has_32bit_shinfo; /* Is PHYSDEVOP_eoi to automatically unmask the event channel? */ - bool_t auto_unmask; + bool auto_unmask; /* * The width of the FIP/FDP register in the FPU that needs to be @@ -399,7 +399,7 @@ struct arch_domain /* TSC management (emulation, pv, scaling, stats) */ int tsc_mode; /* see asm/time.h */ - bool_t vtsc; /* tsc is emulated (may change after migrate) */ + bool vtsc; /* tsc is emulated (may change after migrate) */ s_time_t vtsc_last; /* previous TSC value (guarantee monotonicity) */ uint64_t vtsc_offset; /* adjustment for save/restore/migrate */ uint32_t tsc_khz; /* cached guest khz for certain emulated or @@ -452,7 +452,7 @@ struct arch_domain } monitor; /* Mem_access emulation control */ - bool_t mem_access_emulate_each_rep; + bool mem_access_emulate_each_rep; /* Don't unconditionally inject #GP for unhandled MSRs. */ bool msr_relaxed; @@ -544,8 +544,8 @@ struct pv_vcpu unsigned long sysenter_callback_eip; unsigned short syscall32_callback_cs; unsigned short sysenter_callback_cs; - bool_t syscall32_disables_events; - bool_t sysenter_disables_events; + bool syscall32_disables_events; + bool sysenter_disables_events; /* * 64bit segment bases. @@ -586,7 +586,7 @@ struct pv_vcpu uint32_t dr7_emul; /* Deferred VA-based update state. 
*/ - bool_t need_update_runstate_area; + bool need_update_runstate_area; struct vcpu_time_info pending_system_time; }; @@ -656,7 +656,7 @@ struct arch_vcpu uint64_t xcr0_accum; /* This variable determines whether nonlazy extended state has been used, * and thus should be saved/restored. */ - bool_t nonlazy_xstate_used; + bool nonlazy_xstate_used; /* Restore all FPU state (lazy and non-lazy state) on context switch? */ bool fully_eager_fpu; diff --git a/xen/arch/x86/include/asm/hardirq.h b/xen/arch/x86/include/asm/hardirq.h index 276e3419d778..342361cb6fdd 100644 --- a/xen/arch/x86/include/asm/hardirq.h +++ b/xen/arch/x86/include/asm/hardirq.h @@ -9,7 +9,7 @@ typedef struct { unsigned int __local_irq_count; unsigned int nmi_count; unsigned int mce_count; - bool_t __mwait_wakeup; + bool __mwait_wakeup; } __cacheline_aligned irq_cpustat_t; #include /* Standard mappings for irq_cpustat_t above */ diff --git a/xen/arch/x86/include/asm/hvm/asid.h b/xen/arch/x86/include/asm/hvm/asid.h index 0207f8fc29db..17c58353d139 100644 --- a/xen/arch/x86/include/asm/hvm/asid.h +++ b/xen/arch/x86/include/asm/hvm/asid.h @@ -26,7 +26,7 @@ void hvm_asid_flush_core(void); /* Called before entry to guest context. Checks ASID allocation, returns a * boolean indicating whether all ASIDs must be flushed. 
*/ -bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid); +bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid); #endif /* __ASM_X86_HVM_ASID_H__ */ diff --git a/xen/arch/x86/include/asm/hvm/emulate.h b/xen/arch/x86/include/asm/hvm/emulate.h index 398d0db0781b..29d679442e10 100644 --- a/xen/arch/x86/include/asm/hvm/emulate.h +++ b/xen/arch/x86/include/asm/hvm/emulate.h @@ -50,7 +50,7 @@ struct hvm_emulate_ctxt { bool is_mem_access; - bool_t set_context; + bool set_context; }; enum emul_kind { diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index 6d53713fc3a9..a4c1af19acd6 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -21,7 +21,7 @@ struct pirq; /* needed by pi_update_irte */ #ifdef CONFIG_HVM_FEP /* Permit use of the Forced Emulation Prefix in HVM guests */ -extern bool_t opt_hvm_fep; +extern bool opt_hvm_fep; #else #define opt_hvm_fep 0 #endif @@ -95,7 +95,7 @@ struct hvm_function_table { const char *name; /* Support Hardware-Assisted Paging? */ - bool_t hap_supported; + bool hap_supported; /* Necessary hardware support for alternate p2m's? 
*/ bool altp2m_supported; @@ -189,10 +189,10 @@ struct hvm_function_table { int (*nhvm_vcpu_reset)(struct vcpu *v); int (*nhvm_vcpu_vmexit_event)(struct vcpu *v, const struct x86_event *event); uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v); - bool_t (*nhvm_vmcx_guest_intercepts_event)( + bool (*nhvm_vmcx_guest_intercepts_event)( struct vcpu *v, unsigned int vector, int errcode); - bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v); + bool (*nhvm_vmcx_hap_enabled)(struct vcpu *v); enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v); void (*nhvm_domain_relinquish_resources)(struct domain *d); @@ -218,7 +218,7 @@ struct hvm_function_table { /* Alternate p2m */ void (*altp2m_vcpu_update_p2m)(struct vcpu *v); void (*altp2m_vcpu_update_vmfunc_ve)(struct vcpu *v); - bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v); + bool (*altp2m_vcpu_emulate_ve)(struct vcpu *v); int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs); /* vmtrace */ @@ -247,7 +247,7 @@ struct hvm_function_table { }; extern struct hvm_function_table hvm_funcs; -extern bool_t hvm_enabled; +extern bool hvm_enabled; extern s8 hvm_port80_allowed; extern const struct hvm_function_table *start_svm(void); @@ -346,10 +346,10 @@ static inline bool hvm_virtual_to_linear_addr( access_type, active_cs, linear); } -void *hvm_map_guest_frame_rw(unsigned long gfn, bool_t permanent, - bool_t *writable); -void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent); -void hvm_unmap_guest_frame(void *p, bool_t permanent); +void *hvm_map_guest_frame_rw(unsigned long gfn, bool permanent, + bool *writable); +void *hvm_map_guest_frame_ro(unsigned long gfn, bool permanent); +void hvm_unmap_guest_frame(void *p, bool permanent); void hvm_mapped_guest_frames_mark_dirty(struct domain *d); int hvm_debug_op(struct vcpu *v, int32_t op); @@ -616,7 +616,7 @@ static inline uint64_t nhvm_vcpu_p2m_base(struct vcpu *v) } /* returns true, when l1 guest intercepts the specified trap */ -static inline bool_t 
nhvm_vmcx_guest_intercepts_event( +static inline bool nhvm_vmcx_guest_intercepts_event( struct vcpu *v, unsigned int vector, int errcode) { return alternative_call(hvm_funcs.nhvm_vmcx_guest_intercepts_event, v, @@ -624,7 +624,7 @@ static inline bool_t nhvm_vmcx_guest_intercepts_event( } /* returns true when l1 guest wants to use hap to run l2 guest */ -static inline bool_t nhvm_vmcx_hap_enabled(struct vcpu *v) +static inline bool nhvm_vmcx_hap_enabled(struct vcpu *v) { return alternative_call(hvm_funcs.nhvm_vmcx_hap_enabled, v); } diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h index e5225e75ef26..54de84185fb3 100644 --- a/xen/arch/x86/include/asm/hvm/io.h +++ b/xen/arch/x86/include/asm/hvm/io.h @@ -54,7 +54,7 @@ typedef int (*hvm_io_write_t)(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t data); -typedef bool_t (*hvm_io_accept_t)(const struct hvm_io_handler *handler, +typedef bool (*hvm_io_accept_t)(const struct hvm_io_handler *handler, const ioreq_t *p); typedef void (*hvm_io_complete_t)(const struct hvm_io_handler *handler); @@ -72,7 +72,7 @@ int hvm_io_intercept(ioreq_t *p); struct hvm_io_handler *hvm_next_io_handler(struct domain *d); -bool_t hvm_mmio_internal(paddr_t gpa); +bool hvm_mmio_internal(paddr_t gpa); void register_mmio_handler(struct domain *d, const struct hvm_mmio_ops *ops); @@ -121,7 +121,7 @@ struct hvm_hw_stdvga { uint8_t sr[8]; uint8_t gr_index; uint8_t gr[9]; - bool_t stdvga; + bool stdvga; enum stdvga_cache_state cache; uint32_t latch; struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ diff --git a/xen/arch/x86/include/asm/hvm/nestedhvm.h b/xen/arch/x86/include/asm/hvm/nestedhvm.h index 3d1ec53a6ff9..56a2019e1bae 100644 --- a/xen/arch/x86/include/asm/hvm/nestedhvm.h +++ b/xen/arch/x86/include/asm/hvm/nestedhvm.h @@ -32,7 +32,7 @@ static inline bool nestedhvm_enabled(const struct domain *d) int nestedhvm_vcpu_initialise(struct vcpu *v); void 
nestedhvm_vcpu_destroy(struct vcpu *v); void nestedhvm_vcpu_reset(struct vcpu *v); -bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v); +bool nestedhvm_vcpu_in_guestmode(struct vcpu *v); #define nestedhvm_vcpu_enter_guestmode(v) \ vcpu_nestedhvm(v).nv_guestmode = 1 #define nestedhvm_vcpu_exit_guestmode(v) \ @@ -50,7 +50,7 @@ int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa, struct npfec npfec); /* IO permission map */ -unsigned long *nestedhvm_vcpu_iomap_get(bool_t ioport_80, bool_t ioport_ed); +unsigned long *nestedhvm_vcpu_iomap_get(bool ioport_80, bool ioport_ed); /* Misc */ #define nestedhvm_paging_mode_hap(v) (!!nhvm_vmcx_hap_enabled(v)) diff --git a/xen/arch/x86/include/asm/hvm/vcpu.h b/xen/arch/x86/include/asm/hvm/vcpu.h index c9ef2b325bd4..64c7a6fedea9 100644 --- a/xen/arch/x86/include/asm/hvm/vcpu.h +++ b/xen/arch/x86/include/asm/hvm/vcpu.h @@ -60,7 +60,7 @@ struct hvm_vcpu_io { * For string instruction emulation we need to be able to signal a * necessary retry through other than function return codes. */ - bool_t mmio_retry; + bool mmio_retry; unsigned long msix_unmask_address; unsigned long msix_snoop_address; @@ -70,7 +70,7 @@ struct hvm_vcpu_io { }; struct nestedvcpu { - bool_t nv_guestmode; /* vcpu in guestmode? */ + bool nv_guestmode; /* vcpu in guestmode? 
*/ void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */ void *nv_n1vmcx; /* VMCB/VMCS used to run l1 guest */ void *nv_n2vmcx; /* shadow VMCB/VMCS used to run l2 guest */ @@ -85,22 +85,22 @@ struct nestedvcpu { struct nestedvmx nvmx; } u; - bool_t nv_flushp2m; /* True, when p2m table must be flushed */ + bool nv_flushp2m; /* True, when p2m table must be flushed */ struct p2m_domain *nv_p2m; /* used p2m table for this vcpu */ bool stale_np2m; /* True when p2m_base in VMCx02 is no longer valid */ uint64_t np2m_generation; struct hvm_vcpu_asid nv_n2asid; - bool_t nv_vmentry_pending; - bool_t nv_vmexit_pending; - bool_t nv_vmswitch_in_progress; /* true during vmentry/vmexit emulation */ + bool nv_vmentry_pending; + bool nv_vmexit_pending; + bool nv_vmswitch_in_progress; /* true during vmentry/vmexit emulation */ /* Does l1 guest intercept io ports 0x80 and/or 0xED ? * Useful to optimize io permission handling. */ - bool_t nv_ioport80; - bool_t nv_ioportED; + bool nv_ioport80; + bool nv_ioportED; /* L2's control-resgister, just as the L2 sees them. 
*/ unsigned long guest_cr[5]; diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h index f27454a13698..88ef94524339 100644 --- a/xen/arch/x86/include/asm/hvm/vlapic.h +++ b/xen/arch/x86/include/asm/hvm/vlapic.h @@ -66,7 +66,7 @@ struct vlapic { struct hvm_hw_lapic hw; struct hvm_hw_lapic_regs *regs; struct { - bool_t hw, regs; + bool hw, regs; uint32_t id, ldr; } loaded; spinlock_t esr_lock; @@ -97,13 +97,13 @@ static inline void vlapic_set_reg( void vlapic_reg_write(struct vcpu *v, unsigned int reg, uint32_t val); -bool_t is_vlapic_lvtpc_enabled(struct vlapic *vlapic); +bool is_vlapic_lvtpc_enabled(struct vlapic *vlapic); bool vlapic_test_irq(const struct vlapic *vlapic, uint8_t vec); void vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig); int vlapic_has_pending_irq(struct vcpu *v); -int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack); +int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool force_ack); int vlapic_init(struct vcpu *v); void vlapic_destroy(struct vcpu *v); @@ -131,11 +131,11 @@ int vlapic_apicv_write(struct vcpu *v, unsigned int offset); struct vlapic *vlapic_lowest_prio( struct domain *d, const struct vlapic *source, - int short_hand, uint32_t dest, bool_t dest_mode); + int short_hand, uint32_t dest, bool dest_mode); -bool_t vlapic_match_dest( +bool vlapic_match_dest( const struct vlapic *target, const struct vlapic *source, - int short_hand, uint32_t dest, bool_t dest_mode); + int short_hand, uint32_t dest, bool dest_mode); static inline void vlapic_sync_pir_to_irr(struct vcpu *v) { diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h index e05664399309..a9afdffae547 100644 --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h @@ -143,7 +143,7 @@ struct vmx_vcpu { unsigned long host_cr0; /* Do we need to tolerate a spurious EPT_MISCONFIG VM exit? 
*/ - bool_t ept_spurious_misconfig; + bool ept_spurious_misconfig; /* Processor Trace configured and enabled for the vcpu. */ bool ipt_active; @@ -183,7 +183,7 @@ struct vmx_vcpu { int vmx_create_vmcs(struct vcpu *v); void vmx_destroy_vmcs(struct vcpu *v); void vmx_vmcs_enter(struct vcpu *v); -bool_t __must_check vmx_vmcs_try_enter(struct vcpu *v); +bool __must_check vmx_vmcs_try_enter(struct vcpu *v); void vmx_vmcs_exit(struct vcpu *v); void vmx_vmcs_reload(struct vcpu *v); @@ -663,13 +663,13 @@ void virtual_vmcs_vmwrite(const struct vcpu *, u32 encoding, u64 val); enum vmx_insn_errno virtual_vmcs_vmwrite_safe(const struct vcpu *v, u32 vmcs_encoding, u64 val); -DECLARE_PER_CPU(bool_t, vmxon); +DECLARE_PER_CPU(bool, vmxon); -bool_t vmx_vcpu_pml_enabled(const struct vcpu *v); +bool vmx_vcpu_pml_enabled(const struct vcpu *v); int vmx_vcpu_enable_pml(struct vcpu *v); void vmx_vcpu_disable_pml(struct vcpu *v); void vmx_vcpu_flush_pml_buffer(struct vcpu *v); -bool_t vmx_domain_pml_enabled(const struct domain *d); +bool vmx_domain_pml_enabled(const struct domain *d); int vmx_domain_enable_pml(struct domain *d); void vmx_domain_disable_pml(struct domain *d); void vmx_domain_flush_pml_buffers(struct domain *d); diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h index d4b335a2bca9..31643ed48103 100644 --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h @@ -585,7 +585,7 @@ void vmx_inject_extint(int trap, uint8_t source); void vmx_inject_nmi(void); void ept_walk_table(struct domain *d, unsigned long gfn); -bool_t ept_handle_misconfig(uint64_t gpa); +bool ept_handle_misconfig(uint64_t gpa); int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn, unsigned int order, bool *ipat, p2m_type_t type); void setup_ept_dump(void); diff --git a/xen/arch/x86/include/asm/hvm/vmx/vvmx.h b/xen/arch/x86/include/asm/hvm/vmx/vvmx.h index dc9db69258d2..da10d3fa9617 100644 --- 
a/xen/arch/x86/include/asm/hvm/vmx/vvmx.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vvmx.h @@ -35,7 +35,7 @@ struct nestedvmx { u8 source; } intr; struct { - bool_t enabled; + bool enabled; uint32_t exit_reason; uint32_t exit_qual; } ept; diff --git a/xen/arch/x86/include/asm/mtrr.h b/xen/arch/x86/include/asm/mtrr.h index 1d2744eceb9e..36dac0a775a3 100644 --- a/xen/arch/x86/include/asm/mtrr.h +++ b/xen/arch/x86/include/asm/mtrr.h @@ -44,7 +44,7 @@ struct mtrr_state { u64 mtrr_cap; /* ranges in var MSRs are overlapped or not:0(no overlapped) */ - bool_t overlapped; + bool overlapped; }; extern struct mtrr_state mtrr_state; @@ -68,19 +68,19 @@ extern void mtrr_aps_sync_begin(void); extern void mtrr_aps_sync_end(void); extern void mtrr_bp_restore(void); -extern bool_t mtrr_var_range_msr_set(struct domain *d, struct mtrr_state *m, - uint32_t msr, uint64_t msr_content); -extern bool_t mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m, - uint32_t row, uint64_t msr_content); -extern bool_t mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m, - uint64_t msr_content); +extern bool mtrr_var_range_msr_set(struct domain *d, struct mtrr_state *m, + uint32_t msr, uint64_t msr_content); +extern bool mtrr_fix_range_msr_set(struct domain *d, struct mtrr_state *m, + uint32_t row, uint64_t msr_content); +extern bool mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m, + uint64_t msr_content); #ifdef CONFIG_HVM extern void memory_type_changed(struct domain *d); #else static inline void memory_type_changed(struct domain *d) {} #endif -extern bool_t pat_msr_set(uint64_t *pat, uint64_t msr); +extern bool pat_msr_set(uint64_t *pat, uint64_t msr); bool is_var_mtrr_overlapped(const struct mtrr_state *m); bool mtrr_pat_not_equal(const struct vcpu *vd, const struct vcpu *vs); diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h index f2c7d58b5999..32f3f394b05a 100644 --- a/xen/arch/x86/include/asm/p2m.h +++ b/xen/arch/x86/include/asm/p2m.h @@ 
-27,7 +27,7 @@ #endif #define P2M_DEBUGGING 0 -extern bool_t opt_hap_1gb, opt_hap_2mb; +extern bool opt_hap_1gb, opt_hap_2mb; /* * The upper levels of the p2m pagetable always contain full rights; all @@ -245,7 +245,7 @@ struct p2m_domain { p2m_access_t *p2ma, p2m_query_t q, unsigned int *page_order, - bool_t *sve); + bool *sve); int (*recalc)(struct p2m_domain *p2m, unsigned long gfn); void (*enable_hardware_log_dirty)(struct p2m_domain *p2m); @@ -284,11 +284,11 @@ struct p2m_domain { */ void (*tlb_flush)(struct p2m_domain *p2m); unsigned int defer_flush; - bool_t need_flush; + bool need_flush; /* If true, and an access fault comes in and there is no vm_event listener, * pause domain. Otherwise, remove access restrictions. */ - bool_t access_required; + bool access_required; /* Highest guest frame that's ever been mapped in the p2m */ unsigned long max_mapped_pfn; @@ -420,17 +420,17 @@ void np2m_schedule(int dir); static inline void np2m_schedule(int dir) {} #endif -static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m) +static inline bool p2m_is_hostp2m(const struct p2m_domain *p2m) { return p2m->p2m_class == p2m_host; } -static inline bool_t p2m_is_nestedp2m(const struct p2m_domain *p2m) +static inline bool p2m_is_nestedp2m(const struct p2m_domain *p2m) { return p2m->p2m_class == p2m_nested; } -static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m) +static inline bool p2m_is_altp2m(const struct p2m_domain *p2m) { return p2m->p2m_class == p2m_alternate; } @@ -450,11 +450,11 @@ void p2m_unlock_and_tlb_flush(struct p2m_domain *p2m); mfn_t __nonnull(3, 4) p2m_get_gfn_type_access( struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t, - p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked); + p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool locked); static inline mfn_t __nonnull(3, 4) _get_gfn_type_access( struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t, - p2m_access_t *a, p2m_query_t q, unsigned int *page_order, 
bool_t locked) + p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool locked) { if ( !p2m || !paging_mode_translate(p2m->domain) ) { @@ -888,7 +888,7 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx) } /* Switch alternate p2m for a single vcpu */ -bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx); +bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx); /* Check to see if vcpu should be switched to a different p2m. */ void p2m_altp2m_check(struct vcpu *v, uint16_t idx); diff --git a/xen/arch/x86/include/asm/page.h b/xen/arch/x86/include/asm/page.h index c9466172ba3f..93a7b368ac0b 100644 --- a/xen/arch/x86/include/asm/page.h +++ b/xen/arch/x86/include/asm/page.h @@ -378,7 +378,7 @@ static inline unsigned int cacheattr_to_pte_flags(unsigned int cacheattr) } /* return true if permission increased */ -static inline bool_t +static inline bool perms_strictly_increased(uint32_t old_flags, uint32_t new_flags) /* Given the flags of two entries, are the new flags a strict * increase in rights over the old ones? */ diff --git a/xen/arch/x86/include/asm/paging.h b/xen/arch/x86/include/asm/paging.h index 62605d7697bc..76162a9429ce 100644 --- a/xen/arch/x86/include/asm/paging.h +++ b/xen/arch/x86/include/asm/paging.h @@ -210,7 +210,7 @@ int paging_domain_init(struct domain *d); * manipulate the log-dirty bitmap. 
*/ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc, XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl, - bool_t resuming); + bool resuming); /* Call when destroying a vcpu/domain */ void paging_vcpu_teardown(struct vcpu *v); diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h index fd981de9de35..6bfe87e2780b 100644 --- a/xen/arch/x86/include/asm/pci.h +++ b/xen/arch/x86/include/asm/pci.h @@ -39,11 +39,11 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf, uint32_t *data); int pci_msi_conf_write_intercept(struct pci_dev *pdev, unsigned int reg, unsigned int size, uint32_t *data); -bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg, - unsigned int *bdf); +bool pci_mmcfg_decode(unsigned long mfn, unsigned int *seg, + unsigned int *bdf); -bool_t pci_ro_mmcfg_decode(unsigned long mfn, unsigned int *seg, - unsigned int *bdf); +bool pci_ro_mmcfg_decode(unsigned long mfn, unsigned int *seg, + unsigned int *bdf); /* MMCFG external variable defines */ extern int pci_mmcfg_config_num; diff --git a/xen/arch/x86/include/asm/psr.h b/xen/arch/x86/include/asm/psr.h index 8ecb7a0eea70..51df78794cd0 100644 --- a/xen/arch/x86/include/asm/psr.h +++ b/xen/arch/x86/include/asm/psr.h @@ -67,7 +67,7 @@ enum psr_type { extern struct psr_cmt *psr_cmt; -static inline bool_t psr_cmt_enabled(void) +static inline bool psr_cmt_enabled(void) { return !!psr_cmt; } diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h index 7858aec6cae6..6629093197c3 100644 --- a/xen/arch/x86/include/asm/vpmu.h +++ b/xen/arch/x86/include/asm/vpmu.h @@ -33,8 +33,8 @@ struct arch_vpmu_ops { int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content); int (*do_interrupt)(struct cpu_user_regs *regs); void (*arch_vpmu_destroy)(struct vcpu *v); - int (*arch_vpmu_save)(struct vcpu *v, bool_t to_guest); - int (*arch_vpmu_load)(struct vcpu *v, bool_t from_guest); + int (*arch_vpmu_save)(struct vcpu *v, bool to_guest); + int 
(*arch_vpmu_load)(struct vcpu *v, bool from_guest); void (*arch_vpmu_dump)(const struct vcpu *v); #ifdef CONFIG_MEM_SHARING @@ -87,12 +87,12 @@ static inline void vpmu_clear(struct vpmu_struct *vpmu) /* VPMU_AVAILABLE should be altered by get/put_vpmu(). */ vpmu->flags &= VPMU_AVAILABLE; } -static inline bool_t vpmu_is_set(const struct vpmu_struct *vpmu, const u32 mask) +static inline bool vpmu_is_set(const struct vpmu_struct *vpmu, const u32 mask) { return !!(vpmu->flags & mask); } -static inline bool_t vpmu_are_all_set(const struct vpmu_struct *vpmu, - const u32 mask) +static inline bool vpmu_are_all_set(const struct vpmu_struct *vpmu, + const u32 mask) { return !!((vpmu->flags & mask) == mask); } @@ -104,7 +104,7 @@ void vpmu_initialise(struct vcpu *v); void vpmu_destroy(struct vcpu *v); void vpmu_save(struct vcpu *v); void cf_check vpmu_save_force(void *arg); -int vpmu_load(struct vcpu *v, bool_t from_guest); +int vpmu_load(struct vcpu *v, bool from_guest); void vpmu_dump(struct vcpu *v); static inline int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c index d6df48af5427..d88d677825f1 100644 --- a/xen/arch/x86/mm/hap/nested_ept.c +++ b/xen/arch/x86/mm/hap/nested_ept.c @@ -42,7 +42,7 @@ #define NEPT_2M_ENTRY_FLAG (1 << 10) #define NEPT_4K_ENTRY_FLAG (1 << 9) -static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level) +static bool nept_rsv_bits_check(ept_entry_t e, uint32_t level) { uint64_t rsv_bits = EPT_MUST_RSV_BITS; @@ -68,7 +68,7 @@ static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level) } /* EMT checking*/ -static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level) +static bool nept_emt_bits_check(ept_entry_t e, uint32_t level) { if ( e.sp || level == 1 ) { @@ -79,13 +79,13 @@ static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level) return 0; } -static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits) +static bool 
nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits) { return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits); } /* nept's non-present check */ -static bool_t nept_non_present_check(ept_entry_t e) +static bool nept_non_present_check(ept_entry_t e) { if ( e.epte & EPTE_RWX_MASK ) return 0; @@ -106,7 +106,7 @@ uint64_t nept_get_ept_vpid_cap(void) return caps; } -static bool_t nept_rwx_bits_check(ept_entry_t e) +static bool nept_rwx_bits_check(ept_entry_t e) { /*write only or write/execute only*/ uint8_t rwx_bits = e.epte & EPTE_RWX_MASK; @@ -122,7 +122,7 @@ static bool_t nept_rwx_bits_check(ept_entry_t e) } /* nept's misconfiguration check */ -static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level) +static bool nept_misconfiguration_check(ept_entry_t e, uint32_t level) { return nept_rsv_bits_check(e, level) || nept_emt_bits_check(e, level) || diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c index 2c2b34ccf60a..541ecbeeb001 100644 --- a/xen/arch/x86/mm/mem_paging.c +++ b/xen/arch/x86/mm/mem_paging.c @@ -431,7 +431,7 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg) int rc; xen_mem_paging_op_t mpo; struct domain *d; - bool_t copyback = 0; + bool copyback = false; if ( copy_from_guest(&mpo, arg, 1) ) return -EFAULT; diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index 85c4e8e54f66..227bdf7c2433 100644 --- a/xen/arch/x86/mm/p2m-ept.c +++ b/xen/arch/x86/mm/p2m-ept.c @@ -30,7 +30,8 @@ #define is_epte_present(ept_entry) ((ept_entry)->epte & 0x7) #define is_epte_superpage(ept_entry) ((ept_entry)->sp) -static inline bool_t is_epte_valid(ept_entry_t *e) + +static bool is_epte_valid(const ept_entry_t *e) { /* suppress_ve alone is not considered valid, so mask it off */ return ((e->epte & ~(1ul << 63)) != 0 && e->sa_p2mt != p2m_invalid); @@ -239,14 +240,14 @@ static void ept_free_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry, int l p2m_free_ptp(p2m, mfn_to_page(_mfn(ept_entry->mfn))); } 
-static bool_t ept_split_super_page(struct p2m_domain *p2m,
-                                   ept_entry_t *ept_entry,
-                                   unsigned int level, unsigned int target)
+static bool ept_split_super_page(
+    struct p2m_domain *p2m, ept_entry_t *ept_entry,
+    unsigned int level, unsigned int target)
 {
    ept_entry_t new_ept, *table;
    uint64_t trunk;
    unsigned int i;
-    bool_t rv = 1;
+    bool rv = true;

    /* End if the entry is a leaf entry or reaches the target level. */
    if ( level <= target )
@@ -305,7 +306,7 @@ static bool_t ept_split_super_page(struct p2m_domain *p2m,
 * GUEST_TABLE_POD:
 *  The next entry is marked populate-on-demand.
 */
-static int ept_next_level(struct p2m_domain *p2m, bool_t read_only,
+static int ept_next_level(struct p2m_domain *p2m, bool read_only,
                          ept_entry_t **table, unsigned long *gfn_remainder,
                          int next_level)
 {
@@ -678,7 +679,7 @@ static int cf_check resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                                                  _mfn(e.mfn), level * EPT_TABLE_ORDER,
                                                  &ipat, e.sa_p2mt);
-            bool_t recalc = e.recalc;
+            bool recalc = e.recalc;

            if ( recalc && p2m_is_changeable(e.sa_p2mt) )
            {
@@ -760,11 +761,11 @@ static int cf_check resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
    return rc;
 }

-bool_t ept_handle_misconfig(uint64_t gpa)
+bool ept_handle_misconfig(uint64_t gpa)
 {
    struct vcpu *curr = current;
    struct p2m_domain *p2m = p2m_get_hostp2m(curr->domain);
-    bool_t spurious;
+    bool spurious;
    int rc;

    if ( altp2m_active(curr->domain) )
@@ -798,11 +799,11 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
    unsigned int i, target = order / EPT_TABLE_ORDER;
    unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ? (gfn | mfn_x(mfn)) : gfn;
    int ret, rc = 0;
-    bool_t entry_written = 0;
-    bool_t need_modify_vtd_table = 1;
-    bool_t vtd_pte_present = 0;
+    bool entry_written = false;
+    bool need_modify_vtd_table = true;
+    bool vtd_pte_present = false;
    unsigned int iommu_flags = p2m_get_iommu_flags(p2mt, p2ma, mfn);
-    bool_t needs_sync = 1;
+    bool needs_sync = true;
    ept_entry_t old_entry = { .epte = 0 };
    ept_entry_t new_entry = { .epte = 0 };
    struct ept_data *ept = &p2m->ept;
@@ -1007,7 +1008,7 @@ static mfn_t cf_check ept_get_entry(
    ept_entry_t *ept_entry;
    u32 index;
    int i;
-    bool_t recalc = 0;
+    bool recalc = false;
    mfn_t mfn = INVALID_MFN;
    struct ept_data *ept = &p2m->ept;

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 9969eb45fa8c..9e5ad68df27c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -522,7 +522,7 @@ decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
 {
    unsigned long ret = 0, i, n;
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    bool_t steal_for_cache;
+    bool steal_for_cache;
    long pod = 0, ram = 0;

    gfn_lock(p2m, gfn, order);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index b2b14746c1c1..640a11f5647f 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -213,7 +213,7 @@ p2m_free_entry(struct p2m_domain *p2m, l1_pgentry_t *p2m_entry, int page_order)
 static int
 p2m_next_level(struct p2m_domain *p2m, void **table,
               unsigned long *gfn_remainder, unsigned long gfn, u32 shift,
-               u32 max, unsigned int level, bool_t unmap)
+               u32 max, unsigned int level, bool unmap)
 {
    l1_pgentry_t *p2m_entry, new_entry;
    void *next;
@@ -765,7 +765,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 static mfn_t cf_check
 p2m_pt_get_entry(struct p2m_domain *p2m, gfn_t gfn_,
                 p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
-                 unsigned int *page_order, bool_t *sve)
+                 unsigned int *page_order, bool *sve)
 {
    mfn_t mfn;
    unsigned long gfn = gfn_x(gfn_);
@@ -774,7 +774,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, gfn_t gfn_,
    l1_pgentry_t *l1e;
    unsigned int flags;
    p2m_type_t l1t;
-    bool_t recalc;
+    bool recalc;

    ASSERT(paging_mode_translate(p2m->domain));
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 0983bd71d9a9..fe9ccabb8702 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -39,7 +39,8 @@ DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);

 /* Turn on/off host superpage page table support for hap, default on. */
-bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
+bool __initdata opt_hap_1gb = true;
+bool __initdata opt_hap_2mb = true;
 boolean_param("hap_1gb", opt_hap_1gb);
 boolean_param("hap_2mb", opt_hap_2mb);

@@ -272,7 +273,7 @@ void p2m_unlock_and_tlb_flush(struct p2m_domain *p2m)

 mfn_t p2m_get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
                              p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
-                              unsigned int *page_order, bool_t locked)
+                              unsigned int *page_order, bool locked)
 {
    mfn_t mfn;

@@ -1765,10 +1766,10 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
        p2m_switch_vcpu_altp2m_by_id(v, idx);
 }

-bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 {
    struct domain *d = v->domain;
-    bool_t rc = 0;
+    bool rc = false;

    if ( idx >= MAX_ALTP2M )
        return rc;
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 34d833251b78..541c2ea9b225 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -224,7 +224,7 @@ static int paging_log_dirty_enable(struct domain *d)
    return ret;
 }

-static int paging_log_dirty_disable(struct domain *d, bool_t resuming)
+static int paging_log_dirty_disable(struct domain *d, bool resuming)
 {
    int ret = 1;

@@ -394,7 +394,7 @@ bool paging_mfn_is_dirty(const struct domain *d, mfn_t gmfn)
 * clear the bitmap and stats as well.
*/ static int paging_log_dirty_op(struct domain *d, struct xen_domctl_shadow_op *sc, - bool_t resuming) + bool resuming) { int rv = 0, clean = 0, peek = 1; unsigned long pages = 0; @@ -672,7 +672,7 @@ void paging_vcpu_init(struct vcpu *v) #if PG_log_dirty int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc, XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl, - bool_t resuming) + bool resuming) { int rc; diff --git a/xen/arch/x86/x86_64/mmconf-fam10h.c b/xen/arch/x86/x86_64/mmconf-fam10h.c index 36b32eb769e1..270dd97b6a31 100644 --- a/xen/arch/x86/x86_64/mmconf-fam10h.c +++ b/xen/arch/x86/x86_64/mmconf-fam10h.c @@ -142,7 +142,7 @@ static void __init get_fam10h_pci_mmconf_base(void) void fam10h_check_enable_mmcfg(void) { u64 val; - bool_t print = opt_cpu_info; + bool print = opt_cpu_info; if (!(pci_probe & PCI_CHECK_ENABLE_AMD_MMCONF)) return; diff --git a/xen/arch/x86/x86_64/mmconfig-shared.c b/xen/arch/x86/x86_64/mmconfig-shared.c index 5dee20fe9ddf..b3b2da73626f 100644 --- a/xen/arch/x86/x86_64/mmconfig-shared.c +++ b/xen/arch/x86/x86_64/mmconfig-shared.c @@ -192,7 +192,7 @@ static const char *__init cf_check pci_mmcfg_amd_fam10h(void) static const char *__init cf_check pci_mmcfg_nvidia_mcp55(void) { - static bool_t __initdata mcp55_checked; + static bool __initdata mcp55_checked; int bus, i; static const u32 extcfg_regnum = 0x90; @@ -361,11 +361,11 @@ static int __init is_mmconf_reserved( return valid; } -static bool_t __init pci_mmcfg_reject_broken(void) +static bool __init pci_mmcfg_reject_broken(void) { typeof(pci_mmcfg_config[0]) *cfg; int i; - bool_t valid = 1; + bool valid = true; if ((pci_mmcfg_config_num == 0) || (pci_mmcfg_config == NULL) || @@ -399,7 +399,7 @@ static bool_t __init pci_mmcfg_reject_broken(void) void __init acpi_mmcfg_init(void) { - bool_t valid = 1; + bool valid = true; pci_segments_init(); diff --git a/xen/arch/x86/x86_64/mmconfig_64.c b/xen/arch/x86/x86_64/mmconfig_64.c index 2b3085931ed3..ffdc62700dba 100644 --- 
a/xen/arch/x86/x86_64/mmconfig_64.c +++ b/xen/arch/x86/x86_64/mmconfig_64.c @@ -175,8 +175,7 @@ void pci_mmcfg_arch_disable(unsigned int idx) cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number); } -bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg, - unsigned int *bdf) +bool pci_mmcfg_decode(unsigned long mfn, unsigned int *seg, unsigned int *bdf) { unsigned int idx; @@ -197,8 +196,7 @@ bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg, return 0; } -bool_t pci_ro_mmcfg_decode(unsigned long mfn, unsigned int *seg, - unsigned int *bdf) +bool pci_ro_mmcfg_decode(unsigned long mfn, unsigned int *seg, unsigned int *bdf) { const unsigned long *ro_map;

From patchwork Mon Nov 20 14:56:22 2023
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 13461434
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, George Dunlap, Jan Beulich, Stefano Stabellini, Wei Liu, Julien Grall, Roger Pau Monné
Subject: [PATCH 2/3] xen/treewide: Switch bool_t to bool
Date: Mon, 20 Nov 2023 14:56:22 +0000
Message-ID: <20231120145623.167383-3-andrew.cooper3@citrix.com>
In-Reply-To: <20231120145623.167383-1-andrew.cooper3@citrix.com>
References: <20231120145623.167383-1-andrew.cooper3@citrix.com>

... as part of cleaning up the types used.

Minor style and/or MISRA cleanup on some altered lines.  No functional change.
Signed-off-by: Andrew Cooper Acked-by: Jan Beulich Acked-by: Julien Grall --- CC: George Dunlap CC: Jan Beulich CC: Stefano Stabellini CC: Wei Liu CC: Julien Grall CC: Roger Pau Monné --- xen/common/device_tree.c | 34 ++++++++++----------- xen/common/domain.c | 2 +- xen/common/domctl.c | 4 +-- xen/common/event_fifo.c | 4 +-- xen/common/grant_table.c | 14 ++++----- xen/common/kernel.c | 2 +- xen/common/kexec.c | 4 +-- xen/common/keyhandler.c | 4 +-- xen/common/kimage.c | 8 ++--- xen/common/livepatch.c | 20 ++++++------ xen/common/memory.c | 2 +- xen/common/notifier.c | 2 +- xen/common/preempt.c | 2 +- xen/common/rangeset.c | 12 ++++---- xen/common/shutdown.c | 2 +- xen/common/symbols.c | 2 +- xen/common/sysctl.c | 4 +-- xen/common/tasklet.c | 4 +-- xen/common/timer.c | 8 ++--- xen/common/trace.c | 12 ++++---- xen/drivers/acpi/apei/apei-base.c | 2 +- xen/drivers/acpi/apei/apei-internal.h | 2 +- xen/drivers/acpi/apei/erst.c | 2 +- xen/drivers/acpi/apei/hest.c | 2 +- xen/drivers/char/console.c | 8 ++--- xen/drivers/char/ehci-dbgp.c | 10 +++--- xen/drivers/char/ns16550.c | 14 ++++----- xen/drivers/char/pl011.c | 2 +- xen/drivers/char/serial.c | 2 +- xen/drivers/cpufreq/cpufreq.c | 2 +- xen/drivers/video/vesa.c | 2 +- xen/include/acpi/cpufreq/cpufreq.h | 10 +++--- xen/include/xen/device_tree.h | 44 +++++++++++++-------------- xen/include/xen/domain.h | 4 +-- xen/include/xen/gdbstub.h | 2 +- xen/include/xen/irq.h | 2 +- xen/include/xen/kernel.h | 2 +- xen/include/xen/kimage.h | 8 ++--- xen/include/xen/livepatch.h | 6 ++-- xen/include/xen/mm-frame.h | 4 +-- xen/include/xen/mm.h | 6 ++-- xen/include/xen/preempt.h | 2 +- xen/include/xen/rangeset.h | 8 ++--- xen/include/xen/rwlock.h | 2 +- xen/include/xen/serial.h | 2 +- xen/include/xen/shutdown.h | 2 +- xen/include/xen/tasklet.h | 6 ++-- 47 files changed, 152 insertions(+), 152 deletions(-) diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c index b1c29529514f..8d1017a49d80 100644 --- a/xen/common/device_tree.c 
+++ b/xen/common/device_tree.c @@ -78,7 +78,7 @@ struct dt_bus { const char *name; const char *addresses; - bool_t (*match)(const struct dt_device_node *node); + bool (*match)(const struct dt_device_node *node); void (*count_cells)(const struct dt_device_node *child, int *addrc, int *sizec); u64 (*map)(__be32 *addr, const __be32 *range, int na, int ns, int pna); @@ -162,8 +162,8 @@ const void *dt_get_property(const struct dt_device_node *np, return pp ? pp->value : NULL; } -bool_t dt_property_read_u32(const struct dt_device_node *np, - const char *name, u32 *out_value) +bool dt_property_read_u32(const struct dt_device_node *np, + const char *name, u32 *out_value) { u32 len; const __be32 *val; @@ -178,8 +178,8 @@ bool_t dt_property_read_u32(const struct dt_device_node *np, } -bool_t dt_property_read_u64(const struct dt_device_node *np, - const char *name, u64 *out_value) +bool dt_property_read_u64(const struct dt_device_node *np, + const char *name, u64 *out_value) { u32 len; const __be32 *val; @@ -297,8 +297,8 @@ int dt_property_match_string(const struct dt_device_node *np, return -ENODATA; } -bool_t dt_device_is_compatible(const struct dt_device_node *device, - const char *compat) +bool dt_device_is_compatible(const struct dt_device_node *device, + const char *compat) { const char* cp; u32 cplen, l; @@ -318,10 +318,10 @@ bool_t dt_device_is_compatible(const struct dt_device_node *device, return 0; } -bool_t dt_machine_is_compatible(const char *compat) +bool dt_machine_is_compatible(const char *compat) { const struct dt_device_node *root; - bool_t rc = 0; + bool rc = false; root = dt_find_node_by_path("/"); if ( root ) @@ -408,9 +408,9 @@ dt_match_node(const struct dt_device_match *matches, return NULL; while ( matches->path || matches->type || - matches->compatible || matches->not_available || matches->prop) + matches->compatible || matches->not_available || matches->prop ) { - bool_t match = 1; + bool match = true; if ( matches->path ) match &= 
dt_node_path_is_equal(node, matches->path); @@ -481,7 +481,7 @@ dt_find_matching_node(struct dt_device_node *from, return NULL; } -static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent) +static int __dt_n_addr_cells(const struct dt_device_node *np, bool parent) { const __be32 *ip; @@ -498,7 +498,7 @@ static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent) return DT_ROOT_NODE_ADDR_CELLS_DEFAULT; } -static int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent) +static int __dt_n_size_cells(const struct dt_device_node *np, bool parent) { const __be32 *ip; @@ -558,7 +558,7 @@ int dt_child_n_size_cells(const struct dt_device_node *parent) /* * Default translator (generic bus) */ -static bool_t dt_bus_default_match(const struct dt_device_node *node) +static bool dt_bus_default_match(const struct dt_device_node *node) { /* Root node doesn't have "ranges" property */ if ( node->parent == NULL ) @@ -636,7 +636,7 @@ static bool dt_node_is_pci(const struct dt_device_node *np) return is_pci; } -static bool_t dt_bus_pci_match(const struct dt_device_node *np) +static bool dt_bus_pci_match(const struct dt_device_node *np) { /* * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI @@ -1662,7 +1662,7 @@ int dt_device_get_irq(const struct dt_device_node *device, unsigned int index, return dt_irq_translate(&raw, out_irq); } -bool_t dt_device_is_available(const struct dt_device_node *device) +bool dt_device_is_available(const struct dt_device_node *device) { const char *status; u32 statlen; @@ -1680,7 +1680,7 @@ bool_t dt_device_is_available(const struct dt_device_node *device) return 0; } -bool_t dt_device_for_passthrough(const struct dt_device_node *device) +bool dt_device_for_passthrough(const struct dt_device_node *device) { return (dt_find_property(device, "xen,passthrough", NULL) != NULL); diff --git a/xen/common/domain.c b/xen/common/domain.c index 8f9ab01c0cb7..f15c2f1e95d5 100644 --- 
a/xen/common/domain.c +++ b/xen/common/domain.c @@ -51,7 +51,7 @@ unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX; /* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */ -bool_t opt_dom0_vcpus_pin; +bool opt_dom0_vcpus_pin; boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin); /* Protect updates/reads (resp.) of domain_list and domain_hash. */ diff --git a/xen/common/domctl.c b/xen/common/domctl.c index 505e29c0dcc2..f5a71ee5f78d 100644 --- a/xen/common/domctl.c +++ b/xen/common/domctl.c @@ -126,7 +126,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info) arch_get_domain_info(d, info); } -bool_t domctl_lock_acquire(void) +bool domctl_lock_acquire(void) { /* * Caller may try to pause its own VCPUs. We must prevent deadlock @@ -281,7 +281,7 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo, long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl) { long ret = 0; - bool_t copyback = 0; + bool copyback = false; struct xen_domctl curop, *op = &curop; struct domain *d; diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c index 6cebc3868a07..37cba9bc4564 100644 --- a/xen/common/event_fifo.c +++ b/xen/common/event_fifo.c @@ -124,8 +124,8 @@ static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link) * We block unmasking by the guest by marking the tail word as BUSY, * therefore, the cmpxchg() may fail at most 4 times. 
*/ -static bool_t evtchn_fifo_set_link(struct domain *d, event_word_t *word, - uint32_t link) +static bool evtchn_fifo_set_link(struct domain *d, event_word_t *word, + uint32_t link) { event_word_t w; unsigned int try; diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c index 89b7811c51c3..5721eab22561 100644 --- a/xen/common/grant_table.c +++ b/xen/common/grant_table.c @@ -2272,7 +2272,7 @@ gnttab_transfer( for ( i = 0; i < count; i++ ) { - bool_t okay; + bool okay; int rc; if ( i && hypercall_preempt_check() ) @@ -2858,9 +2858,9 @@ struct gnttab_copy_buf { mfn_t mfn; struct page_info *page; void *virt; - bool_t read_only; - bool_t have_grant; - bool_t have_type; + bool read_only; + bool have_grant; + bool have_type; }; static int gnttab_copy_lock_domain(domid_t domid, bool is_gref, @@ -3006,9 +3006,9 @@ static int gnttab_copy_claim_buf(const struct gnttab_copy *op, return rc; } -static bool_t gnttab_copy_buf_valid(const struct gnttab_copy_ptr *p, - const struct gnttab_copy_buf *b, - bool_t has_gref) +static bool gnttab_copy_buf_valid( + const struct gnttab_copy_ptr *p, const struct gnttab_copy_buf *b, + bool has_gref) { if ( !b->virt ) return 0; diff --git a/xen/common/kernel.c b/xen/common/kernel.c index e928d0b23128..08dbaa2a054c 100644 --- a/xen/common/kernel.c +++ b/xen/common/kernel.c @@ -504,7 +504,7 @@ __initcall(param_init); long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg) { - bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd); + bool deny = xsm_xen_version(XSM_OTHER, cmd); switch ( cmd ) { diff --git a/xen/common/kexec.c b/xen/common/kexec.c index 3ee5f05c2c1a..84fe8c35976e 100644 --- a/xen/common/kexec.c +++ b/xen/common/kexec.c @@ -937,7 +937,7 @@ static int kexec_segments_add_segment(unsigned int *nr_segments, static int kexec_segments_from_ind_page(mfn_t mfn, unsigned int *nr_segments, xen_kexec_segment_t *segments, - bool_t compat) + bool compat) { void *page; kimage_entry_t *entry; @@ -1215,7 +1215,7 @@ static int 
kexec_status(XEN_GUEST_HANDLE_PARAM(void) uarg) static int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg, - bool_t compat) + bool compat) { int ret = -EINVAL; diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c index f4752859cc7c..99a2d72a0202 100644 --- a/xen/common/keyhandler.c +++ b/xen/common/keyhandler.c @@ -24,7 +24,7 @@ #include static unsigned char keypress_key; -static bool_t alt_key_handling; +static bool alt_key_handling; static keyhandler_fn_t cf_check show_handlers, cf_check dump_hwdom_registers, cf_check dump_domains, cf_check read_clocks; @@ -39,7 +39,7 @@ static struct keyhandler { }; const char *desc; /* Description for help message. */ - bool_t irq_callback, /* Call in irq context? if not, tasklet context. */ + bool irq_callback, /* Call in irq context? if not, tasklet context. */ diagnostic; /* Include in 'dump all' handler. */ } key_table[128] __read_mostly = { diff --git a/xen/common/kimage.c b/xen/common/kimage.c index 210241dfb76c..9961eac187e9 100644 --- a/xen/common/kimage.c +++ b/xen/common/kimage.c @@ -833,21 +833,21 @@ int kimage_load_segments(struct kexec_image *image) return 0; } -kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool_t compat) +kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool compat) { if ( compat ) return (kimage_entry_t *)((uint32_t *)entry + 1); return entry + 1; } -mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool_t compat) +mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool compat) { if ( compat ) return maddr_to_mfn(*(uint32_t *)entry); return maddr_to_mfn(*entry); } -unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat) +unsigned long kimage_entry_ind(kimage_entry_t *entry, bool compat) { if ( compat ) return *(uint32_t *)entry & 0xf; @@ -855,7 +855,7 @@ unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat) } int kimage_build_ind(struct kexec_image *image, mfn_t ind_mfn, - bool_t compat) + bool compat) { void *page; 
kimage_entry_t *entry; diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c index d89a904bd4e1..845340c7f398 100644 --- a/xen/common/livepatch.c +++ b/xen/common/livepatch.c @@ -55,8 +55,8 @@ struct livepatch_work check_for_livepatch_work. */ uint32_t timeout; /* Timeout to do the operation. */ struct payload *data; /* The payload on which to act. */ - volatile bool_t do_work; /* Signals work to do. */ - volatile bool_t ready; /* Signals all CPUs synchronized. */ + volatile bool do_work; /* Signals work to do. */ + volatile bool ready; /* Signals all CPUs synchronized. */ unsigned int cmd; /* Action request: LIVEPATCH_ACTION_* */ }; @@ -69,7 +69,7 @@ static struct livepatch_work livepatch_work; * would hammer a global livepatch_work structure on every guest VMEXIT. * Having an per-cpu lessens the load. */ -static DEFINE_PER_CPU(bool_t, work_to_do); +static DEFINE_PER_CPU(bool, work_to_do); static DEFINE_PER_CPU(struct tasklet, livepatch_tasklet); static int get_name(const struct xen_livepatch_name *name, char *n) @@ -106,10 +106,10 @@ static int verify_payload(const struct xen_sysctl_livepatch_upload *upload, char return 0; } -bool_t is_patch(const void *ptr) +bool is_patch(const void *ptr) { const struct payload *data; - bool_t r = 0; + bool r = false; /* * Only RCU locking since this list is only ever changed during apply @@ -936,8 +936,8 @@ static int prepare_payload(struct payload *payload, return 0; } -static bool_t is_payload_symbol(const struct livepatch_elf *elf, - const struct livepatch_elf_sym *sym) +static bool is_payload_symbol(const struct livepatch_elf *elf, + const struct livepatch_elf_sym *sym) { if ( sym->sym->st_shndx == SHN_UNDEF || sym->sym->st_shndx >= elf->hdr->e_shnum ) @@ -1018,7 +1018,7 @@ static int build_symbol_table(struct payload *payload, for ( i = 0; i < nsyms; i++ ) { - bool_t found = 0; + bool found = 0; for ( j = 0; j < payload->nfuncs; j++ ) { @@ -1576,7 +1576,7 @@ static void livepatch_do_action(void) data->rc = rc; } 
-static bool_t is_work_scheduled(const struct payload *data) +static bool is_work_scheduled(const struct payload *data) { ASSERT(spin_is_locked(&payload_lock)); @@ -1864,7 +1864,7 @@ void check_for_livepatch_work(void) * Unless the 'internal' parameter is used - in which case we only * check against the hypervisor. */ -static int build_id_dep(struct payload *payload, bool_t internal) +static int build_id_dep(struct payload *payload, bool internal) { const void *id = NULL; unsigned int len = 0; diff --git a/xen/common/memory.c b/xen/common/memory.c index fa165ebc144b..b3b05c2ec090 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -757,7 +757,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) MEMF_no_refcount) ) { unsigned long dec_count; - bool_t drop_dom_ref; + bool drop_dom_ref; /* * Pages in in_chunk_list is stolen without diff --git a/xen/common/notifier.c b/xen/common/notifier.c index c9ea44db419a..0f9aa0f93fb0 100644 --- a/xen/common/notifier.c +++ b/xen/common/notifier.c @@ -72,7 +72,7 @@ int notifier_call_chain( int ret = NOTIFY_DONE; struct list_head *cursor; struct notifier_block *nb = NULL; - bool_t reverse = !!(val & NOTIFY_REVERSE); + bool reverse = val & NOTIFY_REVERSE; cursor = pcursor && *pcursor ? 
&(*pcursor)->chain : &nh->head; diff --git a/xen/common/preempt.c b/xen/common/preempt.c index 3b4178fd44ac..0d2dd51ac241 100644 --- a/xen/common/preempt.c +++ b/xen/common/preempt.c @@ -25,7 +25,7 @@ DEFINE_PER_CPU(unsigned int, __preempt_count); -bool_t in_atomic(void) +bool in_atomic(void) { return preempt_count() || in_irq() || !local_irq_is_enabled(); } diff --git a/xen/common/rangeset.c b/xen/common/rangeset.c index f3baf52ab6f9..27ba6099b582 100644 --- a/xen/common/rangeset.c +++ b/xen/common/rangeset.c @@ -248,11 +248,11 @@ int rangeset_remove_range( return rc; } -bool_t rangeset_contains_range( +bool rangeset_contains_range( struct rangeset *r, unsigned long s, unsigned long e) { struct range *x; - bool_t contains; + bool contains; ASSERT(s <= e); @@ -267,11 +267,11 @@ bool_t rangeset_contains_range( return contains; } -bool_t rangeset_overlaps_range( +bool rangeset_overlaps_range( struct rangeset *r, unsigned long s, unsigned long e) { struct range *x; - bool_t overlaps; + bool overlaps; ASSERT(s <= e); @@ -408,13 +408,13 @@ int rangeset_remove_singleton( return rangeset_remove_range(r, s, s); } -bool_t rangeset_contains_singleton( +bool rangeset_contains_singleton( struct rangeset *r, unsigned long s) { return rangeset_contains_range(r, s, s); } -bool_t rangeset_is_empty( +bool rangeset_is_empty( const struct rangeset *r) { return ((r == NULL) || list_empty(&r->range_list)); diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c index a933ee001ea4..37901a4f3391 100644 --- a/xen/common/shutdown.c +++ b/xen/common/shutdown.c @@ -12,7 +12,7 @@ #include /* opt_noreboot: If true, machine will need manual reset on error. 
*/ -bool_t __read_mostly opt_noreboot; +bool __read_mostly opt_noreboot; boolean_param("noreboot", opt_noreboot); static void noreturn maybe_reboot(void) diff --git a/xen/common/symbols.c b/xen/common/symbols.c index 691e61792506..133b58076823 100644 --- a/xen/common/symbols.c +++ b/xen/common/symbols.c @@ -98,7 +98,7 @@ static unsigned int get_symbol_offset(unsigned long pos) return name - symbols_names; } -bool_t is_active_kernel_text(unsigned long addr) +bool is_active_kernel_text(unsigned long addr) { return !!find_text_region(addr); } diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c index 7cabfb023053..3e2cc4906c10 100644 --- a/xen/common/sysctl.c +++ b/xen/common/sysctl.c @@ -297,8 +297,8 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl) { unsigned int i, j, num_nodes; struct xen_sysctl_numainfo *ni = &op->u.numainfo; - bool_t do_meminfo = !guest_handle_is_null(ni->meminfo); - bool_t do_distance = !guest_handle_is_null(ni->distance); + bool do_meminfo = !guest_handle_is_null(ni->meminfo); + bool do_distance = !guest_handle_is_null(ni->distance); num_nodes = last_node(node_online_map) + 1; diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c index 3ad67b5c2493..c8abad3c758a 100644 --- a/xen/common/tasklet.c +++ b/xen/common/tasklet.c @@ -20,7 +20,7 @@ #include /* Some subsystems call into us before we are initialised. We ignore them. 
 */
-static bool_t tasklets_initialised;
+static bool tasklets_initialised;

 DEFINE_PER_CPU(unsigned long, tasklet_work_to_do);
@@ -37,7 +37,7 @@ static void tasklet_enqueue(struct tasklet *t)
     if ( t->is_softirq )
     {
         struct list_head *list = &per_cpu(softirq_tasklet_list, cpu);
-        bool_t was_empty = list_empty(list);
+        bool was_empty = list_empty(list);

         list_add_tail(&t->list, list);
         if ( was_empty )
             cpu_raise_softirq(cpu, TASKLET_SOFTIRQ);
diff --git a/xen/common/timer.c b/xen/common/timer.c
index 0fddfa74879e..47e060e4e98d 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -239,7 +239,7 @@ static inline void deactivate_timer(struct timer *timer)
     list_add(&timer->inactive, &per_cpu(timers, timer->cpu).inactive);
 }

-static inline bool_t timer_lock(struct timer *timer)
+static inline bool timer_lock(struct timer *timer)
 {
     unsigned int cpu;

@@ -264,7 +264,7 @@ static inline bool_t timer_lock(struct timer *timer)
 }

 #define timer_lock_irqsave(t, flags) ({         \
-    bool_t __x;                                 \
+    bool __x;                                   \
     local_irq_save(flags);                      \
     if ( !(__x = timer_lock(t)) )               \
         local_irq_restore(flags);               \
@@ -358,7 +358,7 @@ void migrate_timer(struct timer *timer, unsigned int new_cpu)
 {
     unsigned int old_cpu;
 #if CONFIG_NR_CPUS > 1
-    bool_t active;
+    bool active;
     unsigned long flags;

     rcu_read_lock(&timer_cpu_read_lock);
@@ -580,7 +580,7 @@ static void migrate_timers_from_cpu(unsigned int old_cpu)
     unsigned int new_cpu = cpumask_any(&cpu_online_map);
     struct timers *old_ts, *new_ts;
     struct timer *t;
-    bool_t notify = 0;
+    bool notify = false;

     ASSERT(!cpu_online(old_cpu) && cpu_online(new_cpu));
diff --git a/xen/common/trace.c b/xen/common/trace.c
index 17d62f70561f..4e7b080e6154 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -433,7 +433,7 @@ int tb_control(struct xen_sysctl_tbuf_op *tbc)
     return rc;
 }

-static inline unsigned int calc_rec_size(bool_t cycles, unsigned int extra)
+static inline unsigned int calc_rec_size(bool cycles, unsigned int extra)
 {
     unsigned int rec_size = 4;

@@ -443,7 +443,7 @@ static inline unsigned int calc_rec_size(bool_t cycles, unsigned int extra)
     return rec_size;
 }

-static inline bool_t bogus(u32 prod, u32 cons)
+static inline bool bogus(u32 prod, u32 cons)
 {
     if ( unlikely(prod & 3) || unlikely(prod >= 2 * data_size) ||
          unlikely(cons & 3) || unlikely(cons >= 2 * data_size) )
@@ -546,7 +546,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next,
 static inline void __insert_record(struct t_buf *buf,
                                    unsigned long event,
                                    unsigned int extra,
-                                   bool_t cycles,
+                                   bool cycles,
                                    unsigned int rec_size,
                                    const void *extra_data)
 {
@@ -617,7 +617,7 @@ static inline void insert_wrap_record(struct t_buf *buf,
 {
     u32 space_left = calc_bytes_to_wrap(buf);
     unsigned int extra_space = space_left - sizeof(u32);
-    bool_t cycles = 0;
+    bool cycles = false;

     BUG_ON(space_left > size);
@@ -674,14 +674,14 @@ static DECLARE_SOFTIRQ_TASKLET(trace_notify_dom0_tasklet,
 *
 * Logs a trace record into the appropriate buffer.
 */
-void __trace_var(u32 event, bool_t cycles, unsigned int extra,
+void __trace_var(u32 event, bool cycles, unsigned int extra,
                  const void *extra_data)
 {
     struct t_buf *buf;
     unsigned long flags;
     u32 bytes_to_tail, bytes_to_wrap;
     unsigned int rec_size, total_size;
-    bool_t started_below_highwater;
+    bool started_below_highwater;

     if( !tb_init_done )
         return;
diff --git a/xen/drivers/acpi/apei/apei-base.c b/xen/drivers/acpi/apei/apei-base.c
index de75c1cef992..053a92c307bb 100644
--- a/xen/drivers/acpi/apei/apei-base.c
+++ b/xen/drivers/acpi/apei/apei-base.c
@@ -154,7 +154,7 @@ int cf_check apei_exec_noop(
 * execute all instructions belong to the action.
 */
int __apei_exec_run(struct apei_exec_context *ctx, u8 action,
-		    bool_t optional)
+		    bool optional)
{
	int rc = -ENOENT;
	u32 i, ip;
diff --git a/xen/drivers/acpi/apei/apei-internal.h b/xen/drivers/acpi/apei/apei-internal.h
index 360e94b9c877..90233077b7f9 100644
--- a/xen/drivers/acpi/apei/apei-internal.h
+++ b/xen/drivers/acpi/apei/apei-internal.h
@@ -48,7 +48,7 @@ static inline u64 apei_exec_ctx_get_output(struct apei_exec_context *ctx)
	return ctx->value;
 }

-int __apei_exec_run(struct apei_exec_context *ctx, u8 action, bool_t optional);
+int __apei_exec_run(struct apei_exec_context *ctx, u8 action, bool optional);

 static inline int apei_exec_run(struct apei_exec_context *ctx, u8 action)
 {
diff --git a/xen/drivers/acpi/apei/erst.c b/xen/drivers/acpi/apei/erst.c
index 40d8f00270d0..e006b3def2be 100644
--- a/xen/drivers/acpi/apei/erst.c
+++ b/xen/drivers/acpi/apei/erst.c
@@ -58,7 +58,7 @@
 #define FIRMWARE_MAX_STALL 50 /* 50us */

 static struct acpi_table_erst *__read_mostly erst_tab;
-static bool_t __read_mostly erst_enabled;
+static bool __read_mostly erst_enabled;

 /* ERST Error Log Address Range atrributes */
 #define ERST_RANGE_RESERVED 0x0001
diff --git a/xen/drivers/acpi/apei/hest.c b/xen/drivers/acpi/apei/hest.c
index 5881275d2f37..4ec28c3c11ba 100644
--- a/xen/drivers/acpi/apei/hest.c
+++ b/xen/drivers/acpi/apei/hest.c
@@ -39,7 +39,7 @@
 #define HEST_PFX "HEST: "

-static bool_t hest_disable;
+static bool hest_disable;
 boolean_param("hest_disable", hest_disable);

 /* HEST table parsing */
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 4824d4a91d45..946af5e62535 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -54,7 +54,7 @@ static unsigned char __read_mostly opt_conswitch[3] = "a";
 string_runtime_param("conswitch", opt_conswitch);

 /* sync_console: force synchronous console output (useful for debugging). */
-static bool_t __initdata opt_sync_console;
+static bool __initdata opt_sync_console;
 boolean_param("sync_console", opt_sync_console);
 static const char __initconst warning_sync_console[] =
     "WARNING: CONSOLE OUTPUT IS SYNCHRONOUS\n"
@@ -64,7 +64,7 @@ static const char __initconst warning_sync_console[] =
     "timekeeping. It is NOT recommended for production use!\n";

 /* console_to_ring: send guest (incl. dom 0) console data to console ring. */
-static bool_t __read_mostly opt_console_to_ring;
+static bool __read_mostly opt_console_to_ring;
 boolean_param("console_to_ring", opt_console_to_ring);

 /* console_timestamps: include a timestamp prefix on every Xen console line. */
@@ -760,7 +760,7 @@ long do_console_io(
 *
 *****************************************************
 */

-static bool_t console_locks_busted;
+static bool console_locks_busted;

 static void __putstr(const char *str)
 {
@@ -911,7 +911,7 @@ static void printk_start_of_line(const char *prefix)
 static void vprintk_common(const char *prefix, const char *fmt, va_list args)
 {
     struct vps {
-        bool_t continued, do_print;
+        bool continued, do_print;
     } *state;
     static DEFINE_PER_CPU(struct vps, state);
     static char buf[1024];
diff --git a/xen/drivers/char/ehci-dbgp.c b/xen/drivers/char/ehci-dbgp.c
index 00cbdd5454dd..72e1beabbbcd 100644
--- a/xen/drivers/char/ehci-dbgp.c
+++ b/xen/drivers/char/ehci-dbgp.c
@@ -332,7 +332,7 @@ struct ehci_dbgp {
     unsigned long timeout;
     struct timer timer;
     spinlock_t *lock;
-    bool_t reset_run;
+    bool reset_run;
     u8 bus, slot, func, bar;
     u16 pci_cr;
     u32 bar_val;
@@ -639,7 +639,7 @@ static int dbgp_control_msg(struct ehci_dbgp *dbgp, unsigned int devnum,
 {
     u32 addr, pids, ctrl;
     struct usb_ctrlrequest req;
-    bool_t read = (requesttype & USB_DIR_IN) != 0;
+    bool read = (requesttype & USB_DIR_IN) != 0;
     int ret;

     if ( size > (read ?
DBGP_MAX_PACKET : 0) )
@@ -873,7 +873,7 @@ static int ehci_dbgp_external_startup(struct ehci_dbgp *dbgp)
     unsigned int dbg_port = dbgp->phys_port;
     unsigned int tries = 3;
     unsigned int reset_port_tries = 1;
-    bool_t try_hard_once = 1;
+    bool try_hard_once = true;

 try_port_reset_again:
     ret = ehci_dbgp_startup(dbgp);
@@ -1252,7 +1252,7 @@ static void cf_check _ehci_dbgp_poll(struct cpu_user_regs *regs)
     struct ehci_dbgp *dbgp = port->uart;
     unsigned long flags;
     unsigned int timeout = MICROSECS(DBGP_CHECK_INTERVAL);
-    bool_t empty = 0;
+    bool empty = false;

     if ( !dbgp->ehci_debug )
         return;
@@ -1300,7 +1300,7 @@ static void cf_check ehci_dbgp_poll(void *data)
 #endif
 }

-static bool_t ehci_dbgp_setup_preirq(struct ehci_dbgp *dbgp)
+static bool ehci_dbgp_setup_preirq(struct ehci_dbgp *dbgp)
 {
     if ( !ehci_dbgp_setup(dbgp) )
         return 1;
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 28ddedd50d44..ddf2a48be6e7 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -58,12 +58,12 @@ static struct ns16550 {
     struct timer timer;
     struct timer resume_timer;
     unsigned int timeout_ms;
-    bool_t intr_works;
-    bool_t dw_usr_bsy;
+    bool intr_works;
+    bool dw_usr_bsy;
 #ifdef NS16550_PCI
     /* PCI card parameters. */
-    bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
-    bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
+    bool pb_bdf_enable;     /* if =1, pb-bdf effective, port behind bridge */
+    bool ps_bdf_enable;     /* if =1, ps_bdf effective, port on pci card */
     unsigned int pb_bdf[3]; /* pci bridge BDF */
     unsigned int ps_bdf[3]; /* pci serial port BDF */
     u32 bar;
@@ -101,8 +101,8 @@ struct ns16550_config_param {
     unsigned int reg_width;
     unsigned int fifo_size;
     u8 lsr_mask;
-    bool_t mmio;
-    bool_t bar0;
+    bool mmio;
+    bool bar0;
     unsigned int max_ports;
     unsigned int base_baud;
     unsigned int uart_offset;
@@ -1172,7 +1172,7 @@ static const struct ns16550_config __initconst uart_config[] =
 };

 static int __init
-pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
+pci_uart_config(struct ns16550 *uart, bool skip_amt, unsigned int idx)
 {
     u64 orig_base = uart->io_base;
     unsigned int b, d, f, nextf, i;
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index f7bf3ad117af..513b373b2e23 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -39,7 +39,7 @@ static struct pl011 {
     /* /\* UART with no IRQ line: periodically-polled I/O. *\/ */
     /* struct timer timer; */
    /* unsigned int timeout_ms; */
-    /* bool_t probing, intr_works; */
+    /* bool probing, intr_works; */
     bool sbsa;   /* ARM SBSA generic interface */
     bool mmio32; /* 32-bit only MMIO */
 } pl011_com = {0};
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 00efe69574f3..6d792f06dd7d 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -29,7 +29,7 @@ static struct serial_port com[SERHND_IDX + 1] = {
     }
 };

-static bool_t __read_mostly post_irq;
+static bool __read_mostly post_irq;

 static inline void serial_start_tx(struct serial_port *port)
 {
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 8d1e789eab8e..430351b775db 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -139,7 +139,7 @@ static int __init cf_check setup_cpufreq_option(const char *str)
 }
 custom_param("cpufreq", setup_cpufreq_option);

-bool_t __read_mostly cpufreq_verbose;
+bool __read_mostly cpufreq_verbose;

 struct cpufreq_governor *__find_governor(const char *governor)
 {
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index b007ff5678ef..70feca21ac4f 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -145,7 +145,7 @@ static void cf_check lfb_flush(void)
     __asm__ __volatile__ ("sfence" : : : "memory");
 }

-void __init vesa_endboot(bool_t keep)
+void __init vesa_endboot(bool keep)
 {
     if ( keep )
     {
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 281e3f513d66..b0c860f0ec21 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -22,7 +22,7 @@
 DECLARE_PER_CPU(spinlock_t, cpufreq_statistic_lock);

-extern bool_t cpufreq_verbose;
+extern bool cpufreq_verbose;

 enum cpufreq_xen_opt {
     CPUFREQ_none,
@@ -52,8 +52,8 @@ struct cpufreq_cpuinfo {
 };

 struct perf_limits {
-    bool_t no_turbo;
-    bool_t turbo_disabled;
+    bool no_turbo;
+    bool turbo_disabled;
     uint32_t turbo_pct;
     uint32_t max_perf_pct; /* max performance in percentage */
     uint32_t min_perf_pct; /* min performance in percentage */
@@ -77,7 +77,7 @@ struct cpufreq_policy {
     struct perf_limits limits;
     struct cpufreq_governor *governor;

-    bool_t resume; /* flag for cpufreq 1st run
+    bool resume;   /* flag for cpufreq 1st run
                    * S3 wakeup, hotplug cpu, etc */
     s8 turbo;      /* tristate flag: 0 for unsupported
                    * -1 for disable, 1 for enabled
@@ -114,7 +114,7 @@ struct cpufreq_governor {
     char name[CPUFREQ_NAME_LEN];
     int (*governor)(struct cpufreq_policy *policy, unsigned int event);
-    bool_t (*handle_option)(const char *name, const char *value);
+    bool (*handle_option)(const char *name, const char *value);
     struct list_head governor_list;
 };
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a262bba2edaf..3ae7b45429b6 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -29,7 +29,7 @@ struct dt_device_match {
     const char *path;
     const char *type;
     const char *compatible;
-    const bool_t not_available;
+    const bool not_available;
     /*
      * Property name to search for. We only search for the property's
      * existence.
@@ -95,7 +95,7 @@ struct dt_device_node {
     bool is_protected;

     /* HACK: Remove this if there is a need of space */
-    bool_t static_evtchn_created;
+    bool static_evtchn_created;

     /*
      * The main purpose of this list is to link the structure in the list
@@ -138,7 +138,7 @@ struct dt_irq {
 };

 /* If type == DT_IRQ_TYPE_NONE, assume we use level triggered */
-static inline bool_t dt_irq_is_level_triggered(const struct dt_irq *irq)
+static inline bool dt_irq_is_level_triggered(const struct dt_irq *irq)
 {
     unsigned int type = irq->type;

@@ -319,19 +319,19 @@ static inline const char *dt_node_name(const struct dt_device_node *np)
     return (np && np->name) ?
        np->name : "";
 }

-static inline bool_t dt_node_name_is_equal(const struct dt_device_node *np,
-                                           const char *name)
+static inline bool dt_node_name_is_equal(const struct dt_device_node *np,
+                                         const char *name)
 {
     return !dt_node_cmp(np->name, name);
 }

-static inline bool_t dt_node_path_is_equal(const struct dt_device_node *np,
-                                           const char *path)
+static inline bool dt_node_path_is_equal(const struct dt_device_node *np,
+                                         const char *path)
 {
     return !dt_node_cmp(np->full_name, path);
 }

-static inline bool_t
+static inline bool
 dt_device_type_is_equal(const struct dt_device_node *device,
                         const char *type)
 {
@@ -360,8 +360,8 @@ static inline bool dt_device_is_protected(const struct dt_device_node *device)
     return device->is_protected;
 }

-static inline bool_t dt_property_name_is_equal(const struct dt_property *pp,
-                                               const char *name)
+static inline bool dt_property_name_is_equal(const struct dt_property *pp,
+                                             const char *name)
 {
     return !dt_prop_cmp(pp->name, name);
 }
@@ -372,7 +372,7 @@ dt_device_set_static_evtchn_created(struct dt_device_node *device)
     device->static_evtchn_created = true;
 }

-static inline bool_t
+static inline bool
 dt_device_static_evtchn_created(const struct dt_device_node *device)
 {
     return device->static_evtchn_created;
@@ -414,8 +414,8 @@ const struct dt_property *dt_find_property(const struct dt_device_node *np,
 *
 * Return true if get the desired value.
 */
-bool_t dt_property_read_u32(const struct dt_device_node *np,
-                            const char *name, u32 *out_value);
+bool dt_property_read_u32(const struct dt_device_node *np,
+                          const char *name, u32 *out_value);

 /**
 * dt_property_read_u64 - Helper to read a u64 property.
 * @np: node to get the value
 *
 * Return true if get the desired value.
 */
-bool_t dt_property_read_u64(const struct dt_device_node *np,
-                            const char *name, u64 *out_value);
+bool dt_property_read_u64(const struct dt_device_node *np,
+                          const char *name, u64 *out_value);

 /**
@@ -491,8 +491,8 @@ static inline int dt_property_read_u32_array(const struct dt_device_node *np,
 * Search for a property in a device node.
 * Return true if the property exists false otherwise.
 */
-static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
-                                           const char *name)
+static inline bool dt_property_read_bool(const struct dt_device_node *np,
+                                         const char *name)
 {
     const struct dt_property *prop = dt_find_property(np, name, NULL);
@@ -536,8 +536,8 @@ int dt_property_match_string(const struct dt_device_node *np,
 * Checks if the given "compat" string matches one of the strings in
 * the device's "compatible" property
 */
-bool_t dt_device_is_compatible(const struct dt_device_node *device,
-                               const char *compat);
+bool dt_device_is_compatible(const struct dt_device_node *device,
+                             const char *compat);

 /**
 * dt_machine_is_compatible - Test root of device tree for a given compatible value
 *
@@ -546,7 +546,7 @@ bool_t dt_device_is_compatible(const struct dt_device_node *device,
 * Returns true if the root node has the given value in its
 * compatible property.
 */
-bool_t dt_machine_is_compatible(const char *compat);
+bool dt_machine_is_compatible(const char *compat);

 /**
 * dt_find_node_by_name - Find a node by its "name" property
@@ -764,7 +764,7 @@ int dt_child_n_addr_cells(const struct dt_device_node *parent);
 * Returns true if the status property is absent or set to "okay" or "ok",
 * false otherwise.
 */
-bool_t dt_device_is_available(const struct dt_device_node *device);
+bool dt_device_is_available(const struct dt_device_node *device);

 /**
 * dt_device_for_passthrough - Check if a device will be used for
@@ -775,7 +775,7 @@ bool_t dt_device_is_available(const struct dt_device_node *device);
 * Return true if the property "xen,passthrough" is present in the node,
 * false otherwise.
 */
-bool_t dt_device_for_passthrough(const struct dt_device_node *device);
+bool dt_device_for_passthrough(const struct dt_device_node *device);

 /**
 * dt_match_node - Tell if a device_node has a matching of dt_device_match
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 54d88bf5e34b..460c8c3d27b3 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -124,7 +124,7 @@ void arch_dump_domain_info(struct domain *d);

 int arch_vcpu_reset(struct vcpu *);

-bool_t domctl_lock_acquire(void);
+bool domctl_lock_acquire(void);
 void domctl_lock_release(void);

 /*
@@ -144,7 +144,7 @@ void arch_hypercall_tasklet_result(struct vcpu *v, long res);

 extern unsigned int xen_processor_pmbits;

-extern bool_t opt_dom0_vcpus_pin;
+extern bool opt_dom0_vcpus_pin;
 extern cpumask_t dom0_cpus;
 extern bool dom0_affinity_relaxed;
diff --git a/xen/include/xen/gdbstub.h b/xen/include/xen/gdbstub.h
index 18c960969b76..d2efeb0e3ae1 100644
--- a/xen/include/xen/gdbstub.h
+++ b/xen/include/xen/gdbstub.h
@@ -30,7 +30,7 @@ struct cpu_user_regs;
 struct gdb_context {
     int serhnd;           /* handle on our serial line */
     int console_steal_id; /* handle on stolen console */
-    bool_t currently_attached;
+    bool currently_attached;
     atomic_t running;
     unsigned long connected;
     u8 signum;
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 58d462e8e6c9..0bdfe2957640 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -21,7 +21,7 @@ struct irqaction {
     void (*handler)(int irq, void *dev_id, struct cpu_user_regs *regs);
     const char *name;
     void *dev_id;
-    bool_t free_on_release;
+    bool free_on_release;
 #ifdef CONFIG_IRQ_HAS_MULTIPLE_ACTION
     struct irqaction *next;
 #endif
diff --git a/xen/include/xen/kernel.h b/xen/include/xen/kernel.h
index 2c5ed7736c99..560b1c28322f 100644
--- a/xen/include/xen/kernel.h
+++ b/xen/include/xen/kernel.h
@@ -102,7 +102,7 @@ extern enum system_state {
     SYS_STATE_resume
 } system_state;

-bool_t is_active_kernel_text(unsigned long addr);
+bool is_active_kernel_text(unsigned long addr);

 extern const char xen_config_data[];
 extern const unsigned int xen_config_data_size;
diff --git a/xen/include/xen/kimage.h b/xen/include/xen/kimage.h
index cbfb9e9054df..348f07f5c881 100644
--- a/xen/include/xen/kimage.h
+++ b/xen/include/xen/kimage.h
@@ -47,11 +47,11 @@ int kimage_load_segments(struct kexec_image *image);
 struct page_info *kimage_alloc_control_page(struct kexec_image *image,
                                             unsigned memflags);

-kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool_t compat);
-mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool_t compat);
-unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat);
+kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool compat);
+mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool compat);
+unsigned long kimage_entry_ind(kimage_entry_t *entry, bool compat);
 int kimage_build_ind(struct kexec_image *image, mfn_t ind_mfn,
-                     bool_t compat);
+                     bool compat);

 #endif /* __ASSEMBLY__ */
diff --git a/xen/include/xen/livepatch.h b/xen/include/xen/livepatch.h
index 9fdb29c382b6..458eef57a7d1 100644
--- a/xen/include/xen/livepatch.h
+++ b/xen/include/xen/livepatch.h
@@ -48,13 +48,13 @@ struct livepatch_symbol {
     const char *name;
     unsigned long value;
     unsigned int size;
-    bool_t new_symbol;
+    bool new_symbol;
 };

 int livepatch_op(struct xen_sysctl_livepatch_op *);
 void check_for_livepatch_work(void);
 unsigned long livepatch_symbols_lookup_by_name(const char *symname);
-bool_t is_patch(const void *addr);
+bool is_patch(const void *addr);

 /* Arch hooks.
 */
int arch_livepatch_verify_elf(const struct livepatch_elf *elf);
@@ -169,7 +169,7 @@ static inline int livepatch_op(struct xen_sysctl_livepatch_op *op)
 }
 static inline void check_for_livepatch_work(void) { };
-static inline bool_t is_patch(const void *addr)
+static inline bool is_patch(const void *addr)
 {
     return 0;
 }
diff --git a/xen/include/xen/mm-frame.h b/xen/include/xen/mm-frame.h
index 0105ed01300a..922ae418807a 100644
--- a/xen/include/xen/mm-frame.h
+++ b/xen/include/xen/mm-frame.h
@@ -38,7 +38,7 @@ static inline mfn_t mfn_min(mfn_t x, mfn_t y)
     return _mfn(min(mfn_x(x), mfn_x(y)));
 }

-static inline bool_t mfn_eq(mfn_t x, mfn_t y)
+static inline bool mfn_eq(mfn_t x, mfn_t y)
 {
     return mfn_x(x) == mfn_x(y);
 }
@@ -77,7 +77,7 @@ static inline gfn_t gfn_min(gfn_t x, gfn_t y)
     return _gfn(min(gfn_x(x), gfn_x(y)));
 }

-static inline bool_t gfn_eq(gfn_t x, gfn_t y)
+static inline bool gfn_eq(gfn_t x, gfn_t y)
 {
     return gfn_x(x) == gfn_x(y);
 }
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 8b9618609f77..595629cf3fda 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -254,7 +254,7 @@ struct page_list_head
 # define INIT_PAGE_LIST_HEAD(head) ((head)->tail = (head)->next = NULL)
 # define INIT_PAGE_LIST_ENTRY(ent) ((ent)->prev = (ent)->next = PAGE_LIST_NULL)

-static inline bool_t
+static inline bool
 page_list_empty(const struct page_list_head *head)
 {
     return !head->next;
@@ -313,7 +313,7 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
     }
     head->tail = page;
 }
-static inline bool_t
+static inline bool
 __page_list_del_head(struct page_info *page, struct page_list_head *head,
                      struct page_info *next, struct page_info *prev)
 {
@@ -427,7 +427,7 @@ page_list_splice(struct page_list_head *list, struct page_list_head *head)
 # define INIT_PAGE_LIST_HEAD INIT_LIST_HEAD
 # define INIT_PAGE_LIST_ENTRY INIT_LIST_HEAD

-static inline bool_t
+static inline bool
 page_list_empty(const struct page_list_head *head)
 {
     return !!list_empty(head);
diff --git a/xen/include/xen/preempt.h b/xen/include/xen/preempt.h
index bef83135a1b8..aa059b497b29 100644
--- a/xen/include/xen/preempt.h
+++ b/xen/include/xen/preempt.h
@@ -26,7 +26,7 @@ DECLARE_PER_CPU(unsigned int, __preempt_count);
     preempt_count()--;                          \
 } while (0)

-bool_t in_atomic(void);
+bool in_atomic(void);

 #ifndef NDEBUG
 void ASSERT_NOT_IN_ATOMIC(void);
diff --git a/xen/include/xen/rangeset.h b/xen/include/xen/rangeset.h
index 135f33f6066f..a211e3dfac1d 100644
--- a/xen/include/xen/rangeset.h
+++ b/xen/include/xen/rangeset.h
@@ -52,7 +52,7 @@ void rangeset_limit(
 #define _RANGESETF_prettyprint_hex 0
 #define RANGESETF_prettyprint_hex (1U << _RANGESETF_prettyprint_hex)

-bool_t __must_check rangeset_is_empty(
+bool __must_check rangeset_is_empty(
     const struct rangeset *r);

 /* Add/claim/remove/query a numeric range. */
@@ -62,9 +62,9 @@ int __must_check rangeset_claim_range(struct rangeset *r, unsigned long size,
                                      unsigned long *s);
 int __must_check rangeset_remove_range(
     struct rangeset *r, unsigned long s, unsigned long e);
-bool_t __must_check rangeset_contains_range(
+bool __must_check rangeset_contains_range(
     struct rangeset *r, unsigned long s, unsigned long e);
-bool_t __must_check rangeset_overlaps_range(
+bool __must_check rangeset_overlaps_range(
     struct rangeset *r, unsigned long s, unsigned long e);
 int rangeset_report_ranges(
     struct rangeset *r, unsigned long s, unsigned long e,
@@ -88,7 +88,7 @@ int __must_check rangeset_add_singleton(
     struct rangeset *r, unsigned long s);
 int __must_check rangeset_remove_singleton(
     struct rangeset *r, unsigned long s);
-bool_t __must_check rangeset_contains_singleton(
+bool __must_check rangeset_contains_singleton(
     struct rangeset *r, unsigned long s);

 /* swap contents */
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index e0d2b41c5c7e..08ba46de1552 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -293,7 +293,7 @@ typedef struct percpu_rwlock percpu_rwlock_t;

 struct percpu_rwlock {
     rwlock_t rwlock;
-    bool_t writer_activating;
+    bool writer_activating;
 #ifndef NDEBUG
     percpu_rwlock_t **percpu_owner;
 #endif
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index f0aff7ea7661..cf9701986fe1 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -48,7 +48,7 @@ struct serial_port {
     /* Transmit data buffer (interrupt-driven uart). */
     char *txbuf;
     unsigned int txbufp, txbufc;
-    bool_t tx_quench;
+    bool tx_quench;
     int tx_log_everything;
     /* Force synchronous transmit. */
     int sync;
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index b3f7e30cde5c..668aed0be580 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -4,7 +4,7 @@
 #include

 /* opt_noreboot: If true, machine will need manual reset on error. */
-extern bool_t opt_noreboot;
+extern bool opt_noreboot;

 void noreturn hwdom_shutdown(u8 reason);
diff --git a/xen/include/xen/tasklet.h b/xen/include/xen/tasklet.h
index 193acf8f42c1..1362d4af27c8 100644
--- a/xen/include/xen/tasklet.h
+++ b/xen/include/xen/tasklet.h
@@ -18,9 +18,9 @@ struct tasklet
 {
     struct list_head list;
     int scheduled_on;
-    bool_t is_softirq;
-    bool_t is_running;
-    bool_t is_dead;
+    bool is_softirq;
+    bool is_running;
+    bool is_dead;
     void (*func)(void *);
     void *data;
 };

From patchwork Mon Nov 20 14:56:23 2023
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 13461433
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, George Dunlap, Jan Beulich, Stefano Stabellini, Wei Liu, Julien Grall, Roger Pau Monné
Subject: [PATCH 3/3] xen: Drop bool_t
Date: Mon, 20 Nov 2023 14:56:23 +0000
Message-ID: <20231120145623.167383-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To:
<20231120145623.167383-1-andrew.cooper3@citrix.com>
References: <20231120145623.167383-1-andrew.cooper3@citrix.com>

No more users.  This completes the work started in commit 920234259475
("xen/build: Use C99 booleans"), July 2016.

Signed-off-by: Andrew Cooper
Acked-by: Jan Beulich
---
CC: George Dunlap
CC: Jan Beulich
CC: Stefano Stabellini
CC: Wei Liu
CC: Julien Grall
CC: Roger Pau Monné
---
 xen/include/xen/types.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/include/xen/types.h b/xen/include/xen/types.h
index 64e75674da4f..449947b353be 100644
--- a/xen/include/xen/types.h
+++ b/xen/include/xen/types.h
@@ -64,7 +64,6 @@ typedef __u64 __be64;

 typedef unsigned int __attribute__((__mode__(__pointer__))) uintptr_t;

-typedef bool bool_t;
 #define test_and_set_bool(b)   xchg(&(b), true)
 #define test_and_clear_bool(b) xchg(&(b), false)