From patchwork Wed Jan 22 01:58:57 2020
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 11344999
From: Bobby Eshleman
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Bobby Eshleman,
 Dan Robertson, Alistair Francis
Date: Tue, 21 Jan 2020 19:58:57 -0600
Message-Id: <50c36ec9c764112c2f48721489865c291a9abf3b.1579615303.git.bobbyeshleman@gmail.com>
X-Mailer: git-send-email 2.25.0
Subject: [Xen-devel] [RFC XEN PATCH 18/23] riscv: Add p2m.c

From: Alistair Francis

Add a stubbed-out p2m.c for RISC-V, modelled on the Arm implementation.
Most routines are still TODO placeholders.

Signed-off-by: Alistair Francis
---
 xen/arch/riscv/p2m.c | 261 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 261 insertions(+)
 create mode 100644 xen/arch/riscv/p2m.c

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..38d92d8cc1
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,261 @@
+#include <xen/cpu.h>
+#include <xen/domain_page.h>
+#include <xen/iocap.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/softirq.h>
+#include <asm/event.h>
+#include <asm/flushtlb.h>
+#include <asm/page.h>
+
+
+#define INVALID_VMID 0 /* VMID 0 is reserved */
+
+/* Release the write lock and do a P2M TLB flush if necessary */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+    /* TODO */
+
+    write_unlock(&p2m->lock);
+}
+
+void p2m_dump_info(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    printk("p2m mappings for domain %d (vmid %d):\n",
+           d->domain_id, p2m->vmid);
+    BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
+    printk("  1G mappings: %ld (shattered %ld)\n",
+           p2m->stats.mappings[1], p2m->stats.shattered[1]);
+    printk("  2M mappings: %ld (shattered %ld)\n",
+           p2m->stats.mappings[2], p2m->stats.shattered[2]);
+    printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
+    p2m_read_unlock(p2m);
+}
+
+void memory_type_changed(struct domain *d)
+{
+}
+
+void dump_p2m_lookup(struct domain *d, paddr_t addr)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
+
+    printk("P2M @ %p mfn:%#"PRI_mfn"\n",
+           p2m->root, mfn_x(page_to_mfn(p2m->root)));
+}
+
+/*
+ * p2m_save_state() and p2m_restore_state() work as a pair.  On Arm they
+ * also work around ARM64_WORKAROUND_AT_SPECULATE: p2m_save_state() points
+ * VTTBR at empty page tables to stop new TLB entries being allocated.
+ * The RISC-V equivalents are still TODO.
+ */
+void p2m_save_state(struct vcpu *p)
+{
+    /* TODO */
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+    /* TODO */
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
+{
+    return p2m_lookup(d, gfn, NULL);
+}
+
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    /* TODO */
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    int rc = 0;
+
+    /* TODO */
+
+    return rc;
+}
+
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+{
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL);
+    p2m_read_unlock(p2m);
+
+    return mfn;
+}
+
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    /*
+     * TODO: walk the p2m page tables.  This must not call p2m_lookup()
+     * or recurse into itself (the caller already holds the read lock),
+     * so simply report that no entry exists for now.
+     */
+    return INVALID_MFN;
+}
+
+static inline int p2m_insert_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn,
+                                     p2m_type_t t)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_entry(p2m, start_gfn, nr, mfn, t, p2m->default_access);
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+static inline int p2m_remove_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
+                       p2m_invalid, p2m_access_rwx);
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_force_tlb_flush_sync(p2m);
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
+}
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return p2m_remove_mapping(d, gfn, nr, mfn);
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return p2m_remove_mapping(d, start_gfn, nr, mfn);
+}
+
+int map_dev_mmio_region(struct domain *d,
+                        gfn_t gfn,
+                        unsigned long nr,
+                        mfn_t mfn)
+{
+    /* TODO */
+
+    return 0;
+}
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
+{
+    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
+}
+
+int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                              unsigned int page_order)
+{
+    return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
+}
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    mfn_t mfn = p2m_lookup(d, gfn, &p2mt);
+
+    if ( t )
+        *t = p2mt;
+
+    if ( !p2m_is_any_ram(p2mt) )
+        return NULL;
+
+    if ( !mfn_valid(mfn) )
+        return NULL;
+
+    page = mfn_to_page(mfn);
+
+    /*
+     * get_page won't work on a foreign mapping because the page doesn't
+     * belong to the current domain.
+     */
+    if ( p2m_is_foreign(p2mt) )
+    {
+        struct domain *fdom = page_get_owner_and_reference(page);
+        ASSERT(fdom != NULL);
+        ASSERT(fdom != d);
+        return page;
+    }
+
+    return get_page(page, d) ? page : NULL;
+}
+
+void vcpu_mark_events_pending(struct vcpu *v)
+{
+    /* TODO */
+}
+
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+    /* TODO */
+}
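
For context, a minimal sketch (not part of the patch) of how a caller inside
the hypervisor might use p2m_get_page_from_gfn() above: the lookup takes a
reference on the backing page, which the caller must drop with put_page().
The function name example_read_guest_page and its error code are hypothetical.

static int example_read_guest_page(struct domain *d, gfn_t gfn)
{
    p2m_type_t t;
    /* Look up the gfn and take a reference on the backing page. */
    struct page_info *page = p2m_get_page_from_gfn(d, gfn, &t);

    if ( !page )
        return -EINVAL; /* not RAM, or the MFN was invalid */

    /* ... map and access the page contents here ... */

    /* Drop the reference taken by p2m_get_page_from_gfn(). */
    put_page(page);

    return 0;
}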