[2/7] ioreq: add internal ioreq initialization support

Message ID 20190821145903.45934-3-roger.pau@citrix.com (mailing list archive)
State Superseded
Series ioreq: add support for internal servers

Commit Message

Roger Pau Monné Aug. 21, 2019, 2:58 p.m. UTC
Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers can modify internal ioreq servers. External callers (i.e. from
hypercalls) are only allowed to deal with external ioreq servers.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/dm.c           |   9 +-
 xen/arch/x86/hvm/ioreq.c        | 150 +++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/ioreq.h |   8 +-
 3 files changed, 108 insertions(+), 59 deletions(-)

Comments

Paul Durrant Aug. 21, 2019, 4:24 p.m. UTC | #1
> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 21 August 2019 15:59
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> 
> Add support for internal ioreq servers to initialization and
> deinitialization routines, prevent some functions from being executed
> against internal ioreq servers and add guards to only allow internal
> callers to modify internal ioreq servers. External callers (ie: from
> hypercalls) are only allowed to deal with external ioreq servers.

It's kind of ugly to have the extra 'internal' argument passed to anything other than the create function, so I wonder whether it would be neater to encode it in the ioreq server id, i.e. have distinct id ranges for internal and external servers. What do you think?

  Paul
Roger Pau Monné Aug. 22, 2019, 7:23 a.m. UTC | #2
On Wed, Aug 21, 2019 at 06:24:17PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 21 August 2019 15:59
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> > Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> > 
> > Add support for internal ioreq servers to initialization and
> > deinitialization routines, prevent some functions from being executed
> > against internal ioreq servers and add guards to only allow internal
> > callers to modify internal ioreq servers. External callers (ie: from
> > hypercalls) are only allowed to deal with external ioreq servers.
> 
> It's kind of ugly to have the extra 'internal' argument passed to anything other than the create function so I wonder whether it would be neater to encode it in the ioreq server id. I.e. we have distinct id ranges for internal and external servers. What do you think?

That would be fine. I guess I could use the most significant bit in
the id to signal whether the server is internal or external, and
reject dmop calls that target internal servers based on the provided
id. Does that sound sensible?

Thanks, Roger.
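
As a minimal sketch of the most-significant-bit encoding floated here, assuming ioservid_t stays a uint16_t: the flag and helper names below are hypothetical and appear nowhere in the patch.

    /* Hypothetical: use bit 15 of the id as the internal/external flag. */
    #define IOREQ_SERVER_INTERNAL_FLAG (1u << 15)

    static inline bool ioservid_is_internal(ioservid_t id)
    {
        return id & IOREQ_SERVER_INTERNAL_FLAG;
    }

    static inline ioservid_t ioservid_index(ioservid_t id)
    {
        /* Mask off the flag to recover the server slot index. */
        return id & ~IOREQ_SERVER_INTERNAL_FLAG;
    }

With such an encoding the dmop path could simply reject any id with the flag set, rather than threading a separate 'internal' boolean through each function.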
Paul Durrant Aug. 22, 2019, 8:30 a.m. UTC | #3
> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 22 August 2019 08:24
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH 2/7] ioreq: add internal ioreq initialization support
> 
> On Wed, Aug 21, 2019 at 06:24:17PM +0200, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: 21 August 2019 15:59
> > > To: xen-devel@lists.xenproject.org
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <Paul.Durrant@citrix.com>
> > > Subject: [PATCH 2/7] ioreq: add internal ioreq initialization support
> > >
> > > Add support for internal ioreq servers to initialization and
> > > deinitialization routines, prevent some functions from being executed
> > > against internal ioreq servers and add guards to only allow internal
> > > callers to modify internal ioreq servers. External callers (ie: from
> > > hypercalls) are only allowed to deal with external ioreq servers.
> >
> > It's kind of ugly to have the extra 'internal' argument passed to anything other than the create
> function so I wonder whether it would be neater to encode it in the ioreq server id. I.e. we have
> distinct id ranges for internal and external servers. What do you think?
> 
> That would be fine, I guess I could use the most significant bit in
> the id to signal whether the server is internal or external, and
> reject dmop calls that target internal servers based on the provided
> id. Does that sound sensible?
> 

Yes, that's basically what I was thinking initially although, as you observe in the thread for patch #3, having two smaller consecutive ranges would be more convenient.

  Paul
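
A sketch of that consecutive-ranges variant, again with purely illustrative names and bounds not taken from this patch:

    /*
     * Hypothetical split of the id space: external servers occupy
     * [0, MAX_NR_EXTERNAL_SERVERS) and internal ones the range
     * immediately after it.
     */
    #define MAX_NR_EXTERNAL_SERVERS 8
    #define MAX_NR_INTERNAL_SERVERS 8
    #define MAX_NR_IOREQ_SERVERS \
        (MAX_NR_EXTERNAL_SERVERS + MAX_NR_INTERNAL_SERVERS)

    static inline bool hvm_ioreq_is_internal(ioservid_t id)
    {
        ASSERT(id < MAX_NR_IOREQ_SERVERS);
        return id >= MAX_NR_EXTERNAL_SERVERS;
    }

Hypercall handlers would then only need to check the provided id against MAX_NR_EXTERNAL_SERVERS, which would make the extra 'internal' parameters added in this version of the patch unnecessary.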
Patch

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..5ca8b66d67 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@  static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
         break;
     }
 
@@ -452,7 +452,7 @@  static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+                                              data->start, data->end, false);
         break;
     }
 
@@ -466,7 +466,8 @@  static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+                                                  data->start, data->end,
+                                                  false);
         break;
     }
 
@@ -529,7 +530,7 @@  static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled, false);
         break;
     }
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a79cabb680..23ef9b0c02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -89,6 +89,9 @@  bool hvm_io_pending(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -193,6 +196,9 @@  bool handle_hvm_io_completion(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -431,6 +437,9 @@  bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
+
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
@@ -696,15 +705,18 @@  static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !s->internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
   done:
     spin_unlock(&s->lock);
@@ -717,8 +729,11 @@  static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !s->internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
     s->enabled = false;
 
@@ -728,40 +743,47 @@  static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 
 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+                                 ioservid_t id, bool internal)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
     int rc;
 
+    s->internal = internal;
     s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
     spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
 
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !internal )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
     }
+    else
+        s->handler = NULL;
 
     return 0;
 
  fail_add:
+    ASSERT(!internal);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
@@ -774,27 +796,31 @@  static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
-    put_domain(s->emulator);
+    if ( !s->internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -826,7 +852,7 @@  int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i, internal);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -863,7 +889,8 @@  int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     domain_pause(d);
@@ -908,7 +935,11 @@  int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to fetch information about internal
+     * ioreq servers.
+     */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     if ( ioreq_gfn || bufioreq_gfn )
@@ -955,7 +986,7 @@  int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = hvm_ioreq_server_alloc_pages(s);
@@ -991,7 +1022,7 @@  int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end)
+                                     uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1009,7 +1040,12 @@  int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( (s->internal != internal) ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1043,7 +1079,7 @@  int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end)
+                                         uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1061,7 +1097,12 @@  int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1122,7 +1163,7 @@  int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = p2m_set_ioreq_server(d, flags, s);
@@ -1142,7 +1183,7 @@  int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 }
 
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+                               bool enabled, bool internal)
 {
     struct hvm_ioreq_server *s;
     int rc;
@@ -1156,7 +1197,8 @@  int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     domain_pause(d);
@@ -1185,6 +1227,8 @@  int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
@@ -1218,7 +1262,11 @@  void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( s->internal )
+            continue;
         hvm_ioreq_server_remove_vcpu(s, v);
+    }
 
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..e8119b26a6 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@  bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -34,14 +34,14 @@  int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end);
+                                     uint64_t end, bool internal);
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end);
+                                         uint64_t end, bool internal);
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
+                               bool enabled, bool internal);
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);