Message ID | 20210731140332.8701-3-lizhijian@cn.fujitsu.com (mailing list archive) |
---|---|
State | New, archived |
Series | enable fsdax rdma migration |
Hi,

On Sat, Jul 31, 2021 at 5:03 PM Li Zhijian <lizhijian@cn.fujitsu.com> wrote:
>
> The responder mr registering with ODP will send an RNR NAK back to
> the requester in the face of a page fault.
> ---------
> ibv_poll_cq wc.status=13 RNR retry counter exceeded!
> ibv_poll_cq wrid=WRITE RDMA!
> ---------
> ibv_advise_mr(3) helps to make pages present before the actual IO is
> conducted, so that the responder page faults as little as possible.
>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> ---
>  migration/rdma.c       | 40 ++++++++++++++++++++++++++++++++++++++++
>  migration/trace-events |  1 +
>  2 files changed, 41 insertions(+)
>
> [...]
> +    ret = ibv_advise_mr(pd, advice,
> +                        IB_UVERBS_ADVISE_MR_FLAG_FLUSH, &sg_list, 1);
> +    /* ignore the error */
> [...]

Following https://github.com/linux-rdma/rdma-core/blob/master/libibverbs/man/ibv_advise_mr.3.md
it looks like it is a best-effort optimization; I don't see any down-sides to it.
However, it seems like it is recommended to use IBV_ADVISE_MR_FLAG_FLUSH
in order to increase the optimization chances.

Anyway,

Reviewed-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>

Thanks,
Marcel
Hi Marcel,

On 22/08/2021 16:39, Marcel Apfelbaum wrote:
> Hi,
>
> On Sat, Jul 31, 2021 at 5:03 PM Li Zhijian <lizhijian@cn.fujitsu.com> wrote:
>> [...]
>> +    ret = ibv_advise_mr(pd, advice,
>> +                        IB_UVERBS_ADVISE_MR_FLAG_FLUSH, &sg_list, 1);
>> +    /* ignore the error */
> Following https://github.com/linux-rdma/rdma-core/blob/master/libibverbs/man/ibv_advise_mr.3.md
> it looks like it is a best-effort optimization,
> I don't see any down-sides to it.
> However it seems like it is recommended to use
> IBV_ADVISE_MR_FLAG_FLUSH in order to
> increase the optimization chances.

Good catch, I will update it soon.

Thanks

> Anyway
>
> Reviewed-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>
> Thanks,
> Marcel
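For illustration only, a minimal sketch of how the call above might look after the agreed change, i.e. using the libibverbs-level IBV_ADVISE_MR_FLAG_FLUSH constant rather than the kernel uapi name; this is not the respun patch, just the shape of the change, and it assumes pd, advice and sg_list are set up exactly as in the hunk quoted above:

    /* Sketch: same best-effort prefetch advise, spelled with the
     * libibverbs flag name; failures are still only traced, not fatal. */
    ret = ibv_advise_mr(pd, advice,
                        IBV_ADVISE_MR_FLAG_FLUSH, &sg_list, 1);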
diff --git a/migration/rdma.c b/migration/rdma.c
index 8784b5f22a6..a2ad00d665f 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -1117,6 +1117,30 @@ static int qemu_rdma_alloc_qp(RDMAContext *rdma)
     return 0;
 }
 
+/*
+ * ibv_advise_mr to avoid RNR NAK error as far as possible.
+ * The responder mr registering with ODP will send RNR NAK back to
+ * the requester in the face of the page fault.
+ */
+static void qemu_rdma_advise_prefetch_write_mr(struct ibv_pd *pd, uint64_t addr,
+                                               uint32_t len, uint32_t lkey,
+                                               const char *name, bool wr)
+{
+    int ret;
+    int advice = wr ? IBV_ADVISE_MR_ADVICE_PREFETCH_WRITE :
+                 IBV_ADVISE_MR_ADVICE_PREFETCH;
+    struct ibv_sge sg_list = {.lkey = lkey, .addr = addr, .length = len};
+
+    ret = ibv_advise_mr(pd, advice,
+                        IB_UVERBS_ADVISE_MR_FLAG_FLUSH, &sg_list, 1);
+    /* ignore the error */
+    if (ret) {
+        trace_qemu_rdma_advise_mr(name, len, addr, strerror(errno));
+    } else {
+        trace_qemu_rdma_advise_mr(name, len, addr, "succeeded");
+    }
+}
+
 static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
 {
     int i;
@@ -1140,6 +1164,17 @@ on_demand:
             perror("Failed to register local dest ram block!\n");
             break;
         }
+
+        if (access & IBV_ACCESS_ON_DEMAND) {
+            qemu_rdma_advise_prefetch_write_mr(rdma->pd,
+                                               (uintptr_t)
+                                               local->block[i].local_host_addr,
+                                               local->block[i].length,
+                                               local->block[i].mr->lkey,
+                                               local->block[i].block_name,
+                                               true);
+        }
+
         rdma->total_registrations++;
     }
 
@@ -1244,6 +1279,11 @@ on_demand:
                          rdma->total_registrations);
             return -1;
         }
+        if (access & IBV_ACCESS_ON_DEMAND) {
+            qemu_rdma_advise_prefetch_write_mr(rdma->pd, (uintptr_t)chunk_start,
+                                               len, block->pmr[chunk]->lkey,
+                                               block->block_name, rkey);
+        }
         rdma->total_registrations++;
     }
 
diff --git a/migration/trace-events b/migration/trace-events
index 5f6aa580def..901c1d54c12 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -213,6 +213,7 @@ qemu_rdma_poll_other(const char *compstr, int64_t comp, int left) "other complet
 qemu_rdma_post_send_control(const char *desc) "CONTROL: sending %s.."
 qemu_rdma_register_and_get_keys(uint64_t len, void *start) "Registering %" PRIu64 " bytes @ %p"
 qemu_rdma_register_odp_mr(const char *name) "Try to register On-Demand Paging memory region: %s"
+qemu_rdma_advise_mr(const char *name, uint32_t len, uint64_t addr, const char *res) "Try to advise block %s prefetch write at %" PRIu32 "@0x%" PRIx64 ": %s"
 qemu_rdma_registration_handle_compress(int64_t length, int index, int64_t offset) "Zapping zero chunk: %" PRId64 " bytes, index %d, offset %" PRId64
 qemu_rdma_registration_handle_finished(void) ""
 qemu_rdma_registration_handle_ram_blocks(void) ""
The responder mr registering with ODP will send an RNR NAK back to
the requester in the face of a page fault.
---------
ibv_poll_cq wc.status=13 RNR retry counter exceeded!
ibv_poll_cq wrid=WRITE RDMA!
---------
ibv_advise_mr(3) helps to make pages present before the actual IO is
conducted, so that the responder page faults as little as possible.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
---
 migration/rdma.c       | 40 ++++++++++++++++++++++++++++++++++++++++
 migration/trace-events |  1 +
 2 files changed, 41 insertions(+)
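As background for the mechanism described above, here is a self-contained sketch (not code from QEMU) of how an on-demand-paging MR can be registered and then prefetched with ibv_advise_mr(3) so that pages are present before the first RDMA WRITE arrives. It assumes a device and kernel that support ODP; the function name register_and_prefetch, the access flags and the error handling are illustrative choices, not part of the patch:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    /*
     * Register 'len' bytes at 'buf' as an ODP-capable MR on 'pd' and advise
     * the driver to fault the pages in ahead of time. The advise step is
     * best effort: if it fails, the only cost is RNR retries later.
     */
    static struct ibv_mr *register_and_prefetch(struct ibv_pd *pd,
                                                void *buf, uint32_t len)
    {
        int access = IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE |
                     IBV_ACCESS_ON_DEMAND;
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, access);
        struct ibv_sge sge;
        int rc;

        if (!mr) {
            perror("ibv_reg_mr");
            return NULL;
        }

        sge.addr = (uintptr_t)buf;
        sge.length = len;
        sge.lkey = mr->lkey;

        /* Prefetch for write access; ignore failure, as the patch does. */
        rc = ibv_advise_mr(pd, IBV_ADVISE_MR_ADVICE_PREFETCH_WRITE,
                           IBV_ADVISE_MR_FLAG_FLUSH, &sge, 1);
        if (rc) {
            fprintf(stderr, "ibv_advise_mr: %s\n", strerror(rc));
        }
        return mr;
    }

ibv_advise_mr(3) is documented to return 0 on success or an errno value on failure, which is why the sketch passes the return value to strerror().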