| Message ID | 1541121763-3277-1-git-send-email-liq3ea@gmail.com |
|---|---|
| State | New, archived |
| Series | nvme: fix oob access issue (CVE-2018-16847) |
On 2/11/18 2:22, Li Qiang wrote:
> Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> This can lead to an OOB access issue, which is triggerable from the guest.
> Add a check to avoid this issue.
>
> Fixes CVE-2018-16847.
>
> Reported-by: Li Qiang <liq3ea@gmail.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Li Qiang <liq3ea@gmail.com>
> ---
>  hw/block/nvme.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index fc7dacb..d097add 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
>      unsigned size)
>  {
>      NvmeCtrl *n = (NvmeCtrl *)opaque;
> +
> +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {

Should this be reported via qemu_log_mask(LOG_GUEST_ERROR, ...)?

> +        return;
> +    }
>      memcpy(&n->cmbuf[addr], &data, size);
>  }
>
> @@ -1183,6 +1187,9 @@ static uint64_t nvme_cmb_read(void *opaque, hwaddr addr, unsigned size)
>      uint64_t val;
>      NvmeCtrl *n = (NvmeCtrl *)opaque;
>
> +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {

Ditto.

> +        return 0;
> +    }
>      memcpy(&val, &n->cmbuf[addr], size);
>      return val;
>  }
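(For illustration, this is roughly what the write accessor would look like with the suggested guest-error logging folded in. A sketch only: qemu_log_mask() and LOG_GUEST_ERROR are real QEMU APIs, but the message wording is an assumption, not taken from any posted patch.)

/* Sketch: the bounds check from the patch plus the suggested logging.
 * Needs "qemu/log.h"; the format string is illustrative only. */
static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
                           unsigned size)
{
    NvmeCtrl *n = (NvmeCtrl *)opaque;

    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "nvme: out-of-bounds CMB write, addr=0x%" HWADDR_PRIx
                      " size=%u\n", addr, size);
        return;
    }
    memcpy(&n->cmbuf[addr], &data, size);
}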
On 02.11.2018 at 02:22, Li Qiang wrote:
> Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> This can lead to an OOB access issue, which is triggerable from the guest.
> Add a check to avoid this issue.
>
> Fixes CVE-2018-16847.
>
> Reported-by: Li Qiang <liq3ea@gmail.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Li Qiang <liq3ea@gmail.com>
> ---
>  hw/block/nvme.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index fc7dacb..d097add 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
>      unsigned size)
>  {
>      NvmeCtrl *n = (NvmeCtrl *)opaque;
> +
> +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {

What prevents a guest from moving the device to the end of the address space and causing an integer overflow in addr + size?

If this happens, we still have .max_access_size = 8. The next question is then: is NVME_CMBSZ_GETSIZE guaranteed to be at least 8? I suppose yes, but do we want to rely on this for security?

Kevin
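(An aside for illustration: a formulation of the bounds check that cannot overflow, because it never computes addr + size. A sketch only, not from any posted patch:)

    /* Rearranged test: no wrap-around is possible in either comparison. */
    uint64_t cmb_size = NVME_CMBSZ_GETSIZE(n->bar.cmbsz);

    if (addr >= cmb_size || size > cmb_size - addr) {
        return;
    }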
On Fri, Nov 02, 2018 at 11:54:21AM +0100, Kevin Wolf wrote:
> On 02.11.2018 at 02:22, Li Qiang wrote:
> > Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> > This can lead to an OOB access issue, which is triggerable from the guest.
> > Add a check to avoid this issue.
> >
> > Fixes CVE-2018-16847.
> >
> > Reported-by: Li Qiang <liq3ea@gmail.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Li Qiang <liq3ea@gmail.com>
> > ---
> >  hw/block/nvme.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index fc7dacb..d097add 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
> >      unsigned size)
> >  {
> >      NvmeCtrl *n = (NvmeCtrl *)opaque;
> > +
> > +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
>
> What prevents a guest from moving the device to the end of the address
> space and causing an integer overflow in addr + size?
>
> If this happens, we still have .max_access_size = 8. The next question is
> then: is NVME_CMBSZ_GETSIZE guaranteed to be at least 8? I suppose yes,
> but do we want to rely on this for security?
>
> Kevin

The nvme spec at least doesn't allow a way to express a CMB that isn't at minimum 4k aligned and sized, so an 8-byte access should always be within the boundary.
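(For reference, the size macro in question, paraphrased from hw/block/nvme.h as it stood at the time — treat the exact bit positions as an assumption. The point stands either way: the CMB size is SZ units of a granularity that starts at 4 KiB, so it is always a multiple of 4 KiB.)

/* Paraphrased sketch of the CMBSZ decoding: SZ units of
 * granularity 4 KiB << (4 * SZU). */
#define NVME_CMBSZ_SZU(cmbsz)     (((cmbsz) >> 8) & 0xf)
#define NVME_CMBSZ_SZ(cmbsz)      (((cmbsz) >> 12) & 0xfffff)
#define NVME_CMBSZ_GETSIZE(cmbsz) \
    (NVME_CMBSZ_SZ(cmbsz) * (1 << (12 + 4 * NVME_CMBSZ_SZU(cmbsz))))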
Hello Kevin,

Kevin Wolf <kwolf@redhat.com> wrote on Fri, Nov 2, 2018 at 6:54 PM:
> On 02.11.2018 at 02:22, Li Qiang wrote:
> > Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> > This can lead to an OOB access issue, which is triggerable from the guest.
> > Add a check to avoid this issue.
> >
> > Fixes CVE-2018-16847.
> >
> > Reported-by: Li Qiang <liq3ea@gmail.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Li Qiang <liq3ea@gmail.com>
> > ---
> >  hw/block/nvme.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index fc7dacb..d097add 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
> >      unsigned size)
> >  {
> >      NvmeCtrl *n = (NvmeCtrl *)opaque;
> > +
> > +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
>
> What prevents a guest from moving the device to the end of the address
> space and causing an integer overflow in addr + size?

This can't happen, as addr can't take an arbitrary value; it can only fall within the Memory Region that n->ctrl_mem defines.

Thanks,
Li Qiang

> If this happens, we still have .max_access_size = 8. The next question is
> then: is NVME_CMBSZ_GETSIZE guaranteed to be at least 8? I suppose yes,
> but do we want to rely on this for security?
>
> Kevin
On Thu, Nov 01, 2018 at 06:22:43PM -0700, Li Qiang wrote:
> Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> This can lead to an OOB access issue, which is triggerable from the guest.
> Add a check to avoid this issue.
>
> Fixes CVE-2018-16847.
>
> Reported-by: Li Qiang <liq3ea@gmail.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Li Qiang <liq3ea@gmail.com>

Hey, so why is this memory region access even considered valid if the request is out of range from what NVMe had registered for its MemoryRegion? Wouldn't it be better to not call the mr->ops->read/write if it's out of bounds? Otherwise every MemoryRegion needs to duplicate the same check, right?

Would something like the following work (minimally tested)?

---
diff --git a/memory.c b/memory.c
index 9b73892768..883fd818e6 100644
--- a/memory.c
+++ b/memory.c
@@ -1369,6 +1369,9 @@ bool memory_region_access_valid(MemoryRegion *mr,
         access_size_max = 4;
     }

+    if (addr + size > mr->size)
+        return false;
+
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     for (i = 0; i < size; i += access_size) {
         if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
--
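(Two hedged caveats about the sketch above. In QEMU, mr->size is an Int128, so comparing it directly against a uint64_t sum is awkward, and addr + size can still wrap, as Kevin asked earlier in the thread. memory_region_size() returns the size as a plain uint64_t, so a variant along these lines avoids both issues — a sketch, not the change that was eventually merged:)

    /* Overflow-safe sketch for memory_region_access_valid(). */
    uint64_t region_size = memory_region_size(mr);

    if (addr >= region_size || size > region_size - addr) {
        return false;
    }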
On 02.11.2018 at 16:22, Li Qiang wrote:
> Hello Kevin,
>
> Kevin Wolf <kwolf@redhat.com> wrote on Fri, Nov 2, 2018 at 6:54 PM:
>
> > On 02.11.2018 at 02:22, Li Qiang wrote:
> > > Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> > > This can lead to an OOB access issue, which is triggerable from the guest.
> > > Add a check to avoid this issue.
> > >
> > > Fixes CVE-2018-16847.
> > >
> > > Reported-by: Li Qiang <liq3ea@gmail.com>
> > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > Signed-off-by: Li Qiang <liq3ea@gmail.com>
> > > ---
> > >  hw/block/nvme.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index fc7dacb..d097add 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
> > >      unsigned size)
> > >  {
> > >      NvmeCtrl *n = (NvmeCtrl *)opaque;
> > > +
> > > +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
> >
> > What prevents a guest from moving the device to the end of the address
> > space and causing an integer overflow in addr + size?
>
> This can't happen, as addr can't take an arbitrary value; it can only fall
> within the Memory Region that n->ctrl_mem defines.

Yes, but can't the guest map that memory region wherever it wants?

(As Keith confirmed, the integer overflow doesn't seem to have any bad consequences here, but anyway.)

Kevin
Kevin Wolf <kwolf@redhat.com> wrote on Fri, Nov 2, 2018 at 11:42 PM:
> On 02.11.2018 at 16:22, Li Qiang wrote:
> > Hello Kevin,
> >
> > Kevin Wolf <kwolf@redhat.com> wrote on Fri, Nov 2, 2018 at 6:54 PM:
> >
> > > On 02.11.2018 at 02:22, Li Qiang wrote:
> > > > Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> > > > This can lead to an OOB access issue, which is triggerable from the guest.
> > > > Add a check to avoid this issue.
> > > >
> > > > Fixes CVE-2018-16847.
> > > >
> > > > Reported-by: Li Qiang <liq3ea@gmail.com>
> > > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > Signed-off-by: Li Qiang <liq3ea@gmail.com>
> > > > ---
> > > >  hw/block/nvme.c | 7 +++++++
> > > >  1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > > index fc7dacb..d097add 100644
> > > > --- a/hw/block/nvme.c
> > > > +++ b/hw/block/nvme.c
> > > > @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
> > > >      unsigned size)
> > > >  {
> > > >      NvmeCtrl *n = (NvmeCtrl *)opaque;
> > > > +
> > > > +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
> > >
> > > What prevents a guest from moving the device to the end of the address
> > > space and causing an integer overflow in addr + size?
> >
> > This can't happen, as addr can't take an arbitrary value; it can only fall
> > within the Memory Region that n->ctrl_mem defines.
>
> Yes, but can't the guest map that memory region wherever it wants?

I think it can't happen, as the 'addr' here is the relative offset within the MR. We don't (and can't) register an MMIO region whose end is high enough to make addr + size overflow (size here can only be 1, 2, 4 or 8), so the 'addr' here can't be that large.

Thanks,
Li Qiang

> (As Keith confirmed, the integer overflow doesn't seem to have any bad
> consequences here, but anyway.)
>
> Kevin
Keith Busch <keith.busch@intel.com> wrote on Fri, Nov 2, 2018 at 11:42 PM:
> On Thu, Nov 01, 2018 at 06:22:43PM -0700, Li Qiang wrote:
> > Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> > This can lead to an OOB access issue, which is triggerable from the guest.
> > Add a check to avoid this issue.
> >
> > Fixes CVE-2018-16847.
> >
> > Reported-by: Li Qiang <liq3ea@gmail.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Li Qiang <liq3ea@gmail.com>
>
> Hey, so why is this memory region access even considered valid if the
> request is out of range from what NVMe had registered for its
> MemoryRegion? Wouldn't it be better to not call the mr->ops->read/write
> if it's out of bounds? Otherwise every MemoryRegion needs to duplicate
> the same check, right?

Yes, this seems like a good idea. I once encountered a similar issue: the read callback of an MR will be called even when it is NULL, causing a NULL pointer dereference. Discussed here:
https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg01391.html

This touches the core design of the memory subsystem, I think; maybe Paolo or Peter can answer this.

> Would something like the following work (minimally tested)?
>
> ---
> diff --git a/memory.c b/memory.c
> index 9b73892768..883fd818e6 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -1369,6 +1369,9 @@ bool memory_region_access_valid(MemoryRegion *mr,
>          access_size_max = 4;
>      }
>
> +    if (addr + size > mr->size)
> +        return false;
> +
>      access_size = MAX(MIN(size, access_size_max), access_size_min);
>      for (i = 0; i < size; i += access_size) {
>          if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
> --
Ping... what's the status of this patch?

I see Kevin's new PR doesn't contain this patch.

Thanks,
Li Qiang

Li Qiang <liq3ea@gmail.com> wrote on Fri, Nov 2, 2018 at 9:22 AM:
> Currently, the nvme_cmb_ops mr doesn't check the addr and size.
> This can lead to an OOB access issue, which is triggerable from the guest.
> Add a check to avoid this issue.
>
> Fixes CVE-2018-16847.
>
> Reported-by: Li Qiang <liq3ea@gmail.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Li Qiang <liq3ea@gmail.com>
> ---
>  hw/block/nvme.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index fc7dacb..d097add 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
>      unsigned size)
>  {
>      NvmeCtrl *n = (NvmeCtrl *)opaque;
> +
> +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
> +        return;
> +    }
>      memcpy(&n->cmbuf[addr], &data, size);
>  }
>
> @@ -1183,6 +1187,9 @@ static uint64_t nvme_cmb_read(void *opaque, hwaddr addr, unsigned size)
>      uint64_t val;
>      NvmeCtrl *n = (NvmeCtrl *)opaque;
>
> +    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
> +        return 0;
> +    }
>      memcpy(&val, &n->cmbuf[addr], size);
>      return val;
>  }
> --
> 1.8.3.1
On 13.11.2018 at 02:45, Li Qiang wrote:
> Ping... what's the status of this patch?
>
> I see Kevin's new PR doesn't contain this patch.

Oh, I thought you said that you wanted to fix this at a higher level so that the problem is caught before even getting into nvme code? If you don't, I can apply the patch for my next pull request.

Kevin
Kevin Wolf <kwolf@redhat.com> wrote on Tue, Nov 13, 2018 at 6:17 PM:
> On 13.11.2018 at 02:45, Li Qiang wrote:
> > Ping... what's the status of this patch?
> >
> > I see Kevin's new PR doesn't contain this patch.
>
> Oh, I thought you said that you wanted to fix this at a higher level so
> that the problem is caught before even getting into nvme code?

The higher-level fix may need a lot of further discussion, but this fix is just like the targeted fixes we have done before.

> If you don't, I can apply the patch for my next pull request.

I think you can. Also, could you look at another patchset for nvme that fixes some small issues? It seems to have been hanging for a long time.

Thanks,
Li Qiang

> Kevin
On 02/11/2018 16:40, Keith Busch wrote:
> Hey, so why is this memory region access even considered valid if the
> request is out of range from what NVMe had registered for its
> MemoryRegion? Wouldn't it be better to not call the mr->ops->read/write
> if it's out of bounds? Otherwise every MemoryRegion needs to duplicate
> the same check, right?

Because some crazy devices have misaligned registers. But actually this is not a problem here, because NVMe doesn't set ops->impl.unaligned to true, so indeed no change is needed.

Paolo

> Would something like the following work (minimally tested)?
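(For context, the ops structure being discussed, as it appears in the diff context later in this thread. The .read line is inferred rather than shown in that context, and the comments are editorial: .impl.unaligned is simply left zero-initialized, i.e. false.)

static const MemoryRegionOps nvme_cmb_ops = {
    .read = nvme_cmb_read,    /* inferred; not visible in the diff context */
    .write = nvme_cmb_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .impl = {
        /* .unaligned is not set, so it defaults to false */
        .min_access_size = 2,
        .max_access_size = 8,
    },
};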
On 13/11/2018 11:17, Kevin Wolf wrote:
> On 13.11.2018 at 02:45, Li Qiang wrote:
>> Ping... what's the status of this patch?
>>
>> I see Kevin's new PR doesn't contain this patch.
>
> Oh, I thought you said that you wanted to fix this at a higher level so
> that the problem is caught before even getting into nvme code? If you
> don't, I can apply the patch for my next pull request.

As far as I know, the bug doesn't exist. Li Qiang, if you have a reproducer, please send it.

Prasad, please revoke the CVE.

Paolo
Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 2:27 AM:
> On 13/11/2018 11:17, Kevin Wolf wrote:
> > On 13.11.2018 at 02:45, Li Qiang wrote:
> >> Ping... what's the status of this patch?
> >>
> >> I see Kevin's new PR doesn't contain this patch.
> >
> > Oh, I thought you said that you wanted to fix this at a higher level so
> > that the problem is caught before even getting into nvme code? If you
> > don't, I can apply the patch for my next pull request.
>
> As far as I know, the bug doesn't exist. Li Qiang, if you have a
> reproducer, please send it.

Hello Paolo,

Though I've sent the debug information and ASAN output in a mail to secalert@redhat.com, I'm glad to provide them here.

This is for the read path; I think the write path is the same, but as the PoC runs in userspace, mmap can only map the exact size of the MMIO region, so we can only write within that area. Using a kernel module, we could write outside the MMIO region, I think.

The nvme device's parameter should be set as 'cmb_size_mb=2', and the PCI address may differ on your system.

Thanks,
Li Qiang

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
    char *filename = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    uint32_t size = 2 * 1024 * 1024;
    char *mmio = NULL;

    int fd = open(filename, O_RDWR);
    if (fd < 0) {
        printf("open file error\n");
        exit(1);
    }

    mmio = mmap(NULL, size, PROT_WRITE | PROT_READ, MAP_SHARED, fd, 0);
    if (mmio == MAP_FAILED) {
        printf("mmap error\n");
        exit(1);
    }

    int x = *(uint64_t *)(mmio + size - 1);
}

read:

[Switching to Thread 0x7fffc7326700 (LWP 52799)]

Thread 4 "qemu-system-x86" hit Breakpoint 1, nvme_cmb_read (opaque=0x6240000b8100, addr=2097151, size=2) at hw/block/nvme.c:1182
1182    {
(gdb) p /x addr
$1 = 0x1fffff
(gdb) p /x addr+size
$2 = 0x200001
(gdb) c
Continuing.

=================================================================
==52793==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fff817ff800 at pc 0x7ffff6e9af7f bp 0x7fffc7322fc0 sp 0x7fffc7322770
READ of size 2 at 0x7fff817ff800 thread T3
    #0 0x7ffff6e9af7e (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x5cf7e)
    #1 0x5555562193b3 in nvme_cmb_read hw/block/nvme.c:1186
    #2 0x555555d630d0 in memory_region_read_accessor /home/liqiang02/qemu-upstream/qemu/memory.c:440
    #3 0x555555d638da in access_with_adjusted_size /home/liqiang02/qemu-upstream/qemu/memory.c:570
    #4 0x555555d690fd in memory_region_dispatch_read1 /home/liqiang02/qemu-upstream/qemu/memory.c:1375
    #5 0x555555d692b5 in memory_region_dispatch_read /home/liqiang02/qemu-upstream/qemu/memory.c:1404
    #6 0x555555ca765b in flatview_read_continue /home/liqiang02/qemu-upstream/qemu/exec.c:3294
    #7 0x555555ca790d in flatview_read /home/liqiang02/qemu-upstream/qemu/exec.c:3332
    #8 0x555555ca79d3 in address_space_read_full /home/liqiang02/qemu-upstream/qemu/exec.c:3345
    #9 0x555555ca7aaa in address_space_rw /home/liqiang02/qemu-upstream/qemu/exec.c:3375
    #10 0x555555daadd9 in kvm_cpu_exec /home/liqiang02/qemu-upstream/qemu/accel/kvm/kvm-all.c:2031
    #11 0x555555d2b2e5 in qemu_kvm_cpu_thread_fn /home/liqiang02/qemu-upstream/qemu/cpus.c:1277
    #12 0x555556a037a0 in qemu_thread_start util/qemu-thread-posix.c:498
    #13 0x7fffdadbd493 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x7493)
    #14 0x7fffdaafface in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xe8ace)

> Prasad, please revoke the CVE.
>
> Paolo
On 14/11/2018 02:38, Li Qiang wrote:
> Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 2:27 AM:
>
>> On 13/11/2018 11:17, Kevin Wolf wrote:
>>> On 13.11.2018 at 02:45, Li Qiang wrote:
>>>> Ping... what's the status of this patch?
>>>>
>>>> I see Kevin's new PR doesn't contain this patch.
>>>
>>> Oh, I thought you said that you wanted to fix this at a higher level so
>>> that the problem is caught before even getting into nvme code? If you
>>> don't, I can apply the patch for my next pull request.
>>
>> As far as I know, the bug doesn't exist. Li Qiang, if you have a
>> reproducer, please send it.
>
> Hello Paolo,
> Though I've sent the debug information and ASAN output in a mail to
> secalert@redhat.com, I'm glad to provide them here.
> This is for the read path; I think the write path is the same, but as
> the PoC runs in userspace, mmap can only map the exact size of the MMIO
> region, so we can only write within that area. Using a kernel module,
> we could write outside the MMIO region, I think.
> The nvme device's parameter should be set as 'cmb_size_mb=2', and the
> PCI address may differ on your system.

Ok, thanks. I've created a reproducer using qtest (though I have to run now and cannot post it properly).

The patch for the fix is simply:

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index fc7dacb816..6385033af3 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1192,7 +1192,7 @@ static const MemoryRegionOps nvme_cmb_ops = {
     .write = nvme_cmb_write,
     .endianness = DEVICE_LITTLE_ENDIAN,
     .impl = {
-        .min_access_size = 2,
+        .min_access_size = 1,
         .max_access_size = 8,
     },
 };

The memory subsystem _is_ recognizing the out-of-bounds 32-bit access, but because min_access_size=2 it sends down a write at offset 2097151 and size 2.

Paolo
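(To make the failure mode concrete, a minimal host-side model of the off-by-one — ordinary C, not QEMU code; the sizes and offsets are taken from the gdb/ASAN output above:)

/* cmb_size_mb=2 gives a 2 MiB CMB, so valid offsets are 0..0x1fffff.
 * With impl.min_access_size == 2, the memory core hands the accessor
 * a 2-byte access at offset 0x1fffff; the memcpy then reads one byte
 * past the end of the buffer -- the ASAN heap-buffer-overflow above. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t cmb_size = 2 * 1024 * 1024;  /* 0x200000 */
    uint8_t *cmbuf = malloc(cmb_size);        /* stands in for n->cmbuf */
    uint64_t val = 0;

    if (!cmbuf) {
        return 1;
    }
    memcpy(&val, &cmbuf[cmb_size - 1], 2);    /* touches cmbuf[0x200000]: OOB */
    free(cmbuf);
    return (int)val;
}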
Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 11:44 PM:
> On 14/11/2018 02:38, Li Qiang wrote:
> > Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 2:27 AM:
> >
> >> On 13/11/2018 11:17, Kevin Wolf wrote:
> >>> On 13.11.2018 at 02:45, Li Qiang wrote:
> >>>> Ping... what's the status of this patch?
> >>>>
> >>>> I see Kevin's new PR doesn't contain this patch.
> >>>
> >>> Oh, I thought you said that you wanted to fix this at a higher level so
> >>> that the problem is caught before even getting into nvme code? If you
> >>> don't, I can apply the patch for my next pull request.
> >>
> >> As far as I know, the bug doesn't exist. Li Qiang, if you have a
> >> reproducer, please send it.
> >
> > Hello Paolo,
> > Though I've sent the debug information and ASAN output in a mail to
> > secalert@redhat.com, I'm glad to provide them here.
> > This is for the read path; I think the write path is the same, but as
> > the PoC runs in userspace, mmap can only map the exact size of the MMIO
> > region, so we can only write within that area. Using a kernel module,
> > we could write outside the MMIO region, I think.
> > The nvme device's parameter should be set as 'cmb_size_mb=2', and the
> > PCI address may differ on your system.
>
> Ok, thanks. I've created a reproducer using qtest (though I have to run
> now and cannot post it properly).
>
> The patch for the fix is simply:

So, do you send this or me?

> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index fc7dacb816..6385033af3 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -1192,7 +1192,7 @@ static const MemoryRegionOps nvme_cmb_ops = {
>      .write = nvme_cmb_write,
>      .endianness = DEVICE_LITTLE_ENDIAN,
>      .impl = {
> -        .min_access_size = 2,
> +        .min_access_size = 1,
>          .max_access_size = 8,
>      },
>  };
>
> The memory subsystem _is_ recognizing the out-of-bounds 32-bit access,

Thanks, this strengthens my understanding of the memory subsystem.

Thanks,
Li Qiang

> but because min_access_size=2 it sends down a write at offset 2097151
> and size 2.
>
> Paolo
On 15/11/2018 04:14, Li Qiang wrote:
> Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 11:44 PM:
>
>> On 14/11/2018 02:38, Li Qiang wrote:
>>> Paolo Bonzini <pbonzini@redhat.com> wrote on Wed, Nov 14, 2018 at 2:27 AM:
>>>
>>>> On 13/11/2018 11:17, Kevin Wolf wrote:
>>>>> On 13.11.2018 at 02:45, Li Qiang wrote:
>>>>>> Ping... what's the status of this patch?
>>>>>>
>>>>>> I see Kevin's new PR doesn't contain this patch.
>>>>>
>>>>> Oh, I thought you said that you wanted to fix this at a higher level so
>>>>> that the problem is caught before even getting into nvme code? If you
>>>>> don't, I can apply the patch for my next pull request.
>>>>
>>>> As far as I know, the bug doesn't exist. Li Qiang, if you have a
>>>> reproducer, please send it.
>>>
>>> Hello Paolo,
>>> Though I've sent the debug information and ASAN output in a mail to
>>> secalert@redhat.com, I'm glad to provide them here.
>>> This is for the read path; I think the write path is the same, but as
>>> the PoC runs in userspace, mmap can only map the exact size of the MMIO
>>> region, so we can only write within that area. Using a kernel module,
>>> we could write outside the MMIO region, I think.
>>> The nvme device's parameter should be set as 'cmb_size_mb=2', and the
>>> PCI address may differ on your system.
>>
>> Ok, thanks. I've created a reproducer using qtest (though I have to run
>> now and cannot post it properly).
>>
>> The patch for the fix is simply:
>
> So, do you send this or me?

Me, together with the test.

Paolo
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index fc7dacb..d097add 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1175,6 +1175,10 @@ static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
     unsigned size)
 {
     NvmeCtrl *n = (NvmeCtrl *)opaque;
+
+    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
+        return;
+    }
     memcpy(&n->cmbuf[addr], &data, size);
 }

@@ -1183,6 +1187,9 @@ static uint64_t nvme_cmb_read(void *opaque, hwaddr addr, unsigned size)
     uint64_t val;
     NvmeCtrl *n = (NvmeCtrl *)opaque;

+    if (addr + size > NVME_CMBSZ_GETSIZE(n->bar.cmbsz)) {
+        return 0;
+    }
     memcpy(&val, &n->cmbuf[addr], size);
     return val;
 }