From patchwork Wed Apr 15 22:23:12 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jann Horn <jannh@google.com>
X-Patchwork-Id: 11492085
Date: Thu, 16 Apr 2020 00:23:12 +0200
Message-Id: <20200415222312.236431-1-jannh@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.26.0.110.g2183baf09c-goog
Subject: [PATCH] vmalloc: Fix remap_vmalloc_range() bounds checks
From: Jann Horn <jannh@google.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko,
    John Fastabend, KP Singh, bpf@vger.kernel.org

remap_vmalloc_range() has had various issues with the bounds checks it
promises to perform ("This function checks that addr is a valid
vmalloc'ed area, and that it is big enough to cover the vma") over
time, e.g.:

 - not detecting pgoff<<PAGE_SHIFT overflow

 - not detecting (uaddr+usize) overflow

 - not checking whether addr and addr+(pgoff<<PAGE_SHIFT) are the same
   vmalloc allocation

 - comparing a potentially wildly out-of-bounds pointer with the end of
   the vmalloc region

In particular, since commit fc9702273e2e ("bpf: Add mmap() support for
BPF_MAP_TYPE_ARRAY"), unprivileged users can cause kernel null pointer
dereferences by calling mmap() on a BPF map with a size that is bigger
than the distance from the start of the BPF map to the end of the
address space.

This could theoretically be used as a kernel ASLR bypass, by using
whether mmap() with a given offset oopses or returns an error code to
perform a binary search over the possible address range.

To allow remap_vmalloc_range_partial() to verify that addr and
addr+(pgoff<<PAGE_SHIFT) are in the same vmalloc region, pass the
offset to remap_vmalloc_range_partial() instead of adding it to the
pointer in remap_vmalloc_range().

Fixes: 833423143c3a ("[PATCH] mm: introduce remap_vmalloc_range()")
Signed-off-by: Jann Horn <jannh@google.com>
Cc: stable@vger.kernel.org
---
I'm just sending this on the public list, since the worst-case impact
for non-root users is leaking kernel pointers to userspace. In a
context where you can reach BPF (no sandboxing), I don't think that
kernel ASLR is very effective at the moment anyway.
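Not part of the patch, but for illustration: below is a minimal
userspace sketch of the pgoff<<PAGE_SHIFT wraparound that the new
check_shl_overflow() call rejects. The addresses and the pgoff value
are made up, and shl_overflows() is only a rough local stand-in for the
kernel helper:

/* demo.c - standalone illustration, NOT kernel code */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12UL

/* Rough userspace stand-in for the kernel's check_shl_overflow():
 * returns true if (a << shift) does not fit in an unsigned long. */
static bool shl_overflows(unsigned long a, unsigned long shift,
			  unsigned long *res)
{
	if (shift >= 8 * sizeof(unsigned long)) {
		*res = 0;
		return true;
	}
	*res = a << shift;
	return a != 0 && (*res >> shift) != a;
}

int main(void)
{
	unsigned long area_start = 0xffffc90000000000UL; /* made-up vmalloc area */
	unsigned long area_size  = 0x4000;               /* 4 pages */
	unsigned long size       = 0x1000;               /* 1 page requested */
	unsigned long pgoff      = 0xfffffffffffff001UL; /* malicious offset */
	unsigned long off;

	/* Old-style arithmetic: pgoff << PAGE_SHIFT silently wraps, so
	 * kaddr lands ~16 MiB *below* the area yet still satisfies the
	 * old "kaddr + size > area end" comparison. */
	unsigned long kaddr = area_start + (pgoff << PAGE_SHIFT);
	printf("wrapped kaddr = %#lx, passes old check: %d\n",
	       kaddr, kaddr + size <= area_start + area_size);

	/* New-style: the overflow is caught before any pointer is formed. */
	if (shl_overflows(pgoff, PAGE_SHIFT, &off))
		printf("check_shl_overflow()-style check returns -EINVAL\n");
	return 0;
}

With that pgoff, the shifted offset wraps to a small negative delta, so
the computed pointer sits outside the allocation while still passing
the old pointer comparison; checking the shift (and, in the patch, the
subsequent addition) for overflow closes that hole.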
 fs/proc/vmcore.c         |  5 +++--
 include/linux/vmalloc.h  |  2 +-
 mm/vmalloc.c             | 16 +++++++++++++---
 samples/vfio-mdev/mdpy.c |  2 +-
 4 files changed, 18 insertions(+), 7 deletions(-)

base-commit: 8632e9b5645bbc2331d21d892b0d6961c1a08429

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 7dc800cce3543..c663202da8de7 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -266,7 +266,8 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
 		if (start < offset + dump->size) {
 			tsz = min(offset + (u64)dump->size - start, (u64)size);
 			buf = dump->buf + start - offset;
-			if (remap_vmalloc_range_partial(vma, dst, buf, tsz)) {
+			if (remap_vmalloc_range_partial(vma, dst, buf, 0,
+							tsz)) {
 				ret = -EFAULT;
 				goto out_unlock;
 			}
@@ -624,7 +625,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
 			tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)start, size);
 			kaddr = elfnotes_buf + start - elfcorebuf_sz - vmcoredd_orig_sz;
 			if (remap_vmalloc_range_partial(vma, vma->vm_start + len,
-							kaddr, tsz))
+							kaddr, 0, tsz))
 				goto fail;
 
 			size -= tsz;
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0507a162ccd0e..a95d3cc74d79b 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -137,7 +137,7 @@ extern void vunmap(const void *addr);
 
 extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
 				       unsigned long uaddr, void *kaddr,
-				       unsigned long size);
+				       unsigned long pgoff, unsigned long size);
 
 extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 							unsigned long pgoff);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 399f219544f74..9a8227afa0738 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -34,6 +34,7 @@
 #include <linux/llist.h>
 #include <linux/bitops.h>
 #include <linux/rbtree_augmented.h>
+#include <linux/overflow.h>
 
 #include <linux/uaccess.h>
 #include <asm/tlbflush.h>
@@ -3054,6 +3055,7 @@ long vwrite(char *buf, char *addr, unsigned long count)
  * @vma:		vma to cover
  * @uaddr:		target user address to start at
  * @kaddr:		virtual address of vmalloc kernel memory
+ * @pgoff:		offset from @kaddr to start at
  * @size:		size of map area
  *
  * Returns:	0 for success, -Exxx on failure
@@ -3066,9 +3068,15 @@ long vwrite(char *buf, char *addr, unsigned long count)
  * Similar to remap_pfn_range() (see mm/memory.c)
  */
 int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
-				void *kaddr, unsigned long size)
+				void *kaddr, unsigned long pgoff,
+				unsigned long size)
 {
 	struct vm_struct *area;
+	unsigned long off;
+	unsigned long end_index;
+
+	if (check_shl_overflow(pgoff, PAGE_SHIFT, &off))
+		return -EINVAL;
 
 	size = PAGE_ALIGN(size);
 
@@ -3082,8 +3090,10 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
 	if (!(area->flags & (VM_USERMAP | VM_DMA_COHERENT)))
 		return -EINVAL;
 
-	if (kaddr + size > area->addr + get_vm_area_size(area))
+	if (check_add_overflow(size, off, &end_index) ||
+	    end_index > get_vm_area_size(area))
 		return -EINVAL;
+	kaddr += off;
 
 	do {
 		struct page *page = vmalloc_to_page(kaddr);
@@ -3122,7 +3132,7 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 						unsigned long pgoff)
 {
 	return remap_vmalloc_range_partial(vma, vma->vm_start,
-					   addr + (pgoff << PAGE_SHIFT),
+					   addr, pgoff,
 					   vma->vm_end - vma->vm_start);
 }
 EXPORT_SYMBOL(remap_vmalloc_range);
diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
index cc86bf6566e42..9894693f3be17 100644
--- a/samples/vfio-mdev/mdpy.c
+++ b/samples/vfio-mdev/mdpy.c
@@ -418,7 +418,7 @@ static int mdpy_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
 		return -EINVAL;
 
 	return remap_vmalloc_range_partial(vma, vma->vm_start,
-					   mdev_state->memblk,
+					   mdev_state->memblk, 0,
 					   vma->vm_end - vma->vm_start);
 }
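
For context (not part of the patch): the BPF trigger described in the
commit message could look roughly like the sketch below. This is
untested and illustrative only; the mmap length is an arbitrary value
assumed to exceed the distance from a typical x86-64 vmalloc address to
the end of the address space, and it assumes a kernel that has
fc9702273e2e (BPF_F_MMAPABLE arrays):

/* repro-sketch.c - hypothetical, untested illustration */
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type    = BPF_MAP_TYPE_ARRAY;
	attr.key_size    = 4;
	attr.value_size  = 4096;
	attr.max_entries = 1;
	attr.map_flags   = BPF_F_MMAPABLE;

	int fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	if (fd < 0) {
		perror("bpf(BPF_MAP_CREATE)");
		return 1;
	}

	/* A length far larger than the map: on unfixed kernels this can
	 * reach the broken bounds check in remap_vmalloc_range_partial();
	 * on fixed kernels it simply fails with -EINVAL. */
	void *p = mmap(NULL, 1UL << 46, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		perror("mmap");
	return 0;
}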