From patchwork Thu Jan 21 08:29:18 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 8079291
From: Michal Hocko
To: Mauro Carvalho Chehab, Hans Verkuil
Cc: linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Hocko
Subject: [PATCH] [media] zoran: do not use kmalloc for memory mapped to userspace
Date: Thu, 21 Jan 2016 09:29:18 +0100
Message-Id: <1453364958-29983-1-git-send-email-mhocko@kernel.org>
X-Mailing-List: linux-media@vger.kernel.org

From: Michal Hocko

mmapping kmalloced memory is dangerous for two reasons. First, the
memory is not guaranteed to be page aligned; secondly, and more
importantly, the allocated size _has_ to be in page units, otherwise
we grant access to unrelated memory. This has security implications,
of course.

zoran_mmap calls remap_pfn_range on fbuffer_phys, which is allocated
by kmalloc; that alone is worrying. Moreover, the buffer size
(buffer_size) is calculated by mmap_mode_raw, map_mode_jpg resp.
zoran_v4l2_calc_bufsize, which caps the resulting size to jpg_bufsize,
which is 512B, so we expose a full slab page with unrelated content
via mmap.

Fix the issue by using __get_free_pages/free_pages instead of
kmalloc/kfree, and also use __GFP_ZERO to make sure the buffer is
zeroed out before it is exported, to prevent information leaks.
Signed-off-by: Michal Hocko
---
Hi,
I am sending this offlist for review because this has security
implications and I am not sure how you handle such issues. I do not
own the HW and I haven't tested this, so please be careful and give
it a deep review.

The issue has been pointed out by Sebastian Frias (CCed), who was
asking about a similar pattern used in an out-of-tree driver [1]. I
do not have any active exploit for the issue, nor am I sure whether
it can be exploited in real life. Small-sized slab caches (<=512)
are used quite heavily, though, so there is a theoretical chance of
having something interesting in the same page.

[1] http://lkml.kernel.org/r/5667128B.3080704@sigmadesigns.com

 drivers/media/pci/zoran/zoran_driver.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/media/pci/zoran/zoran_driver.c b/drivers/media/pci/zoran/zoran_driver.c
index 80caa70c6360..e71af08c4972 100644
--- a/drivers/media/pci/zoran/zoran_driver.c
+++ b/drivers/media/pci/zoran/zoran_driver.c
@@ -227,8 +227,8 @@ static int v4l_fbuffer_alloc(struct zoran_fh *fh)
 			ZR_DEVNAME(zr), __func__, i);

 		//udelay(20);
-		mem = kmalloc(fh->buffers.buffer_size,
-			      GFP_KERNEL | __GFP_NOWARN);
+		mem = (unsigned char *)__get_free_pages(GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN,
+				get_order(fh->buffers.buffer_size));
 		if (!mem) {
 			dprintk(1,
 				KERN_ERR
@@ -272,7 +272,8 @@ static void v4l_fbuffer_free(struct zoran_fh *fh)
 		for (off = 0; off < fh->buffers.buffer_size; off += PAGE_SIZE)
 			ClearPageReserved(virt_to_page(mem + off));
-		kfree(fh->buffers.buffer[i].v4l.fbuffer);
+		free_pages((unsigned long)fh->buffers.buffer[i].v4l.fbuffer,
+			   get_order(fh->buffers.buffer_size));
 		fh->buffers.buffer[i].v4l.fbuffer = NULL;
 	}
@@ -335,7 +336,8 @@ static int jpg_fbuffer_alloc(struct zoran_fh *fh)
 		fh->buffers.buffer[i].jpg.frag_tab_bus = virt_to_bus(mem);

 		if (fh->buffers.need_contiguous) {
-			mem = kmalloc(fh->buffers.buffer_size, GFP_KERNEL);
+			mem = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
+					get_order(fh->buffers.buffer_size));
 			if (mem == NULL) {
 				dprintk(1,
 					KERN_ERR
@@ -407,7 +409,8 @@ static void jpg_fbuffer_free(struct zoran_fh *fh)
 			mem = bus_to_virt(le32_to_cpu(frag_tab));
 			for (off = 0; off < fh->buffers.buffer_size; off += PAGE_SIZE)
 				ClearPageReserved(virt_to_page(mem + off));
-			kfree(mem);
+			free_pages((unsigned long)mem,
+				   get_order(fh->buffers.buffer_size));
 			buffer->jpg.frag_tab[0] = 0;
 			buffer->jpg.frag_tab[1] = 0;
 		}