From patchwork Tue Mar 17 19:41:40 2020
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 11443871
From: Waiman Long
To: David Howells, Jarkko Sakkinen, James Morris, "Serge E. Hallyn",
    Mimi Zohar, "David S. Miller", Jakub Kicinski
Cc: keyrings@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-security-module@vger.kernel.org, linux-integrity@vger.kernel.org,
    netdev@vger.kernel.org, linux-afs@lists.infradead.org, Sumit Garg,
    Jerry Snitselaar, Roberto Sassu, Eric Biggers,
    Chris von Recklinghausen, Waiman Long
Subject: [PATCH v4 4/4] KEYS: Avoid false positive ENOMEM error on key read
Date: Tue, 17 Mar 2020 15:41:40 -0400
Message-Id: <20200317194140.6031-5-longman@redhat.com>
In-Reply-To: <20200317194140.6031-1-longman@redhat.com>
References: <20200317194140.6031-1-longman@redhat.com>

When a kernel buffer is allocated with a user-supplied buffer length, a
false positive ENOMEM error may be returned because the user-supplied
length is simply too large, even if the system does have enough memory
to hold the actual key data. Moreover, if the buffer length is larger
than the maximum amount of memory that can be returned by kmalloc()
(2^(MAX_ORDER-1) pages), a warning message will also be printed.
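(Not part of the patch -- purely to illustrate the report above: on an
unpatched kernel, a caller that passes an over-sized buffer length to
KEYCTL_READ can hit the spurious ENOMEM even though the key itself is
tiny. The key description and payload below are arbitrary, and the
snippet assumes libkeyutils is available; build with -lkeyutils.)

/* Illustrative only -- not part of this patch. */
#include <keyutils.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t huge = 1UL << 30;	/* far larger than any real key */
	char *buf = malloc(huge);	/* userspace malloc usually succeeds */
	key_serial_t key;

	if (!buf)
		return 1;

	/* hypothetical "user" key with a small payload */
	key = add_key("user", "example", "small payload", 13,
		      KEY_SPEC_PROCESS_KEYRING);
	if (key < 0) {
		perror("add_key");
		return 1;
	}

	/*
	 * Before the patch, the kernel kmalloc()s a buffer of the
	 * caller-supplied length, so a huge length can fail with ENOMEM
	 * (and may trigger a kmalloc warning) despite the key being tiny.
	 */
	if (keyctl_read(key, buf, huge) < 0)
		perror("keyctl_read");

	free(buf);
	return 0;
}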
To reduce this possibility, we set a threshold (PAGE_SIZE) above which
the actual key length is checked first, before allocating a buffer of
the right size to hold it. The threshold is arbitrary; it is only used
to trigger a buffer length check and does not limit the actual key
length as long as there is enough memory to satisfy the memory request.
To further avoid a large buffer allocation failure due to page
fragmentation, kvmalloc() is used to allocate the buffer so that
vmapped pages can be used when there is not a large enough contiguous
set of pages available for allocation.

Signed-off-by: Waiman Long
---
 security/keys/internal.h | 12 ++++++++++++
 security/keys/keyctl.c   | 41 ++++++++++++++++++++++++++++++++--------
 2 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/security/keys/internal.h b/security/keys/internal.h
index ba3e2da14cef..6d0ca48ae9a5 100644
--- a/security/keys/internal.h
+++ b/security/keys/internal.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 
 struct iovec;
 
@@ -349,4 +351,14 @@ static inline void key_check(const struct key *key)
 
 #endif
 
+/*
+ * Helper function to clear and free a kvmalloc'ed memory object.
+ */
+static inline void __kvzfree(const void *addr, size_t len)
+{
+	if (addr) {
+		memset((void *)addr, 0, len);
+		kvfree(addr);
+	}
+}
 #endif /* _INTERNAL_H */
diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
index 81f68e434b9f..07eaa46d344c 100644
--- a/security/keys/keyctl.c
+++ b/security/keys/keyctl.c
@@ -339,7 +339,7 @@ long keyctl_update_key(key_serial_t id,
 	payload = NULL;
 	if (plen) {
 		ret = -ENOMEM;
-		payload = kmalloc(plen, GFP_KERNEL);
+		payload = kvmalloc(plen, GFP_KERNEL);
 		if (!payload)
 			goto error;
 
@@ -360,7 +360,7 @@ long keyctl_update_key(key_serial_t id,
 
 	key_ref_put(key_ref);
 error2:
-	kzfree(payload);
+	__kvzfree(payload, plen);
 error:
 	return ret;
 }
@@ -877,13 +877,24 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
 		 * transferring them to user buffer to avoid potential
 		 * deadlock involving page fault and mmap_sem.
 		 */
-		char *tmpbuf = kmalloc(buflen, GFP_KERNEL);
+		char *tmpbuf = NULL;
+		size_t tmpbuflen = buflen;
 
-		if (!tmpbuf) {
-			ret = -ENOMEM;
-			goto error2;
+		/*
+		 * To prevent memory allocation failure with an arbitrary
+		 * large user-supplied buflen, we do a key length check
+		 * before allocating a buffer of the right size to hold
+		 * key data if it exceeds a threshold (PAGE_SIZE).
+		 */
+		if (buflen <= PAGE_SIZE) {
+allocbuf:
+			tmpbuf = kvmalloc(tmpbuflen, GFP_KERNEL);
+			if (!tmpbuf) {
+				ret = -ENOMEM;
+				goto error2;
+			}
 		}
-		ret = __keyctl_read_key(key, tmpbuf, buflen);
+		ret = __keyctl_read_key(key, tmpbuf, tmpbuflen);
 
 		/*
 		 * Read methods will just return the required length
@@ -891,10 +902,24 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
 		 * enough.
 		 */
 		if ((ret > 0) && (ret <= buflen)) {
+			/*
+			 * It is possible, though unlikely, that the key
+			 * changes in between the up_read->down_read period.
+			 * If the key becomes longer, we will have to
+			 * allocate a larger buffer and redo the key read
+			 * again.
+			 */
+			if (!tmpbuf || unlikely(ret > tmpbuflen)) {
+				if (unlikely(tmpbuf))
+					__kvzfree(tmpbuf, tmpbuflen);
+				tmpbuflen = ret;
+				goto allocbuf;
+			}
+
 			if (copy_to_user(buffer, tmpbuf, ret))
 				ret = -EFAULT;
 		}
-		kzfree(tmpbuf);
+		__kvzfree(tmpbuf, tmpbuflen);
 	}
 
 error2:
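Side note (not part of the patch): userspace can avoid passing an
over-sized length in the first place by relying on the behaviour the
read methods already provide -- a read into a NULL or too-small buffer
returns the required length without copying. A minimal sketch using
libkeyutils; the helper name read_key_exact is made up for
illustration:

/* Illustrative only. Query the key size first, then read it exactly. */
#include <keyutils.h>
#include <stdlib.h>

static char *read_key_exact(key_serial_t key, long *lenp)
{
	long len = keyctl_read(key, NULL, 0);	/* returns required length */
	char *buf;

	if (len <= 0)
		return NULL;
	buf = malloc(len);
	if (!buf)
		return NULL;
	if (keyctl_read(key, buf, len) != len) {
		/* key may have changed size in between; caller could retry */
		free(buf);
		return NULL;
	}
	*lenp = len;
	return buf;
}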