| Message ID | 1515636190-24061-14-git-send-email-keescook@chromium.org (mailing list archive) |
| --- | --- |
| State | New, archived |

On Wed, Jan 10, 2018 at 06:02:45PM -0800, Kees Cook wrote:
> The ext4 symlink pathnames, stored in struct ext4_inode_info.i_data
> and therefore contained in the ext4_inode_cache slab cache, need
> to be copied to/from userspace.

Symlink operations to/from userspace aren't common or in the hot path,
and when they are in i_data, limited to at most 60 bytes. Is it worth
it to copy through a bounce buffer so as to disallow any usercopies
into struct ext4_inode_info?

- Ted
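
For context on the 60-byte figure: an ext4 fast symlink stores its target
directly in the inode's block-pointer array, so the body can never exceed
15 * sizeof(__le32) == 60 bytes. A trimmed sketch of the field in question
(from fs/ext4/ext4.h of this era; all other members elided):

```c
/* Trimmed sketch of the in-core ext4 inode; all other fields elided. */
struct ext4_inode_info {
	__le32	i_data[15];	/* block pointers, or a fast symlink body */
	/* ... dozens of further fields ... */
};
```
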
On Thu, Jan 11, 2018 at 9:01 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Wed, Jan 10, 2018 at 06:02:45PM -0800, Kees Cook wrote:
>> The ext4 symlink pathnames, stored in struct ext4_inode_info.i_data
>> and therefore contained in the ext4_inode_cache slab cache, need
>> to be copied to/from userspace.
>
> Symlink operations to/from userspace aren't common or in the hot path,
> and when they are in i_data, limited to at most 60 bytes. Is it worth
> it to copy through a bounce buffer so as to disallow any usercopies
> into struct ext4_inode_info?

If this is the only place it's exposed, yeah, that might be a way to
avoid the per-FS patches. This would, AIUI, require changing
readlink_copy() to include a bounce buffer, and that would require an
allocation. I kind of prefer just leaving the per-FS whitelists, as
then there's no global overhead added.

-Kees
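
For reference, readlink_copy() in fs/namei.c looked approximately like the
following at the time; the copy_to_user() reads straight out of whatever
buffer get_link() handed back, which for an ext4 fast symlink is ei->i_data
inside the slab object. Because the generic path can hand back targets far
longer than 60 bytes, a bounce buffer here cannot simply live on the stack,
hence the allocation Kees mentions:

```c
/* Approximate readlink_copy() of this era (fs/namei.c). */
int readlink_copy(char __user *buffer, int buflen, const char *link)
{
	int len = PTR_ERR(link);

	if (IS_ERR(link))
		goto out;

	len = strlen(link);
	if (len > (unsigned int) buflen)
		len = buflen;
	if (copy_to_user(buffer, link, len))	/* source may be slab memory */
		len = -EFAULT;
out:
	return len;
}
```
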
On Thu, Jan 11, 2018 at 03:05:14PM -0800, Kees Cook wrote:
> On Thu, Jan 11, 2018 at 9:01 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> > On Wed, Jan 10, 2018 at 06:02:45PM -0800, Kees Cook wrote:
> >> The ext4 symlink pathnames, stored in struct ext4_inode_info.i_data
> >> and therefore contained in the ext4_inode_cache slab cache, need
> >> to be copied to/from userspace.
> >
> > Symlink operations to/from userspace aren't common or in the hot path,
> > and when they are in i_data, limited to at most 60 bytes. Is it worth
> > it to copy through a bounce buffer so as to disallow any usercopies
> > into struct ext4_inode_info?
>
> If this is the only place it's exposed, yeah, that might be a way to
> avoid the per-FS patches. This would, AIUI, require changing
> readlink_copy() to include a bounce buffer, and that would require an
> allocation. I kind of prefer just leaving the per-FS whitelists, as
> then there's no global overhead added.

I think Ted was proposing having a per-FS patch that would, say, copy
up to 60 bytes to the stack, then memcpy it into the ext4_inode_info.
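
A minimal sketch of that per-FS bounce, shown here on the readlink side
(the symlink-creation side would mirror it in the other direction). The
helper name and placement are hypothetical, not from the posted series:

```c
/* Hypothetical sketch: bounce the fast-symlink body through the stack so
 * copy_to_user() never reads from the ext4_inode_cache slab, and no
 * usercopy whitelist for ext4_inode_info would be needed.
 */
static int ext4_fast_readlink_bounced(struct inode *inode,
				      char __user *buffer, int buflen)
{
	struct ext4_inode_info *ei = EXT4_I(inode);
	char tmp[60];	/* fast symlink bodies are at most 60 bytes */
	int len = strnlen((const char *)ei->i_data, sizeof(tmp));

	memcpy(tmp, ei->i_data, len);		/* kernel-to-kernel copy */
	if (len > buflen)
		len = buflen;
	if (copy_to_user(buffer, tmp, len))	/* usercopy from stack only */
		return -EFAULT;
	return len;
}
```
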
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 7c46693a14d7..57a8fa451d3e 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1036,11 +1036,13 @@ static void init_once(void *foo)
 
 static int __init init_inodecache(void)
 {
-	ext4_inode_cachep = kmem_cache_create("ext4_inode_cache",
-					sizeof(struct ext4_inode_info),
-					0, (SLAB_RECLAIM_ACCOUNT|
-					SLAB_MEM_SPREAD|SLAB_ACCOUNT),
-					init_once);
+	ext4_inode_cachep = kmem_cache_create_usercopy("ext4_inode_cache",
+					sizeof(struct ext4_inode_info), 0,
+					(SLAB_RECLAIM_ACCOUNT|SLAB_MEM_SPREAD|
+					SLAB_ACCOUNT),
+					offsetof(struct ext4_inode_info, i_data),
+					sizeof_field(struct ext4_inode_info, i_data),
+					init_once);
 	if (ext4_inode_cachep == NULL)
 		return -ENOMEM;
 	return 0;
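
For readers unfamiliar with the API this series introduces:
kmem_cache_create_usercopy() takes two arguments beyond kmem_cache_create(),
bounding the only region of each object that hardened usercopy will allow
across the user/kernel boundary; the patch above whitelists exactly i_data.
Its declaration is approximately:

```c
/* Approximate declaration (include/linux/slab.h, v4.16+): useroffset and
 * usersize delimit the sole byte range of each object that may be copied
 * to or from userspace; copies touching other bytes trip hardened usercopy.
 */
struct kmem_cache *kmem_cache_create_usercopy(const char *name,
			unsigned int size, unsigned int align,
			slab_flags_t flags,
			unsigned int useroffset, unsigned int usersize,
			void (*ctor)(void *));
```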