Message ID: 20151228140440.GA13374@1wt.eu (mailing list archive)
State: New, archived
On 2015/12/28 23:04, Willy Tarreau wrote:
>
> On not-so-small systems, it is possible for a single process to cause an
> OOM condition by filling large pipes with data that are never read. A
> typical process filling 4000 pipes with 1 MB of data will use 4 GB of
> memory. On small systems it may be tricky to set the pipe max size to
> prevent this from happening.
>
> This patch makes it possible to enforce a per-user limit above which
> new pipes will be limited to a single page, effectively limiting them
> to 4 kB each. This has the effect of protecting the system against
> memory abuse without hurting other users, and still allowing pipes to
> work correctly though with less data at once.
>
> The limit is controlled by the new sysctl user-max-pipe-pages, and may
> be disabled by setting it to zero. The default limit allows the default
> number of FDs per process (1024) to create pipes of the default size
> (64kB), thus reaching a limit of 64MB before starting to create only
> smaller pipes. With a process enforcing the FD limit to 4096, this
> results in 64MB + 3072 * 4kB = 76 MB.

"1024 * 64kB + 3072 * 4kB = 76MB" might be easier to understand.

But the description of the upper limit is not clear to me. Is it "With a
*process* enforcing the FD limit to 4096", not "With a *user* enforcing the
FD limit to 4096"? Then a stronger upper limit would be expected, because
the actual upper limit is nearly
"1024 * 64kB + ("max user processes" * "open files" - 1024) * 4kB"
(e.g. "max user processes" = 4096 and "open files" = 1024 would result in
16GB, which may still be too much).

>
> Reported-by: socketpair@gmail.com
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Willy Tarreau <w@1wt.eu>

Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Mitigates: CVE-2013-4312 (Linux 2.0+)

> +/* Maximum allocatable pages per user, matches default values */
> +unsigned long user_max_pipe_pages = PIPE_DEF_BUFFERS * INR_OPEN_CUR;
> +EXPORT_SYMBOL(user_max_pipe_pages);

I think both fs/pipe.o and kernel/sysctl.o are built-in modules.
Do we need to export this variable?

> @@ -1066,7 +1094,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
>  		if (!nr_pages)
>  			goto out;
>
> -		if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) {
> +		if (!capable(CAP_SYS_RESOURCE) &&
> +		    (size > pipe_max_size || too_many_pipe_buffers(pipe->user))) {
>  			ret = -EPERM;
>  			goto out;
>  		}

Maybe !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN) is better for
the too_many_pipe_buffers(pipe->user) case, because the reason for the
limit resembles too_many_unix_fds()?

Maybe -ENOMEM is better for the too_many_pipe_buffers(pipe->user) case?
On Sun, Jan 10, 2016 at 06:00:05PM +0900, Tetsuo Handa wrote:
> On 2015/12/28 23:04, Willy Tarreau wrote:
> >
> > On not-so-small systems, it is possible for a single process to cause an
> > OOM condition by filling large pipes with data that are never read. A
> > typical process filling 4000 pipes with 1 MB of data will use 4 GB of
> > memory. On small systems it may be tricky to set the pipe max size to
> > prevent this from happening.
> >
> > This patch makes it possible to enforce a per-user limit above which
> > new pipes will be limited to a single page, effectively limiting them
> > to 4 kB each. This has the effect of protecting the system against
> > memory abuse without hurting other users, and still allowing pipes to
> > work correctly though with less data at once.
> >
> > The limit is controlled by the new sysctl user-max-pipe-pages, and may
> > be disabled by setting it to zero. The default limit allows the default
> > number of FDs per process (1024) to create pipes of the default size
> > (64kB), thus reaching a limit of 64MB before starting to create only
> > smaller pipes. With a process enforcing the FD limit to 4096, this
> > results in 64MB + 3072 * 4kB = 76 MB.
>
> "1024 * 64kB + 3072 * 4kB = 76MB" might be easier to understand.
>
> But the description of the upper limit is not clear to me. Is it "With a
> *process* enforcing the FD limit to 4096", not "With a *user* enforcing
> the FD limit to 4096"?

Yes, and the reported amount of memory is for the first process of this
user.

> Then a stronger upper limit would be expected, because the actual upper
> limit is nearly
> "1024 * 64kB + ("max user processes" * "open files" - 1024) * 4kB"
> (e.g. "max user processes" = 4096 and "open files" = 1024 would result in
> 16GB, which may still be too much).

Actually it's not hard to limit the per-user number of processes on a
machine, and it's generally done on most (all?) massively shared user
environments. Trying to enforce stricter limits may end up randomly
breaking software, which is why I preferred to stick to the single-page
pipe solution.

The real issue is that by default, even trying to play with aggressive
process limits and per-process FD limits, it's very hard to maintain low
memory usage with pipes. Now it becomes possible. In a shared environment,
limiting to 256 processes and 1024 FDs per process, you end up with "only"
1 GB of RAM, which is more reasonable on a large shared system.

Ideally we'd like to be able to only allocate the pipes on the fly when
the limits are lowered. But that requires changing a lot more things in
the system to handle a queue of pending allocation requests and such
things. While it may make sense for the long term, it's not acceptable as
a fix.

> >
> > Reported-by: socketpair@gmail.com
> > Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> > Signed-off-by: Willy Tarreau <w@1wt.eu>
>
> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Mitigates: CVE-2013-4312 (Linux 2.0+)

OK, will add this.

> > +/* Maximum allocatable pages per user, matches default values */
> > +unsigned long user_max_pipe_pages = PIPE_DEF_BUFFERS * INR_OPEN_CUR;
> > +EXPORT_SYMBOL(user_max_pipe_pages);
>
> I think both fs/pipe.o and kernel/sysctl.o are built-in modules.
> Do we need to export this variable?

Good question, I don't think so, indeed.
> > @@ -1066,7 +1094,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
> >  		if (!nr_pages)
> >  			goto out;
> >
> > -		if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) {
> > +		if (!capable(CAP_SYS_RESOURCE) &&
> > +		    (size > pipe_max_size || too_many_pipe_buffers(pipe->user))) {
> >  			ret = -EPERM;
> >  			goto out;
> >  		}
>
> Maybe !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN) is better for
> the too_many_pipe_buffers(pipe->user) case, because the reason for the
> limit resembles too_many_unix_fds()?

That makes sense, I'll do this.

> Maybe -ENOMEM is better for the too_many_pipe_buffers(pipe->user) case?

I have hesitated on this one. EPERM makes it clear that the error may go
away with more privileges, while ENOMEM would not give this hint. But I'm
not strongly attached to it.

Thanks!
Willy
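For illustration, the change agreed on above would presumably turn the
quoted pipe_fcntl() test into something like the following. This is only a
sketch of the direction discussed in this thread, not code from the posted
patch, and the errno choice is still open:

	if (!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN) &&
	    (size > pipe_max_size || too_many_pipe_buffers(pipe->user))) {
		ret = -EPERM;	/* or -ENOMEM for the accounting case, as debated above */
		goto out;
	}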
Willy Tarreau wrote:
> In a shared environment, limiting to 256 processes and 1024 FDs per
> process, you end up with "only" 1 GB of RAM, which is more reasonable on
> a large shared system.

OK. I see no problems if we change the example to something like

  With a user enforcing the FD limit to 262144, this results in
  1024 * 64kB + (262144 - 1024) * 4kB = 1084MB.
diff --git a/Documentation/sysctl/fs.txt b/Documentation/sysctl/fs.txt
index 88152f2..a9751b1 100644
--- a/Documentation/sysctl/fs.txt
+++ b/Documentation/sysctl/fs.txt
@@ -37,6 +37,7 @@ Currently, these files are in /proc/sys/fs:
 - suid_dumpable
 - super-max
 - super-nr
+- user-max-pipe-pages
 
 ==============================================================
 
@@ -234,6 +235,15 @@ allows you to.
 
 ==============================================================
 
+user-max-pipe-pages:
+
+Maximum total number of pages a non-privileged user may allocate for pipes.
+When this limit is reached, new pipes will be limited to a single page in size
+in order to limit total memory usage. The default value allows to allocate up
+to 1024 pipes at their default size. When set to 0, no limit is enforced.
+
+==============================================================
+
 aio-nr & aio-max-nr:
 
 aio-nr shows the current system-wide number of asynchronous io
diff --git a/fs/pipe.c b/fs/pipe.c
index 42cf8dd..cc4874b 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -38,6 +38,10 @@ unsigned int pipe_max_size = 1048576;
  */
 unsigned int pipe_min_size = PAGE_SIZE;
 
+/* Maximum allocatable pages per user, matches default values */
+unsigned long user_max_pipe_pages = PIPE_DEF_BUFFERS * INR_OPEN_CUR;
+EXPORT_SYMBOL(user_max_pipe_pages);
+
 /*
  * We use a start+len construction, which provides full use of the
  * allocated memory.
@@ -583,20 +587,41 @@ pipe_fasync(int fd, struct file *filp, int on)
 	return retval;
 }
 
+static void account_pipe_buffers(struct pipe_inode_info *pipe,
+                                 unsigned long old, unsigned long new)
+{
+	atomic_long_add(new - old, &pipe->user->pipe_bufs);
+}
+
+static bool too_many_pipe_buffers(struct user_struct *user)
+{
+	return user_max_pipe_pages &&
+	       atomic_long_read(&user->pipe_bufs) >= user_max_pipe_pages;
+}
+
 struct pipe_inode_info *alloc_pipe_info(void)
 {
 	struct pipe_inode_info *pipe;
 
 	pipe = kzalloc(sizeof(struct pipe_inode_info), GFP_KERNEL);
 	if (pipe) {
-		pipe->bufs = kzalloc(sizeof(struct pipe_buffer) * PIPE_DEF_BUFFERS, GFP_KERNEL);
+		unsigned long pipe_bufs = PIPE_DEF_BUFFERS;
+		struct user_struct *user = get_current_user();
+
+		if (too_many_pipe_buffers(user))
+			pipe_bufs = 1;
+
+		pipe->bufs = kzalloc(sizeof(struct pipe_buffer) * pipe_bufs, GFP_KERNEL);
 		if (pipe->bufs) {
 			init_waitqueue_head(&pipe->wait);
 			pipe->r_counter = pipe->w_counter = 1;
-			pipe->buffers = PIPE_DEF_BUFFERS;
+			pipe->buffers = pipe_bufs;
+			pipe->user = user;
+			account_pipe_buffers(pipe, 0, pipe_bufs);
 			mutex_init(&pipe->mutex);
 			return pipe;
 		}
+		free_uid(user);
 		kfree(pipe);
 	}
 
@@ -607,6 +632,8 @@ void free_pipe_info(struct pipe_inode_info *pipe)
 {
 	int i;
 
+	account_pipe_buffers(pipe, pipe->buffers, 0);
+	free_uid(pipe->user);
 	for (i = 0; i < pipe->buffers; i++) {
 		struct pipe_buffer *buf = pipe->bufs + i;
 		if (buf->ops)
@@ -998,6 +1025,7 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long nr_pages)
 			memcpy(bufs + head, pipe->bufs, tail * sizeof(struct pipe_buffer));
 	}
 
+	account_pipe_buffers(pipe, pipe->buffers, nr_pages);
 	pipe->curbuf = 0;
 	kfree(pipe->bufs);
 	pipe->bufs = bufs;
@@ -1066,7 +1094,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 		if (!nr_pages)
 			goto out;
 
-		if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) {
+		if (!capable(CAP_SYS_RESOURCE) &&
+		    (size > pipe_max_size || too_many_pipe_buffers(pipe->user))) {
 			ret = -EPERM;
 			goto out;
 		}
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index eb8b8ac..a4cee7c 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -57,6 +57,7 @@ struct pipe_inode_info {
 	struct fasync_struct *fasync_readers;
 	struct fasync_struct *fasync_writers;
 	struct pipe_buffer *bufs;
+	struct user_struct *user;
 };
 
 /*
@@ -123,6 +124,7 @@ void pipe_unlock(struct pipe_inode_info *);
 void pipe_double_lock(struct pipe_inode_info *, struct pipe_inode_info *);
 
 extern unsigned int pipe_max_size, pipe_min_size;
+extern unsigned long user_max_pipe_pages;
 int pipe_proc_fn(struct ctl_table *, int, void __user *, size_t *, loff_t *);
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index fbf25f1..93643762 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -831,6 +831,7 @@ struct user_struct {
 #endif
 	unsigned long locked_shm; /* How many pages of mlocked shm ? */
 	unsigned long unix_inflight;	/* How many files in flight in unix sockets */
+	atomic_long_t pipe_bufs;	/* how many pages are allocated in pipe buffers */
 
 #ifdef CONFIG_KEYS
 	struct key *uid_keyring;	/* UID specific keyring */
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index dc6858d..a288edf 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1735,6 +1735,13 @@ static struct ctl_table fs_table[] = {
 		.proc_handler	= &pipe_proc_fn,
 		.extra1		= &pipe_min_size,
 	},
+	{
+		.procname	= "user-max-pipe-pages",
+		.data		= &user_max_pipe_pages,
+		.maxlen		= sizeof(user_max_pipe_pages),
+		.mode		= 0644,
+		.proc_handler	= proc_doulongvec_minmax,
+	},
 	{ }
 };
 
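As an illustration of the user-visible behaviour the diff introduces, here
is a minimal userspace sketch (not part of the patch). It assumes the
calling user is already over the proposed user-max-pipe-pages limit; in
that case a fresh pipe should report a one-page capacity and growing it
should be refused:

/*
 * Sketch only: probe the per-user pipe limit proposed in this thread.
 * When the user is over user-max-pipe-pages, a new pipe should report a
 * 4096-byte capacity and F_SETPIPE_SZ should fail with EPERM.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fds[2];

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}

	/* one page (4096) when over the limit, 65536 otherwise */
	printf("pipe capacity: %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));

	if (fcntl(fds[1], F_SETPIPE_SZ, 1024 * 1024) < 0)
		perror("F_SETPIPE_SZ");	/* EPERM while over the limit */
	else
		printf("grown to %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));

	return 0;
}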
On not-so-small systems, it is possible for a single process to cause an
OOM condition by filling large pipes with data that are never read. A
typical process filling 4000 pipes with 1 MB of data will use 4 GB of
memory. On small systems it may be tricky to set the pipe max size to
prevent this from happening.

This patch makes it possible to enforce a per-user limit above which new
pipes will be limited to a single page, effectively limiting them to 4 kB
each. This has the effect of protecting the system against memory abuse
without hurting other users, and still allowing pipes to work correctly
though with less data at once.

The limit is controlled by the new sysctl user-max-pipe-pages, and may be
disabled by setting it to zero. The default limit allows the default
number of FDs per process (1024) to create pipes of the default size
(64kB), thus reaching a limit of 64MB before starting to create only
smaller pipes. With a process enforcing the FD limit to 4096, this results
in 64MB + 3072 * 4kB = 76 MB.

Reported-by: socketpair@gmail.com
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
---
I don't know if it's better that the limit is not enforced by default
(easy to change), nor do I know if it's better to specify the limit in
bytes instead of pages. Some settings use pages already, so I don't think
it's a big deal.
---
 Documentation/sysctl/fs.txt | 10 ++++++++++
 fs/pipe.c                   | 35 ++++++++++++++++++++++++++++++++---
 include/linux/pipe_fs_i.h   |  2 ++
 include/linux/sched.h       |  1 +
 kernel/sysctl.c             |  7 +++++++
 5 files changed, 52 insertions(+), 3 deletions(-)
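To make the scenario in the first paragraph concrete, a rough reproducer
sketch follows. It is illustrative only: it assumes the per-process fd
limit has been raised (e.g. "ulimit -n 10000") and that fs.pipe-max-size
allows unprivileged 1 MB pipes (the default), and the counts mirror the
"4000 pipes with 1 MB" example above.

/*
 * Rough sketch: one process grows many pipes to 1 MB and fills them
 * without ever reading, pinning roughly NPIPES * PIPE_SIZE of kernel
 * memory in pipe buffers.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NPIPES		4000
#define PIPE_SIZE	(1024 * 1024)

int main(void)
{
	static char junk[65536];
	int fds[2];
	int i;

	memset(junk, 'A', sizeof(junk));

	for (i = 0; i < NPIPES; i++) {
		if (pipe2(fds, O_NONBLOCK) < 0) {
			perror("pipe2");	/* probably hit the fd limit */
			break;
		}

		if (fcntl(fds[1], F_SETPIPE_SZ, PIPE_SIZE) < 0)
			perror("F_SETPIPE_SZ");	/* capped by fs.pipe-max-size */

		/* fill the pipe until the kernel reports it is full (EAGAIN) */
		while (write(fds[1], junk, sizeof(junk)) > 0)
			;

		/* both ends stay open, so the buffered data is never freed */
	}

	printf("allocated and filled %d pipes, sleeping\n", i);
	pause();	/* keep the memory pinned in pipe buffers */
	return 0;
}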