Message ID | ebf6c7fb-fec3-6a26-544f-710ed193c154@suse.cz (mailing list archive) |
---|---|
State | New, archived |
On 07/03/2018 09:36 AM, Vlastimil Babka wrote:
> On 07/01/2018 08:31 PM, Thomas Lindroth wrote:
>> While looking around in /proc on my v4.14.52 system I noticed that
>> all processes got a lot of "Locked" memory in /proc/*/smaps. A lot
>> more memory than a regular user can usually lock with mlock().
>>
>> commit 493b0e9d945fa9dfe96be93ae41b4ca4b6fdb317 (v4.14-rc1) seems
>> to have changed the behavior of "Locked".

Oops, I forgot, thanks for the nice report :)

Vlastimil
On Tue, Jul 3, 2018 at 12:36 AM, Vlastimil Babka <vbabka@suse.cz> wrote:
> +CC
>
> On 07/01/2018 08:31 PM, Thomas Lindroth wrote:
>> While looking around in /proc on my v4.14.52 system I noticed that
>> all processes got a lot of "Locked" memory in /proc/*/smaps. A lot
>> more memory than a regular user can usually lock with mlock().
>>
>> commit 493b0e9d945fa9dfe96be93ae41b4ca4b6fdb317 (v4.14-rc1) seems
>> to have changed the behavior of "Locked".

Thanks for fixing that. I submitted a patch [1] for this bug and some
others a while ago, but it didn't make it into the tree because it
wasn't split up correctly or something, and I had to do other work.

[1] https://marc.info/?l=linux-mm&m=151927723128134&w=2
On 07/03/2018 06:20 PM, Daniel Colascione wrote:
> On Tue, Jul 3, 2018 at 12:36 AM, Vlastimil Babka <vbabka@suse.cz> wrote:
>> +CC
>>
>> On 07/01/2018 08:31 PM, Thomas Lindroth wrote:
>>> While looking around in /proc on my v4.14.52 system I noticed that
>>> all processes got a lot of "Locked" memory in /proc/*/smaps. A lot
>>> more memory than a regular user can usually lock with mlock().
>>>
>>> commit 493b0e9d945fa9dfe96be93ae41b4ca4b6fdb317 (v4.14-rc1) seems
>>> to have changed the behavior of "Locked".
>
> Thanks for fixing that. I submitted a patch [1] for this bug and some
> others a while ago, but it didn't make it into the tree because it
> wasn't split up correctly or something, and I had to do other work.

Hmm, I see. I pondered the patch and wondered whether the scenarios it
fixes are really possible for smaps_rollup. Did you observe them in
practice? Namely:

- seq_file starting and stopping multiple times on a single open file
  description
- seq_file issuing multiple show calls for the same iterator value

I don't think that can happen when all positions but the last one just
return SEQ_SKIP.

Anyway, I think using the seq_file iterator API for smaps_rollup is
unnecessary. Semantically the file shows only one "element", namely the
set of rollup values for all vmas. Letting seq_file do the iteration
over the vmas only brings complications.

> [1] https://marc.info/?l=linux-mm&m=151927723128134&w=2
>
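To illustrate the single-"element" alternative described above, here is a
minimal sketch: one show() callback walks all VMAs itself, so seq_file
never iterates. This is only an illustration under assumptions, not the
patch that was eventually merged; gather_stats() and the open-time mm
setup are hypothetical placeholders, and error handling is omitted.

#include <linux/mm.h>
#include <linux/seq_file.h>

/*
 * Minimal sketch, assuming it sits inside fs/proc/task_mmu.c where
 * struct mem_size_stats and PSS_SHIFT are visible.  gather_stats() is
 * a hypothetical stand-in for the existing per-VMA accumulation, and
 * obtaining/pinning the mm at open time is elided.
 */
static int show_smaps_rollup(struct seq_file *m, void *v)
{
	struct mm_struct *mm = m->private;	/* set up at open time */
	struct vm_area_struct *vma;
	struct mem_size_stats mss = {};

	down_read(&mm->mmap_sem);
	for (vma = mm->mmap; vma; vma = vma->vm_next)
		gather_stats(vma, &mss);	/* hypothetical helper */
	up_read(&mm->mmap_sem);

	seq_printf(m, "Pss:    %8llu kB\n", mss.pss >> PSS_SHIFT >> 10);
	seq_printf(m, "Locked: %8llu kB\n", mss.pss_locked >> PSS_SHIFT >> 10);
	return 0;
}

static int smaps_rollup_open(struct inode *inode, struct file *file)
{
	struct mm_struct *mm = NULL;	/* looked up and pinned here in real code */

	/* single_open(): a seq_file with exactly one record, no iterator */
	return single_open(file, show_smaps_rollup, mm);
}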
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e9679016271f..dfd73a4616ce 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -831,7 +831,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
 		SEQ_PUT_DEC(" kB\nSwap: ", mss->swap);
 		SEQ_PUT_DEC(" kB\nSwapPss: ",
 					mss->swap_pss >> PSS_SHIFT);
-		SEQ_PUT_DEC(" kB\nLocked: ", mss->pss >> PSS_SHIFT);
+		SEQ_PUT_DEC(" kB\nLocked: ",
+					mss->pss_locked >> PSS_SHIFT);
 		seq_puts(m, " kB\n");
 	}
 	if (!rollup_mode) {
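For context on the ">> PSS_SHIFT" in the hunk above: Pss-style counters
are kept in fixed point with PSS_SHIFT fractional bits so that a page
shared by several processes can contribute a proportional fraction of
PAGE_SIZE, and the value is shifted back down only when printed. Below
is a simplified sketch of that accounting, an assumption-laden
illustration rather than the kernel's exact code; in the real file the
per-page work is done in smaps_account() and the locked share is
tracked for VMAs with VM_LOCKED set.

#include <linux/types.h>

#define PSS_SHIFT 12	/* same value task_mmu.c uses */

/*
 * Illustrative only: each mapped page contributes
 * PAGE_SIZE << PSS_SHIFT divided by its map count, so a page shared by
 * N processes adds roughly PAGE_SIZE/N; the extra PSS_SHIFT bits keep
 * the division from losing precision.
 */
static void account_page(u64 *pss, u64 *pss_locked,
			 unsigned long page_size, int mapcount,
			 bool vma_locked)
{
	u64 share = ((u64)page_size << PSS_SHIFT) / mapcount;

	*pss += share;
	if (vma_locked)
		*pss_locked += share;	/* what "Locked:" should report */
}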