
[00/10] Hardening page _refcount

Message ID 20211208203544.2297121-1-pasha.tatashin@soleen.com (mailing list archive)

Message

Pasha Tatashin Dec. 8, 2021, 8:35 p.m. UTC
Changelog:
v1:
- sync with the latest linux-next
RFCv2:
- use the "fetch" variant (which returns the old value) instead of the
  "return" variant (which returns the new value) of atomic instructions
- allow negative values, as we are using all 32 bits of _refcount.


It is hard to root cause _refcount problems, because they usually
manifest after the damage has occurred.  Yet, they can lead to
catastrophic failures such as memory corruption. There were a number
of refcount-related issues discovered recently [1], [2], [3].

Improve debuggability by adding more checks that ensure that
page->_refcount never turns negative (i.e. a double free does not
happen, no free after freeze, etc.).

- Check for overflow and underflow directly in the functions that
  modify _refcount (see the sketch below)
- Remove set_page_count(), so we do not unconditionally overwrite
  _refcount with an unrestrained value
- Trace return values in all functions that modify _refcount
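
For illustration, the new check pattern looks roughly like this (a
simplified sketch of the idea, not necessarily the exact patch code);
the "fetch" variant returns the old value, so both the old and the new
value can be validated right after the modification:

static inline void page_ref_add(struct page *page, int nr)
{
	int old_val = atomic_fetch_add(nr, &page->_refcount);
	int new_val = old_val + nr;

	/* For a positive nr, an unsigned wraparound indicates a bug. */
	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, nr);
}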

Applies against next-20211208.

Previous versions:
RFCv2: https://lore.kernel.org/all/20211117012059.141450-1-pasha.tatashin@soleen.com
RFCv1: https://lore.kernel.org/all/20211026173822.502506-1-pasha.tatashin@soleen.com

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

Pasha Tatashin (10):
  mm: page_ref_add_unless() does not trace 'u' argument
  mm: add overflow and underflow checks for page->_refcount
  mm: avoid using set_page_count() in set_page_refcounted()
  mm: remove set_page_count() from page_frag_alloc_align
  mm: avoid using set_page_count() when pages are freed into allocator
  mm: rename init_page_count() -> page_ref_init()
  mm: remove set_page_count()
  mm: simplify page_ref_* functions
  mm: do not use atomic_set_release in page_ref_unfreeze()
  mm: use atomic_cmpxchg_acquire in page_ref_freeze()

 arch/m68k/mm/motorola.c         |   2 +-
 include/linux/mm.h              |   2 +-
 include/linux/page_ref.h        | 159 +++++++++++++++-----------------
 include/trace/events/page_ref.h |  99 +++++++++++++++-----
 mm/debug_page_ref.c             |  30 ++----
 mm/internal.h                   |   6 +-
 mm/page_alloc.c                 |  19 ++--
 7 files changed, 180 insertions(+), 137 deletions(-)

Comments

Matthew Wilcox Dec. 8, 2021, 9:05 p.m. UTC | #1
On Wed, Dec 08, 2021 at 08:35:34PM +0000, Pasha Tatashin wrote:
> It is hard to root cause _refcount problems, because they usually
> manifest after the damage has occurred.  Yet, they can lead to
> catastrophic failures such as memory corruption. There were a number
> of refcount-related issues discovered recently [1], [2], [3].
>
> Improve debuggability by adding more checks that ensure that
> page->_refcount never turns negative (i.e. a double free does not
> happen, no free after freeze, etc.).
>
> - Check for overflow and underflow directly in the functions that
>   modify _refcount
> - Remove set_page_count(), so we do not unconditionally overwrite
>   _refcount with an unrestrained value
> - Trace return values in all functions that modify _refcount

You're doing a lot more atomic instructions with these patches.  Have you
done any performance measurements with these patches applied and debug
disabled?  I'm really not convinced it's worth closing
one-instruction-wide races of this kind when they are "shouldn't ever
happen" situations.  If the debugging will catch the problem in 99.99%
of cases and miss 0.01% without using atomic instructions, that seems
like a better set of tradeoffs than catching 100% of problems by using
the atomic instructions.
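
For illustration, such a non-atomic check could look like the
hypothetical sketch below (the function name is invented for this
example): do the plain atomic op, then re-read and check the counter.
A concurrent update can slip in between the two steps, so a small
fraction of bugs would be missed or misattributed, but no fetch-variant
instruction is needed:

static inline void page_ref_add_checked(struct page *page, int nr)
{
	atomic_add(nr, &page->_refcount);
	/* Racy: another CPU may have changed the counter since the add. */
	VM_BUG_ON_PAGE(atomic_read(&page->_refcount) < 0, page);
}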
Pasha Tatashin Dec. 9, 2021, 1:23 a.m. UTC | #2
On Wed, Dec 8, 2021 at 4:05 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Dec 08, 2021 at 08:35:34PM +0000, Pasha Tatashin wrote:
> > It is hard to root cause _refcount problems, because they usually
> > manifest after the damage has occurred.  Yet, they can lead to
> > catastrophic failures such as memory corruption. There were a number
> > of refcount-related issues discovered recently [1], [2], [3].
> >
> > Improve debuggability by adding more checks that ensure that
> > page->_refcount never turns negative (i.e. a double free does not
> > happen, no free after freeze, etc.).
> >
> > - Check for overflow and underflow directly in the functions that
> >   modify _refcount
> > - Remove set_page_count(), so we do not unconditionally overwrite
> >   _refcount with an unrestrained value
> > - Trace return values in all functions that modify _refcount
>

Hi Matthew,

Thank you for looking at this series.

> You're doing a lot more atomic instructions with these patches.

This is not exactly so. There are no *more* atomic instructions. There
are, however, different atomic instructions:

For example:  atomic_add() becomes atomic_fetch_add()

On x86 it is:

atomic_add:
    lock add %eax,(%rsi)

atomic_fetch_add:
    lock xadd %eax,(%rsi)

On ARM64, I believe the same CAS instruction is used for both.
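
In C terms, the only semantic difference between the two is whether the
old value is returned (an illustrative fragment, using the kernel's
atomic API):

	atomic_t v = ATOMIC_INIT(1);

	atomic_add(1, &v);                 /* old value is discarded */
	int old = atomic_fetch_add(1, &v); /* returns the value before the add */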

> Have you
> done any performance measurements with these patches applied and debug
> disabled?

Yes, I have done some performance tests exactly as you described, with
CONFIG_DEBUG_VM disabled and these patches applied.
I tried hackbench, unixbench, and a few more benchmarks; I did not
see any performance difference.

>  I'm really not convinced it's worth closing
> one-instruction-wide races of this kind when they are "shouldn't ever
> happen" situations.  If the debugging will catch the problem in 99.99%
> of cases and miss 0.01% without using atomic instructions, that seems
> like a better set of tradeoffs than catching 100% of problems by using
> the atomic instructions.

I think we should relax the precise catching of bugs only if there is
indeed a measurable performance impact. The problem is that if there
is a _refcount bug, the security consequences are dire, as it may lead
to leaking memory from one process to another.

Thanks,
Pasha