Message ID | 20190919210138.13535-1-billodo@redhat.com (mailing list archive) |
---|---|
State | Accepted, archived |
Series | [v4] xfs: assure zeroed memory buffers for certain kmem allocations |
On Thu, Sep 19, 2019 at 04:01:38PM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
>
> Use existing infrastructure (xfs_buf_allocate_memory) to obtain
> the already zeroed buffer from kernel memory.
>
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
>
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
> v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
> v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
> v2: zeroed buffer not required for XBF_READ case. Correct placement
>     and rename the XBF_ZERO flag.
>
>  fs/xfs/xfs_buf.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..5d0a68de5fa6 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
> +
> +	/*
> +	 * assure zeroed buffer for non-read cases.
> +	 */
> +	if (!(flags & XBF_READ)) {
> +		kmflag_mask |= KM_ZERO;
> +		gfp_mask |= __GFP_ZERO;
> +	}

Jeez it feels grody to have to set two different flags variables just to
get __GFP_ZERO consistently, but I'll run it through xfstests overnight.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

>
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
> @@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int	align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask,
> +					   KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> --
> 2.21.0
>
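[A note on why two flag variables are needed at all: xfs_buf_allocate_memory()
has two allocation paths. The sub-page path goes through kmem_alloc_io(),
which takes XFS's own KM_* flags, while the multi-page fallback passes
gfp_mask straight to alloc_page(). The KM_* flags are translated to gfp
flags by a helper in fs/xfs/kmem.h; the sketch below paraphrases that helper
as of roughly this era (the v5.4 timeframe) to show how KM_ZERO becomes
__GFP_ZERO. It is a paraphrase, not a verbatim quote of the tree.]

/*
 * Paraphrased sketch of kmem_flags_convert() from fs/xfs/kmem.h circa
 * v5.4 (not verbatim). KM_ZERO on the kmem_alloc_io() path raises the
 * same __GFP_ZERO bit that the alloc_page() path needs set in gfp_mask
 * by hand -- hence the two flag variables Darrick is grumbling about.
 */
static inline gfp_t
kmem_flags_convert(xfs_km_flags_t flags)
{
	gfp_t	lflags;

	if (flags & KM_NOSLEEP) {
		lflags = GFP_ATOMIC | __GFP_NOWARN;
	} else {
		lflags = GFP_KERNEL | __GFP_NOWARN;
		if (flags & KM_NOFS)
			lflags &= ~__GFP_FS;
	}

	if (flags & KM_ZERO)		/* KM_ZERO maps to __GFP_ZERO */
		lflags |= __GFP_ZERO;

	return lflags;
}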
On Thu, Sep 19, 2019 at 04:01:38PM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
>
> Use existing infrastructure (xfs_buf_allocate_memory) to obtain
> the already zeroed buffer from kernel memory.
>
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
>
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
> v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
> v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
> v2: zeroed buffer not required for XBF_READ case. Correct placement
>     and rename the XBF_ZERO flag.
>
>  fs/xfs/xfs_buf.c | 12 +++++++++++-

/me wakes up and wonders, what about the other kmem_alloc_io callers in
xfs? I think we need to slip a KM_ZERO into the allocation call when we
allocate the log buffers, right? What about log recovery?

--D

>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..5d0a68de5fa6 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
> +
> +	/*
> +	 * assure zeroed buffer for non-read cases.
> +	 */
> +	if (!(flags & XBF_READ)) {
> +		kmflag_mask |= KM_ZERO;
> +		gfp_mask |= __GFP_ZERO;
> +	}
>
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
> @@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int	align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask,
> +					   KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> --
> 2.21.0
>
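[To make the question concrete: at the time, xlog_alloc_log() allocated the
in-core log buffer (iclog) data with kmem_alloc_io(), and log recovery
allocated its staging buffers the same way. A hypothetical sketch of the
change Darrick is suggesting follows; the exact call site, context, and
existing flags in fs/xfs/xfs_log.c are assumptions for illustration, not
quotes from the tree.]

--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ hypothetical hunk in xlog_alloc_log() @@
-		iclog->ic_data = kmem_alloc_io(log->l_iclog_size, align_mask,
-					       KM_MAYFAIL);
+		iclog->ic_data = kmem_alloc_io(log->l_iclog_size, align_mask,
+					       KM_MAYFAIL | KM_ZERO);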
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..5d0a68de5fa6 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
+
+	/*
+	 * assure zeroed buffer for non-read cases.
+	 */
+	if (!(flags & XBF_READ)) {
+		kmflag_mask |= KM_ZERO;
+		gfp_mask |= __GFP_ZERO;
+	}
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
@@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int	align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask,
+					   KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
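[Reading the hunk above alongside the rest of xfs_buf_allocate_memory()
explains the dual flags: only the size < PAGE_SIZE branch uses
kmem_alloc_io() and hence kmflag_mask; larger buffers fall through to a
loop that calls alloc_page(gfp_mask) directly, so __GFP_ZERO has to be set
in gfp_mask separately. The sketch below paraphrases that fallback loop
from roughly this era; names and surrounding code are from memory, not
verbatim, and the error handling is elided.]

/*
 * Paraphrased sketch (not verbatim) of the multi-page fallback in
 * xfs_buf_allocate_memory() circa v5.4. Pages come straight from
 * alloc_page(gfp_mask), bypassing the KM_* flags entirely, which is
 * why the patch must also OR __GFP_ZERO into gfp_mask.
 */
use_alloc_page:
	for (i = 0; i < bp->b_page_count; i++) {
		struct page	*page;
retry:
		page = alloc_page(gfp_mask);	/* zeroed iff __GFP_ZERO set */
		if (unlikely(page == NULL)) {
			/* low memory: wait for writeback to make progress */
			congestion_wait(BLK_RW_ASYNC, HZ / 50);
			goto retry;
		}
		bp->b_pages[i] = page;
	}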
Guarantee zeroed memory buffers for cases where potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Use existing infrastructure (xfs_buf_allocate_memory) to obtain
the already zeroed buffer from kernel memory.

This solution avoids the performance issue that would occur if a
wholesale change to replace kmem_alloc with kmem_zalloc was done.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
v2: zeroed buffer not required for XBF_READ case. Correct placement
    and rename the XBF_ZERO flag.

 fs/xfs/xfs_buf.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
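[On the performance point: kmem_zalloc() in the XFS of this era was simply
kmem_alloc() with KM_ZERO OR'd in, so replacing kmem_alloc() wholesale
would have paid the zeroing cost on every buffer, including read buffers
whose contents are immediately overwritten by the I/O anyway. A paraphrased
sketch of the helper, not a verbatim quote of fs/xfs/kmem.h:]

/*
 * Paraphrased from fs/xfs/kmem.h circa v5.4 (not verbatim):
 * kmem_zalloc() is kmem_alloc() plus KM_ZERO. A blanket switch to it
 * would zero even XBF_READ buffers, wasting cycles on memory that the
 * subsequent read I/O overwrites -- the overhead this patch's
 * XBF_READ test avoids.
 */
static inline void *
kmem_zalloc(size_t size, xfs_km_flags_t flags)
{
	return kmem_alloc(size, flags | KM_ZERO);
}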