
[v3] xfs: assure zeroed memory buffers for certain kmem allocations

Message ID 20190919150154.30302-1-billodo@redhat.com (mailing list archive)
State Superseded, archived
Series [v3] xfs: assure zeroed memory buffers for certain kmem allocations

Commit Message

Bill O'Donnell Sept. 19, 2019, 3:01 p.m. UTC
Guarantee zeroed memory buffers for cases where a potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Use existing infrastructure (xfs_buf_allocate_memory) to obtain
the already zeroed buffer from kernel memory.

This solution avoids the performance penalty that a wholesale
replacement of kmem_alloc with kmem_zalloc would incur.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
v2: zeroed buffer not required for XBF_READ case. Correct placement
    and rename the XBF_ZERO flag.


fs/xfs/xfs_buf.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig Sept. 19, 2019, 5:03 p.m. UTC | #1
On Thu, Sep 19, 2019 at 10:01:54AM -0500, Bill O'Donnell wrote:
> +	uint			kmflag_mask = 0;
> +
> +	if (!(flags & XBF_READ))
> +		kmflag_mask |= KM_ZERO;

> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
>  		struct page	*page;
>  		uint		retries = 0;
>  retry:
> -		page = alloc_page(gfp_mask);
> +		page = alloc_page(gfp_mask | kmflag_mask);

alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
about this.
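
A minimal sketch of the direction this comment points in (illustrative
only, not the patch as eventually applied): keep the zeroing request in
each allocator's own flag namespace, so kmem_alloc_io() sees a KM_ flag
and alloc_page() sees a GFP_ flag. The gfpflag_mask name is made up for
this example.

	uint		kmflag_mask = 0;
	gfp_t		gfpflag_mask = 0;

	/* zero the buffer unless its contents will come from a read */
	if (!(flags & XBF_READ)) {
		kmflag_mask |= KM_ZERO;		/* kmem_alloc_io() path */
		gfpflag_mask |= __GFP_ZERO;	/* alloc_page() path */
	}
	...
	bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);
	...
	page = alloc_page(gfp_mask | gfpflag_mask);
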
Bill O'Donnell Sept. 19, 2019, 5:20 p.m. UTC | #2
On Thu, Sep 19, 2019 at 10:03:53AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 19, 2019 at 10:01:54AM -0500, Bill O'Donnell wrote:
> > +	uint			kmflag_mask = 0;
> > +
> > +	if (!(flags & XBF_READ))
> > +		kmflag_mask |= KM_ZERO;
> 
> > @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> >  		struct page	*page;
> >  		uint		retries = 0;
> >  retry:
> > -		page = alloc_page(gfp_mask);
> > +		page = alloc_page(gfp_mask | kmflag_mask);
> 
> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> about this.

I wondered if the KM flag needed conversion to GFP, but saw no warning.
Thanks-
Bill
Christoph Hellwig Sept. 19, 2019, 5:38 p.m. UTC | #3
On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
> > > @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> > >  		struct page	*page;
> > >  		uint		retries = 0;
> > >  retry:
> > > -		page = alloc_page(gfp_mask);
> > > +		page = alloc_page(gfp_mask | kmflag_mask);
> > 
> > alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> > about this.
> 
> I wondered if the KM flag needed conversion to GFP, but saw no warning.

I'd be tempted to just do a manual memset after either kind of
allocation.
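
A sketch of the memset() alternative mentioned here (illustrative only,
not the applied patch): allocate as before and zero explicitly whenever
the buffer will not be filled by a subsequent read.

	bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
	if (bp->b_addr && !(flags & XBF_READ))
		memset(bp->b_addr, 0, size);
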
Eric Sandeen Sept. 20, 2019, 2:59 p.m. UTC | #4
On 9/19/19 12:38 PM, Christoph Hellwig wrote:
> On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
>>>> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
>>>>  		struct page	*page;
>>>>  		uint		retries = 0;
>>>>  retry:
>>>> -		page = alloc_page(gfp_mask);
>>>> +		page = alloc_page(gfp_mask | kmflag_mask);
>>>
>>> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
>>> about this.
>>
>> I wondered if the KM flag needed conversion to GFP, but saw no warning.
> 
> I'd be tempted to just do a manual memset after either kind of
> allocation.

At some point I think Dave had suggested that at least when allocating pages,
using the flag would be more efficient?

-Eric
Dave Chinner Sept. 24, 2019, 4:13 a.m. UTC | #5
On Fri, Sep 20, 2019 at 09:59:41AM -0500, Eric Sandeen wrote:
> On 9/19/19 12:38 PM, Christoph Hellwig wrote:
> > On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
> >>>> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> >>>>  		struct page	*page;
> >>>>  		uint		retries = 0;
> >>>>  retry:
> >>>> -		page = alloc_page(gfp_mask);
> >>>> +		page = alloc_page(gfp_mask | kmflag_mask);
> >>>
> >>> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> >>> about this.
> >>
> >> I wondered if the KM flag needed conversion to GFP, but saw no warning.
> > 
> > I'd be tempted to just do a manual memset after either kind of
> > allocation.
> 
> At some point I think Dave had suggested that at least when allocating pages,
> using the flag would be more efficient?

With some configurations pages come from the free lists pre-zeroed,
and so don't need zeroing to initialise them (e.g. when memory
poisoning is turned on, or pages are being zeroed on free). Hence if
you use __GFP_ZERO it will only zero the page if the one obtained from
the freelist isn't already zeroed. The __GFP_ZERO call will also use
the most efficient method of zeroing the page for the platform via
clear_page() rather than memset()....

/me shrugs and doesn't really care either way....

-Dave.
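
Roughly the behaviour described above, as pseudocode rather than actual
mm/ code (page_known_zeroed() is a hypothetical stand-in for the
allocator's internal bookkeeping): with __GFP_ZERO the allocator skips
the work when the free page is already zeroed, and otherwise zeroes it
with the arch-optimised clear_page() rather than a memset().

	/* pseudocode; page_known_zeroed() is not a real kernel helper */
	if ((gfp_mask & __GFP_ZERO) && !page_known_zeroed(page))
		clear_page(page_address(page));
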

Patch

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..6fbe63f34a68 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,6 +345,10 @@  xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
+
+	if (!(flags & XBF_READ))
+		kmflag_mask |= KM_ZERO;
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
@@ -354,7 +358,8 @@  xfs_buf_allocate_memory(
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask,
+					   KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
@@ -391,7 +396,7 @@  xfs_buf_allocate_memory(
 		struct page	*page;
 		uint		retries = 0;
 retry:
-		page = alloc_page(gfp_mask);
+		page = alloc_page(gfp_mask | kmflag_mask);
 		if (unlikely(page == NULL)) {
 			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;