| Message ID | 20220910023839.3964539-1-zhangshida@kylinos.cn |
|---|---|
| State | Deferred, archived |
| Series | xfs: eliminate the potential overflow risk in xfs_da_grow_inode_int |
On Sat, Sep 10, 2022 at 10:38:39AM +0800, Stephen Zhang wrote:
> The problem lies in the for-loop of xfs_da_grow_inode_int:
> ======
> for(){
>	nmap = min(XFS_BMAP_MAX_NMAP, count);
>	...
>	error = xfs_bmapi_write(...,&mapp[mapi], &nmap);//(..., $1, $2)
>	...
>	mapi += nmap;
> }
> =====
> where $1 stands for the start address of the array,
> while $2 is used to indicate the size of the array.
>
> The array $1 will advanced by $nmap in each iteration after
> the allocation of extents.
> But the size $2 still remains constant, which is determined by
> min(XFS_BMAP_MAX_NMAP, count).
>
> Hence there is a risk of overflow when the remained space in
> the array is less than $2.
> So variablize the array size $2 correspondingly in each iteration
> to eliminate the risk.

Except that xfs_bmapi_write() won't overrun the array....

> Signed-off-by: Shida Zhang <zhangshida@kylinos.cn>
> ---
>  fs/xfs/libxfs/xfs_da_btree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
> index e7201dc68f43..3ef8c04624cc 100644
> --- a/fs/xfs/libxfs/xfs_da_btree.c
> +++ b/fs/xfs/libxfs/xfs_da_btree.c
> @@ -2192,7 +2192,7 @@ xfs_da_grow_inode_int(
>  	 */
>  	mapp = kmem_alloc(sizeof(*mapp) * count, 0);
>  	for (b = *bno, mapi = 0; b < *bno + count; ) {
> -		nmap = min(XFS_BMAP_MAX_NMAP, count);
> +		nmap = min(XFS_BMAP_MAX_NMAP, count - mapi);
>  		c = (int)(*bno + count - b);
>  		error = xfs_bmapi_write(tp, dp, b, c,
>  				xfs_bmapi_aflag(w)|XFS_BMAPI_METADATA,

... because we've allocated a mapp array large enough for one extent map
per filesystem block. The line:

	c = (int)(*bno + count - b);

calculates the maximum length of the extent remaining to map, and hence
the maximum number of blocks we might need to map.

We're guaranteed that the array is large enough for all single block
maps, and xfs_bmapi_write() will never overrun the array because it
doesn't map extents beyond the length requested.

IOWs, there isn't an array overrun bug here even though we don't trim
the requested number of maps on the last call.

So the question remains: Why do we need *two* calculations that
calculate the remaining number of blocks to map here? i.e. surely all
we need is this:

-		nmap = min(XFS_BMAP_MAX_NMAP, count);
 		c = (int)(*bno + count - b);
+		nmap = min(XFS_BMAP_MAX_NMAP, c);

Cheers,

Dave.
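To make the invariant Dave describes concrete, here is a minimal, self-contained userspace C sketch. The types, the `fake_bmapi_write()` stand-in, and the numbers are all hypothetical simplifications, not the real XFS code; the only property carried over is the one Dave relies on, namely that the mapping function never records more extent maps than the number of blocks it was asked to map.

```c
#include <assert.h>
#include <stdio.h>

#define XFS_BMAP_MAX_NMAP	4	/* the kernel defines this as 4 */

/* Hypothetical, simplified stand-in for struct xfs_bmbt_irec. */
struct map {
	long	startoff;
	long	blockcount;
};

/*
 * Hypothetical stand-in for xfs_bmapi_write(): maps up to 'len' blocks
 * starting at 'bno' into at most *nmap mappings.  Like the real function,
 * it never returns more mappings than blocks it actually mapped, because
 * every mapping covers at least one block.  Here the "allocator" hands
 * back single-block extents -- the worst case for map count.
 */
static int
fake_bmapi_write(long bno, long len, struct map *mapp, int *nmap)
{
	int	want = *nmap;
	int	got = 0;

	while (got < want && len > 0) {
		mapp[got].startoff = bno + got;
		mapp[got].blockcount = 1;
		len--;
		got++;
	}
	*nmap = got;
	return 0;
}

int
main(void)
{
	long		want_bno = 100;	/* first block to map */
	int		count = 7;	/* blocks to map == slots in mapp[] */
	struct map	mapp[7];
	long		b;
	int		mapi, nmap, c;

	for (b = want_bno, mapi = 0; b < want_bno + count; ) {
		/* As in the unpatched code: nmap is NOT trimmed by mapi. */
		nmap = XFS_BMAP_MAX_NMAP < count ? XFS_BMAP_MAX_NMAP : count;
		c = (int)(want_bno + count - b);	/* blocks left to map */

		fake_bmapi_write(b, c, &mapp[mapi], &nmap);

		/* No more maps than blocks left, since each map >= 1 block. */
		assert(nmap <= c);
		mapi += nmap;
		assert(mapi <= count);		/* mapp[] is never overrun */
		b = mapp[mapi - 1].startoff + mapp[mapi - 1].blockcount;
	}
	printf("recorded %d maps in an array of %d slots\n", mapi, count);
	return 0;
}
```

Both asserts hold on every pass because the stand-in, like the real xfs_bmapi_write(), limits the number of returned maps to the number of blocks requested, and the loop only ever has `count - mapi` or fewer blocks left to request.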
diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
index e7201dc68f43..3ef8c04624cc 100644
--- a/fs/xfs/libxfs/xfs_da_btree.c
+++ b/fs/xfs/libxfs/xfs_da_btree.c
@@ -2192,7 +2192,7 @@ xfs_da_grow_inode_int(
 	 */
 	mapp = kmem_alloc(sizeof(*mapp) * count, 0);
 	for (b = *bno, mapi = 0; b < *bno + count; ) {
-		nmap = min(XFS_BMAP_MAX_NMAP, count);
+		nmap = min(XFS_BMAP_MAX_NMAP, count - mapi);
 		c = (int)(*bno + count - b);
 		error = xfs_bmapi_write(tp, dp, b, c,
 				xfs_bmapi_aflag(w)|XFS_BMAPI_METADATA,
The problem lies in the for-loop of xfs_da_grow_inode_int:

======
for(){
	nmap = min(XFS_BMAP_MAX_NMAP, count);
	...
	error = xfs_bmapi_write(..., &mapp[mapi], &nmap); // (..., $1, $2)
	...
	mapi += nmap;
}
=====

where $1 stands for the start address of the array, while $2 is used to
indicate the size of the array.

The array $1 will be advanced by $nmap in each iteration after the
allocation of extents, but the size $2 remains constant, determined by
min(XFS_BMAP_MAX_NMAP, count).

Hence there is a risk of overflowing the array when the remaining space
in it is less than $2. So vary the array size $2 accordingly in each
iteration to eliminate the risk.

Signed-off-by: Shida Zhang <zhangshida@kylinos.cn>
---
 fs/xfs/libxfs/xfs_da_btree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
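As a footnote on the arithmetic behind both the patch and Dave's suggested simplification, here is a hedged, hypothetical worked example (the numbers are invented for illustration and are not taken from the thread) showing why the remaining block count `c` never exceeds the remaining array space `count - mapi`:

```c
#include <stdio.h>

int main(void)
{
	/* Hypothetical numbers, not taken from the thread. */
	int	count = 7;	/* blocks to map == slots in mapp[] */
	long	bno = 100;	/* first block to map */

	/*
	 * Suppose the first xfs_bmapi_write() call returned 4 single-block
	 * extents: the loop has recorded 4 maps and advanced past 4 blocks.
	 */
	int	mapi = 4;
	long	b = bno + 4;

	int	c = (int)(bno + count - b);	/* blocks left to map */

	/*
	 * Every recorded map covers at least one block, so b - bno >= mapi,
	 * and therefore c <= count - mapi on every pass.  Trimming nmap by
	 * c (Dave's suggestion) is thus at least as tight a bound as
	 * trimming it by count - mapi, and reuses a value the loop already
	 * computes.
	 */
	printf("c = %d, count - mapi = %d\n", c, count - mapi);
	return 0;
}
```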