Message ID | 20191204113419.2298-1-sjpark@amazon.com (mailing list archive)
---|---
Series | xen/blkback: Aggressively shrink page pools if a memory pressure is detected
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of SeongJae Park
> Sent: 04 December 2019 11:34
> To: konrad.wilk@oracle.com; roger.pau@citrix.com; axboe@kernel.dk
> Cc: sj38.park@gmail.com; xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org; Park, Seongjae <sjpark@amazon.com>
> Subject: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page pools if a memory pressure is detected
>
> Each `blkif` has a free pages pool for the grant mapping. The size of
> the pool starts from zero and be increased on demand while processing
> the I/O requests. If current I/O requests handling is finished or 100
> milliseconds has passed since last I/O requests handling, it checks and
> shrinks the pool to not exceed the size limit, `max_buffer_pages`.
>
> Therefore, `blkfront` running guests can cause a memory pressure in the
> `blkback` running guest by attaching arbitrarily large number of block
> devices and inducing I/O.

OOI... How do guests unilaterally cause the attachment of arbitrary numbers of PV devices?

  Paul
On 04.12.19 12:52, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of SeongJae Park
>> Sent: 04 December 2019 11:34
>> To: konrad.wilk@oracle.com; roger.pau@citrix.com; axboe@kernel.dk
>> Cc: sj38.park@gmail.com; xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org; Park, Seongjae <sjpark@amazon.com>
>> Subject: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page pools if a memory pressure is detected
>>
>> Each `blkif` has a free pages pool for the grant mapping. The size of
>> the pool starts from zero and be increased on demand while processing
>> the I/O requests. If current I/O requests handling is finished or 100
>> milliseconds has passed since last I/O requests handling, it checks and
>> shrinks the pool to not exceed the size limit, `max_buffer_pages`.
>>
>> Therefore, `blkfront` running guests can cause a memory pressure in the
>> `blkback` running guest by attaching arbitrarily large number of block
>> devices and inducing I/O.
> OOI... How do guests unilaterally cause the attachment of arbitrary numbers of PV devices?

Good point. Many systems have a limit on the maximum number of devices, so an 'arbitrarily' large number of devices cannot be attached; there is an upper bound.

System administrators might be able to avoid the memory pressure problem by setting that limit low enough, or by giving more memory to the 'blkback' running guest. However, many systems also tend to set the limit high enough to satisfy guests, and to give only minimal memory to the 'blkback' running guest for cost efficiency. I believe this patchset can be helpful for such situations.

Anyway, using the term 'arbitrarily' was obviously my fault. I will update the description in the next version of the patchset.


Thanks,
SeongJae Park

>
> Paul
>
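For readers unfamiliar with the pool behaviour the cover letter describes, the sketch below models it in plain userspace C: the pool grows as pages are returned after I/O, and is trimmed back to a `max_buffer_pages`-style limit once request handling finishes or 100 milliseconds have passed since the last handling. This is a minimal illustrative sketch; all identifiers (`page_pool`, `pool_put_page`, `shrink_free_pagepool`, `MAX_BUFFER_PAGES`, `LRU_INTERVAL_MS`) are assumptions for illustration, not the actual xen-blkback symbols.

```c
/*
 * Minimal userspace model of the behaviour described in the cover letter.
 * All identifiers are illustrative, not the real xen-blkback symbols.
 */
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

#define MAX_BUFFER_PAGES 1024   /* counterpart of the max_buffer_pages limit */
#define LRU_INTERVAL_MS   100   /* shrink check interval from the cover letter */

struct page_pool {
	unsigned int free_pages;  /* pages currently cached for grant mappings */
	long last_check_ms;       /* time of the last shrink check */
};

static long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/* A completed request returns its pages, so the pool grows on demand. */
static void pool_put_page(struct page_pool *pool)
{
	pool->free_pages++;
}

/* Trim the pool so it never keeps more than the configured limit. */
static void shrink_free_pagepool(struct page_pool *pool, unsigned int limit)
{
	if (pool->free_pages > limit)
		pool->free_pages = limit;  /* excess pages go back to the system */
}

/*
 * Called when a batch of requests has been handled, or when more than
 * LRU_INTERVAL_MS has passed since the last handling -- the two shrink
 * triggers the cover letter mentions.
 */
static void maybe_shrink(struct page_pool *pool, bool io_done)
{
	long now = now_ms();

	if (io_done || now - pool->last_check_ms >= LRU_INTERVAL_MS) {
		shrink_free_pagepool(pool, MAX_BUFFER_PAGES);
		pool->last_check_ms = now;
	}
}

int main(void)
{
	struct page_pool pool = { .free_pages = 0, .last_check_ms = now_ms() };
	unsigned int i;

	for (i = 0; i < 4096; i++)   /* a burst of I/O inflates the pool */
		pool_put_page(&pool);
	printf("pool after burst:  %u pages\n", pool.free_pages);

	maybe_shrink(&pool, true);   /* request handling finished: trim */
	printf("pool after shrink: %u pages\n", pool.free_pages);
	return 0;
}
```

The point the thread turns on is visible in this model: the limit only caps the free pages cached per `blkif`, so a guest that attaches many devices and drives I/O on all of them still multiplies the total memory held by the backend, which is the memory pressure scenario the cover letter raises.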