| Message ID | Pine.LNX.4.64.0901212227100.3027@hs20-bc2-1.build.redhat.com (mailing list archive) |
|---|---|
| State | Accepted, archived |
| Delegated to | Alasdair Kergon |
Pushing to linux-next.

On Wed, Jan 21, 2009 at 10:32:20PM -0500, Mikulas Patocka wrote:
> Very likely this was the reason for bug
> https://bugzilla.redhat.com/show_bug.cgi?id=173153

So with this and the other fixes there've been, do we still have any
arch-dependent restrictions on chunk_size?

Or can a snapshot created on one arch be used successfully on any other
now?

Alasdair
On Fri, 30 Jan 2009, Alasdair G Kergon wrote:
> Pushing to linux-next.
>
> On Wed, Jan 21, 2009 at 10:32:20PM -0500, Mikulas Patocka wrote:
> > Very likely this was the reason for bug
> > https://bugzilla.redhat.com/show_bug.cgi?id=173153
>
> So with this and the other fixes there've been, do we still have
> any arch-dependent restrictions on chunk_size?
>
> Or can a snapshot created on one arch be used successfully on any other
> now?

The chunk size is now much less restricted. There used to be a bug with
chunk_size < page_size, but that was fixed long ago (Milan verified it on
PPC64 in the Brno lab). Userspace enforces a minimum chunk size of 4kB, and
I think it could be lowered to 1kB (although such small chunks have little
practical use --- they degrade performance because every bio is split on
chunk boundaries). And with this fix, the upper bound on the chunk size is
the amount of memory in the vmalloc arena divided by 2 (two chunks are
preallocated with vmalloc). The minimum vmalloc arena on ix86 is 128MB and
the usual vmalloc arena size is 512MB.

Mikulas

> Alasdair
> --
> agk@redhat.com

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Index: linux-2.6.29-rc1-devel/drivers/md/dm-io.c
===================================================================
--- linux-2.6.29-rc1-devel.orig/drivers/md/dm-io.c	2009-01-22 04:13:45.000000000 +0100
+++ linux-2.6.29-rc1-devel/drivers/md/dm-io.c	2009-01-22 04:14:13.000000000 +0100
@@ -292,6 +292,8 @@ static void do_region(int rw, unsigned r
 		      (PAGE_SIZE >> SECTOR_SHIFT));
 	num_bvecs = 1 + min_t(int, bio_get_nr_vecs(where->bdev),
 			      num_bvecs);
+	if (unlikely(num_bvecs > BIO_MAX_PAGES))
+		num_bvecs = BIO_MAX_PAGES;
 	bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, io->client->bios);
 	bio->bi_sector = where->sector + (where->count - remaining);
 	bio->bi_bdev = where->bdev;
dm-io calls bio_get_nr_vecs to get the maximum number of pages for a given
device, then adds 1 to it and allocates a bio of this size. The last vector
is not used for i/o; it holds information about the region this i/o belongs
to. If bio_get_nr_vecs returned the maximum biovec size, dm-io attempts to
allocate a bio with one more vector than is allowed, and the allocation
fails.

Very likely this was the reason for bug
https://bugzilla.redhat.com/show_bug.cgi?id=173153
(the bug was fixed with a userspace workaround preventing lvm from creating
snapshots with chunksize >512k; after this patch is applied, that limit can
be dropped)

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 drivers/md/dm-io.c | 2 ++
 1 file changed, 2 insertions(+)