From patchwork Wed Jun 8 22:23:22 2011
X-Patchwork-Submitter: Jonathan Brassow
X-Patchwork-Id: 862792
X-Patchwork-Delegate: agk@redhat.com
From: Jonathan Brassow
To: dm-devel@redhat.com
Organization: Red Hat, Inc
Date: Wed, 08 Jun 2011 17:23:22 -0500
Message-ID: <1307571802.18638.15.camel@f14.redhat.com>
Subject: [dm-devel] [PATCH 6 of 7] DM RAID: add RAID1 support

Add support to access the MD RAID1 personality through dm-raid.

Signed-off-by: Jonathan Brassow

---
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

Index: linux-2.6/drivers/md/dm-raid.c
===================================================================
--- linux-2.6.orig/drivers/md/dm-raid.c
+++ linux-2.6/drivers/md/dm-raid.c
@@ -8,6 +8,7 @@
 #include <linux/slab.h>
 
 #include "md.h"
+#include "raid1.h"
 #include "raid5.h"
 #include "dm.h"
 #include "bitmap.h"
@@ -71,6 +72,7 @@ static struct raid_type {
 	const unsigned level;		/* RAID level. */
 	const unsigned algorithm;	/* RAID algorithm. */
 } raid_types[] = {
+	{"raid1",    "RAID1 (mirroring)",		0, 2, 1, 0 /* NONE */},
 	{"raid4",    "RAID4 (dedicated parity disk)",	1, 2, 5, ALGORITHM_PARITY_0},
 	{"raid5_la", "RAID5 (left asymmetric)",		1, 2, 5, ALGORITHM_LEFT_ASYMMETRIC},
 	{"raid5_ra", "RAID5 (right asymmetric)",	1, 2, 5, ALGORITHM_RIGHT_ASYMMETRIC},
@@ -104,7 +106,8 @@ static struct raid_set *context_alloc(st
 	}
 
 	sectors_per_dev = ti->len;
-	if (sector_div(sectors_per_dev, (raid_devs - raid_type->parity_devs))) {
+	if ((raid_type->level > 1) &&
+	    sector_div(sectors_per_dev, (raid_devs - raid_type->parity_devs))) {
 		ti->error = "Target length not divisible by number of data devices";
 		return ERR_PTR(-EINVAL);
 	}
@@ -325,13 +328,16 @@ static int validate_region_size(struct r
 
 /*
  * Possible arguments are...
- *	RAID456:
  *	<chunk_size> [optional_args]
  *
- * Optional args:
- *	[[no]sync]			Force or prevent recovery of the entire array
+ * Argument definitions
+ *    <chunk_size>			The number of sectors per disk that
+ *					will form the "stripe"
+ *    [[no]sync]			Force or prevent recovery of the
+ *					entire array
  *	[rebuild <idx>]			Rebuild the drive indicated by the index
- *	[daemon_sleep <ms>]		Time between bitmap daemon work to clear bits
+ *	[daemon_sleep <ms>]		Time between bitmap daemon work to
+ *					clear bits
  *	[min_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
  *	[max_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
  *	[write_mostly <idx>]		Indicate a write mostly drive via index
@@ -348,13 +354,24 @@ static int parse_raid_params(struct raid
 
 	/*
 	 * First, parse the in-order required arguments
+	 * "chunk_sectors" is the only argument of this type
 	 */
-	if ((strict_strtoul(argv[0], 10, &value) < 0) ||
-	    !is_power_of_2(value) || (value < 8)) {
+	if ((strict_strtoul(argv[0], 10, &value) < 0)) {
 		rs->ti->error = "Bad chunk size";
 		return -EINVAL;
+	} else if (rs->raid_type->level == 1) {
+		if (value != 0)
+			DMERR("Ignoring chunk size parameter for RAID 1");
+		value = 0;
+	} else if (!is_power_of_2(value)) {
+		rs->ti->error = "Chunk size must be a power of 2";
+		return -EINVAL;
+	} else if (value < 8) {
+		rs->ti->error = "Chunk size value is too small";
+		return -EINVAL;
 	}
+
 	rs->md.new_chunk_sectors = rs->md.chunk_sectors = value;
 	argv++;
 	num_raid_params--;
 
@@ -504,6 +521,11 @@ static int parse_raid_params(struct raid
 	else
 		rs->ti->split_io = region_size;
 
+	if (rs->md.chunk_sectors)
+		rs->ti->split_io = rs->md.chunk_sectors;
+	else
+		rs->ti->split_io = region_size;
+
 	/* Assume there are no metadata devices until the drives are parsed */
 	rs->md.persistent = 0;
 	rs->md.external = 1;
@@ -522,6 +544,9 @@ static int raid_is_congested(struct dm_t
 {
 	struct raid_set *rs = container_of(cb, struct raid_set, callbacks);
 
+	if (rs->raid_type->level == 1)
+		return md_raid1_congested(&rs->md, bits);
+
 	return md_raid5_congested(&rs->md, bits);
 }
 
@@ -946,6 +971,7 @@ static int raid_ctr(struct dm_target *ti
 	rs->callbacks.congested_fn = raid_is_congested;
 	dm_table_add_target_callbacks(ti->table, &rs->callbacks);
 
+	mddev_suspend(&rs->md);
 	return 0;
 
 bad:
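
As a quick way to see the chunk-size policy this patch introduces without
building a kernel, here is a minimal userspace sketch of the checks that
parse_raid_params() now performs. It is not part of the patch;
validate_chunk_sectors() and the sample values below are illustrative only.
RAID1 accepts (and warns about) any chunk size but forces it to 0, while the
striped levels still require a power of two of at least 8 sectors.

#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's is_power_of_2() helper. */
static bool is_power_of_2(unsigned long v)
{
	return v && !(v & (v - 1));
}

/*
 * Sketch of the chunk-size handling added to parse_raid_params():
 * level 1 coerces the value to 0 (RAID1 has no stripes); every other
 * level must supply a power of two that is at least 8 sectors.
 * Returns 0 on success, -1 if the chunk size is rejected.
 */
static int validate_chunk_sectors(unsigned level, unsigned long value,
				  unsigned long *chunk_sectors)
{
	if (level == 1) {
		if (value != 0)
			fprintf(stderr, "Ignoring chunk size parameter for RAID 1\n");
		*chunk_sectors = 0;
		return 0;
	}

	if (!is_power_of_2(value) || value < 8)
		return -1;	/* rejected: not a power of two >= 8 */

	*chunk_sectors = value;
	return 0;
}

int main(void)
{
	unsigned long chunk;

	/* RAID1: a non-zero request is warned about and dropped to 0. */
	validate_chunk_sectors(1, 64, &chunk);
	printf("raid1: chunk_sectors = %lu\n", chunk);

	/* RAID5: 64 sectors (32 KiB) is a valid power-of-two chunk. */
	if (!validate_chunk_sectors(5, 64, &chunk))
		printf("raid5: chunk_sectors = %lu\n", chunk);

	/* RAID5 with a bad value: 6 is neither a power of two nor >= 8. */
	if (validate_chunk_sectors(5, 6, &chunk))
		printf("raid5: rejected chunk size 6\n");

	return 0;
}

With chunk_sectors forced to 0 for RAID1, the split_io assignment in the
hunk above falls back to region_size, as intended.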