I'm trying to make a RAID1 array out of two disks whose sizes differ a lot (EC2 provides IOPS based on disk size, so I want more IOPS, but I don't care about losing some free space).
However, when I set the size manually, I get an "is smaller than given size" error. Yet if I first set the size lower, I can then grow the array to the size I originally wanted:
[~]# size=$( grep 'xvd[mb]' /proc/partitions | awk '{print $3}' | sort -n | head -n1 )
[~]# size=$(( size - 8192 ))
[~]# echo $size
15988224
[~]# yes | mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 --size=$size --bitmap=internal --write-behind=1024 --assume-clean /dev/xvdb --write-mostly /dev/xvdm
mdadm: /dev/xvdb is smaller than given size. 15988152K < 15988224K + metadata
mdadm: /dev/xvdm appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed May 6 07:35:37 2015
mdadm: create aborted
[~]# size=$(( size - 18192 ))
[~]# yes | mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 --size=$size --bitmap=internal --write-behind=1024 --assume-clean /dev/xvdb --write-mostly /dev/xvdm
mdadm: /dev/xvdb appears to contain an ext2fs file system
       size=15996416K  mtime=Thu Jan 1 00:00:00 1970
mdadm: /dev/xvdb appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed May 6 07:35:37 2015
mdadm: /dev/xvdm appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed May 6 07:35:37 2015
mdadm: largest drive (/dev/xvdm) exceeds size (15970032K) by more than 1%
Continue creating array? mdadm: array /dev/md0 started.
[~]# mdadm --grow /dev/md0 --size=max
mdadm: component size of /dev/md0 has been set to 15988224K
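For what it's worth, metadata 1.2 reserves space at the start of each member device for the superblock and the internal bitmap, and the amount actually reserved can be inspected on a member after creation. I'd guess the Data Offset and Avail Dev Size fields account for the 15988152K figure, but I haven't verified that:

mdadm --examine /dev/xvdb | grep -E 'Offset|Dev Size'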
So why doesn't it allow me to set the size to 15988224 from the beginning, but only after growing?
missing device, e.g. I create a raid from a 20 GB disk with data + a missing device, then add a 16 GB disk and have the data resync to that disk. Without --size it would not let me add the second disk – Fluffy May 06 '15 at 08:50
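To illustrate the workflow from the comment above, a rough sketch with hypothetical device names and sizes (a 20 GB /dev/xvdm for the data, a 16 GB /dev/xvdb added later; $size is capped to fit the smaller disk):

# create a degraded raid1 with the data disk and a placeholder member
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 --size=$size /dev/xvdm missing
# ... populate /dev/md0 with the data ...
# then attach the smaller disk; this only works because --size capped the array
mdadm --manage /dev/md0 --add /dev/xvdb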