As the title suggests, mdadm keeps marking one of my drives as "Removed" (according to mdadm --detail), and I'm hoping for suggestions as to why that might happen.
I wanted to fsck the drives, but I got the following error:
$ fsck /dev/sda1
fsck from util-linux 2.20.1
fsck: fsck.linux_raid_member: not found
fsck: error 2 while executing fsck.linux_raid_member for /dev/sda1
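From what I can tell, that error just means /dev/sda1 is a raw RAID member with no filesystem of its own, so presumably the check has to run against the assembled array device instead; something like this (assuming the array is /dev/md1, as in the mdstat output below, and that it's unmounted first):

$ umount /dev/md1
$ fsck /dev/md1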
I've since learned that an internal write-intent bitmap would let me --re-add the third drive without the full resync process/time, though I'm assuming the third disk needs to be added back (and the array healthy) before the bitmap is of any use. Any other suggestions on how to avoid a costly resync would be appreciated. This RAID is used for media serving, so it's a high-read, low-write application.
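For reference, my understanding is that the bitmap is enabled on the array itself, roughly like this (again assuming /dev/md1, and possibly only after the current rebuild finishes), after which a briefly-dropped member can be put back with --re-add so that only the blocks changed in the meantime get resynced:

$ mdadm --grow --bitmap=internal /dev/md1
$ mdadm /dev/md1 --re-add /dev/sda1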
Update: At the request of MadHatter, here's the output from /proc/mdstat (the RAID is in the process of rebuilding).
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[3] sda1[2] sdb1[1]
3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=====>...............] recovery = 25.2% (493990636/1953511936) finish=1893.9min speed=12843K/sec
unused devices: <none>