8

Short: Can I share a disk between two separate zpools?

Long: I want a raidz2 array but I can't afford all the drives at once. My possible solution is to start with two drives: I would create a mirror using only half the capacity of each disk. When I can afford more drives, I would create a new zpool with a single raidz2 vdev using the new drives plus the 50% unused space on the initial disks. I then plan to copy the data from one zpool to the other, destroy the zpool containing the mirror, and expand the zpool with the raidz2 vdev.

Would this work?

Links and references would be appreciated.

notpeter
  • 1,177
Alex
  • 81

2 Answers

8
  1. Yes, you can create a pool that consists of a mirrored vdev built from two partitions.
  2. Yes, you can have two pools that share a single disk, each using its own partition.
  3. Yes, increasing the size of all the disks in a raid-z2 vdev will increase the available capacity.

But you shouldn't do it. ZFS is at its best when you give it the entire disk, and although partitions 'work', your world is much easier/better with whole disks. If you really wanna do the mirror -> raid-z2 pool transition down the road and are prepared to live within the bounds of 50% use of your mirror, I have an alternate idea:

  1. Buy 2 disks (e.g. 2x2TB)
  2. Create a mirrored zpool: zpool create yourPool mirror cXt1d0 cXt2d0
  3. Buy 3+ more identically sized disks. (e.g. 4x2TB)
  4. Create a new filesystem: zfs create yourPool/fake
  5. Create two sparse files the size of a full disk: mkfile -n 2048g /yourPool/fake/fakehda /yourPool/fake/fakehdb
  6. Create a double-parity zpool: zpool create yourNewPool raidz2 cXt3d0 cXt4d0 cXt5d0 cXt6d0 /yourPool/fake/fakehda /yourPool/fake/fakehdb
  7. zfs send/recv your filesystems from one pool to the other.
  8. Detach one disk from your mirror: zpool detach yourPool cXt1d0
  9. Replace one fake disk with the real disk: zpool replace yourNewPool /yourPool/fake/fakehda cXt1d0
  10. Wait for resilvering to complete. Monitor progress with zpool status yourNewPool.
  11. When resilvering has completed, murder the mirror: zpool destroy yourPool
  12. Re-use the second old disk: zpool replace yourNewPool /yourPool/fake/fakehdb cXt2d0

During this entire process you would be able to survive any single disk failing without data loss.
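The steps above can be sketched as a dry-run shell session. Everything here is illustrative: the cXtNd0 device names are the answer's placeholders, 2048g assumes 2TB disks, the yourNewPool/copy target dataset is hypothetical, and the run() helper only prints each command so nothing touches real pools until you remove it.

```shell
#!/bin/sh
# Dry-run sketch of the mirror -> raid-z2 migration above.
# run() only prints each command; delete it (or make it execute "$@")
# when you are ready to run for real.
run() { printf '%s\n' "$*"; }

plan() {
  run zpool create yourPool mirror cXt1d0 cXt2d0                     # step 2
  run zfs create yourPool/fake                                       # step 4
  run mkfile -n 2048g /yourPool/fake/fakehda /yourPool/fake/fakehdb  # step 5
  run zpool create yourNewPool raidz2 cXt3d0 cXt4d0 cXt5d0 cXt6d0 \
      /yourPool/fake/fakehda /yourPool/fake/fakehdb                  # step 6
  run zfs snapshot -r yourPool@move                                  # step 7,
  # then pipe it across: zfs send -R yourPool@move | zfs recv -F yourNewPool/copy
  run zpool detach yourPool cXt1d0                                   # step 8
  run zpool replace yourNewPool /yourPool/fake/fakehda cXt1d0        # step 9
  run zpool status yourNewPool                                       # step 10
  run zpool destroy yourPool                                         # step 11
  run zpool replace yourNewPool /yourPool/fake/fakehdb cXt2d0        # step 12
}
plan
```

Note the ordering: the mirror is only destroyed after the first real disk has finished resilvering into the raid-z2 vdev, which is what preserves single-disk-failure tolerance throughout.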

notpeter
  • 1,177
2

Of course you can; one of the nice things about software RAID is that you have that flexibility. However, the popular opinion is that you can't (or shouldn't) do it. Why? Because it's the mindset of hardware RAID users, who don't have the option to work that way, so they carry on working the way they always have.

I should also clarify that not using whole disks does not prevent the write cache from working on the devices. I know this from experience, and it is also in the source code.

By default, many OSes that let you use ZFS in the installer (all the ones I am aware of, in fact) do not allocate the whole disk; they instead partition the disk and put the pool on a partition. Proxmox even lets you specify the size of the partition.

There are downsides. For example, if you scrub everything at once (e.g. with scrub -a), one disk would have multiple scrubs running on it, so stagger your scrubs. Managing multiple pools on the same disks can also get confusing.
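One way to stagger them, assuming a Linux box with a system crontab and two hypothetical pools named tank and backup sharing a disk, is to scrub on different Sundays of the month:

```shell
# /etc/cron.d/zfs-scrub -- hypothetical staggered scrub schedule.
# dom 1-7 plus the day-of-week test selects the first Sunday of the
# month, dom 8-14 the second, so the shared disk only ever services
# one scrub at a time. (% must be escaped as \% inside crontabs.)
# m h dom   mon dow user command
0  3  1-7   *   *   root [ "$(date +\%u)" = 7 ] && zpool scrub tank
0  3  8-14  *   *   root [ "$(date +\%u)" = 7 ] && zpool scrub backup
```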

I will provide an example of where you might want to do it.

Let's say you have two 8TB disks and two 3TB disks. You don't need to spend the money on an extra two 8TB disks, and you want raidz2 redundancy, so what do you do?

You could create a 3TB partition on all four disks and set them up as a four-device raidz2 vdev; it would have 6TB of usable capacity.

The two 8TB disks would each have around 5TB free, which you could leave empty for now or use for an additional 5TB mirror. At a later date you could retire the 5TB mirror and replace the two 3TB disks, expanding the raidz2 to 16TB.
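A sketch of that layout, with everything hypothetical: sgdisk is assumed as the partition editor, the /dev/sdX names and the tank/scratch pool names are made up, and the run() helper only prints the commands rather than executing them.

```shell
#!/bin/sh
# Dry-run sketch: 3TB partitions on 2x8TB + 2x3TB disks, raidz2 across all four.
run() { printf '%s\n' "$*"; }

layout() {
  for d in /dev/sda /dev/sdb; do            # the two 8TB disks
    run sgdisk -n 1:0:+3T -n 2:0:0 "$d"     # 3TB partition + ~5TB remainder
  done
  for d in /dev/sdc /dev/sdd; do            # the two 3TB disks
    run sgdisk -n 1:0:0 "$d"                # one whole-disk partition
  done
  run zpool create tank raidz2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  run zpool create scratch mirror /dev/sda2 /dev/sdb2   # optional ~5TB mirror
}
layout

# raidz2 usable capacity: (devices - parity) x smallest member = (4 - 2) x 3TB
echo "$(( (4 - 2) * 3 ))TB usable"
```

The capacity arithmetic is the same after the later upgrade: replace the 3TB disks with 8TB ones and grow the partitions, and (4 - 2) x 8TB gives the 16TB figure above.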

It's this kind of flexibility that I like about software RAID. It's somewhat outside the box, and hence the opposition.

You will almost certainly have to set up such a configuration manually, though, so get comfortable with a partition editor and the zpool command.

Chris C
  • 61