...also -o ashift=12 for 4k drives.
You cannot change this after the fact; it can only be set at pool creation.
Also, the fix for multiple --servicenode= options is still WIP, so stick to --failnode= for now.
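Concretely, ashift is fixed at pool creation time; a minimal sketch (the pool name and device paths below are placeholders, not from the thread):

```shell
# Sketch: force 4K sectors (2^12 bytes) when the pool is created.
# ashift cannot be changed on an existing pool.
# "mypool" and the /dev/sd* paths are placeholder names.
zpool create -o ashift=12 mypool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Verify the value that was baked in:
zpool get ashift mypool
```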
On 17 Oct 2014 13:41, "Isaac Huang" <he.huang(a)intel.com> wrote:
On Thu, Oct 16, 2014 at 09:26:59PM +0000, Dilger, Andreas wrote:
> On 2014/10/16, 9:10 AM, "Kurt Strosahl" <strosahl(a)jlab.org> wrote:
>
> >Good Morning,
> >
> > Has anyone out there been using zfs as the back end for lustre? I am
> >curious as to how you set up raid-z when setting up an ost.
>
> Two options exist:
> - specify raidz/raidz2 on mkfs.lustre command line (see man page):
>
> mkfs.lustre {other opts} --ost --backfstype=zfs mypool/testfs-OST0000 \
> raidz2 /dev/sd[a-j] raidz2 /dev/sd[k-t] raidz2 /dev/sd[u-z]
> /dev/sda[a-d]
>
> would, for example, create three RAID-Z2 VDEVs with 10 disks each. Of
> course the use of /dev/sd* device names is frowned upon in real-life ZFS
> usage, but they are shorter for this example.
>
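For a real deployment, the same layout can be expressed with persistent /dev/disk/by-id names instead; a sketch with placeholder IDs (the actual names come from `ls /dev/disk/by-id` on the server):

```shell
# Same idea as above, but with persistent device names.
# The by-id paths are placeholders, not real disk IDs.
mkfs.lustre {other opts} --ost --backfstype=zfs mypool/testfs-OST0000 \
    raidz2 /dev/disk/by-id/ata-DISK00 /dev/disk/by-id/ata-DISK01 \
           /dev/disk/by-id/ata-DISK02 /dev/disk/by-id/ata-DISK03
```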
> - create RAID-Z/Z2 datasets normally, then just point mkfs.lustre at them:
>
> mkfs.lustre {other opts} --ost --backfstype=zfs mypool/testfs-OST0000
>
> IIRC you have to disable automatic mounting on the pool (legacy mode).
The xattr=sa option is also needed, at least for performance, if the
dataset is not created by mkfs.lustre.
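Those two settings can be applied to a hand-made dataset like so (a sketch; the pool and dataset names are placeholders):

```shell
# Sketch: prepare a manually created dataset for mkfs.lustre.
# Disable automatic mounting (legacy mode) and store xattrs as
# system attributes in the dnode rather than in separate objects.
zfs set mountpoint=legacy mypool/testfs-OST0000
zfs set xattr=sa mypool/testfs-OST0000
```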
-Isaac
_______________________________________________
HPDD-discuss mailing list
HPDD-discuss(a)lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss