Thanks, Malcolm. Worked like a charm. Your interpretation was clearly
superior to mine.
bob
On 7/21/2013 6:37 PM, Cowe, Malcolm J wrote:
From the Ops Manual (and hence not from direct experience): when applied
using conf_param on the MGS, nosquash_nids is a global setting that affects
all MDTs, which is why the command returns an error when you try to specify
an individual MDT.
Instead, specify the file system you want to apply the squash rule to:
lctl conf_param <fsname>.mdt.nosquash_nids="<nids>"
e.g.:
lctl conf_param umt3.mdt.nosquash_nids="10.10.2.33@tcp"
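The same filesystem-level syntax should also work for the root_squash value
you mention below -- I have not tested it, and the 1000:1000 uid:gid pair is
only a placeholder:
lctl conf_param umt3.mdt.root_squash="1000:1000"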
To set this per MDT, use mkfs.lustre or tunefs.lustre (refer to the Lustre Operations
Manual, section 22.2).
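Something along these lines should work per the manual (untested here, and
/dev/mdtdev is just a stand-in for your actual MDT device):
tunefs.lustre --param="mdt.nosquash_nids=10.10.2.33@tcp" /dev/mdtdev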
Regards,
Malcolm.
> -----Original Message-----
> From: hpdd-discuss-bounces@lists.01.org
> [mailto:hpdd-discuss-bounces@lists.01.org] On Behalf Of Bob Ball
> Sent: Saturday, July 20, 2013 12:35 AM
> To: hpdd-discuss@lists.01.org; Lustre discussion
> Subject: [HPDD-discuss] root squash problem
>
> We have just installed Lustre 2.1.6 on SL6.4 systems. It is working
> well. However, I find that I am unable to apply root squash parameters.
>
> We have separate mgs and mdt machines. Under Lustre 1.8.4 this was not
> an issue for root squash commands applied on the mdt. However, when I
> modify the command syntax for lctl conf_param to what I think should
> now be appropriate, I run into difficulty.
>
> [root@lmd02 tools]# lctl conf_param
> mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
> No device found for name MGS: Invalid argument
> This command must be run on the MGS.
> error: conf_param: No such device
>
> [root@mgs ~]# lctl conf_param
> mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
> error: conf_param: Invalid argument
>
> I have not yet looked at setting the "root_squash" value, as this
> problem has stopped me cold. So, two questions:
>
> 1. Is this even possible with our split mgs/mdt machines?
> 2. If possible, what have I done wrong above?
>
> Thanks,
> bob
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss@lists.01.org
> https://lists.01.org/mailman/listinfo/hpdd-discuss