Thanks for the tips.
Everything is OK; I just have a connection listed as 0@lo when running
lshowmount, and I don't know why.
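For what it's worth, 0@lo is LNET's loopback NID, so that entry may just
be the server's own local connection rather than a real client. A minimal
check, assuming lctl is in the path on that server:

# list the local node's LNET NIDs; 0@lo should be among them
lctl list_nids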
On 07/07/2015 09:27, Martin Hecht wrote:
Maybe this is what you are looking for.

During recovery, on the MDS:

cat /proc/fs/lustre/mds/lustrefs-MDT0000/recovery_status

shows the number of clients already connected and the number the
server expects (depending on the Lustre version, the file may live at
/proc/fs/lustre/mdt/lustre-MDT0000/recovery_status instead).

During normal operation:

cat /proc/fs/lustre/mds/lustrefs-MDT0000/num_exports

shows the number of exports (if I recall correctly, the MGS is a
client as well in this context); the corresponding files are
/proc/fs/lustre/mgs/MGS/num_exports and
/proc/fs/lustre/mdt/lustre-MDT0000/num_exports.
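On Lustre versions where lctl get_param is available, the same counters
can be read without touching /proc directly; a minimal sketch (the
parameter names below assume the 2.x layout):

# recovery progress: clients connected vs. clients expected
lctl get_param mdt.*.recovery_status

# export counts during normal operation
lctl get_param mgs.MGS.num_exports
lctl get_param mdt.*.num_exports

# and on each OSS, one count per OST
lctl get_param obdfilter.*.num_exports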
On 07/06/2015 05:16 PM, Scott Nolin wrote:
> The 'lshowmount' command on the servers can do this.
>
> However, I have sometimes seen lshowmount show no connected clients
> when clients obviously were still connected, so there may be a
> lower-level way that is more reliable than that utility.
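>
> One lower-level check, sketched here on the assumption that your
> version exposes the exports tree (which is what lshowmount reads), is
> to list the export UUIDs per target directly:
>
> # on the MDS: one uuid per connected client (plus internal exports)
> lctl get_param mdt.*.exports.*.uuid
>
> # on each OSS, per OST
> lctl get_param obdfilter.*.exports.*.uuid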
>
> Scott
>
> On 7/6/2015 10:09 AM, Jérôme BECOT wrote:
>> That's how I proceed, but I had kernel crashes on the MDS, so I
>> unmounted and remounted it, and it recovered.
>>
>> Is there no way to check whether the Lustre servers consider any
>> client still connected?
>>
>>> On 04/07/2015 04:43, Kurt Strosahl wrote:
>>> Hi,
>>>
I just did a clean shutdown/startup of two lustre file systems
>>> last weekend (also for electrical work). The order I've found most
>>> effective is to first shut down all the clients, then unmount the
>>> osts and shut down the oss systems, then unmount the mdt/mgs and
>>> shut down that system. On the way up, I turn on the mdt/mgs system
>>> and let it mount, then mount the osts (my oss systems don't mount
>>> lustre on boot). Once the last ost to be mounted looks like it has
>>> finished any recovery it needs to do, I mount a single client and
>>> make sure it can see the entire system... and then go from there
>>> with the rest of the clients.
>>>
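>>> In commands, that order would look roughly like the sketch below;
>>> the device names and mount points are only placeholders:
>>>
>>> # shutdown: clients first, then osts/oss, then mdt/mgs
>>> umount /mnt/lustre    # on every client
>>> umount /mnt/ost0      # on each oss, for each ost
>>> umount /mnt/mdt       # on the mdt/mgs system, last
>>>
>>> # startup: mdt/mgs first, then osts, then a single test client
>>> mount -t lustre /dev/mdt_dev /mnt/mdt      # on the mdt/mgs system
>>> mount -t lustre /dev/ost0_dev /mnt/ost0    # on each oss
>>> cat /proc/fs/lustre/obdfilter/*/recovery_status  # wait for COMPLETE
>>> mount -t lustre mgsnode@tcp:/lustrefs /mnt/lustre  # on one client
>>>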
>>> Since all the lustre clients I have are in the same room as the
>>> lustre file system itself, I don't have to check for stray mounts
>>> (I just walk around the room looking for systems that are still on,
>>> and then power them off).
>>>
>>> Good luck,
>>> Kurt J. Strosahl
>>> System Administrator
>>> Scientific Computing Group, Thomas Jefferson National Accelerator
>>> Facility
>>>
>>>
>>> Date: Fri, 03 Jul 2015 13:49:32 +0200
>>> From: Jérôme BECOT <jerome.becot(a)inserm.fr>
>>> To: HPDD-discuss(a)ml01.01.org
>>> Subject: [HPDD-discuss] Clean shutdown
>>>
>>> Hi there,
>>>
>>> We are running a small lustre system (1 MGS/MDS + 2 OSTs). I
>>> haven't found the information, so I'll ask you:
>>>
>>> What is the right order to shut down the whole system (for
>>> electrical maintenance, ...) when you run a combined MGT/MDT:
>>> - disconnect the clients
>>> - is there any way to check if a client is still connected?
>>> - unmount the MGT/MDT
>>> - unmount the OSTs
>>> - ... power off all the servers
>>> - mount the OSTs
>>> - then the MGT/MDT
>>> - then the clients?
>>>
>>> Or should I stop the OSTs first and mount them last?
>>>
>>> Thanks
_______________________________________________
HPDD-discuss mailing list
HPDD-discuss(a)lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss
--
Jérôme BECOT
Systems and Network Administrator
Molécules à visée Thérapeutique par des approches in Silico (MTi)
Univ Paris Diderot, UMRS973 Inserm
Case 013
Bât. Lamarck A, porte 412
35, rue Hélène Brion 75205 Paris Cedex 13
France
Tel: 01 57 27 83 82