Hi all,
I am using Lustre 2.5.3 under CentOS 6.5 on both servers and clients. The
Lustre file system uses 6 OSTs of 30 TB each, served by 2 OSSs:
/dev/mapper/ost0 31242014360 16292468776 13386948620 55% /lustre/ost0
/dev/mapper/ost1 31242014360 16925276168 12754141228 58% /lustre/ost1
/dev/mapper/ost2 31242014360 15247919148 14431498248 52% /lustre/ost2
/dev/mapper/ost3 31242014360 15155750732 14523666664 52% /lustre/ost3
/dev/mapper/ost4 31242014360 15184922564 14494494832 52% /lustre/ost4
/dev/mapper/ost5 31242014360 14094013852 15585403544 48% /lustre/ost5
The system had been very stable until now (2 years), but for the last
two days the OSS servers have been rebooting for no apparent reason:
no hardware issue, and no relevant information in the logs before the reboots.
The only oddity is with quotas: I've just realized that quotas are not
showing used space for most users (for some others they are).
I tried to re-enable quotas using:
# Disable quotas
lctl conf_param jffstg.quota.mdt=none
lctl conf_param jffstg.quota.ost=none
# Enable quotas
lctl conf_param jffstg.quota.mdt=ug
lctl conf_param jffstg.quota.ost=ug
But that made no difference to the quota issue. Time and uid/gid mappings are OK.
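In case it helps with diagnosis, this is roughly how I have been checking the quota state (a sketch; `jffstg` is the filesystem name from the conf_param commands above, and the uid `someuser` and the `/lustre/home/someuser` path are placeholders for an affected account):

```shell
# On the MDS/OSS nodes: ask each quota slave what it believes is enabled
# (after the conf_param above takes effect it should report user/group)
lctl get_param osd-*.*.quota_slave.info

# On a client: compare the usage that quota reports for a user against
# what is actually on disk for that user's directory
lfs quota -u someuser /lustre
du -sh /lustre/home/someuser
```

For the affected users, `lfs quota` reports (near) zero used space even though `du` shows substantial usage.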
My questions are:
1) Are the two issues related?
2) How can I solve the quota issue?
Any suggestions are welcome.
Best regards
--
Ramiro Alba
Centre Tecnològic de Tranferència de Calor
http://www.cttc.upc.edu
Escola Tècnica Superior d'Enginyeries
Industrial i Aeronàutica de Terrassa
Colom 11, E-08222, Terrassa, Barcelona, Spain
Tel: (+34) 93 739 8928