Ed,
Not a solution, but another data point here.
We're running Lustre 1.8.7 with four servers, two OSTs each. However, the
fourth server was deployed about a year later and has much lower inode
counts on its OSTs. It isn't causing us trouble (we're not running out of inodes),
but it does correlate with what you're seeing.
The first three servers formatted their disks with e2fsprogs-1.41.90.wc3-0redhat,
while the fourth used e2fsprogs-1.42.6.wc2-0redhat.
UUID                    Inodes     IUsed      IFree IUse% Mounted on
usr-MDT0000_UUID 328666282 100965854 227700428 30% /data/user[MDT:0]
usr-OST0000_UUID 915537920 10821636 904716284 1% /data/user[OST:0]
usr-OST0001_UUID 915537920 10795374 904742546 1% /data/user[OST:1]
usr-OST0002_UUID 915537920 10969942 904567978 1% /data/user[OST:2]
usr-OST0003_UUID 915537920 10696761 904841159 1% /data/user[OST:3]
usr-OST0004_UUID 915537920 10923258 904614662 1% /data/user[OST:4]
usr-OST0005_UUID 915537920 10343271 905194649 1% /data/user[OST:5]
usr-OST0006_UUID 28610560 10371603 18238957 36% /data/user[OST:6]
usr-OST0007_UUID 28610560 10518932 18091628 36% /data/user[OST:7]
filesystem summary: 328666282 100965854 227700428 30% /data/user
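The size of the gap is suggestive: dividing an older OST's inode count by a newer one's gives exactly 32, which would be consistent with the newer e2fsprogs picking a 32x larger bytes-per-inode ratio by default for these targets. That's an inference from the numbers above, not something I've confirmed against the mke2fs.conf shipped in either package:

```shell
# Inode counts from the lfs df -i output above:
# OST0000-0005 (formatted with e2fsprogs 1.41): 915537920 inodes
# OST0006-0007 (formatted with e2fsprogs 1.42):  28610560 inodes
echo $((915537920 / 28610560))   # prints 32
```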
The format command was identical on all of these servers except for the target device:
mkfs.lustre --fsname=usr --reformat --ost --mgsnode=red@tcp0 \
    --mkfsoptions='-t ext4' /dev/ostvg/ostlv2
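If the goal is consistent inode counts regardless of which e2fsprogs version does the formatting, one workaround might be to pin the bytes-per-inode ratio explicitly rather than relying on mke2fs defaults. This is a sketch, assuming mke2fs's -i option is passed through --mkfsoptions as usual; the 4096 value is illustrative, not a recommendation:

```shell
# Hypothetical variant of the command above: -i fixes bytes-per-inode,
# so the inode count depends only on device size, not on mke2fs defaults.
mkfs.lustre --fsname=usr --reformat --ost --mgsnode=red@tcp0 \
    --mkfsoptions='-t ext4 -i 4096' /dev/ostvg/ostlv2
```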
Hope that helps track down what might have changed.
John Richards
john.richards@icecube.wisc.edu