Robin,
Well, our Lustre filesystem has around 500 million files, and at least half of those
are accessed every evening (backups) - a unique workload for sure. Even with
vfs_cache_pressure=1 we don't seem to keep a high enough proportion of those
entries in memory, and our RAID array takes quite a hammering every day, seeking
here and there. We have 400 GB of RAM in our servers, so we'd like to put it to
effective use.
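For anyone following along, the knob in question lives under /proc; a quick read-only check of the current setting looks like this (the sysctl write is shown as a comment since it needs root):

```shell
# Current vfs_cache_pressure (default 100; lower values make the kernel
# prefer keeping dentry/inode caches over reclaiming them).
cat /proc/sys/vm/vfs_cache_pressure

# To change it (needs root), e.g. to strongly favour cached metadata:
#   sysctl -w vm.vfs_cache_pressure=1
```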
Also, looking at the odd dentry graph I sent, I wonder why our inode count
increases but the dentry count doesn't follow a similar (scaled-down) trend. On our
normal (XFS+NFS) Linux filers the dentry cache tracks the inode cache very closely.
To read 100 files you might traverse, say, 4 directories - so every file/inode
should touch some number of directory dentries.
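A quick way to eyeball that dentry-vs-inode relationship on any Linux box is the kernel's own counters (these files are world-readable, so no root needed):

```shell
# /proc/sys/fs/dentry-state: nr_dentry nr_unused age_limit want_pages ...
# /proc/sys/fs/inode-nr:     nr_inodes nr_free_inodes
read nr_dentry rest < /proc/sys/fs/dentry-state
read nr_inodes nr_free < /proc/sys/fs/inode-nr
echo "dentries: $nr_dentry  inodes: $nr_inodes"
```

Sampling those two numbers over time is one way to reproduce the graph without a full monitoring stack.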
Daire
----- Original Message -----
On Wed, Dec 04, 2013 at 12:25:06PM +0000, Daire Byrne wrote:
>I have been playing around with setting vfs_cache_pressure=0 on our v2.4.1
>MDS ...
maybe 2.4 has changed something (I last comprehensively tested with
1.8), but that seems like an odd thing to try to me...
MDS's rarely have any VM activity going on that isn't Lustre inodes and
dentries and so (in my experience) perform well (and safely) with just
the default vfs_cache_pressure=100 setting.
IMHO it's on OSSes, where inodes/dentries fare badly in competition with
Lustre's read and write-through caches, that vfs_cache_pressure=0 (used
with caution) can be a huge win.
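Since vfs_cache_pressure=0 means the kernel never reclaims dentries/inodes (and can OOM a box under memory pressure), a cautious sketch might gate the change on available memory before applying it. The 8 GiB threshold below is an arbitrary example, and the sysctl write is only echoed (a dry run), not executed:

```shell
# Hedged sketch: only consider vfs_cache_pressure=0 when the node has
# plenty of headroom. Threshold and dry-run echo are illustrative only.
free_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "$free_kb" -gt 8388608 ]; then          # > 8 GiB available
    echo "would run: sysctl -w vm.vfs_cache_pressure=0"
else
    echo "not enough headroom; leaving vfs_cache_pressure alone"
fi
```

After applying it for real, it's worth watching slab growth (e.g. dentry counts in /proc/sys/fs/dentry-state) so the metadata caches don't eat the box.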
cheers,
robin
_______________________________________________
HPDD-discuss mailing list
HPDD-discuss@lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss