On Feb 3, 2015, at 12:22 PM, John Bauer <bauerj@iodoctors.com> wrote:
Is there a Lustre config parameter that throttles the amount of system memory that a
given OSC can use when reading a file?
...
The client node has 128 GB of memory, so the file should easily fit in the system
buffers. The filesystem has 16 OSTs.
As indicated in the following plot, the maximum value for "Cached" when
stripe_count=2 is 17GB
stripe_count=3 is 25GB
stripe_count=4 is 33GB
stripe_count=5 is 42GB
stripe_count=6 is 50GB
stripe_count=7 is 59GB
stripe_count=8,12,16 are all 63GB
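A quick back-of-envelope check on the numbers above (a sketch, using only the plotted values): the differences between successive "Cached" maxima suggest each additional stripe (i.e. each additional OSC) contributes roughly 8-9 GB of cache, with the total topping out near half of the 128 GB of RAM.

```shell
# Successive "Cached" maxima (GB) for stripe counts 2..7, from the plot:
# 17, 25, 33, 42, 50, 59. The differences show the per-OSC contribution.
diffs="$((25-17)) $((33-25)) $((42-33)) $((50-42)) $((59-50))"
echo "$diffs"   # roughly 8-9 GB of cache added per extra stripe/OSC
```

This per-stripe pattern is consistent with a per-OSC cache limit rather than a single global one, until the overall cap (~63 GB) is reached.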
The total amount of cache used by Lustre on a client should be controlled by
/proc/fs/lustre/llite/<fsname>/max_cached_mb. According to the manual, this
defaults to 3/4 of the client RAM. However, on my Cray system, it looks like it is
defaulting to 1/2 the RAM (not sure why). From your tests, it looks like the Lustre cache
is topping out around 1/2 the RAM. Are you running on a Cray client by any chance?
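For concreteness, here is the arithmetic for a 128 GB client like the one described above (a sketch; the 3/4 default is from the manual, the 1/2 cap is what appears to be in effect on the Cray):

```shell
# Hypothetical 128 GB client node, matching the one described above.
ram_mb=$((128 * 1024))
default_mb=$((ram_mb * 3 / 4))  # manual's documented default: 3/4 of RAM
half_mb=$((ram_mb / 2))         # cap apparently in effect on the Cray client
echo "default=${default_mb}MB observed_cap=${half_mb}MB"
```

The ~63 GB plateau reported for stripe counts 8, 12, and 16 lines up with the 1/2-of-RAM figure, not the documented 3/4 default.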
There is also /proc/fs/lustre/osc/<object_name>/osc_cached_mb. From what I can
tell, you can read that file to see how much data is being cached. I have never tried
writing to that file to force a certain max cache for the OSC, but a quick look at the
Lustre source seems to indicate that it can be written (and it looks like it may be tied
to the LRU settings).
The file /proc/fs/lustre/osc/<object_name>/max_dirty_mb can be used to limit the
amount of dirty write data that is cached, but I don’t think it has any effect on the
amount of read data cached.
--
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
http://www.nics.tennessee.edu