Oleg -
Ah, looking at your most recent note, I see the change I highlighted
made it unconditional. It had been conditional since the change you
noted.
The nodes John is testing on have a very large amount of memory, so they
should be at 1/2.
Interesting that Rick's seeing 3/4 on his system. The limit looks to be
< 512MB, if I'm reading correctly.
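If that reading is right, the default could be sketched like this. This is not the actual Lustre source, just a hypothetical restatement of the rule as described in this thread: 1/2 of RAM on large-memory nodes, with the older 3/4 fraction kept below a roughly 512 MB threshold. The function name and the exact threshold/fractions are assumptions, not confirmed against the code.

```python
def default_max_cached_mb(total_ram_mb: int) -> int:
    """Hypothetical sketch of the post-LU-3321 client cache default.

    Assumption from this thread: clients with less than ~512 MB of RAM
    keep the old 3/4 default; larger nodes drop to 1/2.
    """
    if total_ram_mb < 512:
        return total_ram_mb * 3 // 4  # small-memory client: 3/4 of RAM
    return total_ram_mb // 2          # large-memory client: 1/2 of RAM

# Example: a 256 MB client would default to 192 MB, a 64 GB node to 32 GB.
print(default_max_cached_mb(256))    # 192
print(default_max_cached_mb(65536))  # 32768
```

Under this reading, John's large-memory test nodes landing at 1/2 and Rick's other client showing 3/4 would both be consistent with which side of the threshold (or which Lustre version) each client is on.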
- Patrick
On 02/03/2015 02:51 PM, Patrick Farrell wrote:
Rick,
I can confirm we did.
Oleg, that change was made upstream in master as part of LU-3321:
http://review.whamcloud.com/7890
which went into 2.6.
- Patrick Farrell
On 02/03/2015 02:34 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>> On Feb 3, 2015, at 3:26 PM, Drokin, Oleg <oleg.drokin(a)intel.com> wrote:
>>
>>> The total amount of cache used by lustre on a client should be
>>> controlled by /proc/fs/lustre/llite/<fsname>/max_cached_mb.
>>> According to the manual, this defaults to 3/4 of the client RAM.
>>> However, on my Cray system, it looks like it is defaulting to 1/2
>>> the RAM (not sure why). From your tests, it looks like the lustre
>>> cache is topping out around 1/2 the RAM. Are you running on a Cray
>>> client by any chance?
>> Recent Lustre versions (2.4.0, maybe? or was that 2.5.0?) did indeed
>> drop that down to 1/2, because it showed some performance benefit.
>> Jinshan can probably elaborate on that in more detail.
> Hmm. My Cray client is running 2.5.1 and has max_cached_mb set to
> 1/2 RAM. But I have another client running 2.5.3 which sets the
> parameter to 3/4 of the RAM. Maybe Cray patched their version to
> change the default value?
>
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
>
> http://www.nics.tennessee.edu
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss(a)lists.01.org
>
> https://lists.01.org/mailman/listinfo/hpdd-discuss