Dear All, thank you for all of your responses; I only just saw the other replies.
Something strange happened after I instructed our users to change their stripe size. I am
not sure whether they promptly took any action on moving files to a different stripe
layout, but all of a sudden those handful of OSTs have recovered, with the space left now
at about 85%. I find this odd, as I do not know enough to explain what just happened. I
hope someone can throw some light on this.
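For anyone following along, the kind of commands involved are sketched below. This is illustrative only: the mount point, directory, and stripe count are made-up examples, not values from this thread.

```shell
# Check per-OST fill levels to spot the nearly full OSTs
lfs df -h /mnt/lustre

# Set a wider default stripe on a directory so that new files written
# there spread their objects across more OSTs (count of 4 is illustrative)
lfs setstripe -c 4 /mnt/lustre/project/dir

# Verify the layout that a file actually received
lfs getstripe /mnt/lustre/project/dir/somefile
```

Note that `lfs setstripe` on a directory only affects files created afterwards; existing files keep their old layout until they are rewritten or migrated.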
Andreas: Yes, I was able to find the files that triggered ENOSPC, and they did reside
on the full OSTs.
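A sketch of how one can locate files residing on a particular OST; the mount point and OST name below are illustrative placeholders, not taken from this thread.

```shell
# List files that have objects on the given (full) OST
lfs find /mnt/lustre --ost lustre-OST0004 -print
```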
Dr. Arman: I agree that lfs_migrate works well for the most part; I have used it in the
past and continue to use it...
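For reference, a typical lfs_migrate invocation along the lines of the Lustre manual's examples; the mount point, OST name, and size threshold here are illustrative assumptions.

```shell
# Migrate large files off a full OST to rebalance space usage;
# -y answers yes to the per-file confirmation prompt
lfs find /mnt/lustre --ost lustre-OST0004 -size +1G | lfs_migrate -y
```

Keep in mind that lfs_migrate copies data, so files being actively written during migration can be a concern; it is usually safest on quiescent data.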
Thank you,
Amit
-----Original Message-----
From: HPDD-discuss [mailto:hpdd-discuss-bounces@lists.01.org] On Behalf Of
Chris Hunter
Sent: Tuesday, June 02, 2015 1:15 PM
To: Mohr Jr, Richard Frank (Rick Mohr)
Cc: hpdd-discuss(a)lists.01.org
Subject: Re: [HPDD-discuss] Strange Issue, No space left, where there, actually
over 500TB
Thanks for the links. IMO, using Robinhood for a one-shot rebalance is possible, whereas
running it as a continuous background service would be more challenging.
regards,
chris hunter
yale hpc group
On 06/02/2015 01:10 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>
>> On Jun 2, 2015, at 11:42 AM, Chris Hunter <chris.hunter(a)yale.edu>
wrote:
>>
>> Is background file redistribution/OST balancing on the roadmap for Lustre?
>> This would be _very_ useful for capacity expansion of existing Lustre
installations.
>
> Sort of. If you take a look at , there are a couple of features that could
pertain to this: Layout Enhancement and File Level Replication. In theory, the
use of composite layouts with some sort of background replication could
potentially be used to restripe an active file. Although this would still likely be
a manual process and not something that is automagically done in the
background to rebalance OST usage (unless someone decides to do some
clever tricks with something like RobinHood).
>
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
_______________________________________________
HPDD-discuss mailing list
HPDD-discuss(a)lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss