root@lmd02 ~# df -h /mnt/tmp/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 1.7T 17G 1.6T 2% /mnt/tmp
So, at this moment, I'm thinking of shrinking to 500GB. The "dd" of the current
volume took around 30 hours when I did it, so a smaller size will help here.
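As a rough cross-check of that hope, the copy time can be scaled linearly with volume size (an assumption; raw-device dd throughput is roughly constant when the block size is held fixed):

```shell
# Proportional estimate: a ~2TB dd took ~30 hours, so a 500GB one should take
# about a quarter of that, all else (disk throughput, bs=) being equal.
full_hours=30; full_gb=2048; new_gb=500
echo "estimated dd time for ${new_gb}GB: ~$(( full_hours * new_gb / full_gb )) hours"
```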
I have the 11GB attribute backup, and the 955MB tar backup, in addition
to the dd backup, so I guess I'm set.
I have one last question, though; at least one document I saw said that,
following the upgrade, I should mount the MDT the first time with "-t
lustre -o upgrade", but I can't seem to find any documentation on that.
Is this real, or a figment of someone's imagination?
bob
On 7/10/2013 2:14 AM, Dilger, Andreas wrote:
On 2013-07-09, at 19:42, "Bob Ball" <ball(a)umich.edu> wrote:
> root@lmd02 ~# df -i /mnt/tmp
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/sdb 521142272 46438537 474703735 9% /mnt/tmp
You don't report the actual "df" output.
Note that the inodes themselves consume space that is not listed in the "Used"
column, because they are preallocated. The number listed in Used is the "extra"
space used on the MDT beyond the inodes, bitmaps, and other internal ext4 metadata.
The space usage for just the inode itself is 512 bytes, so you'd need at least 100M
inodes * 512 bytes = 50GB just to store the inodes. Adding overhead for the journal,
filesystem metadata, internal Lustre metadata, logs, etc. is why we recommend a minimum
of 1024 bytes of space on the MDT per inode (i.e. 100GB in your case). The default for
newer Lustre filesystems is one inode per 2048 bytes of MDT size, half the default of
older filesystems.
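That arithmetic is easy to sanity-check in the shell (the inode count is the IUsed figure from the "df -i" output quoted above; 100M is the planned inode budget from the paragraph above):

```shell
# Back-of-the-envelope MDT sizing, using the numbers from this thread.
inodes_used=46438537      # IUsed from "df -i /mnt/tmp"
echo "inode table for in-use inodes: ~$(( inodes_used * 512 / 1024 / 1024 / 1024 )) GB"

# Recommended minimum MDT size for a 100M-inode budget at 1024 bytes/inode:
target_inodes=100000000
echo "recommended minimum: ~$(( target_inodes * 1024 / 1024 / 1024 / 1024 )) GB"
```

The second figure comes out a little under 100GB, matching the "100GB in your case" recommendation.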
One other benefit of reformatting your MDT with a newer e2fsprogs is that this will
enable the flex_bg feature, which will give you faster e2fsck times, over and above the
speedup from reducing the MDT size.
> So, it appears I could cut the size by a factor of 5, and still only have half the
> inodes in use. I guess 500GB is a good, round number in that case. We are mostly
> bigger-file oriented, by policy and active advice.
>
> But, "following the manual verbatim" seems ominous.
If people have problems with the backup/restore process as listed in the manual (the
current one at http://wiki.hpdd.intel.com/display/PUB/Documentation and not the
unmaintained one on lustre.org), please file a bug so that it can be fixed.
Cheers, Andreas
> On 7/9/2013 9:19 PM, Carlson, Timothy S wrote:
>> But how many inodes are in use on the MDT? If you shrink the volume down you are
>> by default going to have way fewer inodes on the MDT. For example, my MDT is 450GB
>> and using 31GB or 8% of the space, but it is using 17% of the inodes available.
>>
>> You might have lots of big files, in which case you don't have to worry about
>> the inode count, but I would check to see how many inodes were in use before you
>> go crazy shrinking things down.
>>
>> My past experience with the tar/xattr method of doing MDT movements following the
>> manual verbatim has never been successful. YMMV.
>>
>> Tim
>>
>> -----Original Message-----
>> From: lustre-discuss-bounces(a)lists.lustre.org [mailto:lustre-discuss-bounces@lists.lustre.org] On Behalf Of Bob Ball
>> Sent: Tuesday, July 09, 2013 5:57 PM
>> To: hpdd-discuss(a)lists.01.org; Lustre discussion
>> Subject: [Lustre-discuss] Shrinking the mdt volume
>>
>> When we set up our MDT volume, lo these many years past, we did it with a 2TB
>> volume. Overkill. About 17G is actually in use.
>>
>> This is a Lustre 1.8.4 system backed by about 450TB of OST on 8 servers. I
>> would _love_ to shrink this MDT volume to a more manageable size, say, 50GB or
>> so, as we are now in a down time before we upgrade to Lustre 2.1.6 on SL6.4.
>> I have taken a "dd" of the volume, and am now in the process of doing
>>   getfattr -R -d -m '.*' -P . > /tmp/mdt_ea.bak
>> after which I will do
>>   tar czf {backup file}.tgz --sparse
>> /bin/tar is the SL5 version, tar-1.20-6.x86_64. This supports the --xattrs
>> switch. So, a choice here: should I instead use the --xattrs switch on the tar,
>> or should I use --no-xattrs, since the mdt_ea.bak will have all of them?
>>
>> What are my prospects for success, if I restore that tar file to a smaller
>> volume, then apply the attr backup, before I upgrade?
>>
>> Answers and advice are greatly appreciated.
>>
>> bob
>>