Unfortunately, that didn't work… I shut down all clients, unmounted the MDS
and OSTs, and ran tunefs.lustre --writeconf on all the Lustre servers.
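For reference, the sequence was roughly as follows (the mount points below
are placeholders, not our exact ones):

# on every client
umount /mnt/lustre

# on every OSS and on the MDS, unmount the targets
umount /mnt/ostNN
umount /mnt/mdt

# on each server, regenerate the config logs for every target
tunefs.lustre --writeconf /dev/<target-device>

# remount the MDT first, then each OST
mount -t lustre /dev/<mdt-device> /mnt/mdt
mount -t lustre /dev/<ost-device> /mnt/ostNN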
I then remounted the MDS, then the OSTs, and everything appears to be just
fine (i.e. all the mounting worked etc.). BUT on a client:
OST000e : inactive device
OST000f : inactive device
and on the OSS:
2 UP obdfilter l1-OST000e l1-OST000e_UUID 5
3 UP obdfilter l1-OST000f l1-OST000f_UUID 5
and on the MDS:
19 UP osc l1-OST000e-osc-MDT0000 l1-MDT0000-mdtlov_UUID 5
20 UP osc l1-OST000f-osc-MDT0000 l1-MDT0000-mdtlov_UUID 5
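For completeness, this is how I've been checking the OSC state (device
numbers are the ones from the lctl dl output above; I'm not certain that
simply activating them is the right fix, so treat this as a sketch):

# on the MDS: check whether the OSCs for the new OSTs are active
lctl get_param osc.l1-OST000e-osc-MDT0000.active
lctl get_param osc.l1-OST000f-osc-MDT0000.active

# if they report 0, they can be activated by device number
lctl --device 19 activate
lctl --device 20 activate

# on a client, confirm whether the OSTs come back
lfs df -h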
On Wed, Mar 13, 2013 at 12:19 AM, Colin Faber <cfaber@gmail.com> wrote:
Yeah, basically it registered and now the MGS expects it. You're going to
have to writeconf everything to get rid of that entry.
-cf
On 03/12/2013 10:15 AM, Stu Midgley wrote:
> Evening
>
> A mistake was made during the addition of one OSS (two OSTs) to our online
> Lustre file system.
>
> The OST was formatted with:
>
> mkfs.lustre --ost --mgsnode=172.29.0.251 --fsname=l1 --index 14
> --mkfsoptions="-T largefile -m 0.5 -E stride=16,stripe-width=160" /dev/sdb
>
>
> and then mounted... then quickly unmounted... then reformatted with the
> same command... and then we attempted to remount it.
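> (Roughly the following, with an example mount point; the mkfs.lustre
> invocation is the one shown above:)
>
> mount -t lustre /dev/sdb /mnt/ost14
> umount /mnt/ost14
> mkfs.lustre --ost ...   # same command as above
> mount -t lustre /dev/sdb /mnt/ost14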
>
> We now have a hung file system.
>
> On the MDS I see:
>
> Mar 12 13:38:09 mds kernel: LustreError: 140-5: Server l1-OST000e
> requested index 14, but that index is already in use. Use --writeconf to
> force
>
> Now what should I do? Bring everything down, writeconf all the servers,
> and bring everything back up? Reformat the OST? lfs df -h on a client is
> showing:
>
> OST000e : inactive device
>
> Thanks.
>
>
> --
> Dr Stuart Midgley
> sdm900@sdm900.com
>
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss@lists.01.org
>
> https://lists.01.org/mailman/listinfo/hpdd-discuss
>
--
Dr Stuart Midgley
sdm900@sdm900.com