Hi Roland,
Thanks for mentioning Qlustar. I wasn't aware of it. I looked up their
description, and they say it is a custom OS for HPC systems based on
Debian/Ubuntu.
I am a little skeptical about the statement below on their website:
"Fast storage is a must in most clusters these days. Qlustar supports
Lustre, the most popular parallel filesystem, out of the box, including
high-availability if required. Just create an image containing the Lustre
module, assign it to your storage nodes, format your disk devices and
you're done."
--> It says Lustre is available out of the box, but then says to create an
image with the Lustre module. Does this mean that even the Lustre server
code is available out of the box and we just need to configure it after
booting the node with Qlustar OS, or do I need to build an image with the
Lustre code?
Regards,
Akhilesh Gadde.
On Mon, Apr 13, 2015 at 4:38 PM, Roland Fehrenbacher <rf(a)q-leap.de> wrote:
>>>>> "AG" == Akhilesh Gadde <akhilesh.gadde(a)stonybrook.edu> writes:
Hi Akhilesh,
if you want to make your life simple use Qlustar (qlustar.com). It has
ready-to-use ZFS Lustre (currently 2.6, soon 2.7) based on Ubuntu 14.04.
Best,
Roland
-------
http://www.q-leap.com / http://qlustar.com
--- HPC / Storage / Cloud Linux Cluster OS ---
AG> Thanks Eli. I was trying to build the server on Ubuntu and
AG> couldn't find much help. Yes. I agree that not much work is
AG> needed for client since the client module is already
AG> incorporated as part of the kernel. Shifted to CentOS now due
AG> to time constraints for my project. If James could provide the
AG> details on how to do that, I would be glad to test during my
AG> spare time and post the details/results to the community.
AG> Regards, Akhilesh.
AG> On Sun, Apr 12, 2015 at 12:30 PM, E.S. Rosenberg
AG> <esr(a)cs.huji.ac.il> wrote:
>> On Sun, Apr 12, 2015 at 7:24 PM, Akhilesh Gadde
>> <akhilesh.gadde(a)stonybrook.edu> wrote:
>> > Hi Rosenberg,
>> >
>> > I have been trying the entire weekend on how to do this. Not
>> > finding the right way to do that. :(
>> Sorry to hear :(
>>
>> For clients you don't need anything special, the stock kernel
>> should suffice (though you may want a newer kernel for bugfixes
>> etc).
>>
>> For the servers I can't tell you (yet), but it seemed from James'
>> post that he built that successfully...
>>
>> Regards, Eli
>>
>> PS: this is the ./configure we use to build our clients:
>>
>> ./configure --disable-modules --disable-server --enable-client
>> --prefix=/usr/local/lustre/[version]
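[Editor's note: putting the client recipe above into one sketch. The version
tag and install prefix below are placeholders, not values from this thread,
and the build commands are shown commented out since they need the checked-out
source tree.]

```shell
# Sketch of a Lustre client build from source, following the --prefix
# convention Eli mentions. LUSTRE_VERSION is an assumption: substitute
# the tag you actually check out.
LUSTRE_VERSION="2.7.0"
PREFIX="/usr/local/lustre/${LUSTRE_VERSION}"

# The build steps themselves (commented out; they require the source tree):
#   git clone git://git.hpdd.intel.com/fs/lustre-release
#   cd lustre-release
#   sh ./autogen.sh
#   ./configure --disable-modules --disable-server --enable-client \
#               --prefix="${PREFIX}"
#   make && sudo make install
echo "${PREFIX}"
```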
>>
>>
>> >
>> >
>> > Regards, Akhilesh Gadde.
>> >
>> > On Sun, Apr 12, 2015 at 7:50 AM, E.S. Rosenberg
>> > <esr+hpdd-discuss(a)mail.hebrew.edu> wrote:
>> >>
>> >> Please report how it went, here we have an all Debian-client
>> >> setup but since we/the consultants on the project didn't want
>> >> to be guinea pigs we stuck with CentOS 6 servers....
>> >>
>> >> On Fri, Apr 10, 2015 at 8:12 PM, Simmons, James
>> >> A. <simmonsja(a)ornl.gov> wrote:
>> >> >>Hi Simmons,
>> >> >
>> >> >>Thanks for the response. Now I got the confidence that I
>> >> >>could deploy
>> >> >> Lustre servers on Ubuntu nodes. :)
>> >> >
>> >> >>I am planning to use ZFS as the backing store. So as you
>> >> >>mentioned, I
>> >> >> should not be having a problem with that. Also, I am
>> >> >> planning to use the latest Lustre version - 2.7.
>> >> >
>> >> >>In the Lustre.org downloads section for the 2.7 release, I see the
>> >> >>below:
>> >> >
>> >> >>Can I go with the packages in el6.6 for my deployment?
>> >> >
>> >> >>(I am guessing that 'el' implies Enterprise Linux and 'sles'
>> >> >>implies SUSE Linux Enterprise Server)
>> >> >
>> >> > There are no prebuilt versions; you need to build them yourself. To
>> >> > work with the latest code do a
>> >> >
>> >> > git clone git://git.hpdd.intel.com/fs/lustre-release
>> >> >
>> >> > cd lustre-release
>> >> >
>> >> > sh ./autogen.sh
>> >> >
>> >> > ./configure --with-spl=/where/spl/is
>> >> > --with-zfs=/where/zfs/lives
>> >> >
>> >> > dpkg-buildpackage -us -uc
>> >> >
>> >> >
>> >> >
>> >> >
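[Editor's note: pulling James' steps together into one sketch. Note the
configure flags take double dashes; the SPL and ZFS source-tree paths below
are placeholders for wherever those trees live on your system.]

```shell
# Sketch of the server build James outlines for Ubuntu with the ZFS backend.
# SPL_DIR and ZFS_DIR are assumptions; point them at your actual source trees.
SPL_DIR="/usr/src/spl"
ZFS_DIR="/usr/src/zfs"

# The build steps themselves (commented out; they require the source trees):
#   git clone git://git.hpdd.intel.com/fs/lustre-release
#   cd lustre-release
#   sh ./autogen.sh
#   ./configure --with-spl="${SPL_DIR}" --with-zfs="${ZFS_DIR}"
#   dpkg-buildpackage -us -uc    # -us/-uc: don't sign source/changes files
CONFIGURE_FLAGS="--with-spl=${SPL_DIR} --with-zfs=${ZFS_DIR}"
echo "${CONFIGURE_FLAGS}"
```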
>> >> > _______________________________________________ HPDD-discuss
>> >> > mailing list HPDD-discuss(a)lists.01.org
>> >> >
https://lists.01.org/mailman/listinfo/hpdd-discuss
>> >> >
>> >
>> >
>>