That's mostly right, except that on RHEL/CentOS 6.3 you'll need an updated
kernel for full FDR support, which in turn requires you to pull in the 2.6
compatibility headers for Lustre to build.
Long story short: it would be better for Mellanox customers with complex software
environments if they (Mellanox) worked more closely with OFA rather than maintaining a
separate, parallel stack and shoving it down their customers' throats (so to speak).
Charlie
On May 7, 2013, at 10:15 AM, Jerome, Ron wrote:
I am also a sucker for punishment (using the Mellanox OFED), thus
I've been down this road many, many times (although with Lustre 1.8.x, not 2.x), so I
can tell you that it should work and it's relatively painless (now) as long as you
follow these steps:
* install Lustre kernel and kernel-devel packages
* rebuild Mellanox OFED against Lustre kernel
- mount -o loop MLNX_OFED.iso /root/mnt
- /root/mnt/docs/mlnx_add_kernel_support.sh -i /root/MLNX_OFED.iso
- umount /root/mnt
* install Mellanox OFED from rebuilt MLNX_OFED.iso (--msm on mds and --basic on oss)
- this is key, make sure you install from the newly rebuilt iso
* install kernel-ib-devel from rebuilt MLNX_OFED.iso
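The OFED rebuild steps above can be sketched as a small script. This is a sketch only: the ISO name and mount point follow the post, and the script is wrapped in a function so nothing runs by accident.

```shell
#!/bin/sh
# Sketch of the "rebuild Mellanox OFED against the Lustre kernel"
# sequence described above. Paths follow the post; adjust the ISO
# location for your site.
rebuild_mlnx_ofed() {
  mkdir -p /root/mnt
  mount -o loop /root/MLNX_OFED.iso /root/mnt
  # Regenerate the OFED packages against the running (Lustre) kernel;
  # this produces a new ISO containing the rebuilt RPMs.
  /root/mnt/docs/mlnx_add_kernel_support.sh -i /root/MLNX_OFED.iso
  umount /root/mnt
}
```

The rebuilt ISO is what you install from in the next step.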
Now rebuild the lustre-modules RPM to get a ko2iblnd.ko that is compatible with the
Mellanox kernel-ib drivers:
* cd /usr/src/lustre-x.x.x
* ./configure --with-o2ib=/usr/src/openib
* make rpms
* install the newly built lustre-modules-x.x.x RPM on the server.
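The module rebuild can likewise be sketched as follows (x.x.x is the post's placeholder for your Lustre version, left as-is; again wrapped in a function so it is not executed inadvertently):

```shell
#!/bin/sh
# Sketch of the lustre-modules rebuild described above.
rebuild_lustre_modules() {
  # Build against the Mellanox kernel-ib-devel headers installed above.
  cd /usr/src/lustre-x.x.x || return 1   # x.x.x: your Lustre version
  ./configure --with-o2ib=/usr/src/openib
  make rpms
  # Where the rebuilt RPMs land varies by Lustre release; check the
  # build output for the lustre-modules package path.
}
```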
To simplify life somewhat, I maintain my own yum repository of Lustre RPMs that match
the Mellanox OFED I'm using.
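A site-local repository like the one Ron describes can be as simple as a directory of the rebuilt RPMs plus repo metadata. The repo name, paths, and URL below are illustrative, not from the original post:

```shell
#!/bin/sh
# Illustrative layout for a site-local yum repo of rebuilt Lustre RPMs.
REPO_DIR=/tmp/lustre-mlnx-repo    # hypothetical path
mkdir -p "$REPO_DIR"
# Clients would point at the repo with a .repo file like this
# (hypothetical repo id and URL):
cat > "$REPO_DIR/lustre-mlnx.repo" <<'EOF'
[lustre-mlnx]
name=Lustre RPMs rebuilt against Mellanox OFED
baseurl=http://yumhost.example/repos/lustre-mlnx
enabled=1
gpgcheck=0
EOF
# On the repo host, after copying the rebuilt RPMs in:
#   createrepo "$REPO_DIR"
```

The `.repo` file goes in /etc/yum.repos.d/ on each client, so the rebuilt lustre-modules RPM can be installed with plain `yum install`.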
Ron.
Ron Jerome
Research Computing Support | Soutien Informatique à la Recherche
Shared Services Canada | National Research Council Canada
Services partagés Canada | Conseil national de recherches Canada
1200 Montreal Road, Ottawa, Ontario K1A 0R6
Government of Canada | Gouvernement du Canada
From: hpdd-discuss-bounces(a)lists.01.org [mailto:hpdd-discuss-bounces@lists.01.org] On
Behalf Of linux freaker
Sent: May 7, 2013 10:00 AM
To: Charles Taylor
Cc: hpdd-discuss(a)ml01.01.org
Subject: Re: [HPDD-discuss] Mellanox OFED MLNX_OFED_LINUX-1.5.3-3 for Lustre not working
Charles,
How do I reinstall the stock OpenIB stack that ships with RHEL? Has anyone tried
using OpenIB at 40 Gbps? Is this a known issue? I tried it, but it only shows
10 Gbps. How do I bring it up to 40 Gbps?
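For what it's worth, a quick way to see what rate the link actually negotiated is the infiniband-diags tools (not part of the original exchange). A 4X QDR port should report a rate of 40; a 10 Gbps reading often means the link came up at SDR speed or at reduced width, so cables, switch ports, and firmware are worth checking. Wrapped in a function since the tools need real IB hardware:

```shell
#!/bin/sh
# Sketch: inspect the active IB link rate (requires infiniband-diags).
check_ib_rate() {
  ibstat | grep -i rate   # e.g. "Rate: 40" on a healthy 4X QDR port
  ibstatus                # reports rate and width, e.g. "4X QDR"
}
```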
On 7 May 2013 19:00, "Charles Taylor" <taylor(a)hpc.ufl.edu> wrote:
Touché! We are being forced to use the Mellanox/OFED stack now to support our new FDR
IB cards and it *is* painful. We have a meeting with Mellanox today to give them
"what for" (as the Geico Gecko puts it). They seem little concerned with the
needs of their customers in this regard. Nonetheless, we will push.
Of course, a little pressure from a competitor such as Intel, could not hurt either. ;)
Charles A. Taylor
UF HPC Center
On May 7, 2013, at 7:36 AM, Brian J. Murrell wrote:
> On Tue, 2013-05-07 at 16:57 +0530, linux freaker wrote:
>
>
>>
>> I can try it once more but really I feel its painful.
>
> It is, which is why we encourage people to use the I/B stack provided by
> RedHat, already built into their kernel and the corresponding user-space
> packages.
>
> It seems that a lot of people (think they) need to install the Mellanox
> I/B stack instead. Perhaps this is true for your (and everyone else's)
> case, or perhaps not. I can't really tell, and am not really involved
> closely enough in I/B to know.
>
> But ultimately, I wonder why, given all of the apparent need for
> whatever Mellanox does differently than the upstream OFED and/or RH I/B
> stacks are doing, that these needs for the Mellanox stack are not being
> pushed back into either OFED or the upstream kernel I/B stack. Life
> really would be so much easier for so many people if it were.
>
> Maybe if Mellanox hears from enough of their customers about the pain it
> is to use their hardware they might try to get their stuff upstream.
>
> b.
>
>
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss(a)lists.01.org
> https://lists.01.org/mailman/listinfo/hpdd-discuss