I am surprised to see the configuration below:
[root@oss2 ~]# lspci -vv | grep -i Mellanox
05:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)
        Subsystem: Mellanox Technologies Device 0022
[root@oss2 ~]# ibstat
CA 'mlx4_0'
        CA type: MT26428
        Number of ports: 1
        Firmware version: 2.9.1000
        Hardware version: b0
        Node GUID: 0x0002c903000857a6
        System image GUID: 0x0002c903000857a9
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 8
                Base lid: 1
                LMC: 0
                SM lid: 3
                Capability mask: 0x02510868
                Port GUID: 0x0002c903000857a7
                Link layer: InfiniBand
[root@oss2 ~]# ibstatus
Infiniband device 'mlx4_0' port 1 status:
        default gid:     fe80:0000:0000:0000:0002:c903:0008:57a7
        base lid:        0x1
        sm lid:          0x3
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            8.5 Gb/sec (4X)
        link_layer:      InfiniBand
[root@oss2 ~]#
I used the Linux watch utility to see how the values change, and all of a
sudden 40 Gbps changed to 8 Gbps. Surprised!!!
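
A few commands that may help pin down what the link actually negotiated (a minimal sketch, assuming the device name mlx4_0 and port 1 from the output above, and that infiniband-diags is installed):

[root@oss2 ~]# cat /sys/class/infiniband/mlx4_0/ports/1/rate   # sysfs view of the active rate and width
[root@oss2 ~]# ibportstate 1 1                                 # query base LID 1 (this HCA): LinkWidthActive / LinkSpeedActive
[root@oss2 ~]# iblinkinfo                                      # width/speed of every link on the fabric

If LinkWidthActive is still 4X but LinkSpeedActive has dropped to 2.5 Gbps, the port has retrained at SDR instead of QDR (roughly 8 Gbps of usable bandwidth on a 4X link), which usually points at the cable, the switch port or firmware rather than at the choice of software stack.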
On Tue, May 7, 2013 at 7:56 PM, linux freaker <linuxfreaker(a)gmail.com> wrote:
Any idea what the difference is between IPoIB and IB? If I use openib
(which comes with RHEL by default), is it IPoIB? And if I use the Mellanox
MLNX installation, is it IB alone?
Please clarify.
As of now I am trying to remove Mellanox and install openib back. Let's
see if I can make it work.
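
For reference, a rough sketch of going back to the distro stack on RHEL/CentOS 6 (the package and group names are the stock RHEL 6 ones; ofed_uninstall.sh is the uninstall script usually shipped with MLNX_OFED, so check what your installation actually provides):

[root@oss2 ~]# /usr/sbin/ofed_uninstall.sh             # remove the Mellanox OFED packages, if that script is present
[root@oss2 ~]# yum groupinstall "Infiniband Support"   # in-box RDMA/IB stack
[root@oss2 ~]# yum install infiniband-diags perftest   # ibstat, ibportstate, ib_send_bw, ...
[root@oss2 ~]# chkconfig rdma on && service rdma start # load the IB modules at boot and now
[root@oss2 ~]# ibstat                                  # port should come back Active

Either stack gives you both layers: IPoIB is just the IP interface (ib0, configured via an ifcfg-ib0 file under /etc/sysconfig/network-scripts/) on top of the fabric, while native verbs (used by ib_send_bw, MPI, Lustre's o2ib LND, etc.) bypass it.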
On Tue, May 7, 2013 at 7:40 PM, Mi Zhou <mi.zhou(a)stjude.org> wrote:
> Hi,
>
> We are using QDR, CentOS 6.2 and openib; it shows 40 Gbps. Hope this helps.
>
> Mi
>
>
>
>
>
> On 05/07/2013 09:00 AM, linux freaker wrote:
>
> Charles,
> How shall I install openib back, which comes with RHEL by default? Did
> anyone try using openib at 40 Gbps? Is it a known issue? I tried
> using it but it just shows 10 Gbps. How do I bring it to 40 Gbps?
> On 7 May 2013 19:00, "Charles Taylor" <taylor(a)hpc.ufl.edu> wrote:
>
>> Touché! We are being forced to use the Mellanox/OFED stack now to
>> support our new FDR IB cards and it *is* painful. We have a meeting with
>> Mellanox today to give them "what for" (as the Geico Gecko puts it).
>> They seem little concerned with the needs of their customers in this
>> regard. Nonetheless, we will push.
>>
>> Of course, a little pressure from a competitor such as Intel, could not
>> hurt either. ;)
>>
>> Charles A. Taylor
>> UF HPC Center
>>
>> On May 7, 2013, at 7:36 AM, Brian J. Murrell wrote:
>>
>> > On Tue, 2013-05-07 at 16:57 +0530, linux freaker wrote:
>> >
>> >
>> >>
>> >> I can try it once more, but really I feel it's painful.
>> >
>> > It is, which is why we encourage people to use the I/B stack provided by
>> > RedHat, already built into their kernel and the corresponding user-space
>> > packages.
>> >
>> > It seems that a lot of people (think they) need to install the Mellanox
>> > I/B stack instead. Perhaps this is true for your (and everyone else's)
>> > case, or perhaps not. I can't really tell, and am not really involved
>> > closely enough in I/B to know.
>> >
>> > But ultimately, I wonder why, given all of the apparent need for
>> > whatever Mellanox does differently than the upstream OFED and/or RH I/B
>> > stacks are doing, these needs for the Mellanox stack are not being
>> > pushed back into either OFED or the upstream kernel I/B stack. Life
>> > really would be so much easier for so many people if it were.
>> >
>> > Maybe if Mellanox hears from enough of their customers about the pain it
>> > is to use their hardware they might try to get their stuff upstream.
>> >
>> > b.
>> >
>> >
>> >