You are not seeing 8 GBps; the rate is being reported as 8.5 Gbps instead of 10 Gbps, which is a display bug. An SDR 4X link is 4 lanes at 2.5 Gb/s each, so the rate should read 10 Gb/s.

 

This commit from OFED 1.5.4.1 should have addressed that:

 

commit 91290f5b55984cc71e0ac3896d27c2e7fb5ce51f
Author: Marcel Apfelbaum <marcela@xxxxxxxxxxxxxxxxxx>
Date:   Sun Jan 29 18:13:39 2012 +0200
 
    IB/core: fix wrong display of rate in sysfs
    
    commit "IB: Add new InfiniBand link speeds" introduced a bug under
    which the rate for IB SDR/4X links was displayed as 8.5Gbs
    instead of 10Gbs, fix that.
    
    Signed-off-by: Or Gerlitz <ogerlitz@xxxxxxxxxxxx>
    Signed-off-by: Marcel Apfelbaum <marcela@xxxxxxxxxxxxxxxxxx>

 

Please make sure you have a newer release.
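
If you want to double-check, something along these lines should do it (a rough sketch; the device name mlx4_0 and port 1 match your output below, so adjust them for other systems):

# Show which OFED release is installed (ofed_info ships with OFED):
ofed_info -s

# With the fix in place, the sysfs rate for an SDR 4X port should read
# 10 Gb/sec: 4 lanes x 2.5 Gb/s per lane = 10 Gb/s.
cat /sys/class/infiniband/mlx4_0/ports/1/rate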

 

BTW, IPoIB is an upper-layer protocol that runs on top of IB (short for InfiniBand). It is like IP over Ethernet, except that here the link/physical layer happens to be InfiniBand (hence the name IP over IB). Running an IP-over-InfiniBand interface requires the IPoIB module, a handful of supporting kernel modules, and the IB driver from the HCA vendor. These usually come pre-packaged with the OS.

The OFED package provides more utilities on top of this.
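
For example, on a stock RHEL install, bringing up an IPoIB interface usually comes down to something like this (a sketch; ib0 and the 10.0.0.1/24 address are placeholders for your setup):

# Load the IPoIB upper-layer protocol module (the mlx4 HCA driver is
# normally loaded at boot by the rdma/openibd service):
modprobe ib_ipoib

# Configure the resulting ib0 interface like any other IP interface:
ifconfig ib0 10.0.0.1 netmask 255.255.255.0 up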

 

-Suri

 

 


From: hpdd-discuss-bounces@ml01.01.org [mailto:hpdd-discuss-bounces@ml01.01.org] On Behalf Of linux freaker
Sent: Tuesday, May 07, 2013 10:29 AM
To: Mi Zhou
Cc: hpdd-discuss@lists.01.org
Subject: Re: [HPDD-discuss] Mellanox OFED MLNX_OFED_LINUX-1.5.3-3 for Lustre not working

 

I am surprised to see the configuration below:

 

[root@oss2 ~]# lspci -vv | grep -i  Mellanox

05:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)

        Subsystem: Mellanox Technologies Device 0022

[root@oss2 ~]# ibstat

CA 'mlx4_0'

        CA type: MT26428

        Number of ports: 1

        Firmware version: 2.9.1000

        Hardware version: b0

        Node GUID: 0x0002c903000857a6

        System image GUID: 0x0002c903000857a9

        Port 1:

                State: Active

                Physical state: LinkUp

                Rate: 8

                Base lid: 1

                LMC: 0

                SM lid: 3

                Capability mask: 0x02510868

                Port GUID: 0x0002c903000857a7

                Link layer: InfiniBand

[root@oss2 ~]# ibstatus

Infiniband device 'mlx4_0' port 1 status:

        default gid:     fe80:0000:0000:0000:0002:c903:0008:57a7

        base lid:        0x1

        sm lid:          0x3

        state:           4: ACTIVE

        phys state:      5: LinkUp

        rate:            8.5 Gb/sec (4X)

        link_layer:      InfiniBand

 

[root@oss2 ~]#

 

I used the Linux watch utility to see how the values change, and all of a sudden 40 Gbps changed to 8 GBps. Surprised!!!
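
For reference, the command was roughly this (device and port as shown above):

# Refresh the port status every second to watch the reported rate:
watch -n 1 ibstatus mlx4_0:1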

 

 

On Tue, May 7, 2013 at 7:56 PM, linux freaker <linuxfreaker@gmail.com> wrote:

Any idea what the difference is between IPoIB and IB? If I use openib (which comes with default RHEL), is it IPoIB? And if I use the Mellanox MLNX installation, is it IB alone?

Please clarify.

 

As of now I am trying to remove Mellanox and reinstall openib. Let's see if I can make it work.

 

On Tue, May 7, 2013 at 7:40 PM, Mi Zhou <mi.zhou@stjude.org> wrote:

Hi,

We are using QDR, CentOS 6.2, and openib; it shows 40 Gbps. Hope this helps.

Mi






On 05/07/2013 09:00 AM, linux freaker wrote:

Charles,
How shall I install back the openib that comes with default RHEL? Has anyone tried using openib at 40 Gbps? Is this a known issue? I tried using it, but it just shows 10 Gbps. How do I bring it up to 40 Gbps?

On 7 May 2013 19:00, "Charles Taylor" <taylor@hpc.ufl.edu> wrote:

Touché! We are being forced to use the Mellanox/OFED stack now to support our new FDR IB cards, and it *is* painful. We have a meeting with Mellanox today to give them "what for" (as the Geico Gecko puts it). They seem little concerned with the needs of their customers in this regard. Nonetheless, we will push.

Of course, a little pressure from a competitor such as Intel, could not hurt either.  ;)

Charles A. Taylor
UF HPC Center

On May 7, 2013, at 7:36 AM, Brian J. Murrell wrote:

> On Tue, 2013-05-07 at 16:57 +0530, linux freaker wrote:
>
>
>>
>> I can try it once more, but really I feel it's painful.
>
> It is, which is why we encourage people to use the I/B stack provided by
> RedHat, already built into their kernel and the corresponding user-space
> packages.
>
> It seems that a lot of people (think they) need to install the Mellanox
> I/B stack instead.  Perhaps this is true for your (and everyone else's)
> case, or perhaps not.  I can't really tell, and am not really involved
> closely enough in I/B to know.
>
> But ultimately, I wonder why, given all of the apparent need for
> whatever Mellanox does differently than the upstream OFED and/or RH I/B
> stacks, these changes are not being pushed back into either OFED or the
> upstream kernel I/B stack.  Life really would be so much easier for so
> many people if they were.
>
> Maybe if Mellanox hears from enough of their customers about how painful
> it is to use their hardware, they might try to get their stuff upstream.
>
> b.
>
>
>


 




_______________________________________________
HPDD-discuss mailing list
HPDD-discuss@lists.01.org
https://lists.01.org/mailman/listinfo/hpdd-discuss