On Mon, Mar 05, 2018 at 09:57:27PM +0200, Sagi Grimberg wrote:
> Keith, while we're on this, regardless of CMB, is the SQE memcpy and
> doorbell ordering always guaranteed?
>
> If you look at mlx4 (an RDMA device driver), which works exactly the
> same way as nvme, you will find:
>
>	qp->sq.head += nreq;
>
>	/*
>	 * Make sure that descriptors are written before
>	 * doorbell record.
>	 */
>	wmb();
>
>	writel(qp->doorbell_qpn,
>	       to_mdev(ibqp->device)->uar_map + MLX4_SEND_DOORBELL);
>
>	/*
>	 * Make sure doorbells don't leak out of SQ spinlock
>	 * and reach the HCA out of order.
>	 */
>	mmiowb();
>
> That code explicitly places a barrier before updating the doorbell. So
> as I see it, either ordering is guaranteed and the above code is
> redundant, or nvme needs to do the same.
A wmb() is always required before operations that can trigger DMA.

The reason ARM has a barrier in writel() is not to make it ordered
with respect to CPU stores to cacheable memory, but to make it ordered
with respect to *other* writels.
Eg Linux defines this:

	writel(A, mem);
	writel(B, mem);

to always produce two TLPs on PCI-E when mem is UC BAR memory, and ARM
cannot guarantee that without the extra barrier.

So now we see stuff like this:

	writel_relaxed(A, mem);
	writel_relaxed(B, mem);

which says the TLPs to A and B can be issued in any order..
So when reading the above mlx4 code, we see the first wmb() being used
to ensure that CPU stores to cacheable memory are visible to the DMA
triggered by the doorbell ring.
The mmiowb() is used to ensure that DB writes are not combined and not
issued in any order other than the one implied by the lock that
encloses the whole thing. This is needed because uar_map is WC memory.
We don't have ordering with respect to two writel's here, so if ARM
performance was a concern the writel could be switched to
writel_relaxed().
Presumably nvme has similar requirements, although I guess the DB
register is mapped UC not WC?