On 01/03/18 03:45 PM, Jason Gunthorpe wrote:
> I can appreciate you might have some special use case for that, but
> it absolutely should require special configuration and not just magically
Well, if a driver doesn't want someone doing P2P transfers with the memory,
it shouldn't publish it to be used for exactly that purpose.
> You bring up IB adaptor memory - if we put that into the allocator
> then what is to stop the NVMe driver just using it instead of the CMB
> buffer? That would be totally wrong in almost all cases.
If you mean the SQEs in the NVMe driver, look at the code: it
specifically allocates them from its own device. If you mean
nvmet-rdma, then it's using that memory exactly as it was meant to be used.
Again, if the IB driver doesn't want someone to use that memory for P2P
transfers, it shouldn't publish it as such.
> Seems like a very subtle and hard to debug performance trap to leave
> for the users, and pretty much the only reason to use P2P is
> performance... So why have such a dangerous interface?
It's not at all dangerous: the code specifically only uses P2P memory
that's local enough, and the majority of the code is there to make sure
it will all work in all cases.
Honestly, though, I'd love to go back to the case where the user selects
which p2pmem device to use, but that was very unpopular last year. It
would simplify a bunch of things though.
Also, no, the reason to use P2P is not performance. Only if you have
very specific hardware can you get a performance bump, and it isn't all
that significant. The reason to use P2P is so you can design performant
systems with small CPUs, less or slower DRAM, and low lane counts to the