* Ingo Molnar <mingo@kernel.org> wrote:
> I'd say that this particular series mostly addresses the 'pfn
> sector_t' side of the equation, where persistent memory is IO space,
> not memory space, and as such it is the more natural and thus also
> the cheaper/faster approach.
>
> For anything more complex, that maps any of this storage to
> user-space, or exposes it to higher level struct page based APIs,
> etc., where references matter and it's more of a cache with
> potentially multiple users, not an IO space, the natural API is [...]

Let me walk back on this:
... but that does not appear to be the case: this series replaces a
'struct page' interface with a pure pfn interface for the express
purpose of being able to DMA to/from 'memory areas' that are not
struct page backed.
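
To make that concrete, the descriptor change at the core of such an
approach looks conceptually something like the sketch below. This is
my own illustration, not code from the series; the type name, the
low-bit encoding and the helper are all made up:

  #include <stdbool.h>

  struct page;                          /* opaque here; defined by the kernel */

  /*
   * Sketch only: a block layer descriptor that can carry either a
   * struct page pointer or a raw pfn, so IO can target memory that
   * has no struct page backing.  The low-bit flag is an invented
   * convention for illustration.
   */
  typedef union {
          struct page *page;            /* the ordinary, page-backed case */
          unsigned long pfn_data;       /* raw pfn plus a low flag bit */
  } pfn_desc_t;

  static inline bool pfn_desc_is_raw_pfn(pfn_desc_t p)
  {
          return p.pfn_data & 1;        /* set => no struct page behind it */
  }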
So what this patch set tries to achieve is (sector_t -> sector_t) IO
between storage devices (i.e. a rare and somewhat weird usecase), and
does it by squeezing one device's storage address into our formerly
struct page backed descriptor, via a pfn.

That looks like a layering violation and a mistake to me. If we want
to do direct (sector_t -> sector_t) IO, with no serialization worries,
it should have its own (simple) API - which things like hierarchical
RAID or RDMA APIs could use.

Linus probably disagrees? :-)

[ and he'd disagree rightfully ;-) ]
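
For the record, here is roughly what I mean by 'its own (simple)
API' - a hypothetical prototype, nothing like this exists today, the
name and signature are purely illustrative:

  /*
   * Hypothetical device-to-device copy primitive (kernel context
   * assumed): move nr_sectors from one block device to another
   * without building struct page backed bios in between.  Stacked
   * drivers (hierarchical RAID) or RDMA users would call this
   * directly.
   */
  int blkdev_copy_sectors(struct block_device *src_bdev, sector_t src_sector,
                          struct block_device *dst_bdev, sector_t dst_sector,
                          unsigned int nr_sectors);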
If what we want to do is to support, say, an mmap() of a file on
persistent storage, and then read() into that file from another device
via DMA, then I think we should have allocated struct page backing at
mmap() time already, and all regular syscall APIs would 'just work'
from that point on - far above what page-less, pfn-based APIs can do.
The temporary struct page backing can then be freed at munmap() time.
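
From the user-space side this is what 'just work' means - a minimal
sketch with placeholder paths and most error handling trimmed,
assuming the kernel allocates the page backing at mmap() time as
described above:

  #define _GNU_SOURCE                   /* O_DIRECT */
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          int pmem_fd = open("/mnt/pmem/file", O_RDWR);
          int src_fd  = open("/dev/sdb", O_RDONLY | O_DIRECT);
          size_t len  = 1 << 20;

          if (pmem_fd < 0 || src_fd < 0)
                  return 1;

          /* struct page backing would be allocated here ... */
          char *dst = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, pmem_fd, 0);
          if (dst == MAP_FAILED)
                  return 1;

          /* ... so a plain read() can DMA straight into the mapping,
             with no pfn-aware special casing anywhere */
          if (read(src_fd, dst, len) < 0)
                  return 1;

          munmap(dst, len);             /* ... and freed again here */
          close(src_fd);
          close(pmem_fd);
          return 0;
  }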
And if the usage is pure fd based, we don't really have fd-to-fd APIs
beyond the rarely used splice variants (and even those don't do pure
cross-IO, they use a pipe as an intermediary), so there's no problem
to solve, I suspect.
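
For completeness, this is the splice variant I mean - note the
mandatory pipe in the middle; a sketch with partial-transfer and
error handling trimmed:

  #define _GNU_SOURCE                   /* splice(), SPLICE_F_MOVE */
  #include <fcntl.h>
  #include <unistd.h>

  static ssize_t fd_to_fd_copy(int src_fd, int dst_fd, size_t len)
  {
          int pipefd[2];
          ssize_t n;

          if (pipe(pipefd) < 0)
                  return -1;

          /* device -> pipe ... */
          n = splice(src_fd, NULL, pipefd[1], NULL, len, SPLICE_F_MOVE);
          if (n > 0)
                  /* ... pipe -> device: there is no direct cross-IO path */
                  n = splice(pipefd[0], NULL, dst_fd, NULL, (size_t)n,
                             SPLICE_F_MOVE);

          close(pipefd[0]);
          close(pipefd[1]);
          return n;
  }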