On Thu, May 07, 2015 at 06:18:07PM +0200, Christoph Hellwig wrote:
> On Wed, May 06, 2015 at 05:19:48PM -0700, Linus Torvalds wrote:
> > What is the primary thing that is driving this need? Do we have a very
> > concrete example?
>
> FYI, I plan to implement RAID acceleration using nvdimms, and I plan to
> use pages for that. The code just merged for 4.1 can easily support page
> backing, and I plan to use that for now. This still leaves out support
> for the gigantic Intel nvdimms discovered over EFI, but given that I
> don't have access to them, and I don't know of any publicly available
> ones, there's little I can do for now. But adding on-demand allocated
> struct pages for them seems like the easiest way forward. Boaz already
> has code to allocate pages for them, although not on demand but at
> boot / plug-in time.

I think other folks might be interested here, I am ccing Paul. For GPUs
we are facing a similar issue of trying to present GPU memory to the kernel
in a coherent way (coherent from the design and Linux kernel concept POV).
For this, dynamically allocated struct pages might effectively be a solution
that could be shared between the persistent memory and GPU folks. We could
even enforce things like VMEMMAP and have a special region carveout where
we can dynamically map/unmap backing pages for ranges of device pfns. This
would also allow catching people trying to access such pages: we could add
a set of new helpers like get_page_dev()/put_page_dev() ... and only the
_dev versions would work on this new kind of memory; regular
get_page()/put_page() would throw an error. This should allow us to make
sure only legitimate users are referencing such pages.
One issue might be that we can run out of kernel address space with 48 bits,
but if such monstrous computers ever see the light of day they might consider
using CPUs with more bits.
Another issue is that we might care about 32-bit platforms too, but that's
solvable at a small cost.