* Rik van Riel <riel@redhat.com> wrote:

> The disadvantage is pretty obvious too: 4kB pages would no longer be
> the fast case, with an indirection. I do not know how much of an
> issue that would be, or whether it even makes sense for 4kB pages to
> continue being the fast case going forward.

I strongly disagree that 4kB no longer matters: it is _the_ bread and
butter of 99% of Linux usecases. 4kB isn't going away anytime soon -
THP might look nice in benchmarks, but it matters far less in
practice, and for filesystems and IO it's absolutely crazy to think
in 2MB granularity.

Having said that, I don't think a single jump of indirection is a big
issue - except for the present case where all the pmem IO space is
mapped non-cacheable. Write-through caching patches are in the works
though, and that should make it plenty fast.
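
For illustration, here is a rough userspace sketch of what that single
extra hop could look like: a per-2MB descriptor that only optionally
carries an array of 4kB page structs. All of the names below
(small_page, section_meta, pfn_to_small_page) are invented for the
example; it shows the shape of the indirection being discussed, not an
actual kernel implementation:

  #include <stdlib.h>

  #define SECTION_SHIFT     9                      /* 512 * 4kB = 2MB */
  #define PAGES_PER_SECTION (1UL << SECTION_SHIFT)

  struct small_page { unsigned long flags; };  /* stand-in for struct page */

  struct section_meta {
          struct small_page *pages;   /* NULL until 4kB tracking is needed */
  };

  static struct section_meta *sections;   /* indexed by pfn >> SECTION_SHIFT */

  /* pfn -> page now costs one extra dependent load (the indirection). */
  static struct small_page *pfn_to_small_page(unsigned long pfn)
  {
          struct section_meta *sec = &sections[pfn >> SECTION_SHIFT];

          if (!sec->pages)            /* 2MB-only region: no 4kB structs */
                  return NULL;
          return &sec->pages[pfn & (PAGES_PER_SECTION - 1)];
  }

  /* Allocate 4kB structs only for sections that actually need them. */
  static int section_populate_small(struct section_meta *sec)
  {
          if (!sec->pages)
                  sec->pages = calloc(PAGES_PER_SECTION, sizeof(*sec->pages));
          return sec->pages ? 0 : -1;
  }

  int main(void)
  {
          unsigned long nr_sections = 4;           /* pretend 8MB of pmem */

          sections = calloc(nr_sections, sizeof(*sections));
          if (!sections || section_populate_small(&sections[0]))
                  return 1;
          /* pfn 0 resolves; a pfn in an unpopulated section returns NULL */
          if (!pfn_to_small_page(0) || pfn_to_small_page(PAGES_PER_SECTION))
                  return 1;
          return 0;
  }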

> Memory trends point in one direction, file size trends in another.
>
> For persistent memory, we would not need 4kB page struct pages
> unless memory from a particular area was in small files AND those
> files were being actively accessed. [...]

Average file size on my system's /usr is 12.5K:
 triton:/usr> ( echo -n $(echo $(find . -type f -printf "%s\n") | sed 's/ /+/g' | bc);
     echo -n "/"; find . -type f -printf "%s\n" | wc -l; ) | bc -l

> [...] Large files (mapped in 2MB chunks) or inactive small files
> would not need the 4kB page structs around.

... they are the utterly uncommon case. 4K is here to stay, and for a
very long time - for as long as humans use computers, I suspect.

But I don't think the 2MB metadata chunking is wrong per se.
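
Back-of-the-envelope, assuming the usual 64-byte struct page, the 2MB
chunking is what keeps the metadata for large pmem sizes sane at all:

  1 TB of pmem, 4kB granularity:  2^28 pages  * 64 bytes = 16 GB of page structs
  1 TB of pmem, 2MB granularity:  2^19 chunks * 64 bytes = 32 MB of page structs

with the 4kB structs only materialized for the subset of 2MB chunks
that end up holding small, actively accessed files.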