And when the filesystem says no because the fs devs don't want to
have to deal with broken apps because app devs learn that "this is a
go fast knob" and data integrity be damned? It's "fsync is slow so I
won't use it" all over again...
And, please keep in mind: many application developers will not
design for pmem because they also have to support traditional
storage backed by page cache. If they use msync(), the app will work
on any storage stack, but just be much, much faster on pmem+DAX. So,
really, we have to make the msync()-only model work efficiently, so
we may as well design for that in the first place....
Both of these snippets seem to be arguing that we should make msync/fsync
more efficient. But I don't think anyone is arguing the opposite. Is
someone saying we shouldn't make the msync()-only model work efficiently?
Said another way: the common case for DAX will be applications simply
following the POSIX model. open, mmap, msync... That will work fine
and of course we should optimize that path as much as possible. Less
common are latency-sensitive applications built to leverage the
byte-addressable nature of pmem. File systems supporting this model will
indicate support via a new ioctl reporting that CPU cache flushes are
sufficient to make stores persistent. But I don't see how that
direction is getting turned into an argument against msync() efficiency.
Which brings up another point: advanced new functionality
is going to require native pmem filesystems.
I agree there's opportunity for new filesystems (and old) to leverage
what pmem provides. But the word "require" implies that's the only
way to go, and we know that's not the case. Using ext4+dax to map
pmem into an application allows that application to use the pmem
directly, and a good number of software projects are doing exactly that.