On Thu, Feb 25, 2016 at 02:11:49PM -0500, Jeff Moyer wrote:
> Jeff Moyer <jmoyer(a)redhat.com> writes:
> >> The big issue we have right now is that we haven't made the DAX/pmem
> >> infrastructure work correctly and reliably for general use. Hence
> >> adding new APIs to work around cases where we haven't yet provided
> >> correct behaviour, let alone optimised for performance, is, quite
> >> frankly, a clear case of premature optimisation.
> > Again, I see the two things as separate issues. You need both.
> > Implementing MAP_SYNC doesn't mean we don't have to solve the bigger
> > issue of making existing applications work safely.
> I want to add one more thing to this discussion, just for the sake of
> clarity. When I talk about existing applications and pmem, I mean
> applications that already know how to detect and recover from torn
> sectors. Any application that assumes hardware does not tear sectors
> should be run on a file system layered on top of the BTT.
Which turns off DAX and hence makes this a moot discussion: mmap is
then buffered through the page cache, so applications *must use
msync/fsync* to provide data integrity. That also makes them safe to
use with DAX once we have a working fsync.
Keep in mind that existing storage technologies tear filesystem data
writes, too, because user data writes are filesystem block sized and
not atomic at the device level (i.e. typically a 512 byte sector and a
4k filesystem block size, so there are 7 points within a single write
where a tear can occur on a crash).
IOWs, existing storage already has the capability of tearing user
data on crash and has been doing so for at least the last 30 years.
Hence I really don't see any fundamental difference here with
pmem+DAX - the only difference is that the tear granularity is
smaller (a CPU cacheline rather than a sector).