Dan Williams <dan.j.williams@intel.com> writes:

>> Let's just focus on reporting errors when we know we have them.
>
> That's the problem in my eyes. If software needs to contend with
> latent error reporting then it should always contend; otherwise
> software has multiple error models to wrangle.

The only way for an application to know that the data has been written
successfully would be to issue a read after every write. That's not a
performance hit most applications are willing to take. And, of course,
the media can still go bad at a later time, so it only guarantees the
data is accessible immediately after having been written.
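
Just to make that cost concrete, a read-back check means every write
turns into a write, a read, and a compare. A minimal userspace sketch
(the function name and error handling are mine, and it assumes the
file sits on a fsdax mount so the read actually touches media rather
than the page cache):

/* Hypothetical read-back verification after every write. */
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int write_and_verify(int fd, const void *buf, size_t len, off_t off)
{
	ssize_t ret = pwrite(fd, buf, len, off);

	if (ret != (ssize_t)len)
		return -1;		/* the write itself failed */

	void *check = malloc(len);
	if (!check)
		return -1;

	/* The extra read plus compare is the performance hit in question. */
	ret = pread(fd, check, len, off);
	if (ret != (ssize_t)len || memcmp(buf, check, len) != 0) {
		free(check);
		return -1;
	}
	free(check);
	return 0;
}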
What I'm suggesting is that we should not complete a write successfully
if we know that the data will not be retrievable. I wouldn't call this
adding an extra error model to contend with. Applications should
already be checking for errors on write.
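
In other words, the failure surfaces exactly where applications
already look for it. A caller along these lines (again, just a
sketch) needs no new machinery to handle a write that fails because
the range is known to be poisoned:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int store_record(int fd, const void *rec, size_t len, off_t off)
{
	ssize_t ret = pwrite(fd, rec, len, off);

	if (ret < 0) {
		/* e.g. EIO because the block is known bad */
		fprintf(stderr, "write failed: %s\n", strerror(errno));
		return -1;
	}
	if ((size_t)ret != len)
		return -1;		/* short write */
	return 0;
}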
Does that make sense? Are we talking past each other?

> Setting that aside we can start with just treating zeroing the same as
> the copy_from_iter() case and fail the I/O at the dax_direct_access()
> stage.
>
> I'd rather have a separate op that filesystems can use to clear errors
> at block allocation time that can be enforced to have the correct
> alignment.
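
If it helps to pin down the zeroing-path option above, here is a toy
model of "fail the zeroing I/O up front"; all of the types and helpers
below are simplified stand-ins I made up, not the real
dax_direct_access() or badblocks interfaces:

#include <stddef.h>
#include <string.h>

struct toy_dax_dev {
	char	*base;		/* direct-mapped memory */
	size_t	 bad_off;	/* one known-bad extent, for brevity */
	size_t	 bad_len;
};

/* Stand-in for the dax_direct_access() stage: refuse to hand out a
 * mapping that overlaps a known-bad range.
 */
static char *toy_direct_access(struct toy_dax_dev *dev, size_t off, size_t len)
{
	if (off < dev->bad_off + dev->bad_len && dev->bad_off < off + len)
		return NULL;	/* known poison: fail the I/O here */
	return dev->base + off;
}

/* Zeroing then follows the same pattern as the copy_from_iter() case:
 * get the mapping, or fail the I/O if we can't.
 */
static int toy_zero_range(struct toy_dax_dev *dev, size_t off, size_t len)
{
	char *kaddr = toy_direct_access(dev, off, len);

	if (!kaddr)
		return -5;	/* -EIO in the real code */
	memset(kaddr, 0, len);
	return 0;
}

The separate op you'd rather have is the part I'm trying to pin down: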
So would file systems always call that routine instead of zeroing, or
would they first check to see if there are badblocks?
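
To frame that question with a sketch (the op name, signature, and
helpers are all made up for illustration): option one below always
goes through a clearing op on allocation, while option two consults
the badblocks list first and only takes the clearing path when the
range is actually poisoned.

#include <stdbool.h>
#include <stddef.h>

struct alloc_range { size_t start; size_t len; };

/* Hypothetical ops a dax/pmem driver might expose to the filesystem. */
struct toy_dax_ops {
	int  (*clear_errors)(struct alloc_range r);	/* clears poison and zeroes */
	bool (*range_has_badblocks)(struct alloc_range r);
	int  (*zero_range)(struct alloc_range r);	/* plain zeroing */
};

/* Option one: always call the clearing routine instead of zeroing. */
static int alloc_always_clear(const struct toy_dax_ops *ops,
			      struct alloc_range r)
{
	return ops->clear_errors(r);
}

/* Option two: check for badblocks first, clear only when needed. */
static int alloc_check_first(const struct toy_dax_ops *ops,
			     struct alloc_range r)
{
	if (ops->range_has_badblocks(r))
		return ops->clear_errors(r);
	return ops->zero_range(r);
}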