>> >> Yes, the GUID will specifically identify this range as "Virtio
>> >> Memory" (or whatever name survives after a bikeshed debate). The
>> >> libnvdimm core then needs to grow a new region type that mostly
>> >> behaves the same as a "pmem" region, but
>> >> drivers/nvdimm/pmem.c grows a
>> >> new flush interface to perform the host communication. Device-dax
>> >> would be disallowed from attaching to this region type, or we could
>> >> grow a new device-dax type that does not allow the raw device to be
>> >> mapped, but allows a filesystem mounted on top to manage the flush
>> >> interface.
>> > I am afraid it is not a good idea that a single SPA range is used for
>> > multiple purposes. The region used as "pmem" is directly mapped into the
>> > VM, so that the guest can freely access it without the host's
>> > assistance; however, the region used for "host communication" is not
>> > mapped into the VM, so accessing it causes a VM-exit and the host gets
>> > the chance to do specific operations, e.g., flush the cache. So we'd
>> > better define these two regions distinctly to avoid unnecessary
>> > complexity in the hypervisor.
>> Good point, I was assuming that the mmio flush interface would be
>> discovered separately from the NFIT-defined memory range. Perhaps via
>> PCI in the guest? This piece of the proposal needs a bit more thought.
> Also, in earlier discussions we agreed on an entire-device flush whenever
> the guest performs an fsync on a DAX file. If we do an MMIO call for
> this, the guest CPU would be trapped for the whole duration until the
> device flush is completed.
> Instead, if we perform an asynchronous flush, guest CPUs can be utilized
> for other tasks until the flush completes?
Yes, the interface for the guest to trigger and wait for flush
requests should be asynchronous, just like a storage device's
"flush-cache" command.
One idea I got while discussing this with Rik & Amit during KVM Forum is
to use something similar to the Hyper-V key-value pair mechanism for
sharing commands between guest <=> host. Does such a thing exist yet for
KVM? Or how can we utilize existing features in KVM to achieve this?