> - Qemu virtio-pmem device
> It exposes a persistent memory range to a KVM guest which,
> on the host side, is file-backed memory and works as a persistent
> memory device. In addition, it provides virtio device handling
> for the flushing interface. The KVM guest performs an
> asynchronous Qemu-side sync using this interface.
A random high-level question:
Have you considered using a separate virtio device (separate from
the memory itself) as a controller for exposing the memory and
handling async flushing, and then just slaving pc-dimm devices to
it, with the notification/ACPI code suppressed so that the guest
won't touch them?
That way it might be more scalable: you consume only 1 PCI slot
for the controller vs. multiple slots for virtio-pmem devices.
That sounds like a good suggestion. I will note it as an
enhancement once the other concerns related to the basic working
of the 'flush' interface are addressed. Then we can probably work
on the 'needs optimizing' items on top of robust core flush
functionality.
BTW, is there any sample code doing this right now in Qemu?
> Changes from the previous RFC:
> - Reuse the existing 'pmem' code for registering persistent
> memory and other operations instead of creating an entirely
> new block driver.
> - Use the VIRTIO driver to register memory information with
> nvdimm_bus and create the region_type accordingly.
> - Call the VIRTIO flush from the existing pmem driver.
> Details of the project idea for the 'fake DAX' flushing interface
> are shared  & .
> Pankaj Gupta (2):
> Add virtio-pmem guest driver
> pmem: device flush over VIRTIO
>  https://marc.info/?l=linux-mm&m=150782346802290&w=2
>  https://www.spinics.net/lists/kvm/msg149761.html
>  https://www.spinics.net/lists/kvm/msg153095.html
> drivers/nvdimm/region_devs.c     |   7 ++
> drivers/virtio/Kconfig           |  12 +++
> drivers/virtio/Makefile          |   1
> drivers/virtio/virtio_pmem.c     | 118
> include/linux/libnvdimm.h        |   4 +
> include/uapi/linux/virtio_ids.h  |   1
> include/uapi/linux/virtio_pmem.h |  58 +++++++++++++++++++
> 7 files changed, 201 insertions(+)