Add py-spdk client for SPDK
by We We
Hi, all
I have submitted the py-spdk code at https://review.gerrithub.io/#/c/379741/. Please take some time to review it; I would be very grateful.
py-spdk is a client that helps upper-level applications communicate with SPDK-based apps (such as nvmf_tgt, vhost, iscsi_tgt, etc.). Should I submit it to a separate repository that I create rather than the SPDK repo? I ask because I think it is a relatively independent kit built on top of SPDK.
If you have any thoughts about py-spdk, please share them with me.
Regards,
Helloway
2 years, 7 months
SPDK + user space appliance
by Shahar Salzman
Hi all,
Sorry for the delay; I had to solve a quarantine issue in order to get access to the list.
Some clarifications regarding the user space application:
1. The application is not the nvmf_tgt; we have an entire appliance into which we are integrating SPDK.
2. We are currently using nvmf_tgt functions to activate SPDK, and bdev_user to handle I/O.
3. This is all in user space (I am used to the kernel/user distinction to separate protocol from appliance).
4. bdev_user will also notify SPDK of changes to namespaces (e.g. a new namespace has been added and can be attached to the SPDK subsystem).
I am glad that this is your intention. The question is: do you think it would be useful to create such a bdev_user module, which would allow other users to integrate SPDK into their appliance using this simple threading model? Perhaps such a module would make SPDK integration easier.
I am attaching a reference application which does null I/O via bdev_user.
Regarding the RPC, we have an implementation of it, and will be happy to push it upstream.
I am not sure that using the RPC for this type of bdev_user namespace is the correct approach in the long run: the user appliance is the one adding/removing namespaces (like hot-plugging a new NVMe device), so it can just call the "add_namespace_to_subsystem" interface directly and does not need an RPC for it.
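To make that concrete, here is a rough sketch of the kind of direct call path I have in mind. The helper names are placeholders for whatever the bdev_user module would end up exposing; they are not existing SPDK APIs:
```
#include "spdk/nvmf.h"

/* Placeholder declarations: find_target_subsystem() stands for whatever
 * appliance-specific lookup returns the subsystem to extend, and
 * add_namespace_to_subsystem() stands for the direct interface mentioned
 * above. Neither is an upstream SPDK function. */
struct spdk_nvmf_subsystem *find_target_subsystem(void);
int add_namespace_to_subsystem(struct spdk_nvmf_subsystem *subsys, const char *bdev_name);

/* Called by the appliance when it hot-adds a namespace, instead of going
 * through the JSON-RPC server. */
static int
on_appliance_namespace_added(const char *bdev_name)
{
	struct spdk_nvmf_subsystem *subsys = find_target_subsystem();

	if (subsys == NULL) {
		return -1;
	}
	return add_namespace_to_subsystem(subsys, bdev_name);
}
```
The point is only the shape of the flow: the appliance already knows when a namespace appears, so it can drive the subsystem update itself rather than round-tripping through an RPC.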
Thanks,
Shahar
3 years, 10 months
Need help for fixing NVMe probe problem in NVMeoF initiator running fio.
by Sreeni (Sreenivasa) Busam (Stellus)
Hello,
I have configured the target and initiator for a subsystem with 1 NVMe device in target.
Here are the errors I am getting on the initiator. The NVMe device on the target side is good, but I am still getting the errors below.
If you know why the initiator does not initialize the controller, and the reason for the error, please let me know.
Target log:
Starting DPDK 17.08.0 initialization...
[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid27838 ]
EAL: Detected 32 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 364:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
nvmf_tgt.c: 178:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 178:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2017-06.io.spdk-MPcnode1 on lcore 0 on socket 0
rdma.c:1146:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1353:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 172.17.2.175 port 11345 ***
nvmf_tgt.c: 255:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on socket 0
rdma.c:1515:spdk_nvmf_rdma_poll_group_create: *NOTICE*: Skipping unused RDMA device when creating poll group.
Everything seems to be fine in the target application until the initiator connects to it and creates a namespace.
NVMF configuration file:
[Nvmf]
MaxQueuesPerSession 4
AcceptorPollRate 10000
[Subsystem1]
NQN nqn.2017-06.io.spdk-MPcnode1
Core 1
SN SPDK0000000000000001
Listen RDMA 172.17.2.175:11345
AllowAnyHost Yes
NVMe 0000:84:00.0
Initiator log:
./fio --name=nvme --numjobs=1 --filename="trtype=RDMA adrfam=IPV4 traddr=172.17.2.175 trsvcid=11345 subnqn=nqn.2017-06.io.spdk-MPcnode1 ns=1" --bs=4K --iodepth=1 --ioengine=/home.local/sfast/spdk20/spdk/examples/nvme/fio_plugin/fio_plugin --sync=0 --norandommap --group_reporting --size=12K --runtime=3 -rwmixwrite=30 --thread=1 --rw=rw
nvme: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=1
fio-3.3
Starting 1 thread
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: fio -c 0x1 -m 512 --file-prefix=spdk_pid28214 ]
EAL: Detected 32 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
nvme_ctrlr.c:1031:nvme_ctrlr_construct_namespaces: *ERROR*: controller has 0 namespaces
fio_plugin.c: 298:spdk_fio_setup: *ERROR*: spdk_nvme_probe()
Thanks for your suggestion
Sreeni
4 years, 1 month
Re: [SPDK] Buffer I/O error on bigger block size running fio
by Harris, James R
Hi Victor,
Could you provide a few more details? This will help the list to provide some ideas.
1) On the client, are you using the SPDK NVMe-oF initiator or the kernel initiator?
2) Can you provide the fio configuration file or command line? Just so we can have more specifics on “bigger block size”.
3) Any details on the HW setup – specifically details on the RDMA NIC (or if you’re using SW RoCE).
Thanks,
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Victor Banh <victorb(a)mellanox.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, October 5, 2017 at 11:26 AM
To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
Subject: [SPDK] Buffer I/O error on bigger block size running fio
Hi
I have an SPDK NVMe-oF setup and keep getting errors with bigger block sizes when running fio randwrite tests.
I am using Ubuntu 16.04 with kernel version 4.12.0-041200-generic on target and client.
The DPDK is 17.08 and SPDK is 17.07.1.
Thanks
Victor
[46905.233553] perf: interrupt took too long (2503 > 2500), lowering kernel.perf_event_max_sample_rate to 79750
[48285.159186] blk_update_request: I/O error, dev nvme1n1, sector 2507351968
[48285.159207] blk_update_request: I/O error, dev nvme1n1, sector 1301294496
[48285.159226] blk_update_request: I/O error, dev nvme1n1, sector 1947371168
[48285.159239] blk_update_request: I/O error, dev nvme1n1, sector 1891797568
[48285.159252] blk_update_request: I/O error, dev nvme1n1, sector 10833824
[48285.159265] blk_update_request: I/O error, dev nvme1n1, sector 614937152
[48285.159277] blk_update_request: I/O error, dev nvme1n1, sector 1872305088
[48285.159290] blk_update_request: I/O error, dev nvme1n1, sector 1504491040
[48285.159299] blk_update_request: I/O error, dev nvme1n1, sector 1182136128
[48285.159308] blk_update_request: I/O error, dev nvme1n1, sector 1662985792
[48285.191185] nvme nvme1: Reconnecting in 10 seconds...
[48285.191254] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191291] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191305] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191314] ldm_validate_partition_table(): Disk read failed.
[48285.191320] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191327] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191335] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191342] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191347] Dev nvme1n1: unable to read RDB block 0
[48285.191353] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191360] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191375] Buffer I/O error on dev nvme1n1, logical block 3, async page read
[48285.191389] nvme1n1: unable to read partition table
[48285.223197] nvme1n1: detected capacity change from 1600321314816 to 0
[48289.623192] nvme1n1: detected capacity change from 0 to -65647705833078784
[48289.623411] ldm_validate_partition_table(): Disk read failed.
[48289.623447] Dev nvme1n1: unable to read RDB block 0
[48289.623486] nvme1n1: unable to read partition table
[48289.643305] ldm_validate_partition_table(): Disk read failed.
[48289.643328] Dev nvme1n1: unable to read RDB block 0
[48289.643373] nvme1n1: unable to read partition table
4 years, 2 months
Re: [SPDK] SPDK support for NVMe CMBs and PMRs with WDS/RDS
by Stephen Bates
> I've done some initial review on your patches. Your NVMe changes look fine to me - but I'm not the one to review that.
Thanks a lot for the prompt review Darius!
> I have some comments regarding your memory management refactor. I posted all of them on a single patch -
> https://review.gerrithub.io/c/397038/
I have put some replies on Gerrit. I think your comments are good and I will include them in v2.
> I'm thinking the g_phys_regions could be a global vtophys override map. We could expose public
> spdk_vtophys_override(vaddr, len, paddr) API -> then we wouldn't need to modify spdk_mem_register.
I am not opposed to this, but I would like others to chime in on this approach. It does involve two function calls rather than one, which complicates life for the user. Does anyone else have an opinion on this? I did like the way I "improved" spdk_mem_register to accommodate this new class of memory region rather than adding a separate function.
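To make the comparison concrete, the two call shapes look roughly like this. The signatures are illustrative only: spdk_vtophys_override() is the API proposed in the review comment quoted below, and spdk_mem_register_phys() is just a placeholder name for the "extend spdk_mem_register" direction, not something in the patch:
```
#include "spdk/env.h"

/* Illustrative declarations only - neither exists upstream with these
 * exact signatures. */
void spdk_vtophys_override(void *vaddr, size_t len, uint64_t paddr); /* proposed in review */
int spdk_mem_register_phys(void *vaddr, size_t len, uint64_t paddr); /* placeholder name */

static void
register_cmb(void *cmb_vaddr, size_t len, uint64_t cmb_paddr)
{
	/* (a) Two-call shape from the review comment: seed the vtophys
	 * translation, then register the region as usual. */
	spdk_vtophys_override(cmb_vaddr, len, cmb_paddr);
	spdk_mem_register(cmb_vaddr, len);

	/* (b) One-call shape, with the physical address passed in directly.
	 * In practice you would pick one of the two, not both. */
	spdk_mem_register_phys(cmb_vaddr, len, cmb_paddr);
}
```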
I am going to give it a few more days for others to review and comment on this and other aspects of the patch series and then I will resubmit.
Is there any chance a core maintainer can pull these into the CI environment so I can see if there are any other issues I need to be aware of before the respin?
Cheers
Stephen
On 2018-01-30, 1:45 AM, "SPDK on behalf of Stojaczyk, DariuszX" <spdk-bounces(a)lists.01.org on behalf of dariuszx.stojaczyk(a)intel.com> wrote:
Hi Stephen,
I've done some initial review on your patches. Your NVMe changes look fine to me - but I'm not the one to review that.
I have some comments regarding your memory management refactor. I posted all of them on a single patch - https://review.gerrithub.io/c/397038/
Basically:
```
I'm thinking the g_phys_regions could be a global vtophys override map.
We could expose public spdk_vtophys_override(vaddr, len, paddr) API -> then we wouldn't need to modify spdk_mem_register.
NVMe could do:
spdk_vtophys_override(cmb_vaddr, len, cmb_paddr);
spdk_mem_register(cmb_vaddr, len);
```
Regards,
D.
> -----Original Message-----
> From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Stephen Bates
> Sent: Tuesday, January 30, 2018 4:35 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] SPDK support for NVMe CMBs and PMRs with WDS/RDS
>
> Hi All
>
> I just uploaded a patchset to Gerrit that adds support for NVMe controllers with
> CMBs/PMRs that support WDS and RDS (my full branch of the code is at [1]).
> This allows NVMe controllers to move/copy data from a namespace on one
> controller to a namespace on a different controller without requiring a system
> memory buffer. There are lots of interesting use cases for such data
> movement.
>
> The biggest issue with using a CMB or PMR for data copies is getting a vtophys
> translation. This series adds a new vtophys method for physical memory
> regions that fail via the existing methods (e.g. a PCIe BAR). This new method
> uses a linked list of physical regions that can be added/deleted via
> spdk_reg_memory calls. We can then allocate/free memory from these
> regions and the current maps can handle both the reference counting and store
> the vtophys translations.
>
> The hello_world example is updated to utilize the CMB WDS/RDS capability if
> the associated controller supports it. In addition, a new example application
> called cmb_copy is included that performs the aforementioned offloaded copy
> when a CMB is available.
>
> We have confirmed both cmb_copy and the new hello_world work as expected
> on hardware from both Eideticom and Everspin. We plan to do more testing as
> more drives with WDS/RDS capable CMBs become available. We used PCIe
> packet counters in the Microsemi PCIe switch to confirm traffic is moving
> directly between the two NVMe SSDs and not being routed to the root complex
> on the CPU.
>
> Feedback on the patches is gratefully received!
>
> Cheers
>
> Stephen
>
> [1] https://github.com/Eideticom/spdk/tree/cmb-copy-v3
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
4 years, 3 months
SPDK v18.01 released
by Verkamp, Daniel
On behalf of the SPDK maintainers and contributors, I'd like to announce the v18.01 release:
https://github.com/spdk/spdk/releases/tag/v18.01
Thanks to everyone who contributed to the SPDK v18.01 release:
Barry Spinney
Ben Walker
Changpeng Liu
Chen Wang
Cunyin Chang
Daniel Mrzyglod
Daniel Verkamp
Dariusz Stojaczyk
Dave Boutcher
Ed Rodriguez
Felipe Franciosi
Gang Cao
Hailiang Wang
Huagen Xu
Isaac Otsiabah
Jim Harris
John Meneghini
Jonas Pfefferle
Karol Latecki
Lance Hartmann
Liang Yan
Lu Fan
Lukasz Galka
Maciej Szwed
Nathan Cutler
Paul Luse
Pawel Kaminski
Pawel Niedzwiecki
Pawel Wodkowski
Philipp Skadorov
Piotr Pelplinski
Sebastian Basierski
Seth Howell
Shuhei Matsumoto
Slawomir Mrozowicz
Stephen Bates
Tomasz Kulasek
Tomasz Zawadzki
Xiaodong Liu
Yanbo Zhou
Young Tack Jin
Ziye Yang
Wenzhong Wu
4 years, 3 months
Performance Scaling in BlobFS/RocksDB by Multiple I/O Threads
by Fenggang Wu
Hi All,
I read from the SPDK doc "NVMe Driver Design -- Scaling Performance" (here
<http://www.spdk.io/doc/nvme.html#nvme_design>), which says:
"For example, if a device claims to be capable of 450,000 I/O per second
at queue depth 128, in practice it does not matter if the driver is using 4
queue pairs each with queue depth 32, or a single queue pair with queue
depth 128."
Does this consider the queuing latency? I am guessing the latency in the
two cases will be different (in qp/qd = 4/32 and in qp/qd = 1/128). In the
4 threads case, the latency will be 1/4 of the 1 thread case. Do I get it
right?
If so, then I got confused as the document also says:
"In order to take full advantage of this scaling, applications should
consider organizing their internal data structures such that data is
assigned exclusively to a single thread."
Please correct me if I get it wrong. I understand that if the dedicated I/O
thread has total ownership of the I/O data structures, there is no lock
contention to slow down the I/O. I believe that BlobFS is also designed with
this philosophy, in that only one thread is doing I/O.
But considering the RocksDB case: if the shared data structures are already
largely taken care of by the RocksDB logic via locking (which is inevitable
anyway), then each RocksDB thread sending I/O requests to BlobFS could also
have its own queue pair to do I/O. More I/O threads would mean a shorter queue
depth and a smaller queuing delay.
Even if there are some FS metadata operations that require locking, I would
guess such metadata operations make up only a small portion.
Therefore, is it a viable idea to have more I/O threads in BlobFS to serve the
multi-threaded RocksDB with a smaller delay? What would be the pitfalls or
challenges?
Any thoughts/comments are appreciated. Thank you very much!
Best!
-Fenggang
4 years, 3 months
Hugepage allocation, and issue with non contiguous memory
by Shahar Salzman
Hi,
On our system we make extensive use of hugepages, so only a fraction of the hugepages are for SPDK usage, and the allocated memory may be fragmented at the hugepage level.
Initially we used "--socket-mem=2048,0", but init time was very long, probably because DPDK built its hugepage info from all the hugepages on the system.
Currently I am working around the long init time with this patch to DPDK:
diff --git a/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c b/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
index 18858e2..f7e8199 100644
--- a/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
+++ b/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
@@ -97,6 +97,10 @@ get_num_hugepages(const char *subdir)
 	if (num_pages > UINT32_MAX)
 		num_pages = UINT32_MAX;
+#define MAX_NUM_HUGEPAGES (2048)
+	if (num_pages > MAX_NUM_HUGEPAGES)
+		num_pages = MAX_NUM_HUGEPAGES;
+
 	return num_pages;
 }
For the fragmentation, I am running a small program that initializes DPDK before the rest of the hugepage owners start allocating their pages.
Is there a better way to limit the number of pages that DPDK works on, and to preallocate a contiguous set of hugepages?
Shahar
4 years, 3 months
Re: [SPDK] strcpy forbidden
by Meneghini, John
How about using stpncpy or strncpy? These should be supported instead of strcpy.
The stpncpy() and strncpy() functions copy at most len characters from src into dst. If src is less than len characters long, the remainder of dst is filled with `\0' characters. Otherwise, dst is not terminated.
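A quick self-contained illustration of that termination pitfall, and of the strdup() alternative mentioned below (plain C, nothing SPDK-specific):
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	const char *src = "Hello, SPDK";
	char dst[8];

	/* strncpy() copies at most sizeof(dst) bytes here; since src is longer,
	 * dst would NOT be NUL-terminated, so terminate it by hand. */
	strncpy(dst, src, sizeof(dst));
	dst[sizeof(dst) - 1] = '\0';
	printf("strncpy: %s\n", dst);    /* prints the truncated "Hello, " */

	/* strdup() allocates a right-sized copy and always NUL-terminates,
	 * at the cost of a heap allocation the caller must free. */
	char *copy = strdup(src);
	if (copy != NULL) {
		printf("strdup:  %s\n", copy);
		free(copy);
	}

	return 0;
}
```
Note that strncpy() also zero-fills the remainder of dst when src is shorter than len, which can be a surprising amount of work for large buffers.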
/John
On 1/29/18, 4:50 PM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:
LOL, it got me too, Stephen. It's there to avoid buffer overflow attacks. Use strdup instead...
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Stephen Bates
Sent: Monday, January 29, 2018 2:48 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] strcpy forbidden
Hi All
I am sure there is a really good reason why strcpy is forbidden by check_format.sh but I cannot find it documented anywhere. Can someone enlighten me and point to the documentation if it exists?
Cheers
Stephen
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
4 years, 3 months
Queue Depth of the QPair (was: Performance Scaling in BlobFS/RocksDB by Multiple I/O Threads)
by Fenggang Wu
Thank you very much Jim and Ben!
Maybe I have an incorrect mental image of the queue.
My understanding is that the queue depth is the capacity of the command queue pair. Even though the queue element identifier supports up to 65536 entries, the actual space allocated for the queue pair is far smaller than that (e.g. 128), to match the SSD's internal parallelism (determined by the number of independent components such as planes/dies).
The I/O requests in the queue pair are processed in FIFO order: I/O threads put I/O commands into the submission queue and ring the doorbell, while the SSD takes commands out of the submission queue, DMAs the data, puts a completion into the completion queue, and finally rings the doorbell.
A related question for me to understand is:
for the single-thread case, why does IOPS scale as the QD increases, before saturation?
I understand that when the SSD gets saturated, it cannot keep up with the speed at which the host generates I/Os, so the submission queue is always full. The IOPS will equal the SSD's max IOPS. The average latency will grow as the queue depth increases, since the requests are handled one by one by the device: the more requests are ahead in the queue, the longer the wait (or latency).
However, when the SSD is not saturated, the SSD is fast enough to process the requests, i.e., to deplete the submission queue. Therefore, the submission queue is empty most of the time, and the majority of the space (command slots) allocated for the queue pair is wasted. So a queue depth of 128 would be equivalent to a queue depth of 1, and thus the IOPS would be the same.
However, the data does show that IOPS increases as the QD grows. I am just wondering at which point I go astray.
Or, in the first place, why does a large queue depth saturate the SSD while a small QD does not, given that the host is always generating I/O fast enough?
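One way I try to reconcile this is Little's Law, the standard queueing identity relating the average number of in-flight commands, the throughput, and the average latency (this is just my framing, not something from the SPDK docs):
```
\text{IOPS} \times \overline{T}_{\text{latency}} = \text{QD}_{\text{in flight}}
\quad\Longleftrightarrow\quad
\text{IOPS} = \frac{\text{QD}_{\text{in flight}}}{\overline{T}_{\text{latency}}}
```
If that is the right way to look at it, then below saturation the device latency stays roughly constant, so IOPS grows almost linearly with the number of commands actually kept in flight; past saturation the latency itself grows with QD and IOPS flattens at the device maximum. Please correct me if I am misapplying it.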
Thanks!
-Fenggang
On Wed, Jan 31, 2018 at 12:22 PM Walker, Benjamin <benjamin.walker(a)intel.com> wrote:
> On Wed, 2018-01-31 at 17:49 +0000, Fenggang Wu wrote:
> > Hi All,
> >
> > I read from the SPDK doc "NVMe Driver Design -- Scaling Performance" (here), which says:
> >
> > "For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128."
> >
> > Does this consider the queuing latency? I am guessing the latency in the two cases will be different (in qp/qd = 4/32 and in qp/qd = 1/128). In the 4 threads case, the latency will be 1/4 of the 1 thread case. Do I get it right?
>
> Officially, it is entirely up to the internal design of the device. But for the NVMe devices I've encountered on the market today you can use as a mental model a single thread inside the SSD processing incoming messages that correspond to doorbell writes. It simply takes the doorbell write message and does simple math to calculate where the command is located in host memory, and then issues a DMA to pull it into device local memory. It doesn't matter which queue the I/O is on - the math is the same. So no, the latency of 1 queue pair at 128 queue depth is the same as 4 queue pairs at 32 queue depth.
>
> > If so, then I got confused as the document also says:
> >
> > "In order to take full advantage of this scaling, applications should consider organizing their internal data structures such that data is assigned exclusively to a single thread."
> >
> > Please correct me if I get it wrong. I understand that if the dedicated I/O thread has total ownership of the I/O data structures, there is no lock contention to slow down the I/O. I believe that BlobFS is also designed with this philosophy, in that only one thread is doing I/O.
> >
> > But considering the RocksDB case: if the shared data structures are already largely taken care of by the RocksDB logic via locking (which is inevitable anyway), then each RocksDB thread sending I/O requests to BlobFS could also have its own queue pair to do I/O. More I/O threads would mean a shorter queue depth and a smaller queuing delay.
> >
> > Even if there are some FS metadata operations that require locking, I would guess such metadata operations make up only a small portion.
> >
> > Therefore, is it a viable idea to have more I/O threads in BlobFS to serve the multi-threaded RocksDB with a smaller delay? What would be the pitfalls or challenges?
>
> You're right that RocksDB has already worked out all of its internal data sharing using locks. It then uses a thread pool to issue simultaneous blocking I/O requests to the filesystem. That's where the SPDK RocksDB backend intercepts. As you suspect, the filesystem itself (BlobFS, in this case) has shared data structures that must be coordinated for some operations (creating and deleting files, resizing files, etc. - but not regular read/write). That's a small part of the reason why we elected, in our first attempt at writing a RocksDB backend, to route all I/O from each thread in the thread pool to a single thread doing asynchronous I/O.
>
> The main reason we route all I/O to a single thread, however, is to minimize CPU usage. RocksDB makes blocking calls on all threads in the thread pool. We could implement that in SPDK by spinning in a tight loop, polling for the I/O to complete. But that means every thread in the RocksDB thread pool would be burning the full core. Instead, we send all I/O to a single thread that is polling for completions, and put the threads in the pool to sleep on a semaphore. When an I/O completes, we send a message back to the originating thread and kick the semaphore to wake it up. This introduces some latency (but the rest of SPDK is more than fast enough to compensate for that), but it saves a lot of CPU usage.
>
> In an ideal world, we'd be integrating with a fully asynchronous K/V database, where the user could call Put() or Get() and have it return immediately and call a callback when the data was actually inserted. But that's just not how RocksDB works today. Even the background thread pool doing compaction is designed to do blocking operations. It would integrate with SPDK much better if it instead had a smaller set of threads each doing asynchronous compaction operations on a whole set of files at once. Changing RocksDB in this way is a huge lift, but would be an impressive project.
>
> >
> >
> >
> > Any thoughts/comments are appreciated. Thank you very much!
> >
> > Best!
> > -Fenggang
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
4 years, 3 months