With this message I wanted to update the SPDK community on the state of the VPP socket abstraction as of the SPDK 19.07 release.
At this time there does not seem to be a clear efficiency improvement with VPP, and no further work is planned on the SPDK and VPP integration.
As some of you may remember, the SPDK 18.04 release introduced support for alternative socket types. Along with that release, Vector Packet Processing (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK by expanding the socket abstraction to use the VPP Communications Library (VCL). The TCP/IP stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in its early stages back then and has seen improvements throughout the last year.
To better use VPP capabilities, and following fruitful collaboration with the VPP team, in SPDK 19.07 this implementation was changed from VCL to the VPP Session API from VPP 19.04.2.
The VPP socket abstraction has met some challenges due to the inherent design of both projects, in particular related to running separate processes and the memory copies between them.
Seeing improvements over the original implementation was encouraging, yet when measured against the POSIX socket abstraction (taking the entire system into consideration, i.e. both processes), the results are comparable. In other words, at this time neither socket abstraction shows a clear benefit from the standpoint of CPU efficiency or IOPS.
Each SPDK release brings improvements to the socket abstraction and its implementations, with exciting work on more efficient use of the kernel TCP stack coming in SPDK 19.10 and SPDK 20.01.
However, there is no active involvement at this point around the VPP implementation of the socket abstraction in SPDK. Contributions in this area are always welcome. If you are interested in implementing further enhancements of the VPP and SPDK integration, feel free to reply, or use one of the many SPDK community communication channels<https://spdk.io/community/>.
We are taking several steps towards solidifying the ABI strategy for SPDK. These include the following:
1. Creating an ABI major/minor versioning scheme for all shared libraries. When a new symbol is added to a library, the shared object minor version will be incremented to show that the library is still backwards compatible. When a symbol is removed or changed, the ABI major version will be updated to indicate that the library is no longer backwards compatible. This versioning scheme will be enforced by the following new test in the SPDK CI: https://review.spdk.io/gerrit/c/spdk/spdk/+/1068.
2. Incrementing the shared object version of each library independently.
3. Creating individual map files for each shared library.
As we create individual map files for the libraries, ABIs are likely to change, so please keep an eye out for changes in the major/minor version of SPDK libraries as you continue to link your applications to them.
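As an illustration of what the per-library map files might look like, here is a hypothetical GNU ld version script for a made-up library (the symbol names are examples, not actual SPDK APIs):

```
/* spdk_foo.map -- hypothetical version script for libspdk_foo */
{
	global:
		spdk_foo_init;
		spdk_foo_do_work;
	local: *;
};
```

Under the versioning scheme described above, adding a new symbol to this map would bump the shared object's minor version (e.g. libspdk_foo.so.2.0 -> libspdk_foo.so.2.1), while removing or changing a symbol would bump the major version (e.g. libspdk_foo.so.2.1 -> libspdk_foo.so.3.0).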
I have the below scenario:
A disk is controlled by SPDK APP A.
SPDK APP A exports the disk as an NVMe-oF target.
SPDK APP B is an NVMe-oF initiator; it connects to that disk.
SPDK APP B creates an lvs and an lvol on the disk.
When I restart APP A and APP B, APP A will examine the disk,
find that there is an lvs on it, and the lvol module on APP A will claim
the disk. The disk then can't be exported over NVMe-oF.
Could we add an API to the lvol module? The API could add a device name
pattern to a black list. When the lvol module examines a disk, it would
check the name against the black list; if the device name matches, it
would ignore that device.
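A minimal sketch of the proposed check, assuming glob-style name patterns (the function and pattern names here are hypothetical, and real SPDK code would of course be C rather than Python):

```python
import fnmatch

def lvol_should_ignore(bdev_name, blacklist_patterns):
    """Return True if the bdev name matches any blacklisted pattern,
    meaning the lvol module would skip claiming it during examine."""
    return any(fnmatch.fnmatch(bdev_name, pat) for pat in blacklist_patterns)

# Example: APP A blacklists the disks it exports over NVMe-oF,
# so examine() on restart would leave them unclaimed.
patterns = ["Nvme0n1", "exported_*"]
print(lvol_should_ignore("Nvme0n1", patterns))   # matched, skip claiming
print(lvol_should_ignore("Malloc0", patterns))   # not matched, claim as usual
```

Whether the patterns should be globs, exact names, or regular expressions would be up to the API design.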
Tomek, I'm looking through your patch "example/fio: add option to load json_config" and have some concerns about the overall design of applications utilizing SPDK without the app framework. First of all, we seem to use spdk_app_json_config_load() to load a JSON config in the fio_plugin. The spdk_app_ prefix in that function suggests it's part of the app framework API, which we don't use in the fio_plugin. I'm guessing it should at least be renamed? spdk_json_config_load(), maybe?
spdk_app_json_config_load() is defined in spdk_internal/event.h. Why is this internal anyway? This practically forces external SPDK users to stick with either legacy config files or spdk_app_start(). The question is where we should move it. lib/event.h is currently specific to the app framework. Do you have any plans for an SPDK app design without the app framework? I'm guessing we do want to support it (see the fio_plugin), but currently it's messy.
Nice to meet you. I'm adding the SPDK mailing list for a wider audience (someone may find it helpful someday 😊). Maybe it has something to do with the additional comments on each line in the configuration file; try removing them. Also, in "spdk_rpc_port =4420" you are missing a space after "=". I doubt that the missing space is the cause, but it is always worth trying to fix it. Everything else looks good. Let me know if that doesn't work and I'll try to reproduce the environment on my side.
From: Kumar Ranjan <Kumar.Ranjan(a)wdc.com>
Sent: Monday, March 23, 2020 6:57 PM
To: Szwed, Maciej <maciej.szwed(a)intel.com>
Subject: Fail to start cinder service
I am Ranjan from WDC. I would appreciate your help in understanding why I am not able to start the cinder service.
I saw some of your patches and it looks like you work in this area, hence I'm pinging you.
For the experiment, devstack as well as my client are running on the same server.
My host IP for Horizon (private network): 192.168.1.14
My target IP where the RPC server is listening: 192.168.1.111
My target has 2 data ports; one is 192.168.1.111 and the other is 18.104.22.168.
SPDK entries in my cinder.conf:
spdk_rpc_ip = 192.168.1.111 #address to machine with SPDK
spdk_rpc_port =4420 #port to use to send commands to SPDK once running with -r
#spdk_rpc_username = admin#username set in remote_rpc.py
#spdk_rpc_password = admin#password set in remote_rpc.py
target_ip_address = 22.214.171.124 #NVMe-oF interface
target_port = 4420
target_protocol = nvmet_rdma
target_helper = spdk-nvmeof
target_prefix = nqn.1992-05.com.wdc
volume_driver = cinder.volume.drivers.spdk.SPDKDriver
volume_backend_name = SPDK
root@ubuntu:~# systemctl status devstack(a)c-vol.service
● devstack(a)c-vol.service - Devstack devstack(a)c-vol.service
Loaded: loaded (/etc/systemd/system/devstack(a)c-vol.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-03-23 23:16:59 IST; 5s ago
Process: 15636 ExecStart=/usr/local/bin/cinder-volume --config-file /etc/cinder/cinder.conf (code=exited, status=1/FAILURE)
Main PID: 15636 (code=exited, status=1/FAILURE)
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume return self._conf._get(name, self._group)
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2639, in _get
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume value, loc = self._do_get(name, group, namespace)
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2713, in _do_get
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume raise ConfigFileValueError(message)
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume oslo_config.cfg.ConfigFileValueError: Value for option spdk_rpc_port from LocationInfo(location=<Locations.user: (4, Tru
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume
Mar 23 23:16:59 ubuntu cinder-volume: ERROR cinder.cmd.volume [None req-5dddc9ec-a2a6-4c94-913c-caea68014ac5 None None] No volume service(s) started successfully, terminating.
Mar 23 23:16:59 ubuntu systemd: devstack(a)c-vol.service: Main process exited, code=exited, status=1/FAILURE
Mar 23 23:16:59 ubuntu systemd: devstack(a)c-vol.service: Failed with result 'exit-code'.
root@KAMSHED:/usr/share/spdk/scripts# ps aux | grep nvmf
root 14676 691 5.0 4271148 409508 pts/0 SLl 01:28 37:21 /usr/sbin/nvmf_tgt -o -j -c /usr/share/spdk/nvmf.conf -r 192.168.1.111:4420
root 15844 0.0 0.0 2316 464 pts/0 S+ 01:33 0:00 grep nvmf
On behalf of the SPDK community I'm pleased to announce the release of SPDK 20.01.1 LTS!
SPDK 20.01.1 is a bug fix and maintenance LTS release.
The full changelog for this release is available at:
Thanks to everyone for your contributions, participation, and effort!
I think that you are not in the default list, so your patch needs to be approved first before it will be tested, or someone needs to add you to the default list. Let me drop a message to Karol or Seth for help.
From: Richael Zhuang <Richael.Zhuang(a)arm.com>
Sent: Friday, February 14, 2020 9:47 AM
Subject: [SPDK] ask for help about SPDK CI
I've hit a problem with the new CI system: the CI is not triggered after I submit a patch to review.spdk.io.
I'm wondering whether I missed something in the process of submitting a patch?
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
In order to keep the Community and GitHub Issue Review meetings running smoothly, they will now be hosted using Skype.
The change was motivated by the previous service becoming increasingly troublesome to join and use with each passing week.
Please see https://spdk.io/community/ for details on joining.
There are no changes to dates or time of the meetings.
Yes - there is a limitation. See spdk_bs_opts_init(). There is a max_channel_ops field that you can set if you need a higher limit on the number of outstanding I/O operations. Currently the default is 512, but you can increase it by changing the max_channel_ops value before initializing or loading the blobstore.
This was done by design, so that we could avoid memory allocations in the I/O path. These per-I/O data structures are allocated when the io_channel is created (spdk_bs_alloc_io_channel).
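The per-channel preallocation described above can be illustrated with a toy model (this is only a sketch of the design idea, not SPDK code; the class, method names, and numbers are illustrative):

```python
class IoChannel:
    """Toy model of a channel that preallocates a fixed pool of I/O
    contexts at creation time, so the hot I/O path never allocates."""

    def __init__(self, max_channel_ops=512):
        # All per-I/O contexts are allocated up front, once.
        self.free_ctx = [object() for _ in range(max_channel_ops)]

    def submit_io(self):
        # If every preallocated context is in flight, fail with ENOMEM
        # instead of allocating; the caller is expected to retry later.
        if not self.free_ctx:
            return "ENOMEM"
        return self.free_ctx.pop()

    def complete_io(self, ctx):
        self.free_ctx.append(ctx)  # return the context to the pool
```

With the default of 512 contexts, submitting the 513th I/O before any completion fires would hit the ENOMEM path, which matches the symptom described in the question below.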
On 3/11/20, 8:36 PM, "Niu, Yawei" <yawei.niu(a)intel.com> wrote:
Is there any limitation on the number of in-flight blob I/O requests? I'm asking because we hit a problem where, after consecutively submitting more than 1000 blob reads (spdk_blob_io_read() calls), an ENOMEM error is observed in the last read's completion callback. Please be aware that this is just my speculation after reading some debug logs, so the error isn't necessarily from SPDK; our engineer is working on a reproducer to narrow down the problem. In the meantime, I'd like to find out whether there is any hard limitation on the number of in-flight I/O requests by design.