On Fri, 2012-03-16 at 15:35 +0100, Patrick Ohly wrote:
On Fri, 2012-03-16 at 13:52 +0100, Patrick Ohly wrote:
> On Fri, 2012-03-16 at 13:00 +0100, Chris Kühl wrote:
> > On Fri, Mar 16, 2012 at 11:54 AM, Patrick Ohly <patrick.ohly(a)intel.com>
> > > In a nutshell, I think we need to go back to the original std::list and
> > > its brute-force insertion sort. Agreed?
> > >
> > I found it a bit nicer to use the comparison class but if you're not
> > convinced feel free to revert that.
> It's nicer, but I am not convinced that it is correct.
> I'm wondering whether the old approach would be easier:
> * keep the Connection logic inside syncevo-dbus-server
> * schedule a normal SessionResource there
> * use it once it is ready
> One downside of this is that Connection needs to parse the incoming
> message, for which it must instantiate libsynthesis. It's linked to
> anyway via libsyncevolution, but not used elsewhere, so long term we
> could get rid of it in syncevo-dbus-server by refactoring the libs if it
> wasn't for this one aspect.
I'm going to experiment with the following changes on a branch:
* restore old work queue (list of sessions)
* run Connection entirely in syncevo-dbus-server, with blocking
sync done in a normal helper session
I've finished the initial server rewrite, see ccs-pohly. I have also
done the auto sync manager rewrite that I had planned. I think the key
difference (only a subset of Session runs in the helper, no dead code for
running multiple sessions in parallel) makes the code simpler and avoids
changes elsewhere in the server (like an entirely different internal
Connection API, asynchronous status queries, etc.).
The new design is less ambitious in some ways. For example, backends are
still loaded and activated in the main syncevo-dbus-server, and there is
no support for running multiple sync sessions in parallel. But I believe
that this can be added later without design changes.
All tests passed on my desktop; now I am testing in the nightly test setup.
However, while working on this I noticed quite a few areas for which
there are no tests.
For example, running a command line operation is not tested. Support for
it is implemented, but without the output handling. Before adding
that, the tests from the Cmdline unit tests should be incorporated into
test-dbus.py. That way it can be verified that the tests work (because
they fail initially with the new branch) and developing the feature
becomes easier (because testing doesn't have to be manual).
The main commit message has the details (pri1 is something that I'll
work on immediately, pri2 is necessary before 1.3, pri3 is optional).
Help for anything in pri2 or pri3 is welcome.
Author: Patrick Ohly <patrick.ohly(a)intel.com>
Date: Mon Mar 26 17:19:25 2012 +0200
D-Bus server: run operations in syncevo-dbus-helper
This commit moves running the blocking syncing, database restore and
command line execution into the syncevo-dbus-helper process. The
advantage is that the main syncevo-dbus-server remains responsive
under all circumstances (fully asynchronous now) and suffers less from
memory leaks and/or crashes during a sync.
All test-dbus.py tests pass.
The core idea behind the new architecture is that Session remains the
D-Bus facing side of a session. It continues to run inside
syncevo-dbus-server and uses the syncevo-dbus-helper transparently via
a custom D-Bus interface between the two processes. State changes of
the helper are mirrored in the server.
Later the helper might also be used multiple times in a Session. For
example, anything related to loading backends should be moved into the
helper (currently the "is config usable" check still runs in the
syncevo-dbus-server and needs to load/initialize backends). The
startup code of the helper already handles that (see boolean result of
operation callback), but it is not used yet in practice.
At the moment, only the helper provides a D-Bus API. It sends out
signals when it needs information from the server. The server watches
those and replies when ready. It has the disadvantage that the helper
is not automatically notified with a "connection lost" error when
waiting for information from the server. This case is not handled yet
(see below); the approach might be revised as part of resolving it.
The problem of "helper died unexpectedly" is already handled, by not
returning a D-Bus method reply until the requested operation is done.
The Connection class continues to use such a Session, as before. It's
now fully asynchronous and exchanges messages with the helper via the
custom D-Bus interface.
Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks
infrastructure for asynchronous method execution are now used heavily.
The glib event loop is entered exactly once and only left to shut down.
The AutoSyncManager was completely rewritten. The data structure is a
lot simpler now (basically just a cache of transient information about
a sync config and the relevant config properties that define auto
syncing). The main work happens inside the schedule() call, which
verifies whether a session can run and, if not possible for some
reasons, ensures that it gets invoked again when that blocker is
gone (timeout over, server idle, etc.). The new code also uses
signals/slots instead of explicit coupling between the different classes.
syncevo-dbus-helper and syncevo-local-sync are conceptually very
similar. Should investigate whether a single executable can serve both
purposes.
All code still lives inside the src/dbus/server directory. This
simplifies checking differences in partly modified files like
dbus-sync.cpp. The next commit will move the helper files.
Redirecting output from a running sync or the command line operation
in the helper to the client of syncevo-dbus-server is not implemented.
Need to write tests for the old implementation first, then implement
the missing functionality.
A sync is meant to finish even if the syncevo-dbus-server gets
killed. On the other hand, in some cases (pending password request,
waiting for next message from Connection) the helper needs to detect
that syncevo-dbus-server process died, because the helper will never
get a response. Need a test for these aspects.
When syncevo-dbus-server is invoked without SYNCEVOLUTION_DEBUG, it
logs to the syslog instead of stdout. Need a test.
The syncevo-dbus-server now sends the final "Session.StatusChanged
done" signal immediately. The old implementation accidentally delayed
sending that for 100 seconds. The revised test-dbus.py checks for
more "session done" quit events to cover this fix.
Best Regards, Patrick Ohly
The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.