On Tue, 2011-08-16 at 01:35 +0200, Lukas Zeller wrote:
> sorry for "vacation mode" and not keeping up with my inbox these
> days...
Good for you :-)
> On Aug 3, 2011, at 16:31, Patrick Ohly wrote:
> > Suppose the same meeting invitation for event UID=FOO is processed in
> > both Evolution and Google Calendar. On the Evolution side, the
> > invitation is accepted. In Google Calendar it is still open.
> >
> > Both sides now have a new item when syncing takes place.
> >
> > What happens is that the engine itself doesn't recognize that the two
> > new items are in fact the same. It asks both sides to store the other's
> > item. The SyncEvolution EDS backend recognizes the UID and updates the
> > item, but without merging. The PARTSTAT=ACCEPTED is overwritten with
> > Google's PARTSTAT=NEEDS-ACTION. At the same time, Google's version of
> > the item is similarly overwritten.
> >
> > After modifying the event series in Evolution, it is sent as an update
> > to Google, at which point the correct PARTSTAT is lost everywhere.
> >
> > Is there some way to force UID comparison for added items even in a
> > normal, incremental sync?
> Something like a slow sync match on UIDs for adds, to convert them to
> updates?
Right.
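
To make that concrete, here is a rough sketch of matching adds by UID
and merging instead of overwriting (Python, with an invented toy data
model and helper names; the real engine would work on the parsed
iCalendar fields, and the PARTSTAT ranking is just an assumption for
the example):

    # Illustrative sketch only, not actual Synthesis/SyncEvolution code.
    # Events are dicts with a UID and a per-attendee PARTSTAT map.

    class Store:
        """Toy local datastore keyed by local ID, with a UID lookup."""
        def __init__(self):
            self.items = {}
            self.counter = 0

        def find_by_uid(self, uid):
            for local_id, item in self.items.items():
                if item["UID"] == uid:
                    return local_id
            return None

        def add(self, item):
            self.counter += 1
            local_id = "id-%d" % self.counter
            self.items[local_id] = item
            return local_id

    def merge_partstat(old, new):
        """Keep the more definite reply per attendee."""
        rank = {"NEEDS-ACTION": 0, "TENTATIVE": 1, "DECLINED": 2, "ACCEPTED": 3}
        for who, old_stat in old["PARTSTAT"].items():
            if rank[old_stat] > rank[new["PARTSTAT"].get(who, "NEEDS-ACTION")]:
                new["PARTSTAT"][who] = old_stat
        return new

    def apply_incoming_add(store, incoming):
        """Turn an Add into an Update when the UID already exists locally."""
        local_id = store.find_by_uid(incoming["UID"])
        if local_id is None:
            return "added", store.add(incoming)
        store.items[local_id] = merge_partstat(store.items[local_id], incoming)
        return "merged", local_id

    store = Store()
    store.add({"UID": "FOO", "PARTSTAT": {"me@example.com": "ACCEPTED"}})
    print(apply_incoming_add(
        store, {"UID": "FOO", "PARTSTAT": {"me@example.com": "NEEDS-ACTION"}}))
    # -> ('merged', 'id-1'), and the ACCEPTED reply survives
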
> That's something that could be added to the engine if we add UID-based
> matching support natively (would make sense, but also quite some work
> to get it right).
Would it be simpler if it was done without native UID-based matching?
Perhaps by specifying that a comparescript also needs to be run for
adds?
> However I once had a similar case, where a backend would do an
> implicit merge instead of an add based on the contents (wasn't the
> UID, but that doesn't matter) of the to-be-added entry.
> For that we added the possibility for the plugin to return
> DB_DataMerged (207) when adding the item caused some sort of merge in
> the backend.
SyncEvolution already does that. But because it can't do a real merge of
the data, some information is lost.
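
For context, the add path then looks roughly like this today, with the
"merge" being a plain overwrite (again only a sketch with made-up names;
the actual plugin interface is a C API with different signatures):

    # Illustrative sketch only, not the real EDS backend or plugin API.
    DB_OK = 0             # plain add
    DB_DATA_MERGED = 207  # add was folded into an existing item

    def insert_item(items, new_item):
        """Backend 'add' entry point; 'items' maps local IDs to events."""
        for local_id, item in items.items():
            if item.get("UID") == new_item.get("UID"):
                # Same UID: replace the existing item and report 207.
                # Without a field-by-field merge, this overwrite is exactly
                # where the local PARTSTAT=ACCEPTED gets lost.
                items[local_id] = new_item
                return DB_DATA_MERGED, local_id
        local_id = "item-%d" % (len(items) + 1)
        items[local_id] = new_item
        return DB_OK, local_id

    events = {"item-1": {"UID": "FOO", "PARTSTAT": "ACCEPTED"}}
    print(insert_item(events, {"UID": "FOO", "PARTSTAT": "NEEDS-ACTION"}))
    print(events["item-1"]["PARTSTAT"])   # NEEDS-ACTION: the reply is gone
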
> This causes the engine to read the item back from the backend after
> having added it, assuming to get back the merged data, which is then
> sent back to the client in the same session.
> This only works server-side, because on the client there is no way to
> send updates after having received server updates (must wait for the
> next session).
> Your backend would need to issue a delete for the other item in a
> later session to clean up.
SyncEvolution is not doing that. Can you elaborate?
What happens is this, from the backend's perspective:
* Item with local ID FOO exists.
* Backend asked to add new item.
* Backend detects that the new item is the same as the existing
item, returns 207 and local ID FOO.
* In the next sync, the item with ID FOO is reported as
"unchanged". Nothing is said about the other item because as far
as the backend is concerned, it doesn't exist and never has.
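
In terms of the revision-based change tracking that the backends
typically use, the next sync then boils down to something like this
(rough sketch, not the actual implementation):

    # Sketch of revision-based change detection between two syncs.

    def classify(previous, current):
        """Compare the revisions remembered at the end of the last sync
        with the current ones and classify each local ID."""
        return {
            "added":     [i for i in current if i not in previous],
            "deleted":   [i for i in previous if i not in current],
            "updated":   [i for i in current
                          if i in previous and current[i] != previous[i]],
            "unchanged": [i for i in current
                          if i in previous and current[i] == previous[i]],
        }

    # The merge happened inside the previous sync session, so its result is
    # what got recorded as FOO's revision at the end of that session, and
    # the "other" item never existed locally at all.
    print(classify({"FOO": "rev-2"}, {"FOO": "rev-2"}))
    # -> {'added': [], 'deleted': [], 'updated': [], 'unchanged': ['FOO']}

So the backend never reports a delete for the item that the engine
believes it added earlier.
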
> Overall, I'm not sure where these types of cross-item merges belong
> conceptually. I guess with series and exceptions such a merge could
> easily get more complicated and involve multiple items, so I wonder if
> that should better be implemented outside the SyncML scope, like a
> "robot user" who is editing the database between syncs on one end or
> the other.
I think it fits into the engine. The Funambol server has worked in this
mode (always do mandatory duplicate checking on adds) for a long time. I
have my doubts whether it is always the right choice (for example, one
cannot add "similar" contacts that the server deems identical), but for
iCalendar 2.0 UID it would be.
The problematic part is indeed the client side, because a duplicate
detected there will temporarily leave client and server out of sync. But
assuming that the server does the same duplicate checking, this can only
happen when a new item was added on the client during the sync
(otherwise the server would have detected the duplicate).
--
Best Regards, Patrick Ohly
The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.