Merged kolab branch

Review Request #123183 - Created March 30, 2015 and submitted

Christian Mollekopf

Merged the kolab branch into master.

I tried to fix all coding style issues, and all tests are passing again (except some of the link tests, which are disabled).

I currently can't push the branch due to too many commits; I'll follow up on that.

dbinitializer: Also insert items when Q_ASSERT is disabled.

Merge branch 'master' into mergeKolabIntegration

Merge branch 'kolab/integration/1.12.0' into mergeKolabIntegration


Creating a task in another resource makes KMail unresponsive

The problem is that relations are only allowed within a single resource. At the
moment nobody can explain to me why this is needed, so as a workaround I
disabled that check for now.

KOLAB: 4513

Prepare release.

Fixed typo

Try harder to find the user's home directory and the config therein on Windows.

We need the definition of Collection, not just a forward declaration.

Fix build on Windows where we need to check for unistd.h

Fix build at least on Windows by bringing in the Akonadi namespace.

Filter items by resource when fetching from a resource.

A resource should never sync tag members from another resource.

Only emit a notification if something changed on the tag.

This avoids bouncing notifications back and forth between resources
that update the remoteId.

Merge remote-tracking branch 'kolab/dev/fix_tagremote' into kolab/integration/1.12.0

Do not clean up all entries for a resource if one tag is updated.

If we delete all entries for one resource, then we lose the remote IDs
for the tags. Only delete the specific entry that is inserted/updated.

KOLAB: 3541

NotificationSource: Send notifications to referenced collections.

We no longer rely on referenced collections being monitored.

NotificationSource: set the session to filter referenced notifications.

LIST: filter referenced collections also with an invalid toplevel collection

Otherwise we get unwanted collections from a toplevel list if another session
referenced the collection.

Notify about dereferenced collections.

Otherwise zanshin won't be notified about dereferenced collections.

Include Tag RID in Tag removal notification

When a tag is removed, we need to pass along the Tag RID so that the resource
can sync the change to the server. However, Tags have per-resource RIDs, and regular
clients are not supposed to be aware of Tag RIDs at all. This means that we
need to emit a dedicated notification for each resource that provides the tag RID,
and one additional notification without a RID for clients.

The NotificationSource then filters the notifications, so that a notification
with a RID is delivered only to the resource that owns that particular RID.

Since NotificationSource is not aware of which resource it belongs to, we use an
ugly hack and rely on the fact that each ResourceBase ignores its own session,
and that the session is always named the same as the resource.

TAGSTORE: Read the command until the end, even when we are going to delete the tag

We must read the command until the end, otherwise processing of subsequent
commands gets stuck.

TAGSTORE: Remove tag when its last RID is unset by TAGSTORE command

Resources use TAGSTORE to tell Akonadi that a tag has been removed on
the remote server. They simply unset the RID of that tag (because tags have
per-resource RIDs). To handle the situation where all remote references
to a tag have been removed, we need to remove the tag in the Akonadi server
and untag all items when the very last RID is removed by a resource.

Disabled the imapparser benchmark.

It takes forever and tells us nothing without values to compare to.

ImapStreamParser: Properly parse empty sequence sets.

It used to insert -1. We require this to clear tags using an empty set.

Avoid trying to set negative timer intervals.

resolveTags: support empty set of tags

We need this to clear tags.

Since we removed the notification compression, we also can't test it any longer.

Made linkhandlertest pass.

RID operations are broken, but we don't currently need them.

NotificationSource: Don't drop unsubscribe or enabled notifications

Since the collection is already disabled we require an exception for this.
Otherwise the monitortest doesn't pass.

NotificationSource: Make it possible to setExclusive

Don't dispatch notifications about disabled collections

The only exception are exclusive subscribers, which expect even notifications
about disabled collections.

For this we add a little hack: a flag, set by NotificationCollector on the
NotificationMessage, that says that the collection in this notification is
disabled. NotificationCollector knows, because it is passed the actual Collection
by the caller. The presence of the flag is then evaluated by NotificationSource
when deciding whether to accept the notification. This is because
NotificationSource only works with what's in the notification, and querying
the database for each Collection to check whether it's enabled would be
very expensive.

When Collection is disabled, the notification is dropped, unless the current
subscriber is exclusive or is monitoring the Collection explicitly, or the
Collection is referenced.

CollectionScheduler: only try to schedule enabled or referenced collections in initScheduler()

Otherwise we end up iterating over a growing QMap of already-scheduled collections
over and over again, which causes the Akonadi server to take ages to start.

NotificationCompression: Limit the search to the last 10 items, and remove MODIFY after ADD.

Removing the MODIFY after ADD avoids each collection initially being indexed twice.

Reduced notification compression.

These are three squashed commits:

Remove notification compression algorithm

The compression does not make much sense nowadays, as we rarely end up in a
situation where merging notifications would result in a performance improvement
(mostly because the most intensive operations now use batch notifications).
The only actual remaining use for the compression algorithm is the case of removal:

If there is a notification in the queue for a certain Item, and a removal
notification for the same Item is appended to the queue, it means that the Item
no longer exists in Akonadi. By the time the first notification is delivered to
Monitor, EntityCache will no longer be able to retrieve the Item, which renders
the notification useless, and we need to remove it to prevent EntityCache from
getting an Item/CollectionFetchJob error.

However, when a massive removal operation takes place (for Collections, that
is; removal of Items is again delivered as a batch notification), the queue has
to be iterated over and over again - and as the queue grows, the algorithm
stops being an optimization and instead becomes a bottleneck.

For that reason we decided to remove the compression algorithm completely and
instead make sure that Monitor (and EntityCache) on the client side can
properly handle the case of an Item/Collection no longer existing in Akonadi at
the time the notification is delivered.

Bring back notification compression - at least partially

Turned out that adding a new Collection generates one ADD and several
(usually 3 in my case) MODIFY notifications - since the MODIFY is
expensive and slows down the initial sync, we compress it into one notification again.

All other notifications are dispatched directly without any compression.

Notification compression: further optimization to minimize worst-case scenarios

We now iterate over the list of pending notifications in reverse order,
because it's more probable that compressible notifications are at the end of
the queue rather than at the beginning.

Iterating from the end also means that once we reach the Add notification for the
same collection, we can break immediately and don't have to iterate through the
rest of the list, since there can be no other notifications for this Collection
before the "Add".

Preserve local-only flags during MERGE

Some flags are never synced to the server (because the server does not
support them, for instance), so the resource cannot know about them when
syncing changes in the Item (another flags change, payload change,
whatever), and since ItemSync always creates non-incremental flags changes,
we would lose these local-only flags.

This hack ensures that during MERGE, when we have non-incremental flags
changes, certain local flags ($ATTACHMENT, $WATCHED, $INVITATION, ...)
are preserved.

Make Akonadi::Server::Connection build for 12.04

handler::list: CollectionAttribute values are QByteArray

We should read/write them as QByteArray and not as QString.

Akonadi::Server: add a benchmark fetch test for a populated collection

QueryBuilder::setForwardOnly: use ForwardOnly mode for the SQL query

* uses less RAM

LIST: comments and simplifications.

LIST: Retrieve attributes also for collections that only have been included because they are part of the tree.

LIST: The base collection listing should also succeed if the filter doesn't match.

Otherwise an ETM with a list filter cannot fetch collections that are part of the tree
but don't match the filter. In particular, it would also not receive updates from the
monitor for said collections.

Catch invalid parents (happened to me once); better make sure we catch this in non-debug builds.

LIST: Retrieve attributes like mimetypes in a parallel running query.

We still have to load attributes for ancestors into memory, but at least the
rest we can stream directly.

LIST: list mimetypes directly instead of loading them into memory.

It's faster and saves memory.

LIST: Improved performance

Instead of issuing at least 2 queries per collection (attributes and mimetype),
we collect all required data in queries spanning all collections. This improves
the list performance by a factor of ~3. Some more memory is used as a tradeoff,
but this currently doesn't seem too bad.

FakeClient: Support for ignoring lines.

Useful for benchmarks with thousands of responses.

Support for writing StorageDebugger content to a file.

Handler time reporting

TAGSTORE: allow clearing the rid

FETCH-Tests and made collectioncontext non-persistent.


TAGSTORE: Actually working tagupdates + test.

With this it becomes possible to update the rid.

Relations support


LIST: Collection ancestor fetch scope

parentLookup is only used locally.

LIST: Fix ancestor retrieval

My previous fix has introduced a regression that has broken ancestor retrieval,
this commit fixes that.


adjusted test

Modify: Make sure the referenced state is stored before we trigger the sync.

Otherwise it's possible that we get a LIST that expects the referenced state
before it's stored.

Changed ancestors listing to use regular collection format and include name.

LIST: Fetch ancestor attributes


LIST: list referenced collections to resources.

Otherwise referenced collections are never synchronized resulting in no content.

Fix LIST handler listing irrelevant collections in the mimetype filter

The hasChildCollections check would include every collection with at least
one child, regardless of whether it passes the mimetype filter or not. This
patch changes the child collection check to only evaluate to true if at least one
collection in the subtree matches the mimetype filter.


LIST: Non-recursive listing

Instead of recursively querying for children,
we query for a list of all collections we're interested in,
and then assemble a tree.

This potentially results in a small overhead with few collections,
but should scale much better when we have many collections while most
are filtered out by the initial query (i.e. disabled collections).

Additionally, this patch ensures enabled collections that are nested in
a disabled collection are correctly found.

REVIEW: 119418


Test-DbInitializer: improve for listhandler test.

Share DbInitializer for other tests and make use in listhandlertest.

use dbinitializer

Remove the collections manually, since SQLite doesn't deal with the constraints.


FETCH: Fetch tags with item

REVIEW: 119468

Don't call CREATE DATABASE in FakeAkonadiServer

Unit-tests always use SQLite, which does not support CREATE DATABASE. This
for some reason fails silently in Qt 4, but produces (a valid) error in Qt 5.

TagFetchHelper: Fix SQL query for fetching tag RIDs

The original query was wrong and was returning a single tag multiple
times if it had multiple remote IDs.

CollectionReferenceTest: Fixed the tests after fixing the reference manager

This test couldn't work, because the referenced state is reset after
each scenario (due to the session resetting the reference count).
I adapted the test accordingly.

Fix a bug in DataStore::virtualCollections()

The code was not correctly resolving all virtual collections that the
given items belong to.

Fix race condition in StorageDebugger

Thanks to David Faure for discovering this.

TagAppend: fix repeated appends with merge enabled

This was triggered by the knutresource, which creates its tags every time
it syncs collections. For updating the record we can't use the convenience
class, since we need to identify it using both id and resourceId, and not only id.

REVIEW: 119320

Use the right identifier to remove the session from the CollectionReferenceManager.

Fix HRID-based FETCH not searching in correct collection

Fix compiler warning in MERGE handler

compile with Clang

Qt 4.8 doesn't support Clang (and so doesn't have initializer lists with Clang)

STORE: Allow modifying item tags via Tag RID or GID

Tag RIDs are of course allowed only for resources.


Optimize the cacheOnly fetch scope override hack

With Baloo we introduced a hack that ignores the CACHEONLY parameter of the FETCH command
when all requested items are local (provided by maildir/mbox resources), so that
Baloo was able to index local mails despite the cache expiration policy.

This (together with the broken CollectionScheduler) led to local mails never
expiring from the Akonadi cache.

This change permits the CACHEONLY override only for the Baloo Indexer agent.
Unlike some other agents/clients, the Baloo agent is smart and does only incremental
checks, so it will not cause entire maildirs to be loaded into the Akonadi cache.

BUG: 333514
FIXED-IN: 1.13.0

Fix CollectionScheduler's timer not resuming when uninhibited

Since CacheCleaner is inhibited during various operations after start, the
timer was never correctly resumed when the inhibitor was removed, and so
payload parts were never expired.

CCBUG: 333514

PartStreamer: don't UPDATE Part twice

When using SQLite, always start transactions in IMMEDIATE mode

IMMEDIATE mode acquires RESERVED lock immediately, which seems to resolve
the database deadlock errors we've been seeing with SQLite.

Disable COPY and MOVE into virtual collection

Although technically possible, it makes no sense for virtual collections to own
any items and we already prevent X-AKAPPEND to virtual collections anyway.

MERGE: Only emit change notification if something has really changed

The MERGE request does not have to contain all data (like the GID, for instance),
so instead of overwriting the existing GID with an empty GID, we ignore this
"change". The same applies to flags: I noticed that when syncing a folder,
KMail was eating about 50% CPU, because ETM generated dataChanged() for every
item in the synced folder. It was because the Server assumed that a non-incremental
flags change must always generate a change, which is of course not true.

Fix a bug in setToVector() conversion

QVector(int) constructor allocates n items, so appending set values to it creates
a vector of n zeros followed by the appended values, which luckily in our case had
no side-effect, but was confusing. Instead we use std::copy to copy the QSet to the vector.

Fix and optimize DataStore::virtualCollections()

The method now returns QMap<Entity::Id, QList<PimItem> > instead of being a
multimap, and the PimItem now correctly contains the mime type.

Implement fetching Virtual References via FETCH command

This adds a new parameter to the fetch scope called VIRTREF. When listed in the fetch
scope, the FETCH response will include the list of virtual collections that the item
is linked to. This allows resources to link/unlink items incrementally, without
having to make a full listing of all collections to find the exact inconsistencies
between the cache and the backend storage.


LINK: Restore resource context if it was unset

Fix retrieving statistics for virtual collections

ModifyHandlerTest: Removed duplicate test.

Fixed listhandlertest.

Or is the order not guaranteed?

Support for temporary referenced collections.


Option to avoid db population in fakedatastore.

This allows tests to avoid the population by the xml data so they can
populate the store themselves.

Add benchmark for ImapParser::parseParenthesizedList.

The test data comes from real-world data, with the actual email
addresses and names replaced by dummy data.

Explicitly set size of hiddenAttribute

When appending a hidden attribute, we must specify the size, because
otherwise PSQL will try to insert a NULL value, which is not permitted
in the PartTable.datasize column.

Register the /SearchManager object on main thread connection

Otherwise the /SearchManager path would never appear on our service.
This won't affect the crash in SearchTaskManager, as that is using
its own per-thread connection.

FETCH: Drop the requirement that GID-based fetch must have collection context

PartStreamer: Make sure the external part storage directory exists

Bumped protocol version and server version.


Migrated subscribed to enabled, fixed tests.

Only emit subscribed notification after modified notification, cleanup, fixed tests.

Only schedule collections for sync that are enabled or have a corresponding preference.

Convert Tristate to SMALLINT on PostgreSQL

TINYINT (1-byte int) is not part of SQL standard and is only supported by MySQL
and SQLite. In PostgreSQL, the smallest numerical type is SMALLINT, which has
2 bytes.

Make FakeAkonadiServer a subclass of AkonadiServer

This way we make sure that parts that call AkonadiServer::instance() will in fact
get a pointer to FakeAkonadiServer.

Also make sure that all unit-tests that use FakeAkonadiServer call quit() at the
end to clean up the test environment.

The cachecleaner and intervalcheck are optional and not available during unittests.

Made CacheCleaner and IntervalCheck non-singletons.

AkonadiServer serves as the singleton to access the main instances;
IntervalCheck and CacheCleaner are in fact optional, and the returned pointer
needs to be checked accordingly.

LIST/MODIFY/CREATE: support for enabled state and sync/display/index preference.

This patch adds support for the new enabled state plus the local
preferences for sync/display/index. The enabled state gives a preference
that can be synchronized across devices, while the local preference
allows overriding that setting individually for each purpose.
This mechanism should eventually replace the local subscription mechanism.

Use AKONADI_EXCEPTION_MAKE_INSTANCE for proper Exception realization in FakeAkonadiServerException.

Implement direct streaming of part data

This implements a new optimization for item streaming, which improves performance
and memory efficiency when appending or modifying items, by allowing clients to
write the payload directly to a file instead of sending it to the server.

This effectively eliminates the need to transmit (possibly large, but always
at least DbConfig::sizeTreshold() bytes of) part data to the server, which has
to process it and write it to a file. This feature of course only works for external
parts, i.e. parts larger than the configured threshold. Smaller parts or non-literals
are still parsed as usual.

The main change that clients have to cope with is that the continuation response
can now contain parseable data that they must check for:

+ STREAM [FILE external_file_name]

If the client receives this response, it parses the file name and writes the data
directly into the file instead of sending it to the server. In case of error,
the client replies "* NO Error message", otherwise it just sends the next payload part
header, or the end of the list.

The server only verifies that the file size matches the advertised size of the part
and continues parsing the next data.

Clients have to advertise that they support direct payload streaming by listing
DIRECTSTREAMING in the CAPABILITY command. Clients without this capability will never
receive the STREAM continuation response and must provide any payload the usual way.


Make FakeAkonadiServer a singleton initialized early in AKTEST_FAKESERVER_MAIN macro

This makes sure that all environment variables will be set before anything
else has a chance to call XdgDataDirs or anything else that caches the paths.

FakeAkonadiServer: don't overwrite user's config in ~/.config/akonadi

Fix handler unit-tests failing because of protocol version mismatch

Add more complex test cases for X-AKAPPEND handler

Improve macros in aktest.h

Add setWaitTimeout() to ImapStreamParser to configure timeout for unit-test

Unlike in the real world, the client will always send a response to the server immediately,
so waiting for 30 seconds slows down the tests unnecessarily when we want to test
whether the server can cope with a read timeout correctly.

Terminate FakeClient when a QTest::qVerify or QTest::qCompare fails

This requires special versions of QCOMPARE and QVERIFY that call quit()
when the check fails. Otherwise the test got stuck after a failure, because
the FakeClient did not play the scenario further, but also did not 'disconnect'
from FakeServer.

Backport some ImapStreamParser bug fixes from KIMAP

Discovered by akappend handler test

Base X-AKAPPEND handler unit-test

AkTest: more comfortable test helpers

Add a nasty hack to remove custom cmdline arguments before calling QTest::qExec

QTest's QApplication will abort on an 'unknown argument' if we pass it some of
ours, so we process our arguments first, then remove them from argv and pass
the stripped argv to QTest::qExec.

Rework the Fake testing infrastructure to be more like server-client

This is inspired by FakeServer from KIMAP, where the server (a thread) is given a
scenario and it compares whether requests from the client match the scenario and
sends back replies from the scenario. Except here the roles are switched and
the scenario is played by the client, which verifies whether the server responds
correctly to its commands.

Add FakeSearchManager and instantiate an inactive PreprocessorManager from FakeServer

FakeSearchManager is needed to make SearchManager::instance() return a valid
pointer (which is expected by all callers) but not really do anything when
called, as we don't want the search code to interfere with our tests.

The same applies to the PreprocessorManager, but that can simply be turned off and
really does nothing when there are no preprocessors registered.

Don't emit notify() when there's no notification in NotificationCollector's queue

This prevents 'ghost' notifications appearing in some places and makes the
code easier to unit-test.

Extend LinkHandlerTest by tests for UNLINK command

Implement a unittest for LINK handler and fix a bug it has found

Add empty constructor to Scope so that it can be a metatype

Add QDebug support to NotificationMessageV3 and few handy methods for unit-testing

Introduce some Fake classes to allow unit-testing of internal parts

Namely we have FakeDataStore, which records all calls and calls the actual
implementation; FakeConnection, which allows injecting specific handlers and
specifying commands to process as a QByteArray (so there is no need to use a socket); and
finally FakeAkonadiServer, which, when started, sets up the entire environment
in /tmp, initializes and populates the database, and cleans everything up again when stopped.

Make Connection a QObject and create ConnectionThread that owns Connection

This solves some problems with QObject ownership (Connection lives in a thread in which
it's executed) and allows unit-tests to be single-threaded.

Add a XML file that describes content the database will be populated with

The XML file is transformed into a C++ class called DbPopulator using XSL transformation.

Since my experience in writing XSL stylesheets is non-existent, the XSL templates are
rather messy and the format and content of the input XML file are limited, but they should be
sufficient to populate the database with some testing data that unit-tests can refer to.

Fix handling of tags in AK-APPEND and MERGE commands

The handlers are now able to process tags identified by GID or RID
and will create a new tag if no such tag exists.

RTag (identifying a tag by RID) requires a resource context.

Manually roll back the transaction when SQLite reports a deadlock

Unlike other databases, SQLite won't roll back the transaction for us.

Improve detection whether SQLite is built with unlock_notify API

Support for tag types + and fixed tag fetch by RID.

The type is stored in a separate table and always returned when fetching a tag.
Additionally, we're now picking the right result of the query as the rid.

REVIEW: 117769


Remove space from an argument passed to postgres server

Some users reported problems with starting PostgreSQL when there's a space
between the name and value of an argument passed to postgres. Removing the
space fixes the problem for them (without breaking PostgreSQL for those who
didn't have a problem with the space).

Thanks SergTruf for the patch.

BUG: 332988
FIXED-IN: 1.12.2

Fix PostgreSQL start when the pidfile is not removed after a non-clean shutdown

When PostgreSQL is not terminated nicely, a pidfile is left behind in db_data,
which will prevent pg_ctl from starting a new PostgreSQL server.

We check for the file and verify that a postgres server with the PID
specified in the pidfile is not running anymore, then delete the pidfile to
allow pg_ctl to start a new server. If the postgres server is still running
(possibly after an Akonadi server crash), we try to connect to it right away.

BUG: 286826
FIXED-IN: 1.12.2

Fix PostgreSQL startup when akonadiserverrc contains empty values

When there are empty configuration options in akonadiserverrc (like Host),
we would still pick them up and use them as valid values, which is wrong,
because then we work with empty socket path etc.

Instead we always replace non-existent and empty config values with the
default ones, so that Akonadi with PostgreSQL is always able to start.

Added an index for the remoteId column.

The MERGE does a lot of lookups by rid and therefore is a lot faster with the index.

Incremental changes for MERGE and tag support for MERGE and APPEND.

Incremental changes make it possible to e.g. only update the flags.

Tag support allows clients to create a new item with tags specified via the
\Tag['tagRemoteId'] flag (the flag can be repeated with different
remote IDs to assign multiple tags). The tags MUST exist.

When SILENT is specified on MERGE, the resulting item won't be sent in the response
of MERGE; instead just a UID is sent.

Daniel Vratil is of course responsible for writing the patch ;-)

Update Doxygen documentation

Fix links and add information about some new features. This is still far from
complete or perfect.

Include <numeric> for std::accumulate.

This fixes the build with at least libc++ after f212406e.

Fix build when sqlite is not installed

Check whether Sqlite is compiled with SQLITE_ENABLE_UNLOCK_NOTIFY

The QSQLITE3 driver now requires that SQLite is compiled with
SQLITE_ENABLE_UNLOCK_NOTIFY, which enables the unlock_notify API.

It turns out that some distributions ship SQLite compiled without
this option and Akonadi then fails to compile.

To prevent this, we detect whether SQLite is compiled with the unlock_notify
API at configuration time by compiling a simple check in FindSqlite.cmake.

REMOVE: Parse in-command context

Parse collection and/or tag context specified within command.

Tag context should not be persistent across commands

Unlike with SELECT for collections, we have no way to change the tag context
or unset it, so if one command set the context, all subsequent commands
would have to override it.

Instead, we make the tag context non-persistent (it's reset before every new
handler is started).

Once (if) we kill SELECT in KF5 and specify the context in each command
for collections too, we can make the entire context per-command
instead of per-connection.

Fix change detection when storing an updated attribute part

DataStore: SQLite does not support the QuerySize feature, so we can't rely on it.

MERGE: Only emit itemChanged if the item is actually modified by the merge

This adds a new optional argument to PartHelper::storeStreamedParts() that will
be set to true if the part data have changed, or a new part was created. The
code always compares data sizes first and will fall back to comparing the actual
data only when the 'changed' argument is set by caller and when part sizes are
the same.

Implement MERGE command handler

The MERGE command allows merging a new item into an existing one based on a
GID and/or RID match. If no such item exists, a new one is created.
If it exists, the new item is merged into the existing one, overwriting
conflicting flags and parts.

Merging is restricted to the parent collection and mime type, to minimize
potential RID/GID conflicts, since they are not guaranteed to be unique.

The syntax of the MERGE command is equivalent to the syntax of the X-AKAPPEND command,
except that the command ("MERGE") is followed by a parenthesized list of
attributes to merge by (currently supported values are "REMOTEID" and "GID").

Implementation-wise, the Merge handler is a subclass of the AkAppend handler,
which allows reusing all the existing code for insertion of a new item
(in case no candidate for merging is found) and only implements merging.


Fix retrieving of GID from SQL query result in FetchHelper

This has been broken since day one, but nobody noticed. I guess
we were lucky enough to always query other arguments, so that
ItemQueryPimItemGidColumn actually matched the index of the GID column
in the query.

Another reason why we need proper unit-tests on the server...

Remove the invalid GID part from PartTable before starting PartTable migration

More people than we expected have an invalid 'GID' part in their PartTable,
which breaks the migration to schema 25, because it expects all part types
to have a valid name.

To work around this, we DELETE all parts with the name 'GID' from PartTable
before starting the actual migration. This will not fix the migration for
people with other invalid parts, but I haven't heard of any such case. To make
this completely bullet-proof, we would need to iterate through all entries,
which would be massively slower than the current INSERT INTO ... SELECT FROM approach.

Distributions, this is a good choice for backporting into 1.12.1 ;-)

BUG: 331867
FIXED-IN: 1.12.2

Use per-thread QDBusConnections

This moves DBusConnectionPool from the Nepomuk search code to Akonadi and makes
use of it in the search infrastructure and a couple of other classes that interact
with DBus from non-main threads.

QDBusConnection is not thread-safe in Qt 4, so we need to work around it by
having a connection for each thread. Qt 5 should be OK, so we can remove this
in Frameworks.

This should fix random crashes I've been seeing when SearchTaskManager::addTask()
was called from multiple threads simultaneously.

With the Baloo port completed, don't require Soprano by default.

Approved by Dan.

Merge branch 'master' into kolab/integration/1.12.0


Appended kolab version number.

All tests pass, I'll test it with kdepimlibs once that merge is done as well.

Christian Mollekopf
Daniel Vrátil
Volker Krause
Review request changed

Status: Closed (submitted)