speed up of rxx_allocator

Review Request #100730 - Created Feb. 23, 2011 and discarded

Submitter: Floris Ruijter
Repository: kdevelop
Reviewers: kdevelop
rxx_allocator was, according to my measurements (done with valgrind/kcachegrind and duchainify on a file that only includes iostream), a significant hotspot. The allocator had three basic defects:
1) all allocated memory was deallocated on destruction, even though we need a lot of rxx_allocators (one per file, I presume?), so the blocks could have been reused
2) it cleared memory on a per-block basis, which is wasted effort when not all of a block is used
3) it used realloc to manage the list of blocks; this isn't too bad, but it can cause the list to be moved, which is totally unnecessary

I solved the problems mostly by making the blocks act as linked-list nodes: a next pointer plus a really long char array. Deallocated blocks are kept in a static linked list, while each rxx_allocator has its own (personal, some would say) linked list of blocks. Access to the list of deallocated blocks is synchronized through a static QMutex.
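A minimal sketch of that idea, assuming a fixed block size and illustrative names (this is not the actual patch): blocks are linked-list nodes, freed blocks are returned to a shared static free list guarded by a QMutex, and only the memory actually handed out is zeroed.

```cpp
#include <QMutex>
#include <QMutexLocker>
#include <cstddef>
#include <cstring>

class pool_sketch
{
    static const size_t BLOCK_SIZE = 1 << 16; // hypothetical block size

    struct Block
    {
        Block* next;
        char data[BLOCK_SIZE];
    };

    // shared cache of deallocated blocks, reused by later allocators
    static Block* s_freeBlocks;
    static QMutex s_freeBlocksMutex;

    Block* m_blocks; // this allocator's own list of blocks
    size_t m_used;   // bytes used in the current (front) block

    Block* acquireBlock()
    {
        {
            QMutexLocker lock(&s_freeBlocksMutex);
            if (s_freeBlocks) {
                Block* b = s_freeBlocks;
                s_freeBlocks = b->next;
                return b;
            }
        }
        return new Block; // free list empty, allocate a fresh block
    }

public:
    pool_sketch() : m_blocks(0), m_used(BLOCK_SIZE) {}

    // bump-pointer allocation; oversized requests (> BLOCK_SIZE) not handled here
    void* allocate(size_t size)
    {
        if (m_used + size > BLOCK_SIZE) {
            Block* b = acquireBlock();
            b->next = m_blocks;
            m_blocks = b;
            m_used = 0;
        }
        void* p = m_blocks->data + m_used;
        m_used += size;
        std::memset(p, 0, size); // clear only what is handed out, not the whole block
        return p;
    }

    ~pool_sketch()
    {
        // return all blocks to the shared free list instead of freeing them
        QMutexLocker lock(&s_freeBlocksMutex);
        while (m_blocks) {
            Block* next = m_blocks->next;
            m_blocks->next = s_freeBlocks;
            s_freeBlocks = m_blocks;
            m_blocks = next;
        }
    }
};

pool_sketch::Block* pool_sketch::s_freeBlocks = 0;
QMutex pool_sketch::s_freeBlocksMutex;
```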

Access could also be made thread-safe by using a thread-local linked list of deallocated blocks instead, but I don't think that would be practical; the global static list is probably more effective, even though it requires locking.
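For comparison, a hedged sketch of that thread-local alternative (names are illustrative, C++11 thread_local assumed): each thread keeps its own cache of deallocated blocks, so no mutex is needed, at the cost of blocks not being shared between threads.

```cpp
#include <cstddef>

struct BlockNode          // same node layout as in the sketch above
{
    BlockNode* next;
    char data[1 << 16];
};

// one free list per thread, so acquire/release need no locking
static thread_local BlockNode* t_freeBlocks = 0;

BlockNode* acquireBlockThreadLocal()
{
    if (t_freeBlocks) {
        BlockNode* b = t_freeBlocks;
        t_freeBlocks = b->next;
        return b;
    }
    return new BlockNode;
}

void releaseBlockThreadLocal(BlockNode* b)
{
    b->next = t_freeBlocks;
    t_freeBlocks = b;
}
```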
As mentioned, I ran a file which only includes iostream through duchainify and profiled it with callgrind.

                                         old:              new:
pool::allocate                           ~450 000 000      ~7 000 000
all time spent in libkdev4cppparser      ~585 000 000      ~140 000 000

The pool::allocate numbers are both 'inclusive' costs.

Looking at the data for the number of "operator new" calls, I can see that the cost per call is pretty much the same, but the old implementation called it about 50x more often.
Review request changed

Status: Discarded

Change Summary:

KDevelop actually has a thread-local allocator cache now for 4.5 - see MemoryPool in languages/cpp/parser/memorypool.h