Hans-J. Boehm (boehm@parc.xerox.com) writes in
<4044qp$b63@news.parc.xerox.com>
> I think this all depends on so many factors (e.g. allocation algorithms,
> presence of VM, scarcity of physical memory) that it is very hard to
> generalize. My Cedar environment stays up for at least weeks with a purely
> noncompacting collector. Though it runs as a group of SunOS processes, it
> also has a thread switcher and many client threads (mailers, editors, etc.)
> running in one address space. I don't notice any performance deterioration.
One crude measure of the effect of fragmentation is:

           size of store
  -----------------------------
  size of biggest stored object
This was about 500 for Multipop, assuming we stored the robot's visual
data as separate rows. For Cedar, I imagine it could be over a hundred
thousand, though you don't tell us how, for example, the editors store
their text. However, those who speak of single objects of a megabyte or
more are certainly speaking of systems that could get seriously
fragmented.
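To make the hazard concrete, here is a toy sketch in C (not Multipop or
Cedar code; the arena and block sizes are invented for illustration).
Live and dead blocks alternate, so half the arena is free, yet the
largest contiguous hole is a single block: without compaction, any
request bigger than one block fails even though half the store is free.

    /* Toy model of a fragmented, non-compacting heap. */
    #include <stdio.h>

    #define ARENA_SIZE (64 * 1024)
    #define BLOCK_SIZE 256
    #define NBLOCKS    (ARENA_SIZE / BLOCK_SIZE)

    int main(void)
    {
        int live[NBLOCKS];
        int i, free_bytes = 0, largest_hole = 0, run = 0;

        for (i = 0; i < NBLOCKS; i++)   /* alternate live and dead */
            live[i] = (i % 2 == 0);

        for (i = 0; i < NBLOCKS; i++) { /* scan the free holes */
            if (!live[i]) {
                free_bytes += BLOCK_SIZE;
                run += BLOCK_SIZE;
                if (run > largest_hole)
                    largest_hole = run;
            } else
                run = 0;
        }

        printf("free: %d bytes, largest hole: %d bytes\n",
               free_bytes, largest_hole);   /* 32768 vs. 256 */
        return 0;
    }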
> But the heap rarely has more than 25 MB or so live data, and recently I've
> always had at least 64MB of physical memory.
Multipop often ran close to having no free memory whatever, at which
point users who exceeded their memory quota would be thrown off. Thus
the compactor had to recover all the available store, so that in some
applications we could use almost all of it.
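A sketch of the idea (illustrative C, not the Multipop compactor; the
cell layout is invented): sliding compaction moves every live cell down
so that all free store coalesces into one region, which is what lets an
allocator then hand out almost all of memory. A real compactor must
also update every pointer into a moved object, omitted here.

    #include <stdio.h>

    #define NCELLS 8

    struct cell { int live; char payload[16]; };

    /* Slide live cells to the bottom of the heap; everything above
     * dst becomes one contiguous free region. */
    static void compact(struct cell heap[], int n)
    {
        int dst = 0, src;
        for (src = 0; src < n; src++)
            if (heap[src].live)
                heap[dst++] = heap[src];
        for (; dst < n; dst++)
            heap[dst].live = 0;
    }

    int main(void)
    {
        struct cell heap[NCELLS];
        int i;

        for (i = 0; i < NCELLS; i++) {  /* interleave live and dead */
            heap[i].live = (i % 2 == 0);
            sprintf(heap[i].payload, "obj%d", i);
        }

        compact(heap, NCELLS);

        for (i = 0; i < NCELLS; i++)
            printf("%d: %s\n", i,
                   heap[i].live ? heap[i].payload : "(free)");
        return 0;
    }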
> I suspect that the right kind of compaction is occasionally useful. But
> you can go a long way without it.
In a modern environment, compaction is more a matter of efficiency,
determined by interactions between the garbage collector and the memory
hierarchy (caches, physical memory, virtual memory). However, systems
that can compact (or, more generally, relocate objects) possess degrees
of freedom that non-relocating systems lack, and that freedom offers
potential for enhancing performance.
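One such degree of freedom: once live data has been compacted, the free
store is a single contiguous region, so allocation reduces to bumping a
pointer rather than searching a free list. A minimal sketch in C (names
and sizes invented; this is not any particular collector's code):

    #include <stddef.h>

    static char   arena[1 << 20];   /* 1 MB heap */
    static size_t top = 0;          /* everything below top is live */

    /* Bump allocation, possible only because compaction keeps free
     * space contiguous; a non-relocating system must search a free
     * list (or segregated bins) instead. */
    static void *bump_alloc(size_t n)
    {
        n = (n + 7) & ~(size_t)7;   /* 8-byte alignment */
        if (top + n > sizeof arena)
            return NULL;            /* would trigger a collection */
        void *p = arena + top;
        top += n;
        return p;
    }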
Robin Popplestone.