Date:Sun, 25 Apr 2004 10:24:48 +0000 (UTC) 
Subject:Re: new problem: speed declines very much 
From:A. Sloman 

On Sun, 25 Apr 2004, Fatemeh wrote:

>
> Now there is another problem. My simulation runs 260 to 300 time
> slices in each cycle, and I have to run the simulation many times
> (because the agents use a neural net for learning). Running 100
> cycles takes 6 to 7 hours, but for more cycles speed declines
> sharply: 300 cycles needs several days.

If you are *very* careful you can try following some of the tips in the
file HELP EFFICIENCY.

Some of them improve speed by reducing run-time checking, so you have to
be very confident that your program works correctly before you disable
those checks. Otherwise you can get quite obscure bugs, just as C
programmers do!


> At the end of each cycle, my simulation puts results into a
> global list.

This will steadily increase the size of your process, increasing the
frequency of garbage collections; and if you are short of memory it
can also increase the amount of paging and swapping on your machine.

You can try to overcome that problem by writing results directly to
files on the hard drive instead of adding them to memory. You'll need
to use sysopen, syswrite and sysclose, as described in REF SYSIO.

Make sure you close a file before the program ends or you will lose
data.
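
As an untested sketch (the procedure and file names here are purely
illustrative; check REF SYSIO for the exact argument conventions of
syscreate and syswrite), the pattern looks roughly like this:

    vars resultdev;

    ;;; create the output file once, before the main loop starts
    syscreate('results.txt', 1, false) -> resultdev;

    define record_result(string);
        ;;; append one line of results to the file
        syswrite(resultdev, string, datalength(string));
        syswrite(resultdev, '\n', 1);
    enddefine;

    ;;; after the final cycle:
    sysclose(resultdev);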

Alternatively use 'discout': in the procedure that previously added
things to lists, you can instead 'print' the information to a file,
provided that the information does not include pointers to pop11 data
structures, only printable objects like words, numbers and strings,
or lists or vectors of them.

To see how to write a procedure that takes a file name F and a procedure
P and then runs P() in an environment where all print commands print
to the file F, look at the save library

    SHOWLIB save
    HELP save

It uses

    dlocal cucharout = discout(file);

to temporarily divert printing output to the file.

You must eventually close the file by doing

    cucharout(termin);

in the scope of the procedure containing that dlocal expression.
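
For instance, a minimal version of that pattern (the procedure name
and file name here are made up for illustration) might look like:

    define print_to_file(file, p);
        ;;; divert all printing output to the file while p runs;
        ;;; dlocal restores the old cucharout when the procedure exits
        dlocal cucharout = discout(file);
        p();
        ;;; flush and close the file before cucharout is restored
        cucharout(termin);
    enddefine;

    ;;; example use: print a list to 'results.log'
    print_to_file('results.log',
        procedure; [cycle 42 done] => endprocedure);

Note that if p() exits abnormally (e.g. via an error), termin is never
sent, so the file may not be properly closed.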

If you want different parts of your output to go to different files,
you can open several files and write to each of them.


> Also, I don't use the C language for the neural
> net now.

If you were accumulating data in your running process you would
have the same problem whether you used C or not. Switching from
pop11 to C for the neural-net components might speed up those
components between 5 and 10 times, I guess.

> Also, for tracking, different information is printed at each
> cycle.

If you print into a Ved buffer which is getting longer and longer
that also takes up space in your program and will slow it down.

If you want to keep the trace output send it to a file instead
of to a ved buffer. Otherwise you can make your program shorten
the Ved buffer every now and again. E.g. to delete the
first N lines

    define truncate_buffer(N);
        dlocal vvedmarklo = 1, vvedmarkhi = N;
        ved_d();
        ;;; add this if you are viewing the file and it looks wrong
        chain(vedrefresh);
    enddefine;
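
Then, as an illustrative (untested) fragment, your main loop could
keep the trace buffer bounded with something like:

    ;;; hypothetical: every 50 cycles, drop the oldest 500 trace
    ;;; lines, assuming the trace buffer is the current Ved file
    ;;; and has at least that many lines
    if cycle mod 50 == 0 then
        truncate_buffer(500);
    endif;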


> My system is AMD Duron 750 with 128 MB RAM.

Not long ago a 40MHz machine with 32MB RAM was a luxury.
30 years ago people were trying to do AI with 0.5MB of memory
and a 1MHz CPU, or less.

> Is this normal, or may the speed problem be related to my simulation?
> Is there any solution for recovering speed? Or is the problem related
> to my system (small RAM)?

Adding RAM helps with many speed problems. But first make sure the main
problem is not the software. Huge, constantly growing data structures
(like lists of results) will eventually bring any machine to its knees.

Aaron