On 2004-10-12 10:48:02 -0700 Aaron Sloman wrote:
> However, I think it would be best to replace the name
> 'poplog' in the 'shootout' test reports with 'pop11'
> because the actual language you were using was pop11,
> and poplog is the whole environment which includes
> several languages.
Done. Please let me know if anyone notices a place I missed.
> Poplog common lisp does have some optimisation declarations.
> (See LISP help optimize == $usepop/pop/lisp/help/optimize )
I'll have to take a look at these to see if any apply.
In addition, it's a bit of a disappointment that Poplog Common Lisp
follows "Common Lisp the Language, 2nd Edition" rather than ANSI
Common Lisp. With Steel Bank Common Lisp, CMU Common Lisp, and
CLISP, I can use the same source file to run all the tests.
It would be nice if Poplog Common Lisp met that standard as well.
> When I looked at your pop11 code the only possible optimisations
> that occurred to me were
> o remove output locals,
> o use the non-checking fast integer arithmetic operations,
> o declare the procedure name to be of procedure constant
> type:
Done. I didn't see much of a change in speed. In fact, on at least
one test things became fractionally slower.
I think the main problem is that each run of the test is invoked as
something like "/usr/bin/pop11 ackermann.poplog 5", which causes
Poplog to recompile the program every time. Perhaps the compiler
does slightly more work to optimize the routines when they are
declared 'constant' and so forth, and I'm paying that cost on
every run.
Next on the agenda is to dump an image with the test implementations,
then invoke this image each time (just as I do for the Common Lisp
and Smalltalk tests).
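If I remember the mechanism correctly, that should look roughly
like the sketch below; the file names and the run_test driver are
made up, and I'm relying on syssave returning false when the image
is written and true when it is later restored, so the same file can
serve both purposes. I'll check HELP SYSSAVE before committing to
this.

```pop11
;;; mkimage.p -- hypothetical sketch of building a saved image.
;;; Build the image once with:   pop11 mkimage.p
;;; Each timed run is then:      pop11 +shootout.psv 5
compile('ackermann.p');          ;;; load the test implementation
if syssave('shootout.psv') then
    ;;; true: we have just been restored from the saved image
    run_test();                  ;;; hypothetical test driver
else
    ;;; false: the image was just written, so stop
    sysexit();
endif;
```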
> And if you know what you are doing (and are careful) you
> can get even better results. But never as good as the
> best compiled lisp systems, or C.
Well, that's a shame. But Poplog occupies the same problem space
as Erlang and Mozart/Oz, and so is aimed at a different kind of
problem. Unfortunately, the shootout tests don't exercise those
categories of problems well. I hope to change that, assuming I can
identify a set of problems that are:
o Easy to implement in a variety of languages (usually this
equates to being short programs).
o Scalable across a variety of inputs, so that a range of input
sizes can be tested.
o Checkable against a known "good" answer/output, so that an
automated judge can decide whether two implementations of a test
have arrived at the correct answer.
Thanks,
-Brent