Jonathan (another late bird) wrote:
> FWIW, I think the ease-of-use v. efficiency trade-off has gone too
> far in the direction of efficiency. Certainly it should be *possible*
> to make things more efficient, but flexibility and ease-of-use are
> the primary advantages (IMNSHO) of Poplog, so Dave should never have
> had this error in the first place.
I don't think lexical scoping is a matter of efficiency. It's a matter
of clarity.
But I agree that in principle the compiler should have been able to
guess his intention (just as you and I did). That could be done if it
were a two-pass compiler, which on the first pass collected all the
local declarations and on the second pass used them. However, I am not
sure that this is possible in general in a language as flexible as
Pop-11, which can have arbitrarily complex macros and syntax words that
can do anything, so that they may have different semantics on the
second pass.
So I wonder if a modification to the one-pass compiler could do it: i.e.
plant virtual machine code in which undeclared identifiers are noted
and perhaps represented by a reference held on a list of undeclared
identifiers. Then, if a declaration is found later on in the same
procedure body, the contents of the reference can be updated.
By the end of the process any non-updated items will cause warning
messages to be printed out, and their references will be changed to
indicate the need for a global variable.
Then the back-end compiler, which generates machine code (or assembler
when rebuilding Poplog), would have to know how to dereference these
references.
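Something like the bookkeeping could be sketched in ordinary Pop-11
(a toy illustration only, nothing to do with the real VM interface;
the names pending, note_undeclared, note_declaration and finish_body
are all invented):

vars pending = newproperty([], 16, false, true);

define note_undeclared(word) -> ref;
    ;;; first use of an undeclared identifier: remember a reference
    lvars word;
    pending(word) -> ref;
    unless ref then
        consref("unknown") -> ref;      ;;; placeholder until declared
        ref -> pending(word);
    endunless;
enddefine;

define note_declaration(word, props);
    ;;; a declaration later in the same procedure body: patch the reference
    lvars word, props, ref;
    pending(word) -> ref;
    if ref then props -> cont(ref) endif;
enddefine;

define finish_body();
    ;;; anything still unpatched: warn and default to a global variable
    appproperty(pending,
        procedure(word, ref);
            lvars word, ref;
            if cont(ref) == "unknown" then
                npr('DECLARING VARIABLE ' >< word);
                "global" -> cont(ref);
            endif;
        endprocedure);
enddefine;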
I suspect John Gibson could easily make this work....
Pity he has retired.
> Although I can't help wondering if Pascal was his first language, since
> in C you don't have nested procedures. My guess is that he was using
> nesting as a means of "hiding" the names of the internal procedures,
> because he didn't want to make them global. My personal preference
> would be for a mechanism to do this that didn't require lexical
> nesting of the text.
Sometimes lexical nesting is useful, because you want the nested
procedure to access a non-local variable, possibly to make a closure.
But in general it is easier to use file-local lexicals (lvars and
lconstant), e.g.:
define lconstant P();
    ;;; P is accessible only within this file
    ...
enddefine;

define Q();
    ;;; Q is declared in the ordinary (permanent) way, so it is visible
    ;;; to other files, but it can still call the file-local P
    P()
enddefine;
Mutual recursion remains a problem, but it can be dealt with using
forward declarations together with lvars, lconstant, etc., and the
compiler will check for consistency.
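For example (a toy sketch with made-up procedures; one way of doing it):

lvars procedure odd;    ;;; forward declaration, so that even can call odd

define lconstant even(n);
    lvars n;
    if n == 0 then true else odd(n - 1) endif
enddefine;

;;; now give the forward-declared identifier its value
procedure(n);
    lvars n;
    if n == 0 then false else even(n - 1) endif
endprocedure -> odd;

even(10) =>    ;;; ** <true>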
If P is declared at the top level of a file as lvars or lconstant, then
nothing outside the file can access P, unless one file is 'included' in
another, as opposed to being compiled by the other file.
In that case perfect isolation can be achieved by using
lblock ... endlblock
around the relevant portion of code.
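E.g. something like this (a made-up fragment), which keeps its internal
names completely hidden and exports just one permanent identifier:

lblock
    ;;; these names are invisible outside the lblock, even to later code
    ;;; in the same file, or in a file that includes this one
    lvars counter = 0;

    define lconstant bump();
        counter + 1 -> counter;
        counter
    enddefine;

    ;;; export a single permanent name for use outside the lblock
    vars next_count = bump;
endlblock;

next_count() =>    ;;; ** 1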
There's a section on this in HELP LEXICAL.
(Are file-local lexicals available in Common Lisp? Maybe the package
mechanism provides an equivalent.)
[AS]
> >Very few languages have the kind of flexibility and power that require
> >all the subtlety described in the documentation.
>
[JLC]
> I'm not sure that subtlety is a Good Thing(tm) in a programming language;
> perhaps you meant Power and Convenience? :-)
I am ambivalent about this. Certainly some of the things that can be
done with lexically scoped identifiers (e.g. returning closures, or
passing lexical closures to other procedures -- such as applist) are
often very convenient.
In many cases this could be done just as well using partial application
and a non-nested procedure, but perhaps not always.
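For instance (a trivial made-up example), these two procedures do the
same job, the first with a lexical closure over n made by a nested
anonymous procedure, the second with partial application of a separate,
non-nested procedure:

define add_n_to_all(n, list);
    lvars n, list;
    ;;; the anonymous procedure is a lexical closure over n
    [% applist(list, procedure(x); lvars x; x + n endprocedure) %]
enddefine;

define lconstant add2(x, n);
    lvars x, n;
    x + n
enddefine;

define add_n_to_all2(n, list);
    lvars n, list;
    ;;; partial application freezes n into a closure of add2
    [% applist(list, add2(% n %)) %]
enddefine;

add_n_to_all(3, [1 2 3]) =>     ;;; ** [4 5 6]
add_n_to_all2(3, [1 2 3]) =>    ;;; ** [4 5 6]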
The distinction between different types of lexical closures is certainly
a cause of confusion.
The main efficiency issue is, I believe, reducing garbage collections if
closures are created frequently. In many cases the system can do the
optimisation without help from the user. So part of the complication is
giving the user the option to advise the compiler, via the use of
'dlvars'.
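A typical made-up case where dlvars would do: the nested procedure uses
the outer variable only while the enclosing call is active, so nothing
needs to survive on the heap:

define lconstant sum_list(list);
    lvars list;
    dlvars total;    ;;; safe: the nested procedure does not outlive this call
    0 -> total;
    applist(list, procedure(x); lvars x; total + x -> total endprocedure);
    total            ;;; result left on the stack
enddefine;

sum_list([1 2 3 4]) =>    ;;; ** 10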
Reducing garbage collection used to be far more important than it is now:
1. because large memories are now commonplace and cheap;
2. because increased CPU speeds, bus speeds, memory speeds, etc. have
radically reduced the time required by a garbage collector -- especially
in Poplog, as its garbage collector is so very fast.
I am writing all this in Ved, using a Poplog process about 13 Mbytes
in size (most of which I am not using). I am sure garbage
collections must have occurred several times during this session,
but because they now take a fraction of a second (on a 1.5 GHz
Pentium 4 running Linux) I never notice them.
To illustrate, here is a forced garbage collection:
true -> popgctrace;
sysgarbage();
;;; GC-user(C) 0.02 MEM: u 269458 + f 531309 + s 1 = 800768
(i.e. it took about 1/50th of a second.)
But I suspect there are cases where the difference between lvars and
dlvars is very significant, e.g. garbage collections when the heap is
very large and there's a lot of paging....
Aaron