I am moved to write by Helen's musing on the difference between
high- and low-level languages :-
> I used to program embedded systems in assembly language. The code was
> easy to write efficiently, and usually worked first time.
How things look rosier from a distance! You forget about the dreadful
performance because you didn't have enough time to encode the dynamic
allocation of 2D hash tables that you knew would be the key to good
performance. Well, you wouldn't have done it in FORTRAN either so
"clearly" using a high-level language doesn't help. You forget about
the vast zones of memory wasted because you didn't bother with a proper
garbage collector (gee, a buddy system is good enough, what do you think?).
You forget about those awkward occasions when your neighbor (it was
never you) crashed every terminal on the system because they missed out
a full stop.
Most of all, you forget that you were pleased when you wrote a chunk
of code to perform formatted output -- for the 15th time -- and it only
took you two weeks. Yes, those were the days alright.
> To debug it
> was easy because there was no one else's compiler or operating system
> to screw things up.
It was also easy because it was so hard to write anything that one only
ever wrote trivial code. I, too, can debug assembler which consists of a
bunch of loads, adds, and stores. It is when I am writing the code for
computing the transitive closure of relationships that I keep screwing up.
Something to do with storage management keeps cropping up. Perhaps
one day there'll be a general-purpose solution ....
> All the errors were mine (with one exception) so ...
And all the working code was written by yourself, too. It was about the
25th time I wrote a slightly different formatted print routine that I began
to suspect there might be an easier way of working. (N.B. Who
else has written formatted print routines in Prolog? Hmmm, quite a
lot of you ... might be a pattern.)
> Maths was fast and safe because such things as divide by zero
> conditions were all caught by a routine check on the floating point
> unit's registers and stack.
And fundamentally incorrect, too. Every time you add one to a number in
assembler you are obliged to check for overflow and branch to the
arbitrary precision arithmetic package. This in turn relies on the
correctly written (first time, no doubt) dynamic allocation package
which relies in turn on the safe deallocation of store which relies
on the sound tagging scheme for arithmetic ... oops, we just
invented Lisp.
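To make the point concrete, here is a minimal sketch in C (not assembler,
but the obligation is the same) of what "adding one safely" costs you by
hand; add_one_bignum here is a made-up stand-in for the arbitrary
precision package described above:

    #include <limits.h>
    #include <stdio.h>

    /* Stand-in for the arbitrary precision package -- the real thing is
       the one that needs the allocator, the deallocator and the tags. */
    void add_one_bignum(long n)
    {
        printf("%ld + 1 overflows a machine word; over to the bignums\n", n);
    }

    /* Add one "safely": check for overflow first, exactly as you would
       have to after every single increment in assembler. */
    long add_one(long n)
    {
        if (n == LONG_MAX) {        /* n + 1 would overflow */
            add_one_bignum(n);
            return n;               /* placeholder; real code returns a bignum */
        }
        return n + 1;
    }

    int main(void)
    {
        printf("%ld\n", add_one(41));   /* the easy case */
        add_one(LONG_MAX);              /* the case everyone forgets */
        return 0;
    }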
Arithmetic on today's processors is just plain wrong. It was always
wrong. Read the RISKS digest for overwhelming support of this view.
It must count as one of the most powerful reasons for serious s/w
designers to move to high-level languages. And I don't count Model-T
type languages such as FORTRAN or C++ as high-level for this very
reason.
Let's face it, if you can't add one to an integer with a reasonable
hope of safety, what makes you imagine you can write a complex
program at all?
> My colleague Alistair has said that I could have developed my image
> analysis and multivariate analysis system faster and more efficiently
> if I had used Z80 machine code rather than poplog.
This sounds like a straightforward comparison between a system you are
very familiar with and one which you are learning. To illustrate
this with a more obvious example -- I have no doubt that
a FORTRAN programmer would be best off doing almost any one-off job
in FORTRAN rather than POPLOG because of the steep learning curve
associated with POPLOG. Learning a new system like POPLOG is a
significant investment, unfortunately.
The beauty of POPLOG is that it doesn't force you to write the core
analysis routines in Pop. If you want to write them in C or even
assembler -- go ahead! That's the way POPLOG was built to be used.
All you do is link them into POPLOG and get the best of both worlds.
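For instance, the sort of inner-loop routine you might hand over is just
ordinary C -- this one is a made-up example, and the Pop-11 side
declaration through the external loading facility is left out:

    /* A made-up number-crunching routine, compiled separately and then
       linked into POPLOG; the Pop side declares it and calls it like
       any other procedure. */
    double sum_of_squares(const double *v, int n)
    {
        double s = 0.0;
        int i;
        for (i = 0; i < n; i++)
            s += v[i] * v[i];
        return s;
    }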
Steve