> Now, in my humble opinion, this results in a lot of garbage. Most
> purely functional I/O systems are intensely painful to use. The
> imperative model of I/O is almost universally easier to work with. The
> question of language selection becomes: what's more important, the
> advantages gained by using a functional language, or the advantage of
> easy I/O?
As far as input goes, this is just not true. It is the state-based
systems that are harder to use. The POP-2 model of input (designed
following Peter Landin's ideas in 1967) offered both functional and
state-based approaches. In the functional approach an input was represented
as a list of entities. This makes it -far easier- to write and debug
parsers and the like, since you can use ordinary lists for testing. Thus:
parse_expr([x + y])
will evaluate to the parse-tree for the expression. The ordinary -hd- and
-tl- functions of POP are what might be called "quasi-lazy". The techniques
were of course used elsewhere, e.g. in Scheme and ML, but as add-ons.
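The testing point can be sketched in Python. Here parse_expr is a hypothetical
parser written against plain lists of tokens (the name comes from the example
above; the left-associative "+" grammar is my own invention, not POP-2's):

```python
def parse_expr(tokens):
    """Parse ['a', '+', 'b', ...] into a nested-tuple parse tree."""
    left = tokens[0]
    rest = tokens[1:]
    # fold a run of "+" operators left-associatively
    while rest and rest[0] == "+":
        left = ("+", left, rest[1])
        rest = rest[2:]
    return left

print(parse_expr(["x", "+", "y"]))        # ('+', 'x', 'y')
```

Because the input is just a literal list, no file or stream machinery is
needed to exercise the parser.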
We offered an impure model of input because a purely functional approach
carries efficiency overheads that we were not prepared to force upon users of
a 0.05 MIPS machine with 64 Kwords of memory. In fact, the POP-2 tokeniser
(lexer) used the state-model, and the parser used the functional model.
The higher-order function called -fntolist- mapped from the state-model to
the functional model. In POP-11:
vars rep = discin('.article');            /* get a state-model of input */
vars rep_item = incharitem(rep);          /* apply the lexer, obtaining a
                                             state-model of a token-stream */
vars list_items = pdtolist(rep_item);     /* convert to the functional model
                                             as a lazy list of tokens */
list_items =>                             /* print the unevaluated list */
** [...]                                  /* prints thus */
vars fourth = hd(tl(tl(tl(list_items)))); /* force expansion of four cells */
list_items =>                             /* print the partly evaluated list */
** [Newsgroups : comp . ...]              /* this article, of course */
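The pipeline above can be sketched in Python, assuming a minimal LazyList that
forces tokens from a stateful source on demand. The class, its hd method, and
its printing of the unexpanded remainder as "..." are my approximation of the
POP-11 behaviour, not Poplog's actual implementation:

```python
class LazyList:
    """Quasi-lazy list: cells are forced on demand from a stateful source."""
    def __init__(self, source):
        self._source = source   # the state-model: a stateful token producer
        self._cells = []        # tokens forced so far
        self._done = False

    def _force(self, n):
        # pull from the source until cell n exists (or the source runs dry)
        while len(self._cells) <= n and not self._done:
            try:
                self._cells.append(next(self._source))
            except StopIteration:
                self._done = True

    def hd(self, n=0):
        self._force(n)
        return self._cells[n]

    def __repr__(self):
        # the unexpanded remainder prints as "...", like POP-11's [...]
        shown = " ".join(map(str, self._cells))
        if self._done:
            return "[" + shown + "]"
        return "[" + shown + (" ..." if shown else "...") + "]"

def pdtolist(repeater):
    # convert a state-model repeater into a quasi-lazy functional list
    return LazyList(repeater)

tokens = pdtolist(iter(["Newsgroups", ":", "comp", ".", "lang"]))
print(tokens)          # [...]  -- nothing forced yet
fourth = tokens.hd(3)  # force expansion of the first four cells
print(tokens)          # [Newsgroups : comp . ...]
```

The stateful iterator plays the role of the character/token repeater; nothing
is read from it until a consumer actually demands a cell.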
Robin.