## 16 April 2012

### Frighteningly Ambitious Programming Language Ideas

#### Use versus Mention

The use–mention distinction—in addition to being one of the rare opportunities to use an en dash—is a distinction between application of a term for its inherent meaning (Brian is alive and has no letters) and examination of the term for its superficial representation (“Brian” has five letters and is not alive). In natural languages, we typically use quotation to differentiate between use and mention.

This distinction is one of those apparently mundane linguistic phenomena that turns out to have rather subtle and interesting philosophical implications. In particular, use without mention is the basis of pronouns and demonstratives—in “He picked up the book and studied it”, it refers to the same thing as the book, without repeated mention of the book. These structures in turn allow self-reference, as in the liar paradox (“This statement is false”) and Quine’s paradox:

“Yields falsehood when preceded by its quotation” yields falsehood when preceded by its quotation.

Reference and self-reference are two things that come up quite often in programming. If you’re a language designer with a mind to fundamentally alter how programming is done, then the ideas that follow are yours for the taking. They are difficult, unattractive, dragonish ideas, and it will be many years before ordinary developers take them seriously.

But first, some background.

♦ ♦ ♦

#### Reference and Quotation

Every aspect of natural language has a parallel in artificial language. Concrete referential types such as pointers are also references in the abstract sense. Pointers are like pronouns in that they allow you to refer to a thing itself, rather than through its name. Take this C code:

```c
T x;
T *y = &x;
```

Here, *y refers to the very thing that x does. In addition, it allows a form of lazy evaluation: we can pass around copies of y as much as we want, without creating copies of *y. Deferred evaluation thus has performance benefits, but more fundamentally, pointers allow us to create data structures of a wholly different shape than the memory in which they live—trees and graphs and so forth. More on that later.
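
The same point is easy to see in a language where every name is already a reference; a quick Python illustration:

```python
# 'book' names a structured value; 'it' is another reference to the
# very same object, like the pronoun in "he picked it up": no copy is made.
book = ["chapter one", "chapter two"]
it = book
it.append("chapter three")
print(book is it)  # True: both names refer to one object
print(len(book))   # 3: mutation through one name is visible via the other
```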

So in C, taking the address of a variable is a form of quotation, preventing the evaluation (copying) of that variable until the pointer is dereferenced. In Lisp, the quote special form likewise denotes mention rather than use, but of any expression:

```lisp
(f x y)         ; Use
(quote (f x y)) ; Mention
'(f x y)        ; (Shorthand for the above.)
```

Concatenative languages have this as well:

```
y x f     # Use
[ y x f ] # Mention
```

This is essentially a generalisation of pointers—deferring evaluation through quotation. Lisp programs can pass around code unevaluated as easily as C programs pass around data uncopied. This largely eliminates the distinction between data and code, and allows still more kinds of laziness. Anonymous functions are similar to quotations, but a bit heavier, in that they also close over variables from their (lexical or dynamic) environment.
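
For comparison: in a language without quotation proper, the nearest everyday stand-ins are strings with eval (mention) and thunks (deferred use). A quick Python sketch:

```python
# Mention: a string is a representation of an expression. We can
# inspect its spelling, or hand it to eval to use it after all.
expr = "2 * 21"
print(len(expr))   # 6 characters of representation
print(eval(expr))  # 42, the value of the expression it mentions

# A thunk defers use without exposing the representation; unlike a
# quotation, it also closes over its environment.
n = 40
thunk = lambda: n + 2
print(thunk())     # 42
```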

♦ ♦ ♦

#### To Infinity

But as it turns out, it’s not even strictly necessary to make a distinction between use and mention. The most visible example of this in practice is probably Haskell, where evaluation is non-strict by default. Every expression is nominally a reference (thunk) to be evaluated only as needed. To support this, most types are implicitly referential (boxed). Non-referential (unboxed) types are available for internal use and low-level optimisations.

In a way, lazy evaluation is rather like garbage collection. GC simulates a machine with infinite memory; lazy evaluation simulates a machine capable of infinite computations. In Haskell, it’s normal practice to construct infinite lists and cull them as needed. That’s backward as far as most languages are concerned, but it also turns out to be a very intuitive way of modelling many kinds of problems.
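
Haskell is the natural setting for this, but the pattern can be sketched in Python, with generators standing in for lazy lists:

```python
from itertools import count, islice

# An "infinite list" of squares: nothing is computed until demanded.
def squares():
    for n in count(1):
        yield n * n

# An infinite Fibonacci stream, defined once and for all.
def fibs():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Cull as needed: only the elements actually demanded are computed.
print(list(islice(squares(), 5)))  # [1, 4, 9, 16, 25]
print(list(islice(fibs(), 8)))     # [0, 1, 1, 2, 3, 5, 8, 13]
```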

So we can work with infinities of space and time on finite machines. But can we generalise further? Well, yes, but the results look pretty alien. Here be the aforementioned dragons.

♦ ♦ ♦

#### Dimensionality and Topology

Our computations are essentially two-dimensional, with one dimension of space (memory) and one of time. More time dimensions are possible, but I don’t see the use, considering our universe seems to have only one. I’d appreciate comments from anyone who can offer an application of such a thing.

You can very easily generalise to more memory dimensions, though. 2D memory allows you to construct, say, a matrix where each element is adjacent in memory to all of its neighbours. You just allocate memory in rectangular rather than linear regions. Unfortunately, you lose a sane ordering relation on pointers.

You can also play with the memory topology—cylindrical, spherical, or toroidal memory lets you construct fixed-size circular arrays, where each element is adjacent to the next, and the last is also adjacent to the first. Slightly stranger topologies allow variable-length circular arrays. And beyond familiar shapes and grids, any lattice graph can serve: in a 2D hexagonal topology, for instance, each memory location is adjacent to six others.
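
An ordinary flat machine can simulate such a topology simply by wrapping indices; a minimal Python sketch (the `Ring` class is invented for illustration):

```python
class Ring:
    """A fixed-size circular array: each element is adjacent to the
    next, and the last is also adjacent to the first."""

    def __init__(self, items):
        self.items = list(items)

    def __getitem__(self, i):
        # Indexing wraps around the circle in either direction.
        return self.items[i % len(self.items)]

r = Ring("abcd")
print(r[5])   # 'b': index 5 wraps around to index 1
print(r[-1])  # 'd': one step backward from the start is the end
```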

Our programming languages, too, tend to be one-dimensional, much like our natural languages; but this, too, is not a requirement. Indeed, two- and higher-dimensional languages have been made, but it’s probably more practical to add 2D features to a predominantly 1D language:

```
# Probably a good idea:

matrix = 1 2 3
         4 5 6
         7 8 9

# Maybe a good idea:

lispy_boxes = +-------------+
              | * +-------+ |
              |   | + 2 3 | |
              |   +-------+ |
              |   +-------+ |
              |   | + 4 5 | |
              |   +-------+ |
              +-------------+

# DEAR GOD WHY

         (   divisor  ) exponent    n
result = | ---------- |          * sum set(i)
         (  dividend  )            i=0
```

Furthermore, 1D languages require jumps for flow control, but since any graph can be embedded in three dimensions, languages of three or more dimensions can be written jump-free. An easy way to do this is to allow changing the direction of the instruction pointer rather than its position.
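
A toy sketch of the idea, in the spirit of Befunge. The little grid language below is invented for illustration: direction characters steer the instruction pointer, and there are no jumps.

```python
# A toy two-dimensional machine: '>' 'v' '<' '^' steer the instruction
# pointer, '@' halts, ' ' is a no-op, and any other character is
# emitted. Control flow is purely a matter of changing the pointer's
# direction in 2D space.
def run(grid):
    rows, cols = len(grid), len(grid[0])
    r, c = 0, 0    # start at the top-left...
    dr, dc = 0, 1  # ...moving right
    out = []
    while True:
        ch = grid[r][c]
        if ch == "@":
            return "".join(out)
        elif ch == ">":
            dr, dc = 0, 1
        elif ch == "v":
            dr, dc = 1, 0
        elif ch == "<":
            dr, dc = 0, -1
        elif ch == "^":
            dr, dc = -1, 0
        elif ch != " ":
            out.append(ch)
        # The grid is toroidal: walking off one edge wraps around.
        r, c = (r + dr) % rows, (c + dc) % cols

print(run([">hi v",
           "  @!<"]))  # hi!
```

Loops need no jump instruction here: steer the pointer around a rectangle and it cycles until something redirects it.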

And for you CS theory types looking for something to do a Ph.D. on, it’s a fun problem to figure out whether a language with a strictly planar state diagram can be Turing-complete. Go nuts.

♦ ♦ ♦

#### Diffing, Visualisation, and Editing

As with memory, languages could be given any number of interesting topologies beyond the usual grid. Tool support, of course, becomes something of an issue. It’s relatively easy to see how a diff would work in a 2D or 3D Cartesian language, but more complex spaces are problematic.

In one dimension, diffing relies on the longest common subsequence (LCS) problem. LCS is NP-hard for an arbitrary number of sequences, but solvable in polynomial time by dynamic programming for any constant number of them—usually two, the previous and current versions of a file.
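
For two sequences, a sketch of the standard quadratic-time dynamic programme:

```python
def lcs_length(xs, ys):
    """Length of the longest common subsequence of two sequences,
    via the standard dynamic-programming table."""
    m, n = len(xs), len(ys)
    # table[i][j] holds the LCS length of xs[:i] and ys[:j].
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if xs[i - 1] == ys[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

print(lcs_length("kitten", "sitting"))  # 4, e.g. "ittn"
```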

In more complex spaces, you have to rely on the maximum common subgraph isomorphism problem (MCS), which is also NP-hard. Luckily, MCS is solvable in polynomial time when the circular ordering of edges around vertices in an embedding is constrained, as it would (almost?) certainly be. But you don’t need to add complexity when you generalise—hell, for some languages, you might only need simple graph difference.
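
The simple-graph-difference case really is simple: treating a program as a set of edges, the diff is just set difference (the edges here are made up):

```python
# Two versions of a program, each represented as a set of edges.
old_edges = {("a", "b"), ("b", "c"), ("c", "a")}
new_edges = {("a", "b"), ("b", "c"), ("b", "d")}

# The diff falls out of plain set operations.
removed = old_edges - new_edges
added = new_edges - old_edges
print(removed)  # {('c', 'a')}
print(added)    # {('b', 'd')}
```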

No, the really problematic things are visualisation and editing. How do you display removals and insertions in multiple dimensions and unusual topologies? Even for 2D images there are many equally good ways to visualise differences. How do you let a programmer efficiently and reliably create, alter, and share programs outside the realm of 1D text?

♦ ♦ ♦

#### And Beyond

Programming is still a fairly new field. Our languages are hovering around a local maximum, but there’s little reason to think we couldn’t come up with something better, more fun, and more productive. We need to be willing to sacrifice some of the things that make existing languages great, because they might have no analogue in a better system.

In other words, we should not imagine aliens in terms of the creatures we know. A green dog with antennae is still a dog. We should instead imagine things that make us legitimately uncomfortable to think about. Things that are unattractive not because they’re peculiar or dangerous, but because they’re deeply unsettling. Think Marvin the Martian versus the blind idiot demon sultan Azathoth. If those aliens exist, then not only does it mean we’re not alone—it means we’ve been wrong this whole time. And that, unfortunately, is how progress works.