Thursday, June 11, 2015

Improving Idea Representation and Debate

Note: you can skip the prelude by scrolling past the screenshot.

No category of invention has ever been more influential than that of concept representation schemes, which claims such notable alumni as natural language and mathematics.

Concepts have deeply symbiotic relationships with their representations; our inklings are pitiful, delicate agoraphobes without some formal carapace to curl inside of. And, once linked, the two inevitably fuse; discerning a point of separation becomes an endless task: their boundary expands into infinite, alluring detail whose completion always appears near.

Further, our choices of formalization accrete in layers, each developing its own character, without ever fully concealing its origin.

Some representations excel at distinguishing multitudes and attaching to them, separately, myriad, manifold impressions; others have no memory for names, but are remarkably intuitive with relationships. There are factions, and contests: there are those who would have it that rigor stably supports life, and those who claim it the first sign of death. Others plead vociferously, and vaguely, for a kind of harmony. 

It was said, in a representation long forgotten, that, "no two forms possess intrinsic merit disparity; the worth of a form exists only in relation to a thought,"—but then! There arose a structure so impossibly well-balanced, so gloriously lacking in error, so frustratingly perfect, that psychotic murmurs of, "messiah," crying whispers of "divinity," and hopeless exhalations of, "God" filled the once heathen mound of abstract rabble. The one true form emerged:
The text in this screenshot is unfortunately misleading—each node should hold a single-sentence, plain statement of some 'claim'—a 'proposition,' if you prefer.

Actually, no—my mistake—that's just a screenshot of a project I started nearly a year ago. Ah, well, I guess I'll just talk about it instead.

Natural language has many virtues. Formal mathematical languages also have many virtues—in fact, in a certain sense, it may even be said that they are the more virtuous of the two. Unfortunately, however, even if that is the case, it is largely irrelevant: people have to actually use the language, and nobody has the time for something much more formal than natural language—not even mathematicians!

So, the idea here is to add just the smallest amount of structure to natural language: you break your overall idea into the separate claims you want to make, then state those claims with whatever level of formality you'd like. You then arrange your claims so that some support others; the supporting claims are rendered as children in a hierarchy.

Once you have represented some idea, or hypothesis, in this manner, you can do a number of things with it: re-use its parts to state new ideas, share it in part or in whole (though it becomes subject to rating at that point), request that the whole 'hypothesis' be critiqued—or put it up for public debate in the CRUCIBLE (private debate, outside the crucible, is also fine).

The debate system emphasizes the constructive qualities of argumentation: two competing hypotheses are reconciled into one improved hypothesis, with each individual claim addressed on its own.

It's possible that a more refined taxonomy of rhetorical devices could be distilled from conventional expository essay structure than just claims and justifications. For instance, why not make 'alternate phrasings,' 'examples,' or 'empirical evidence' node types in the hierarchy of one's hypothesis?
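
To pin the structure down a little, here's a minimal sketch of a 'hypothesis' as a tree of typed claim nodes. The node kinds and the placeholder statements are hypothetical—they aren't taken from the project in the screenshot.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal sketch of a 'hypothesis' as a hierarchy of claims, with a few of
    // the extra node types suggested above. All names and statements here are
    // placeholders, not from any existing tool.
    class ClaimNode {
        enum Kind { CLAIM, EXAMPLE, EMPIRICAL_EVIDENCE, ALTERNATE_PHRASING }

        final Kind kind;
        final String statement;                             // one plain, single-sentence claim
        final List<ClaimNode> support = new ArrayList<>();  // children that support this node

        ClaimNode(Kind kind, String statement) {
            this.kind = kind;
            this.statement = statement;
        }

        ClaimNode supportedBy(ClaimNode child) {
            support.add(child);
            return this;
        }

        // Render the hypothesis as an indented outline, one claim per line.
        void print(int depth) {
            System.out.println("  ".repeat(depth) + "[" + kind + "] " + statement);
            for (ClaimNode c : support) c.print(depth + 1);
        }

        public static void main(String[] args) {
            ClaimNode hypothesis = new ClaimNode(Kind.CLAIM, "Policy X should be adopted")
                .supportedBy(new ClaimNode(Kind.CLAIM, "Policy X reduces cost Y")
                    .supportedBy(new ClaimNode(Kind.EMPIRICAL_EVIDENCE, "Placeholder: cite a study here")))
                .supportedBy(new ClaimNode(Kind.EXAMPLE, "Placeholder: a concrete case where X worked"));
            hypothesis.print(0);
        }
    }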

Using such a system, one could represent: political arguments, business decisions, philosophical or scientific ideas—you name it!

Sunday, June 7, 2015

How to Make View-Independent Program Models


In Part 1 of this two-part series, I argued that the standard way of structuring program authoring tools involves a peculiar and unnecessary model/view coupling that makes a number of problems in the programming tools domain harder than they need to be. Here I'll be describing a simple, general way of constructing generic 'program models,' which have much in common with ASTs, but improve on them in a couple of critical ways. I'll be referring to this method as the 'path formulation' of program models.

Why this structure?

Short answer: because it's simple and general, and bears a very close relation to the essential activity in high-level program construction.

The 'path formulation,' a particular way of creating program models, comes from asking the question, "what are we really doing when writing source code?" and finding the answer, "selecting and configuring abstractions provided by a programming language." In that case, writing source code is just one way of doing a more general and essential activity: selecting and configuring abstract 'language constructs.' Here's a short dialogue illustrating the point a bit more.

So, let's say your programming language provides an abstract language construct called 'function_declaration'; it comprises a few parts: a 'name,' a 'return_type,' and an 'argument_list.' A configuration of this construct would be a particular 'name,' 'return_type,' and 'argument_list.' What makes it 'abstract' is that none of these entities is tied to a representation. In other words, while it has become reflex to think of these sorts of constructs in terms of character sequences, here we intentionally leave open the question of how they should look.
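
Here's a minimal sketch of what such an abstract construct might look like as plain data, following the 'function_declaration' example above; the Construct record and the opaque #-IDs are hypothetical illustrations, not part of any real specification.

    import java.util.Map;

    // A construct is a named abstraction with named parts; nothing here says how
    // those parts should be displayed.
    public class AbstractConstructSketch {

        record Construct(String kind, Map<String, String> parts) {}

        public static void main(String[] args) {
            // One particular configuration of 'function_declaration'. The values are
            // opaque IDs that some editor may later associate with text, widgets, etc.
            Construct fn = new Construct("function_declaration", Map.of(
                "name", "#8827341",
                "return_type", "#0000012",
                "argument_list", "#1194527"));
            System.out.println(fn);
        }
    }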

Making Program Models using the Path Formulation


The idea behind the path formulation is: if you have a model of a programming language in the form of a graph, then individual program models are just paths through the graph.

Here's a high-level, three-step recipe for making these language/program models:

(1) Represent your programming language 'abstractly,' using a formal grammar that has no lexical section (now called an 'abstract grammar'): its fundamental units are the fundamental units from your language in the abstract, rather than character sequences. Only consider which abstractions your language should include and the rules for composing them; worry about how to represent them visually elsewhere. For example, we can say that a 'class' is made up of a 'name,' a set of 'variables,' and a set of 'methods,' without any assumptions about how these things are going to look. (I talk about this in the first part, too.)

(2) Convert your abstract grammar into a graph, which will serve as the 'model' for your language. (I have some Java code that will do this for ANTLR grammars, btw, which I can clean up and share on Github if there's interest.)

(3) Represent individual programs as specific paths through the language graph.
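
Before getting to the example grammar, here's a minimal sketch of steps (2) and (3) in code, assuming a toy language graph. The node names echo the grammar below, but the graph, the edge ordering, and the path are all made up, and the walk is deliberately simplified: it never climbs back up to a parent node.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Step (2): the language 'model' as a graph, here an adjacency map where each
    // node keeps an ordered list of outgoing edges. Step (3): a program model as
    // a path, recorded as the index of the edge taken at each node.
    public class LanguageGraphSketch {

        static final Map<String, List<String>> GRAPH = Map.of(
            "program", List.of("class", "interface"),
            "class", List.of("name", "class_reference", "function_declaration"),
            "function_declaration", List.of("name", "return_type", "argument_list"),
            "name", List.of(),            // atoms: no outgoing edges
            "class_reference", List.of(),
            "interface", List.of(),
            "return_type", List.of(),
            "argument_list", List.of());

        public static void main(String[] args) {
            int[] path = {0, 0};          // program -> class -> name (a tiny, made-up program)
            String current = "program";
            List<String> visited = new ArrayList<>(List.of(current));
            for (int edge : path) {
                current = GRAPH.get(current).get(edge);
                visited.add(current);
            }
            System.out.println(visited);  // [program, class, name]
        }
    }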

So, if your grammar contains a subsection like this ('class_reference' is for inheritance, indicating a parent class):

    program
        : class+
          interface*
        ;

    class
        : name
          class_reference?
          function_declaration*
        ;

    function_declaration
        : etc. etc. etc.

Your graph-based language model will have a subsection like this (black nodes are 'language constructs,' orange nodes are 'language atoms'):


And a particular program in your language, consisting of one simple class might look like this:

These paths could be represented by just listing the edges taken—though of course you have to number the edges:

Taking that approach, the model of our simple program looks like this:

 (2 2 0 1 3 0 0 3)


If we would like a little structure in our model representation, we can distinguish between 'language constructs' and 'language atoms' (black nodes and orange nodes), by using opening and closing parentheses to mark the start and end of language constructs.

(2 (2 0 1 (0... 1... 2... 3) 0 0 3))

Note: the ellipses are where we followed some hypothetical edges in 'function declaration.'

Language constructs are just composite abstractions, made up of more than one part; language atoms cannot be broken into smaller units. Actually, 'language constructs' and 'atoms' have a relationship that mirrors that of S-expressions and atoms in Lisp, so it's no surprise that the notation for program models resembles Lisp (here, however, we aren't tied to using this as the visual interface for the programmer).
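
As a quick illustration of how little machinery this notation needs, here's a sketch that reads the parenthesized form back into a nested structure. The input string mirrors the example above, except that the ellipses are replaced with made-up edge numbers so the string can actually be parsed.

    import java.util.ArrayList;
    import java.util.List;

    // Parse the parenthesized path notation into nested lists: parentheses open
    // and close language constructs, bare numbers are the edges taken.
    public class PathNotationSketch {

        static Object parse(List<String> tokens) {
            String t = tokens.remove(0);
            if (t.equals("(")) {
                List<Object> construct = new ArrayList<>();
                while (!tokens.get(0).equals(")")) construct.add(parse(tokens));
                tokens.remove(0);  // consume the closing ")"
                return construct;
            }
            return Integer.parseInt(t);
        }

        public static void main(String[] args) {
            String model = "(2 (2 0 1 (0 1 2 3) 0 0 3))";  // hypothetical edges stand in for the ellipses
            List<String> tokens = new ArrayList<>(List.of(
                model.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+")));
            System.out.println(parse(tokens));  // [2, [2, 0, 1, [0, 1, 2, 3], 0, 0, 3]]
        }
    }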

Using Insight from the Language Model

A big advantage of the path formulation of program models is that it maintains a strong connection between elements of a user's program, and the programming language itself (since programs are paths in the language graph). This connection can be taken advantage of by programming tools to guide programmers in using the language.

As an example, let's say a programmer has just selected a 'class' construct (maybe by typing out the keyword 'class'—or in some new way); the program model now contains a node for that 'class' construct, and the editor, just by examining the 'class' node in the language model, knows all the legal options for proceeding, because each option corresponds to an edge going out of the 'class' node:

To make this as concrete as possible, I'll show one possible UI that takes advantage of this connection to the language graph. Keep in mind, though: you could still render it like a traditional text editor, using the UI contemporary IDEs use for 'auto complete' to display alternatives. It's like automatic auto complete for all aspects of the language—at the least, this would be tremendously useful to new users of a language.
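
Here's roughly what that lookup amounts to in code: a minimal sketch assuming the editor holds the language graph as an adjacency map (the tiny graph here is hypothetical).

    import java.util.List;
    import java.util.Map;

    // 'Automatic auto complete': because the program model is a path in the
    // language graph, the legal next constructs are just the outgoing edges of
    // the node the programmer last selected.
    public class CompletionSketch {

        static final Map<String, List<String>> LANGUAGE_GRAPH = Map.of(
            "program", List.of("class", "interface"),
            "class", List.of("name", "class_reference", "function_declaration"),
            "function_declaration", List.of("name", "return_type", "argument_list"));

        static List<String> legalNextConstructs(String currentNode) {
            return LANGUAGE_GRAPH.getOrDefault(currentNode, List.of());
        }

        public static void main(String[] args) {
            // The programmer has just selected a 'class' construct:
            System.out.println(legalNextConstructs("class"));
            // -> [name, class_reference, function_declaration]
        }
    }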

In this hypothetical editor, the UI is split into two main sections: the top is our document, which is just a rendering of the program model; and the bottom contains controls for selecting and configuring language constructs.

Let's say we've just instantiated a class definition by supplying all the necessary parameters; it has the name 'InputHandler,' and our editor has 'collapsed' it, so we just see the name and type:

Since our editor is following along in the language graph, it knows we're back in the 'program' node, from which point we can begin specifying either a class or an interface. Let's say we select 'class.' Our editor now looks like this:

Notice the whole 'class' construct and the 'name' section have red borders; this is to indicate that 'class' hasn't been fully instantiated: it still has 'free' parameters that must be bound to something (in this case 'name' must be bound to something). Also notice that in the bottom section, the options appearing in the grid are just the neighbors of the 'class' node in the language graph. 

I imagine that in using a system like this, the cells in the bottom area would map to keys on your keyboard: this way you could accomplish a task like creating the skeleton of a new method declaration with a single keystroke. Something along these lines would also be much better than text editors for programming with virtual/augmented reality systems and mobile devices. Anyway, this UI is just a quick sketch of one possible approach. The document region could also be something like this (in that video, I'm rendering the AST in a manner identical to how I'm suggesting we render program models).

Conclusion

It's been a long time since we laid down the character-sequence-and-parsing-based architecture of program authoring tools, and contemporary work on programming languages is deeply invested in that established approach. It seems like the program model approach could be an improvement—but who knows what lethal oversights might still be lurking. What's especially needed at this point is a concrete implementation. I'm working on one, slowly, in my free time—but my hope is that others will read the ideas here and, if they prove to be generally interesting after all, expand and solidify them into serious tools that will improve the experience of programming in the coming years. If you'd like to hire me to work on something related, I can be contacted at 'westoncb[at google's mail service]' (or even if you just want to talk about it—though the comment section is probably best for that).

----------------------------------------

Appendix:

Identifiers etc.
Let's take a look at the abstract program graph one last time:

Notice that 'class_reference' and 'name' are language atoms, but these things need structure of their own, so where does that come from? First, I'll point out that the reason we don't include identifiers in the language definition is that they are mnemonics for humans, not part of the abstract structure of a language; all the language needs is a unique identifier, so we just generate a random one. As for the mnemonic, maybe it should be a string, maybe something else—accordingly, we leave it outside of the language spec. In the program model, we just attach the random ID (discussed earlier, too), which can be associated with some representation specified by the programmer (probably without their ever noticing, by just typing in their editor as always). A program model with IDs might look like:

(2 (3 (#49843345) 0 0 (#95728745) 0  1 (... ... ... 3) 4))

There's an external file that maps these IDs to representations (often it's just a string) for editors to use.
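
In code, the mapping could hardly be simpler; here's a sketch with an in-memory map standing in for that external file. The mnemonics are made up; the IDs are the ones from the example above.

    import java.util.HashMap;
    import java.util.Map;

    // The program model stores only generated IDs; a side table (standing in for
    // the external file) tells editors how to display each one.
    public class IdentifierMapSketch {

        public static void main(String[] args) {
            Map<String, String> representations = new HashMap<>();
            representations.put("#49843345", "InputHandler");    // hypothetical class name
            representations.put("#95728745", "handleKeyPress");  // hypothetical member name

            // An editor rendering the model substitutes the mnemonic if one exists,
            // and otherwise just shows the raw ID.
            String nodeId = "#49843345";
            System.out.println(representations.getOrDefault(nodeId, nodeId));
        }
    }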

The following section discusses how to handle the 'class reference' node and others like it.

Referencing the Program Model
There is an aspect of programming languages that isn't captured by the 'abstract grammar' I've described so far. The abstract grammar only allows us to describe 'free' language constructs, which, when supplied with specific parameters, are 'instantiated'; program models contain only instantiated language constructs. However, the 'abstract grammar' should describe the full capabilities of the language, and programming languages always contain mechanisms for referencing already-instantiated language constructs: e.g., I have instantiated a 'function_declaration' whose 'name' parameter was bound to the value 'testFunction'; other parts of my program should be able to reference this specific, instantiated 'function_declaration' by its 'name,' 'testFunction.'

To be honest, I'm very curious to hear other people's ideas on how to go about doing this, though I do have an approach that seems like it would work well: extend the notation of our 'abstract grammar' (which is just some variation of BNF at the moment) to express 'queries' on program models: i.e. "select all the nodes from the program model of type 'class'." More concretely, let's say our 'class' construct is defined as follows:

    class
        : name
          class_reference?
          function_declaration*
          visibility?
        ;

(The 'class_reference?' component is used to reference a parent class.)

'class_reference' would be defined in our grammar as follows (except using some appropriate notation, not English):

    class_reference
        : "select nodes on program model of type 'class' in same 'package'"
        ;

So, in order to instantiate a 'class_reference' the programmer would have to select a node from the program model that meets the criteria in the query. Ideally, the programmer's IDE would parse the query, run it on the program model, and offer up a selection of valid nodes. Present IDEs do this sort of thing of course, but including the necessary information in a unified, abstract language specification would be beneficial.
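
To make the idea a bit more concrete, here's a sketch of what running such a query against a program model might look like; the ModelNode record, the 'package' field, and the sample nodes are hypothetical stand-ins for whatever query notation the grammar would actually use.

    import java.util.List;
    import java.util.stream.Collectors;

    // Running the 'class_reference' query against an instantiated program model:
    // "select nodes on program model of type 'class' in same 'package'".
    public class ReferenceQuerySketch {

        record ModelNode(String id, String type, String pkg) {}

        static List<ModelNode> candidates(List<ModelNode> programModel, String pkg) {
            return programModel.stream()
                .filter(n -> n.type().equals("class") && n.pkg().equals(pkg))
                .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<ModelNode> model = List.of(
                new ModelNode("#11", "class", "input"),
                new ModelNode("#12", "class", "render"),
                new ModelNode("#13", "interface", "input"));
            // Offer the programmer only the nodes that satisfy the query:
            System.out.println(candidates(model, "input"));  // only #11 qualifies
        }
    }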

Why Programming Languages Use Only One 'View,' and How to Fix That

(note 1: This is the first of a two part series; part two is here: How to Make View-Independent Program Models)

(note 2: In the interest of a little more context/concreteness: here's a video of an editor I made that works by rendering a tree-based program model as described in the article (this just uses an AST, though, not a 'pure' model): Tiled Text)

---------------

It's pretty well acknowledged that the reason source languages exist is that they are better interfaces for program creation than, say, typing and reading machine code. And yet, it's kind of weird to call them 'user interfaces'—probably because each is composed partly of the concrete interface of some external text editor and partly of an abstract specification, a combination more nebulous than what we're accustomed to labeling a 'user interface.' At the same time, however, even if the form differs from our typical user interfaces, the function matches well: programmers use a text editor and programming language features as their interface to program creation.

User interfaces are something we've learned a lot about since the original architecture for program authoring software was laid down (an architecture which determines that programs will arrive as character sequences and then be parsed into something more useful). Among the things we've learned is a useful way of decomposing systems into 'models' and representations of those models, called 'views.' Probably the strongest reason for an architecture to adopt a model/view split is that the software will require multiple representations of one model. This is very much the case for program authoring tools: for instance, source code and machine code could be generated as two representations of one pure program model (which I describe how to build here)—and we could far more easily allow multiple source views.

There are a few points I've come to believe are true which have led to a new perspective on program creation systems:

(Note: I'm going to use 'machine code' as catchall for any kind of target/output language in any kind of interpreter/compiler.)

(1) The only reason we have source languages (as opposed to machine code) is because they are better interfaces for human programmers.

(2) Program source and machine code are two representations of one abstract thing. This common 'abstract thing' is much like the models in MVC systems; accordingly, I call it a 'program model' here.

(3) ASTs are an approximation to this 'program model,' but they are biased towards one potential representation: source code.

(4) It's possible to construct generic program models, which are better at capturing the essential properties of programming languages (including, e.g., 'type' information), and which are not tied to any particular representation.

The more I think about 'program models,' the more our treatment of program source code seems bizarre. Why don't the models of computer programs come prior to their source code representations? Shouldn't program models be the basis for generating program views? Shouldn't we store models on disk and build views when using our programming tools? Shouldn't we pass models around the internet instead of views?

This more symmetric relation of the various views of a language to a single generic model, as in the image, seems more natural:


Among the many benefits we could expect to arise from structuring things this way:

  • Swappable views for programming languages (think, e.g., personally configurable 'syntactic sugar,' but on a much grander scale—definitely edging in on the realm of meat and potatoes. A primitive example: you check a box in your editor, and now your Python-like language is rendered with curly braces instead of just indentation. A little more complex: instead of 'new Color(20, 30, 50);'—your editor renders a color picker).
  • Experimenting with UI concepts for program construction would be far less costly.
  • No need for parsing at all.
  • Dead simple for tools to give feedback on what's possible to do within your language, significantly reducing the learning curve of using a new language. (More on this in Part 2.)
  • More diverse, interesting textual representations are possible since you don't have to parse.
  • Program models could be much simpler than source code (Lisp aside, source code rarely is; my particular formulation ends up looking like this: (2 (3 0 0 0 1 (0 3) 4))), hence they are easier to manipulate via algorithms.

Program Views on Generic Models

The editor 'Moonchild' has some examples of the sorts of 'node views' I'm imagining in the section below. 

So, assuming we now have a generic program model (I describe how to actually build these in part two: How to Make View-Independent Program Models), what would it look like to build views based on such a model? There is a powerful, natural approach to this, which is for programming tools to mimic web browsers, whose displays are built up using algorithms on DOM trees. Our 'program model' is very similar to a DOM tree: first, it's a tree, and second, it contains the abstract structure and content of the thing to render, but doesn't say how to render it. The modern web is ample evidence that this kind of structure works well as the basis for rendering complex, diverse, text-centric layouts, with varied interaction styles.
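
As a rough sketch of that browser-like rendering, here's a recursive walk that turns a tiny, made-up program model into a conventional-looking text view; a richer view would simply make different choices at the same branch points. The node types and the sample tree are illustrative only.

    import java.util.List;

    // Build a view from the program model the way a browser builds a page from
    // the DOM: walk the tree and let each node type decide how it is rendered.
    // This view happens to emit plain text; swapping in widgets would only change
    // the bodies of the cases below.
    public class ModelRendererSketch {

        record Node(String type, String label, List<Node> children) {}

        static String renderAsText(Node n, int indent) {
            String pad = "    ".repeat(indent);
            switch (n.type()) {
                case "class": {
                    StringBuilder sb = new StringBuilder(pad + "class " + n.label() + " {\n");
                    for (Node c : n.children()) sb.append(renderAsText(c, indent + 1));
                    return sb.append(pad).append("}\n").toString();
                }
                case "function_declaration":
                    return pad + n.label() + "() { ... }\n";
                default:
                    return pad + n.label() + "\n";
            }
        }

        public static void main(String[] args) {
            Node model = new Node("class", "InputHandler",
                List.of(new Node("function_declaration", "handleKeyPress", List.of())));
            System.out.print(renderAsText(model, 0));
        }
    }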

It's probably not hard to see that we could use this structure to build visual representations of programs that exactly mirror the look of present day, pure text source code—so if 100% text is preferred, that's absolutely possible. But imagine, now that it's built on a structure that matches the web, how we could evolve pieces of our document into rich, interactive elements, when desired.


What I mean to propose here isn't that we use one view or another, though; rather, only that we maintain a more direct correspondence between elements in the visual display of our programs, and their backing models (as in the image). This is in strong contrast to the unfortunately indirect relationship found in most modern programming systems, which is necessarily mediated by parsing. For example, with this more direct relationship, just by placing your cursor somewhere in the document your programming tool knows which node in your program's model you are considering interacting with. The node might have an "on click" script attached—it might expand its visual representation, pushing the views for other nodes to the side, or shrinking nodes whose types are irrelevant to what you're interacting with.

Again though: I don't know what the better interface is, just that this opens up our freedom to explore. In our current systems, the only time we experiment with alternate views is when designing whole new programming languages; using generic program models, we can keep every other aspect of the language and just change how it's rendered for programmers.

The Difference Between ASTs and 'Program Models'

While ASTs are a sort of approximation to program models, it's probably more accurate to call them models of program source than of programs themselves. Well, what if we model 'programs themselves' instead of 'program source'? After all, modeling program source does imply that unhealthy dependence on one view; can't we consider 'language constructs' in the abstract, separate from any visual representation? We can!

It's very common to use formal grammars to generate software for building ASTs out of program source. These grammars define the set of abstractions (i.e. 'language constructs') available to the programmers of a given language—but they do so in terms of a textual representation of those abstractions. However, if our grammars were not to bottom out in a 'lexical section,' i.e. were not to describe the decomposition of abstractions into character sequences, we could more directly address the 'essence' of our programming language and handle representation later.

More important than the actual act of omitting the lexical section, though, is the different way of thinking about grammars when you aren't tied to characters as the 'atoms' of your language. What would your primitive types be if not characters? Well, why not the abstract primitive types of your language?

Here's an example of the difference: let's consider how a simple function call is represented in an AST first, then in a generic program model. Here's our function call:

doCoolThing(intVar, boolVar);


And this is what it might look like as represented by an AST:

Now, as represented by a generic program model:
Notice how in the AST the function name and both arguments all appear to be the same kind of thing? That's because in the model of the source code, they are all the same thing: they're all character sequences following the rules for identifiers. In the program model, the node types relate to meaningful types within the programming language—it's not talking about character sequences at all. This makes it a much more useful model for programming systems to work with: it's a structure that relates more closely to meaning within a programming language.

The long numbers you see in the program model diagram are IDs used to associate certain nodes with arbitrary, external representations. If you'd like, there's no reason this representation couldn't just be a sequence of characters—but now it could be interchanged with any number of other things, without changing the rest of the language.
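
Here's the same contrast sketched as data, with the obvious caveat that both trees are hypothetical: the node type names, the structure, and the IDs are made up for illustration.

    import java.util.List;

    // Two trees for 'doCoolThing(intVar, boolVar)'. In the AST-style tree the
    // function name and both arguments are all just 'identifier' character
    // sequences; in the program-model-style tree the node types carry language-
    // level meaning, and display names live in an external ID table.
    public class AstVsModelSketch {

        record Tree(String type, String value, List<Tree> children) {
            static Tree leaf(String type, String value) { return new Tree(type, value, List.of()); }
        }

        public static void main(String[] args) {
            Tree ast = new Tree("function_call", null, List.of(
                Tree.leaf("identifier", "doCoolThing"),
                Tree.leaf("identifier", "intVar"),
                Tree.leaf("identifier", "boolVar")));

            Tree model = new Tree("function_call", null, List.of(
                Tree.leaf("function_reference", "#55310021"),
                Tree.leaf("int_variable_reference", "#90417733"),
                Tree.leaf("bool_variable_reference", "#23086114")));

            System.out.println(ast);
            System.out.println(model);
        }
    }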

The next half of this essay: How to Make View-Independent Program Models


You might also check out: A Short Dialogue on How Crazy Human Programmers Are

Some more examples of editors that operate on models instead of text, but aren't visual languages: Lamdu, Unison, JetBrains MPS, Eco

Friday, June 5, 2015

A Short Dialogue on How Crazy Human Programmers Are

This was initially part of my explanation for why people should be interested in 'syntax free' programming, but I decided to separate it. Here is the original essay.

Aliens have landed on earth. One of their computer scientists has been speaking with one of our own. They understand each other for the most part, at high levels of abstraction—but they find plenty of discrepancies in the details. Now the alien is trying to grok how our programming tools work:

Alien: I understand that you have added abstractions over your 'machine code' because people find it unmanageable to build systems in a binary alphabet—but I don't see how your programmers interact with the abstractions, how they specify which they would like to use.

Human: Okay, I think I see what you mean. Well, whenever we write programs, it is in a particular 'programming language'; this language contains a set of abstractions which a programmer is free to choose from and use how they'd like. Each language assigns names to its abstractions; so the programmer types the abstraction's name in order to use it.

Alien: Alright. Maybe I do understand then—it's not so different at home: we don't usually type out the name, but we might say it out loud, or reach our hand toward it, if we see it on the 'program construct' list. I'm curious, though, after you have typed the name of an abstraction, how does this update the 'machine code' for the program you're building?

Human: Well, it's not usually just one name that has been typed, but there's a whole 'document'—just like we talked about for email, or academic papers. This document is then 'parsed,' which is a way of converting the document into a 'model': a form suitable for a running program to interact with—the program being a 'compiler' in this case. So, once the document is parsed—if it was well-formed—the compiler uses the resulting model as a basis for generating machine code.

Alien: I understand what you mean by 'parsing'—we have something similar, too. But it seems strange to me that you would involve something as complex as inferring semantics through syntax in order to update the model of your program. You mean to tell me that in order to change a single part of a program's model, you must use a document editor, find the part of the document that corresponds to the part of the model you want to change, recall from memory all of the abstractions available, type out the replacement abstraction's name one character at a time, and re-parse the changed document!? Just to change one thing in the model!? You guys must be much better programmers than us after all; that would be far too difficult for us: we need to see which abstractions are valid to use at different times, and when we replace one, or add a new one—or anything like that—the program's model is updated directly, which lets our software give feedback on how to proceed.

Human (grinning and slicking hair back, then shrugging): well, it's not too hard: we've gotten really good at automating parsing, and our programming tools have document editors built in. But yeah... I guess we are pretty good!

(In case it's not obvious: the aliens were the better programmers.)