Saturday, July 18, 2015

The Subtle Narrowness of Human Concepts

Note: the first part was mostly just messing around with writing, and the second part is a much more direct statement of the ideas which the first is founded on.

Understanding is partially and significantly a generative capacity; and, being able to generate instances of our generic grasping, we feel ourselves to have some ambiguous command of reality. To look upon a house and consequently feel it recreated in the center of one’s self is surely a more profound act of possession than any deed could confer; but to extract from multiple unplanned viewings that which makes a house a house, and turn it around, and freely and fluidly construct never-constructed new variations, in that same most personal inner chamber—perhaps no form of ownership could be more complete. But what is the true value of this ability? In offering even a superlative commendation of the human capacity to know, one inevitably compares its worth against those of its contenders; an act of measurement, always comparative, necessarily precedes our praise. It seems, however, that we are too apt to mistake beating all contenders as furnishing proof of transcendent superiority, when really, as far as we know, the match was rather local and our champion would be devastated outside his hometown. There is a negative and a positive side to considering the situation thusly. The ostensibly negative side is that understanding begins to look more like our various other capacities—like, for example, the one which allows us to determine with our noses whether a shirt is unclean or not. The positive side is that we may attribute to reality—which, if we aren’t solidly a part of, what are we?—a dazzlingly less bounded potential, to say nothing of the general merits of taking a more realist stock of things.


For most of my life I'd assumed that human concepts were this limitless sort of thing capable of making fundamental connections to what's going on in the universe at some deep level. In more recent years I've come to consider them as almost like another sense: they are symbolic patterns consistently formed when we are exposed to certain stimuli (like our experience of smells consistently reappearing when exposed to similar molecules). Our conceptual faculty augments the pure pattern-correspondence of our more primitive senses in that the patterns can be associated with other internal patterns, and in that we can generate new patterns purely from existing patterns (using logical and analogical processes), which at some future time may be usefully associated with some never-encountered external stimulus. This is of course extremely useful—but we really must be falling into what should be an obvious trap of anthropocentrism in ascribing to them much more than that.

Thursday, June 11, 2015

Improving Idea Representation and Debate

Note: you can skip the prelude by scrolling past the screenshot.

No category of invention has ever been more influential than that of concept representation schemes, which claims such notable alumni as natural language and mathematics.

Concepts have deeply symbiotic relationships with their representations; our inklings are pitiful, delicate agoraphobes without some formal carapace to curl inside of. And, once linked, the two inevitably fuse; discerning a point of separation becomes an endless task: their boundary expands into infinite, alluring detail whose completion always appears near.

Further, our choices of formalization accrete in layers, each developing its own character, without ever fully concealing its origin.

Some representations excel at distinguishing multitudes and attaching to them, separately, myriad, manifold impressions; others have no memory for names, but are remarkably intuitive with relationships. There are factions, and contests: there are those who would have it that rigor stably supports life, and those who claim it the first sign of death. Others plead vociferously, and vaguely, for a kind of harmony. 

It was said, in a representation long forgotten, that, "no two forms possess intrinsic merit disparity; the worth of a form exists only in relation to a thought,"—but then! There arose a structure so impossibly well-balanced, so gloriously lacking in error, so frustratingly perfect, that psychotic murmurs of, "messiah," crying whispers of "divinity," and hopeless exhalations of, "God" filled the once heathen mound of abstract rabble. The one true form emerged:
The text in this screenshot is unfortunately misleading—each node should contain a single-sentence, plain statement of some 'claim'—a 'proposition,' if you prefer.

Actually, no—my mistake—that's just a screenshot of a project I started nearly a year ago. Ah, well, I guess I'll just talk about it instead.

Natural language has many virtues. Formal mathematical languages also have many virtues—in fact, in a certain sense, it may even be said that they are the more virtuous of the two. Unfortunately, however, even if that is the case, in some sense, it is irrelevant: people have to actually use the language, and nobody has the time for something much more formal than natural language—not even mathematicians!

So, the idea here is to add just the smallest amount of structure to natural language: you break your overall idea into the separate claims you wish to make—then state those claims with whatever level of formality you'd like. Furthermore, you arrange your various claims so that some support others; the supporting claims will be rendered as children in a hierarchy.
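The structure described above can be sketched as a small tree of claims; this is a minimal illustration of my own, not code from the original project, and the example claims are invented:

```python
# Hypothetical sketch of the claims hierarchy described above:
# each claim is a single plain statement, and supporting claims
# are attached as children.

class Claim:
    def __init__(self, text, supports=None):
        self.text = text                 # single-sentence statement
        self.supports = supports or []   # child claims offered in support

    def outline(self, depth=0):
        """Render the hierarchy as an indented outline."""
        lines = ["  " * depth + self.text]
        for child in self.supports:
            lines.extend(child.outline(depth + 1))
        return lines

hypothesis = Claim(
    "Remote work improves productivity.",
    supports=[
        Claim("Commutes consume time and energy.",
              supports=[Claim("The average commute exceeds 50 minutes daily.")]),
        Claim("Fewer office interruptions allow longer focus periods."),
    ],
)

print("\n".join(hypothesis.outline()))
```

Because claims are stored as separate nodes rather than one blob of text, re-using a sub-claim in a new hypothesis is just attaching the same node under a different parent.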

Once you have represented some idea, or hypothesis, in this manner, you may do a number of things with it: you can re-use its parts to state new ideas; you can share its parts or the whole (though they'll be subject to rating at that point); you can request that a whole 'hypothesis' be critiqued—or you can put it up for public debate, in the CRUCIBLE (private debate, outside the crucible, is also fine).

The debate system emphasizes the constructive qualities of argumentation: two competing hypotheses are reconciled into one improved hypothesis, with each claim treated individually.

It's possible that a more refined taxonomy of rhetorical devices could be extracted from conventional expository essay structure than just claims/justification. For instance, why not have 'alternate phrasings' or 'examples' or 'empirical evidence' be node types in the hierarchy of one's hypothesis?

Using such a system, one could represent: political arguments, business decisions, philosophical or scientific ideas—you name it!

Sunday, June 7, 2015

How to Make View-Independent Program Models

In Part 1 of this two part series, I made an argument that the standard way of structuring program authoring tools involves a peculiar and unnecessary model/view coupling that makes a number of problems in the programming tools domain more difficult than they need to be. Here I'll be describing a simple, general way of constructing generic 'program models,' which have much in common with ASTs, but improve on them in a couple of critical ways. I'll be referring to this method as the 'path formulation' of program models.

Why this structure?

Short answer: because it's simple and general, and bears a very close relation to the essential activity in high-level program construction.

The 'path formulation,' a particular way of creating program models, comes from asking the question, "what are we really doing when writing source code?" and finding the answer, "selecting and configuring abstractions provided by a programming language." In that case, writing source code is just one way of doing a more general and essential activity: selecting and configuring abstract 'language constructs.' Here's a short dialogue illustrating the point a bit more.

So, let's say your programming language provides an abstract language construct called 'function_declaration'; it is composed of a few parts: a 'name,' a 'return_type,' and an 'argument_list.' A configuration for this construct would be a particular 'name,' 'return_type,' and 'argument_list.' What makes it 'abstract' is that none of these entities are tied to a representation. In other words, while it has become reflex to think of these sorts of constructs in terms of character sequences, here we intentionally leave open the question of how they should look.
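The construct/configuration distinction might be sketched like this; representing constructs as Python data is my choice for illustration, and nothing below assumes any textual syntax for the language itself:

```python
# A language construct is a named set of parts; a configuration binds
# each part to a value. Nothing here assumes any textual representation.

# The construct, as the language defines it (parts only):
function_declaration = {"parts": ["name", "return_type", "argument_list"]}

def configure(construct, **bindings):
    """Instantiate an abstract construct by binding all of its parts."""
    missing = [p for p in construct["parts"] if p not in bindings]
    if missing:
        raise ValueError(f"unbound parts: {missing}")
    return {part: bindings[part] for part in construct["parts"]}

# One particular configuration of 'function_declaration':
config = configure(function_declaration,
                   name="testFunction",
                   return_type="int",
                   argument_list=["int a", "bool b"])
print(config)
```

Note that the bound values here happen to be strings only for convenience; they could equally be IDs pointing at other instantiated constructs.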

Making Program Models using the Path Formulation

The idea behind the path formulation is: if you have a model of a programming language in the form of a graph, then individual program models are just paths through the graph.

Here's a high-level, three step recipe for making these language/program models:

(1) Represent your programming language 'abstractly,' using a formal grammar that has no lexical section (now called an 'abstract grammar'): its fundamental units are the fundamental units from your language in the abstract, rather than character sequences. Only consider which abstractions your language should include and the rules for composing them; worry about how to represent them visually elsewhere. For example, we can say that a 'class' is made up of a 'name,' a set of 'variables,' and a set of 'methods,' without any assumptions about how these things are going to look. (I talk about this in the first part, too.)

(2) Convert your abstract grammar into a graph, which will serve as the 'model' for your language. (I have some Java code that will do this for ANTLR grammars, btw, which I can clean up and share on Github if there's interest.)

(3) Represent individual programs as specific paths through the language graph.
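The three steps above can be sketched in a few lines; the toy grammar, the edge numbering, and all names here are assumptions for illustration, not a fixed scheme:

```python
# Step 1: an abstract grammar (no lexical section) as plain data.
# Each construct lists the abstract parts it is composed of.
abstract_grammar = {
    "program": ["class"],
    "class": ["name", "class_reference", "variable", "method"],
    "method": ["name", "return_type", "argument_list"],
}

# Step 2: convert the grammar into a graph: nodes are constructs/atoms,
# and a numbered edge runs from each construct to each of its parts.
def grammar_to_graph(grammar):
    graph = {}
    for construct, parts in grammar.items():
        graph[construct] = list(enumerate(parts))  # (edge_number, target)
    return graph

language_graph = grammar_to_graph(abstract_grammar)

# Step 3: a particular program is a path through the graph, expressed
# as a sequence of edge numbers.
def follow(graph, start, edge_numbers):
    """Resolve a sequence of edge numbers to the nodes they visit."""
    node, visited = start, []
    for n in edge_numbers:
        node = dict(graph[node])[n]
        visited.append(node)
    return visited

print(follow(language_graph, "program", [0, 0]))  # program -> class -> name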

So, if your grammar contains a subsection like this ('class_reference' is for inheritance, indicating a parent class):

program
    : class+

class
    : name
    : etc. etc. etc.

Your graph-based language model will have a subsection like this (black nodes are 'language constructs,' orange nodes are 'language atoms'):

And a particular program in your language, consisting of one simple class might look like this:

These paths could be represented by just listing the edges taken—though of course you have to number the edges:

Taking that approach, the model of our simple program looks like this:

 (2 2 0 1 3 0 0 3)

If we would like a little structure in our model representation, we can distinguish between 'language constructs' and 'language atoms' (black nodes and orange nodes), by using opening and closing parentheses to mark the start and end of language constructs.

(2 (2 0 1 (0... 1... 2... 3) 0 0 3))

Note: the ellipses are where we followed some hypothetical edges in 'function declaration.'

Language constructs are just composite abstractions, made up of more than one part; language atoms cannot be broken into smaller units. Actually, 'language constructs' and 'atoms' have a relationship that mirrors S-expressions in Lisp, so it's no surprise that the notation for program models resembles Lisp (here, however, we aren't tied to using this as the visual interface for the programmer).
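Since the parenthesized path notation reads as a tiny S-expression, a minimal parser for it fits in a few lines; this is my own sketch, treating bare numbers as atoms/edge choices and nested parentheses as language constructs:

```python
# Parse path notation like "(2 (2 0 1 (0 1 2 3) 0 0 3))" into nested
# lists: a nested list marks a language construct, a bare number an
# edge choice / language atom.

def parse_model(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1
            children = []
            while tokens[pos] != ")":
                children.append(parse())
            pos += 1  # consume ")"
            return children
        value = int(tokens[pos])
        pos += 1
        return value

    return parse()

model = parse_model("(2 (2 0 1 (0 1 2 3) 0 0 3))")
print(model)  # [2, [2, 0, 1, [0, 1, 2, 3], 0, 0, 3]]
```

The nested-list result is exactly the shape an editor would walk when rendering or checking a program model.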

Using Insight from the Language Model

A big advantage of the path formulation of program models is that it maintains a strong connection between elements of a user's program, and the programming language itself (since programs are paths in the language graph). This connection can be taken advantage of by programming tools to guide programmers in using the language.

As an example, let's say a programmer has just selected a 'class' construct (maybe by typing out the keyword 'class'—or in some new way); the program model now contains a node for that 'class' construct, and the editor, just by examining the 'class' node in the language model, knows all the legal options for proceeding, because each option corresponds to an edge going out of the 'class' node:
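The editor's "what can come next?" query is nothing more than an adjacency lookup in the language graph; here is a sketch with a toy graph (the adjacency data is assumed, and would really come from the abstract grammar):

```python
# Legal next steps are just the out-neighbors of the current node in
# the language graph. Toy graph for illustration only.
language_graph = {
    "program": ["class", "interface"],
    "class": ["name", "class_reference", "variable", "method"],
    "method": ["name", "return_type", "argument_list"],
}

def legal_options(graph, current_node):
    """All legal next constructs: the out-neighbors of the current node."""
    return graph.get(current_node, [])

# The programmer has just selected 'class'; the editor can immediately
# display every valid way to proceed:
print(legal_options(language_graph, "class"))
```

No parsing or error recovery is involved: because the model is built by walking the graph, the editor always knows exactly where it is.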

To make this as concrete as possible, I'll show one possible UI that takes advantage of this connection to the language graph. Keep in mind, though: you could still render it like a traditional text editor, using the UI contemporary IDEs use for 'auto complete' to display alternatives. It's like automatic auto complete for all aspects of the language—at the least, this would be tremendously useful to new users of a language.

In this hypothetical editor, the UI is split into two main sections: the top is our document, which is just a rendering of the program model; and the bottom contains controls for selecting and configuring language constructs.

Let's say we've just instantiated a class definition by supplying all the necessary parameters; it has the name 'InputHandler,' and our editor has 'collapsed' it, so we just see the name and type:

Since our editor is following along in the language graph, it knows we're back in the 'program' node, from which point we can begin specifying either a class or an interface. Let's say we select 'class.' Our editor now looks like this:

Notice the whole 'class' construct and the 'name' section have red borders; this is to indicate that 'class' hasn't been fully instantiated: it still has 'free' parameters that must be bound to something (in this case 'name' must be bound to something). Also notice that in the bottom section, the options appearing in the grid are just the neighbors of the 'class' node in the language graph. 

I imagine that in using a system like this, the cells in the bottom area would map to keys on your keyboard: this way you could accomplish a task like creating the skeleton of a new method declaration with a single keystroke. Something along these lines would also be much better than text editors for programming with virtual/augmented reality systems and mobile devices. Anyway, this UI is just a quick sketch of one possible approach. The document region could also be something like this (in that video, I'm rendering the AST in a manner identical to how I'm suggesting we render program models).


It's been a long time since we laid down the character sequence and parsing-based architecture of program authoring tools, and contemporary work on programming languages is deeply invested in that established approach. It seems like the program model approach could be an improvement—but who knows what lethal oversights might still be lurking. What's especially needed at this point is a concrete implementation. I'm working on it, slowly, in my free time—but my hope is that others will read the ideas here, and if they prove to be generally interesting after all, expand and solidify them into serious tools that will improve the experience of programming for the upcoming years. If you'd like to hire me to work on something related, I can be contacted at 'westoncb[at google's mail service]' (or even if you just want to talk about it—though the comment section is probably best for that).



Identifiers etc.
Let's take a look at the abstract program graph one last time:

Notice that 'class reference' and 'name' are language atoms, but these things need structure of their own, so where does that come from? First I'll point out that the reason we don't include identifiers in the language definition is that they are mnemonics for humans, not part of the abstract structure of a language; all the language needs is a unique identifier, so we just generate a random one. As for the mnemonic, maybe it should be a string, maybe something else—accordingly we leave it outside of the language spec. In the program model, we just attach the random ID (discussed earlier, too), which can be associated with some representation specified by the programmer (probably totally without their notice, by just typing in their editor as always). A program model with IDs might look like:

(2 (3 (#49843345) 0 0 (#95728745) 0  1 (... ... ... 3) 4))

There's an external file that maps these IDs to representations (often it's just a string) for editors to use.
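The ID-to-representation map might work as follows; note that the JSON file format, the ID shape, and all names here are my assumptions, sketched only to make the separation concrete:

```python
# The program model stores only generated IDs; human-readable mnemonics
# live in a separate map the editor consults when rendering.
import json
import random

def fresh_id():
    """Generate a random unique-enough ID for a model node."""
    return f"#{random.randrange(10**7, 10**8)}"

mnemonics = {}                      # ID -> representation (often a string)

class_id = fresh_id()
mnemonics[class_id] = "InputHandler"

# Persist alongside the program model (here, as JSON)...
serialized = json.dumps(mnemonics)

# ...and any editor can render the model with the programmer's names:
loaded = json.loads(serialized)
print(loaded[class_id])  # InputHandler
```

Swapping the mnemonic map swaps every name in the rendered program without touching the model itself—one small instance of the view-independence the whole scheme is after.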

The following section discusses how to handle the 'class reference' node and others like it.

Referencing the Program Model
There is an aspect of programming languages that isn't captured by the 'abstract grammar' that I've described so far. The abstract grammar only allows us to describe 'free' language constructs which, when supplied with specific parameters, are 'instantiated'; program models contain only instantiated language constructs. However, the 'abstract grammar' should describe the full capabilities of the language, and programming languages always contain mechanisms for referencing already instantiated language constructs: e.g., I have instantiated a 'function_declaration,' which had its 'name' parameter bound to the value 'testFunction'; other parts of my program should be able to reference this specific, instantiated 'function_declaration' by using its 'name,' 'testFunction,' as a reference.

To be honest, I'm very curious to hear other people's ideas on how to go about doing this, though I do have an approach that seems like it would work well: extend the notation of our 'abstract grammar' (which is just some variation of BNF at the moment) to express 'queries' on program models: i.e. "select all the nodes from the program model of type 'class'." More concretely, let's say our 'class' construct is defined as follows:

class
    : name class_reference?

(The 'class_reference?' component is used to reference a parent class.)

'class_reference' would be defined in our grammar as follows (except using some appropriate notation, not English):

class_reference
    : "select nodes on program model of type 'class' in same 'package'"

So, in order to instantiate a 'class_reference' the programmer would have to select a node from the program model that meets the criteria in the query. Ideally, the programmer's IDE would parse the query, run it on the program model, and offer up a selection of valid nodes. Present IDEs do this sort of thing of course, but including the necessary information in a unified, abstract language specification would be beneficial.
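Running such a query against a toy program model might look like this; the node records and their fields are assumptions of mine, sketched only to show the mechanism:

```python
# A 'class_reference' is instantiated by running a query over the
# program model. Toy node records for illustration.
program_model = [
    {"id": "#49843345", "type": "class", "package": "input"},
    {"id": "#95728745", "type": "class", "package": "render"},
    {"id": "#11223344", "type": "method", "package": "input"},
]

def select_nodes(model, node_type, package=None):
    """Nodes matching the query; the IDE offers these as valid targets."""
    return [n for n in model
            if n["type"] == node_type
            and (package is None or n["package"] == package)]

# "select nodes on program model of type 'class' in same 'package'":
candidates = select_nodes(program_model, "class", package="input")
print([n["id"] for n in candidates])  # ['#49843345']
```

The IDE would present `candidates` to the programmer, and the chosen node's ID becomes the bound value of the 'class_reference.'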

Why Programming Languages Use Only One 'View,' and How to Fix That

(note 1: This is the first of a two part series; part two is here: How to Make View-Independent Program Models)

(note 2: In the interest of a little more context/concreteness: here's a video of an editor I made that works by rendering a tree-based program model as described in the article (this just uses an AST, though, not a 'pure' model): Tiled Text)


It's pretty well acknowledged that the reason source languages exist is that they are better interfaces for program creation than, say, typing and reading machine code. And yet, it's kind of weird to call them 'user interfaces'—probably because they are each comprised partly of the concrete interface to some external text editor and partly of an abstract specification, this combination being a more nebulous construct than we're accustomed to labeling 'user interface.' At the same time, however, even if its form differs from our typical user interfaces, its function matches well: programmers use a text editor and programming language features as their interface to program creation.

User interfaces are something we've learned a lot about since the original architecture for program authoring software was laid down (which determines that programs will arrive in the form of character sequences and then be parsed into something more useful). Among these things learned is a useful way of analysing systems into 'models' and representations of those models, called 'views'. Furthermore, probably the strongest reason for an architecture to adopt a model/view split is that the software will require multiple representations of one model. This is very much the case for program authoring tools: for instance, source code and machine code could be generated as two representations from a pure program model (which I describe how to build here)—and we could far more easily allow multiple source views.

There are a few points I've come to believe are true which have led to a new perspective on program creation systems:

(Note: I'm going to use 'machine code' as catchall for any kind of target/output language in any kind of interpreter/compiler.)

(1) The only reason we have source languages (as opposed to machine code) is because they are better interfaces for human programmers.

(2) Program source and machine code are two representations of one abstract thing. This common 'abstract thing' is much like the models in MVC systems; accordingly, I call it a 'program model' here.

(3) ASTs are an approximation to this 'program model,' but they are biased towards one potential representation: source code.

(4) It's possible to construct generic program models, which are better at capturing the essential properties of programming languages (including, e.g., 'type' information), and which are not tied to any particular representation.

The more I think about 'program models,' the more our treatment of program source code seems bizarre. Why aren't the models of computer programs prior to their source code representations? Shouldn't program models be the basis for generating program views? Shouldn't we store models on disk and build views when using our programming tools? Shouldn't we pass models around the internet instead of views?

This more symmetric relation of the various views of a language to a single generic model, as in the image, seems more natural:

Among the many benefits we could expect to arise from structuring things this way:

  • Swappable views for programming languages (think, e.g., personally configurable 'syntactic sugar,' but on a much grander scale—definitely edging in on the realm of meat and potatoes. A primitive example: you check a box in your editor,  and now your Python-like language is rendered with curly braces instead of just tabs. A little more complex: instead of 'new Color(20, 30, 50);'—your editor renders a color picker).
  • Experimenting with UI concepts for program construction would be far less costly.
  • No need for parsing at all.
  • Dead simple for tools to give feedback on what's possible to do within your language, significantly reducing the learning curve of using a new language. (More on this in Part 2.)
  • More diverse, interesting textual representations are possible since you don't have to parse.
  • Program models could be much simpler than source code (LISP aside: my particular formulation ends up looking like this:  (2 (3 0 0 0 1 (0 3) 4))), hence they are easier to manipulate via algorithms.

Program Views on Generic Models

The editor 'Moonchild' has some examples of the sorts of 'node views' I'm imagining in the section below. 

So, assuming we now have a generic program model (I describe how to actually build these in part two: How to Make View-Independent Program Models), what would it look like to build views based on such a model? There is a powerful, natural approach to this, which is for programming tools to mimic web browsers, whose displays are built up using algorithms on DOM trees. Our 'program model' is very similar to a DOM tree: first, it's a tree, and second, it contains the abstract structure and content of the thing to render, but doesn't say how to render it. The modern web is ample evidence that this kind of structure works well as the basis for rendering complex, diverse, text-centric layouts, with varied interaction styles.
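The DOM analogy suggests a renderer that walks the model tree and dispatches on node type, the way a browser styles elements; this is a sketch of my own, and the node shapes and render rules are assumed:

```python
# Rendering walks the model tree and dispatches on node type, like a
# browser styling a DOM. Swapping the rules table swaps the entire
# 'view' without touching the model.

def render(node, rules, depth=0):
    """Render a model node by applying the rule for its type."""
    kind = node["type"]
    children = "".join(render(c, rules, depth + 1)
                       for c in node.get("children", []))
    return rules[kind](node, children, depth)

# One possible 'view': a C-style textual rendering.
c_style = {
    "class":  lambda n, kids, d: f"class {n['name']} {{\n{kids}}}\n",
    "method": lambda n, kids, d: f"    {n['ret']} {n['name']}() {{}}\n",
}

model = {"type": "class", "name": "InputHandler",
         "children": [{"type": "method", "name": "handleKey", "ret": "void"}]}

print(render(model, c_style))
```

A second rules table—say, one emitting HTML with a color picker for color literals—would give a completely different view of the identical model.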

It's probably not hard to see that we could use this structure to build visual representations of programs that exactly mirror the look of present day, pure text source code—so if 100% text is preferred, that's absolutely possible. But imagine, now that it's built on a structure that matches the web, how we could evolve pieces of our document into rich, interactive elements, when desired.

What I mean to propose here isn't that we use one view or another, though; rather, only that we maintain a more direct correspondence between elements in the visual display of our programs, and their backing models (as in the image). This is in strong contrast to the unfortunately indirect relationship found in most modern programming systems, which is necessarily mediated by parsing. For example, with this more direct relationship, just by placing your cursor somewhere in the document your programming tool knows which node in your program's model you are considering interacting with. The node might have an "on click" script attached—it might expand its visual representation, pushing the views for other nodes to the side, or shrinking nodes whose types are irrelevant to what you're interacting with.

Again though: I don't know what the better interface is, just that this opens up our freedom to explore. In our current systems, the only time we experiment with alternate views is when designing whole new programming languages; using generic program models, we can keep every other aspect of the language and just change how it's rendered for programmers.

The Difference Between ASTs and 'Program Models'

While ASTs are a sort of approximation to program models, it's probably more accurate to call them models of program source than of programs themselves. Well, what if we model 'programs themselves' instead of 'program source'? After all, modeling program source does imply that unhealthy dependence on one view; can't we consider 'language constructs' in the abstract, separate from a visual representation? We can!

It's very common to use formal grammars to generate software for building ASTs out of program source. These grammars define the set of abstractions (i.e. 'language constructs') available to the programmers of a given language—but they do so in terms of a textual representation of these abstractions. However, if our grammars did not bottom out in a 'lexical section'—i.e., did not describe the decomposition of abstractions into character sequences—we could more directly address the 'essence' of our programming language and handle representation later.

More important than the actual act of omitting the lexical section, though, is the different way of thinking about grammars when you aren't tied to characters as the 'atoms' of your language. What would your primitive types be if not characters? Well, why not the abstract primitive types of your language?

Here's an example of the difference: let's consider how a simple function call is represented in an AST first, then in a generic program model. Here's our function call:

doCoolThing(intVar, boolVar);

And this is what it might look like as represented by an AST:

Now, as represented by a generic program model:
Notice how in the AST the function name and both arguments all appear to be the same kind of thing? That's because in the model of the source code, they are all the same thing: they're all character sequences following the rules for identifiers. In the program model, the node types relate to meaningful types within the programming language—it's not talking about character sequences at all. This makes it a much more useful model for programming systems to work with: it's a structure that relates more closely to meaning within a programming language.

The long numbers you see in the program model diagram are IDs used to associate certain nodes with arbitrary, external representations. If you'd like, there's no reason this representation couldn't just be a sequence of characters—but now it could be interchanged with any number of other things, without changing the rest of the language.

The next half of this essay: How to Make View-Independent Program Models

You might also check out: A Short Dialogue on How Crazy Human Programmers Are

Some more examples of editors that operate on models instead of text, but aren't visual languages: Lamdu, Unison, JetBrains MPS, Eco

Friday, June 5, 2015

A Short Dialogue on How Crazy Human Programmers Are

This was initially part of my explanation for why people should be interested in 'syntax free' programming, but I decided to separate it. Here is the original essay.

Aliens have landed on earth. One of their computer scientists has been speaking with one of our own. They understand each other for the most part, at high levels of abstraction—but they find plenty of discrepancies in the details. Now the alien is trying to grok how our programming tools work:

Alien: I understand that you have added abstractions over your 'machine code' because people find it unmanageable to build systems in a binary alphabet—but I don't see how your programmers interact with the abstractions, how they specify which they would like to use.

Human: Okay, I think I see what you mean. Well, whenever we write programs, it is in a particular 'programming language'; this language contains a set of abstractions which a programmer is free to choose from and use how they'd like. Each language assigns names to its abstractions; so the programmer types the abstraction's name in order to use it.

Alien: Alright. Maybe I do understand then—it's not so different at home: we don't usually type out the name, but we might say it out loud, or reach our hand toward it, if we see it on the 'program construct' list. I'm curious, though, after you have typed the name of an abstraction, how does this update the 'machine code' for the program you're building?

Human: Well, it's not usually just one name that has been typed, but there's a whole 'document'—just like we talked about for email, or academic papers. This document is then 'parsed,' which is a way of converting the document into a 'model': a form suitable for a running program to interact with—the program being a 'compiler' in this case. So, once the document is parsed—if it was well-formed—the compiler uses the resulting model as a basis for generating machine code.

Alien: I understand what you mean by 'parsing'—we have something similar, too. But it seems strange to me that you would involve something as complex as inferring semantics through syntax in order to update the model of your program. You mean to tell me that in order to change a single part of a program's model, you must use a document editor, find the part of the document that corresponds to the part of the model you want to change, recall from memory all of the abstractions available, type out the replacement abstraction's name one character at a time, and re-parse the changed document!? Just to change one thing in the model!? You guys must be much better programmers than us after all; that would be far too difficult for us: we need to see which abstractions are valid to use at different times, and when we replace one, or add a new one—or anything like that—the program's model is updated directly, which lets our software give feedback on how to proceed.

Human (grinning and slicking hair back, then shrugging): well, it's not too hard: we've gotten really good at automating parsing, and our programming tools have document editors built in. But yeah... I guess we are pretty good!

(In case it's not obvious: the aliens were the better programmers.)

Sunday, May 3, 2015

Flow and Grothendieck-like Problem Solving

Flow: a pleasant, efficient, focused state of mind.
Alexander Grothendieck: revered mathematician (1928–2014) who had an unusual approach to mathematics.

The idea that being in a state of flow is desirable for computer programming is now commonplace; good information on the nature of this mind-state, and techniques for inducing it, is sparser. My idea here is that the meditative mind-state and flow are one and the same—which I suspect is uncontroversial. From there, however, I'd like to address a general problem in learning how to flow at will, which I think can be made more clear by analogy to a style of problem solving that Grothendieck was especially known for (not that it was unique to him by any means, but it seems to have been a sort of trademark because he took it to such extremes).


After spending years dealing with a computer-related repetitive strain injury, I found that using computers was not the same even after I’d resolved the physical component of the problem. Eventually, in trying to discover the source of the problem that remained, I came across the notion of ‘flow,’ which instantly connected with certain common occurrences from my pre-injury programming days. Recent occurrences of this state were utterly absent; and, in fact, the more I thought about it, the more sense it made that my current problem in using computers could be clearly characterized as ‘anti-flow.’ If there’s some axis in the space of mental states where a large positive value indicates being in flow, my problem was that I had a large negative value on this axis (such an axis could sensibly be named ‘acceptance of things as they are’—which will make more sense by the end of the essay). So, my interest in flow began.

Reading more, it would’ve been difficult not to connect this concept to the ‘meditative mind-state’: a state in which all experience is accepted as okay, self-awareness falls away, and one interacts directly with the objects of experience. The opposite state involves the heavy presence of meta-thought: analyzing what one is doing as one does it.

Reading further, I discovered that this excessive meta-thinking is basically synonymous with anxiety. This makes intuitive sense: you meta-think about something you’re doing when you suspect there might be something wrong with it. And being in a state of flow, characterized as behavior lacking meta-thought, has an intuitive reason for being more efficient: it’s like the difference between running a procedure encoded in software, executed on a general-purpose chip, and having the procedure directly represented in hardware. In both cases there is a trade-off between efficiency and mutability. Further, it makes sense that flow would be enjoyable and meta-thinking not: anxious experiences are unpleasant ‘by design’; they encourage the experiencer to change something, to get out of that mode. By contrast, the more you’re able to stay in a state that doesn’t require meta-thought, the better things must be going for you, and your mind rewards you.

So, to resolve my computer use issues, which I could now more usefully state as “move toward flow on the ‘acceptance axis’” or “reduce anxiety while using computers,” I could draw from methods in Western psychology or Eastern philosophy. I have explored both, and both have been useful. Ultimately, however, the literature surrounding Eastern meditation practice and, to a surprising extent, life perspective was the most directly applicable. Isn't it strange, though, that 'life perspective' should concretely affect my ability to engage in some activity? I thought it was, and I spent a lot of time thinking about how that could work. Here's how I think of it now.

‘Life perspective’ is another way of referring to the conceptual framework in which one thinks: one’s total set of schemas, the conceptual categories available for organizing experience, and the rules relating those categories. Just to give an idea of the kind of thing I’m talking about, I’ll give two ridiculously stripped-down, simplified examples of personal conceptual frameworks.

(1) I am an autonomous entity with a fixed set of actions and senses. I exist in a universe made up of small objects that follow fixed rules. I am made up of these objects. After a certain duration I will cease to exist. While I exist I will experience some things that are good and some that are bad, and the proportion of good and bad experiences will depend in part on my actions.

(2) I am an eternal soul that can suffer or feel joy. Temporarily I exist in a ‘physical’ universe, and depending on the moral character of my decisions while here, I will suffer or feel joy eternally afterward.

One’s ‘life perspective,’ as a framework, provides a special kind of vocabulary: a set of terms to make statements 'in terms of.' What makes this kind of vocabulary special is that its terms can be ordered by how general or particular they are, where the more particular terms are arrived at by supplying parameters to the more general terms. For instance, if you say, “the most fundamental things in existence are ‘physical entities’,” then you have the general term ‘physical entity,’ and everything else in your framework must be expressible as some parameterization of that general term (e.g. "that is a physical entity at location X, Y, Z, smelling like W, with color R"). So, the more general a term is, the more other terms are potentially impacted by it. This is the same as in any framework, and modifications to these very general terms are what we call 'fundamental' changes. In science, such changes are often called ‘revolutions’; they are how we change ‘paradigms’ (this is Thomas Kuhn’s popular terminology, anyway), rather than elaborate the present one. These changes of very general framework terms are also what take place in significant software refactors; they determine whether you work with elegant structures, or whether you become an expert in the application of duct tape.
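The parameterization idea translates naturally into code. Here is a toy sketch (the `PhysicalEntity` type and its fields are invented purely for illustration, not drawn from any real system):

```python
from dataclasses import dataclass

# A maximally general framework term: everything else in the framework
# must be expressible as some parameterization of it.
@dataclass
class PhysicalEntity:
    location: tuple  # (x, y, z)
    smell: str
    color: str

# More particular terms are just specific parameterizations of the
# general term.
rose = PhysicalEntity(location=(1.0, 2.0, 0.5), smell="sweet", color="red")
rock = PhysicalEntity(location=(4.0, 0.0, 0.0), smell="none", color="gray")
```

Note that changing the general term (say, adding or removing a field on `PhysicalEntity`) ripples through every particular term built from it; that is exactly what makes modifications at this level 'fundamental.'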

Personal ‘life perspectives’ are this same way. Incidentally, this is why some people are so interested in metaphysics: it’s a field explicitly addressing the most general structures of personal conceptual frameworks or ‘life perspectives.’ But what difference does it make if you personally have one framework or another? Consider the difference in behavior that results when someone really, truly believes that an afterlife with eternal suffering or pleasure exists and that they’ll go to one or the other based on their actions, versus when someone believes that people are essentially robots with no real experience of pleasure or pain. Or, if someone were to believe that all of the universe, including all of humanity, was truly one indivisible whole, so that it becomes meaningless to say "this part of the universe is me, that part is not": is selfish behavior possible if you deeply believe this? Is fear of death, or of any form of material loss? Those are extreme examples, with extreme consequences for adoption, but the point stands: people's beliefs, their conceptual frameworks (the true contents of which are actually quite hidden from us!), have a pervasive influence not only on how they behave, but on how they interpret their experience.

Let's switch to Grothendieck for a minute. I’ll quote from Pierre Cartier’s essay on him:

“Grothendieck’s favorite method was similar to Joshua’s for conquering Jericho. One must seize the place by sapping it; at a certain point, it succumbs without a fight. Grothendieck was convinced that if one had a sufficiently unifying vision of mathematics, if one could sufficiently penetrate its conceptual essence, then particular problems would be nothing but tests that no longer need to be solved for their own sake.” -Pierre Cartier, “A country known only by name”

This sounds like the same wisdom we hear about refactoring in software: you can change the framework so that the problems within it are easy, or you can work hard solving the problems in the framework. Here’s another quote on Grothendieck's approach:
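That refactoring wisdom can be sketched in a few lines of code (a toy illustration; the functions here are hypothetical, chosen only to make the contrast visible):

```python
# 'Within the framework': each particular problem solved on its own
# terms, one at a time.
def perimeter_square(side):
    return 4 * side

def perimeter_pentagon(side):
    return 5 * side

# 'Change the framework': generalize sufficiently, and the particular
# problems become trivial parameterizations of one idea.
def perimeter_regular_polygon(n_sides, side):
    return n_sides * side
```

Once the general function exists, the special cases no longer need to be solved for their own sake; they fall out as instances, which is the sense in which the quote says the place "succumbs without a fight."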

“My friend Bob Thomason once told me that the reason Grothendieck succeeded so often where others had failed was that while everyone else was out to prove a theorem, Grothendieck was out to understand geometry. So when Grothendieck set out to attack the notoriously difficult Weil Conjectures, the goal wasn’t so much to solve the problems as to use them as a test for the philosophy that if you generalize sufficiently, all difficult problems become easy.” -Steven Landsburg, “The Generalist”

Okay, back to meditation for a minute.

If you try to meditate, you are immediately faced with a problem: “how do I go about intentionally doing something which requires that I do nothing?” If you try to get into a state of flow, you will run into the same problem: it’s characterized by effortlessness; you don’t monitor your actions and decide beforehand which to take next; it's often described as a cognitive mode where you act without taking actions (non-paradoxically: acting without consciously choosing your actions). In the terminology of Taoism, this is called ‘wu wei,’ or ‘non-action.’ To be in this state is a normal part of human experience; it 'just happens' under all sorts of circumstances. The intentional cultivation of wu wei, or flow, is important largely because circumstances often act to prevent it—and if your environment is particularly stressful, those circumstances may be ever-present.

Here's the problem in another form: if practice as a general human activity requires monitoring performance and making corrections, how do you practice the skill of non-monitoring, of intentionally avoiding meta-thought? My proposal is that all valid solutions use a Grothendieck-like method: restructuring one’s personal conceptual framework so that there's nothing to do when it comes time to 'practice,' and yet, in time, you do improve. By contrast, normal practice consists in monitoring yourself as you take some action, then revising your way of taking the action and repeating; this is like proving individual theorems within some existing framework.

As an example of the framework-revision approach: one of the first things one learns when studying Eastern frameworks is a formulation of ‘self’ that differs from the typical Western conception, which identifies the self with what’s often called the ‘ego’ or the ‘will’: that which decides. But if you think that your most essential identity is the part of you that makes decisions, how are you going to take actions without deciding to take them? Interestingly, though, it seems that, most often, the ‘insights’ leading to fundamental personal conceptual revisions are derived from direct, personal experience. Regarding one's conception of self: if you consistently experience yourself doing things without consciously deciding to do them, while still feeling that it was yourself doing them, you are collecting data implying that the ‘ego’ or ‘will’ is too narrow a definition of 'self,' and you are inductively led to a revision. In this way, meditation is a sort of strictly empirical metaphysics. I think the efficacy of this empirical approach is what makes it possible for Eastern philosophy to avoid philosophizing, insisting instead on non-conceptual approaches.

When starting to learn to meditate, people inevitably imagine the process will be similar to other things they’ve learned before, and they generate all sorts of images of what success would look and feel like; they imagine that they’ll one day be able to say, “now I’m going to meditate,” and they’ll begin using this skill called ‘meditation’ that they’ve developed. My understanding is that, in actuality, you will have learned no skill whatsoever; you will have just changed your understanding of self and life so that you arrive at ‘nothing needs to be done’ (where 'be done' refers to intentional action) in more and more situations. Alternatively, it could be said that you more often find yourself in a state of mind where “it’s not necessary that I do anything, but it’s okay to do things if I like,” which is probably also a description of being in the mindset for ‘play.’