TL;DR: what if we generalized the notion of an algebraic system differently than abstract/modern algebra has, focusing instead on something that may be closer to the heart of algebraic systems’ utility: the ability to mechanically alter related mathematical expressions while preserving their underlying meaning, or giving up known degrees of it? It seems possible to me that algebra as we know it retains arbitrary vestiges of its origins in manipulating symbol sequences (e.g. the use of inverses to move symbols between related statements), and that new insights might be reached by generalizing from a basis unrelated to sequences or the limitations of handwriting.
Note: There's an annoying mistake running through the article where the only operation type I discuss is 'ortho-semantic'. I realized later there is likely a hierarchy of three operation types: 'ortho-semantic', 'same-relation-preserving', 'some-relation-preserving'. It needs some editing 0_0
There probably exists a category of systems which exhibit some of the most interesting and useful aspects of algebra, yet are very dissimilar in other respects. A good starting point for seeking them out is to consider what is arbitrary and what is significant in the operation of extant algebraic systems: in other words, what is their ‘essence’, and what are their incidental aspects?
My understanding is that their main utility lies in providing a systematic means of exploring alternate, equivalent representations of values. They let us take two representations which each evaluate to the same thing, and give us a set of rules for transforming only the representations while the value stays the same (I call these transformations ‘ortho-semantic’ operations since they do not affect the meaning/evaluation of representations; they are orthogonal to semantics). By shifting representations around in this way, while maintaining consistent relations between them, we can discover patterns and relationships we would never otherwise have expected to exist.
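To make this concrete, here is a minimal sketch in Python of a toy system with a semantics (an evaluate function) and a single ortho-semantic operation (commuting the operands). The names and the toy syntax are my own illustration, not an established formalism:

from dataclasses import dataclass

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def evaluate(expr):
    # The 'semantics': map a representation to its value.
    if isinstance(expr, Add):
        return evaluate(expr.left) + evaluate(expr.right)
    if isinstance(expr, Mul):
        return evaluate(expr.left) * evaluate(expr.right)
    return expr  # a bare number

def commute(expr):
    # An 'ortho-semantic' operation: it changes the representation
    # but never the value it evaluates to.
    if isinstance(expr, Add):
        return Add(expr.right, expr.left)
    if isinstance(expr, Mul):
        return Mul(expr.right, expr.left)
    return expr

e = Add(2, Mul(3, 4))  # the representation '2 + 3*4'
assert evaluate(e) == evaluate(commute(e)) == 14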
One especially common and important goal of carrying out these transformations is to ‘solve for’ unknown elements: take a pair of equivalent representations and transform them until one consists only of a single atomic representational element whose value is unknown, while the other may be more complex but readily evaluated. This may generalize to something like: representations may include ‘placeholder’ elements which cannot be evaluated in isolation; however, when equivalent representations are put into some kind of correspondence (e.g. placed on either side of an equals sign in traditional algebra), applying sequences of ortho-semantic operations to them may put the system into a state in which the correct evaluation of a ‘placeholder’ element is unambiguously revealed.
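As a hedged sketch of the ‘placeholder’ idea (again in a toy syntax of my own invention), putting two representations into correspondence and transforming them together until one side is a bare placeholder might look like this:

from dataclasses import dataclass

@dataclass(frozen=True)
class Placeholder:
    name: str

def isolate(lhs, rhs):
    # Transform the pair of corresponding representations together
    # until the left side is a bare placeholder; its value can then
    # be read off the (readily evaluated) right side.
    while not isinstance(lhs, Placeholder):
        op, a, b = lhs
        if op == 'add':
            lhs, rhs = a, rhs - b  # one traditional, inverse-based step
    return lhs.name, rhs

# 'x + 3' corresponds to '10'; the placeholder's value is revealed.
assert isolate(('add', Placeholder('x'), 3), 10) == ('x', 7)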
I think viewing the situation in that way frees the imagination a little to explore new forms of systems exhibiting those properties, while implementing them in potentially radically different ways.
My suspicion is that some of the specific features of algebra as we know it are accidental outgrowths of the historical fact that we had to do these representation transformations by writing symbols on paper. Especially given our familiarity with written language, this would bias us to a sequential, symbolic representation. However, (what I claim are) the essential characteristics of algebra—i.e. the capability of systematically transforming equivalent representations without changing their value/meaning—do not depend intrinsically on symbol sequences.
And the kicker is: the ortho-semantic operations in the various algebras we use all depend on the notions of inverse and identity, since the two together provide a simple means of moving a symbol (or group of symbols) from one representation to its equivalent; and transforming representations in that manner is practically forced when your machinery for carrying out and recording the transformations consists entirely of symbols drawn on paper, by hand, in pencil. That’s my central thesis here: we have assumed identity and inverses are part of the essence of systems that behave like algebras, but maybe they aren’t as necessary as they seem.
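To spell out how inverse and identity function as symbol-moving machinery, here is the familiar derivation written step by step:

x + 3 = 10
x + 3 + (-3) = 10 + (-3)    (attach the inverse of 3 to both sides)
x + 0 = 7                   (the inverse pair collapses to the identity)
x = 7                       (the identity vanishes)

The net effect is that the symbol ‘3’ has been moved from one representation to the other; inverse and identity are the bookkeeping that makes the move legal.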
In the way mathematics has presently generalized specific algebraic systems into an abstract theory, inverses and identities play a very central role. I wonder two main things: 1) Could specific, alternate algebra-like systems be developed which have similar or greater power than traditional algebras, yet do not depend on inverses and identities? 2) Could a generalized mathematical theory be developed which deals more directly with what I claim is at the heart of algebra’s operation (systematically transforming equivalent representations without changing meaning/value), rather than focusing on a particular implementation of that behavior which happens to require inverses and identities? I know many will say that the rich theory around algebra is a sort of proof that it’s the ‘correct’ track; I would counter that there is plenty of space in the realm of pure mathematics for more such rich theories. One could also point to uses of abstract algebra in the physical sciences as proof of the same thing, but if I’m not mistaken that ground is often already covered by alternate theories as well (e.g. category-theoretic formulations), so I don’t take the success of pre-existing systems as proof of any kind of ultimate correctness.
I think any algebra-like system must have these parts (and only these parts?); a minimal code sketch follows the list:
- A syntax (i.e. a definition of allowed symbols and how they may be arranged into statements; it doesn’t actually have to be symbol sequences, though: any consistent representation whose rules can be stated is fine).
- A semantics: a mapping of syntactically valid statements into some other domain of ‘values’. For the system to work well, the semantics should frequently map multiple distinct syntactic statements to the same value.
- A set of ortho-semantic operations, describing the ways in which one may convert sets of statements into equivalent sets of statements (equivalent in that the semantics would map the statements to the same values).
- (NOTE: this needs to be modified to take the introductory note at the top into account.)
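Here is that minimal sketch: one possible packaging of the three parts in Python. Everything here (the names, the toy instance) is my own illustration, under the assumption that the defining invariant is ‘an ortho-semantic operation may change the representation but not its value’:

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class AlgebraLikeSystem:
    is_valid: Callable[[Any], bool]   # the syntax
    evaluate: Callable[[Any], Any]    # the semantics
    ortho_ops: List[Callable]         # the ortho-semantic operations

    def respects_semantics(self, op, rep):
        # The defining invariant: applying an ortho-semantic operation
        # yields a valid representation with an unchanged value.
        out = op(rep)
        return self.is_valid(out) and self.evaluate(out) == self.evaluate(rep)

# Toy instance: representations are tuples of numbers, the value is
# their sum, and the single ortho-semantic operation reverses the tuple.
toy = AlgebraLikeSystem(
    is_valid=lambda r: isinstance(r, tuple),
    evaluate=lambda r: sum(r),
    ortho_ops=[lambda r: tuple(reversed(r))],
)
assert toy.respects_semantics(toy.ortho_ops[0], (1, 2, 3))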
—maybe the general theory of these systems would use those ‘parts’ as its central terms?
One thing that occurs to me is that it’s not strictly necessary to isolate single variables (i.e. ‘solve for’ them) in order to discover unknown values in other algebra-like systems. It’s necessary in traditional algebra because, given something like x + y + z = a + b + c, where x, y, and z are unknown and a, b, and c are known, it’s ambiguous which variable maps to which, since addition is commutative. In alternate, algebra-like systems, you might be able to perform an ortho-semantic operation which causes a number of representational parts to align unambiguously and have their meanings revealed through correspondence with the ‘partner representation’ (I would call each side of an equation one of the ‘partner representations’ in traditional algebra).
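A toy illustration of that alignment idea (purely my own, and deliberately trivial): in a representation where position matters, correspondence alone reveals several unknowns at once, with no isolating steps:

# With commutative addition, x + y + z = a + b + c leaves the pairing
# of unknowns to knowns ambiguous. In an ordered representation, the
# correspondence between the two partner representations is positional,
# so all three placeholders are revealed in one step.
unknowns = ('x', 'y', 'z')  # placeholder elements, in order
knowns = (5, 2, 9)          # the readily-evaluated partner representation

solution = dict(zip(unknowns, knowns))
assert solution == {'x': 5, 'y': 2, 'z': 9}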
My best guess is that any alternate, algebra-like systems which are constructed will exist only in software, and would be very inconvenient to work with on paper (or at least the set of such systems is much larger and less explored, so we’re more likely to find something there). The ortho-semantic operations could also be much more complex, such that there is no simpler way to state exactly what one of them does than to read the algorithm that implements it. So a user of one of these alternate algebra-like systems would probably press buttons corresponding to the ortho-semantic operations in order to shift the representations around and investigate relationships.
——————————
Another mostly unrelated idea:
Why is it that we mathematically represent physical laws with equations? Generally speaking, what we’re attempting to document is how the states of physical systems evolve in time after some operation occurs; so wouldn’t it make more sense to use a representation like
[state 1] {static collision} [state 2]
—where two states are related by an operation? If we were to develop an algebra-like system around this, every operation would have its own set of ortho-semantic operations (kind of like the different rules that apply if you relate two algebraic expressions by ‘<’ or ‘>’ instead of ‘=’). That sounds like a lot of work, but there may be a more general system which can automatically produce the set of ortho-semantic operations for a particular physical operation (like ‘static collision’ in this example).
Actually, though, I don’t think this would work. Looking at a more concrete example:
[
[mass: 5, elasticity: 0.1, position: {0,0,0}, velocity: {0,0,0}],
[mass: 8, elasticity: 0.25, position: {10,0,0}, velocity: {-2,0,0}]
]
{static collision}
[
[mass: 5, elasticity: 0.1, position: {0,0,0}, velocity: {-3,0,0}],
[mass: 8, elasticity: 0.25, position: {0.234,0.43,0}, velocity: {0.5,0,0}]
]
It is interesting, though, that scientific laws aren’t generally stated in imperative form (i.e. ‘if I have a physical system in state X and do BLAH to it, Y will be the resulting state’); my guess is that we use equivalence relations instead because they are our only means of ‘doing theory’: encoding results in equations and then looking for new relations by applying ortho-semantic operations to algebraic representations. That probably shapes our view of science quite a bit, in a very Sapir-Whorf manner.
——————————
From the comments:
The first thing you describe reminds me of unrestricted grammars and the mathematical plumbing around Gödel's Incompleteness Theorem.
The second bears a VERY strong resemblance to S-matrix calculations from quantum theory: the S-matrix is an operator which transforms a wavefunction at an initial time into a wavefunction at a later time. This framework can be used to analyze everything from radioactive decay to particle collisions at the LHC.