turing machine

2008.11.11

At the core of every contemporary algorithmic machine sits a feedback machine. But next to that core lies yet another, second core: the abstraction machine. The birth of this abstraction machine reaches back far before the invention of the computer, but comes into full view right around 1936 and the publication of Alan Turing's « On Computable Numbers, with an Application to the Entscheidungsproblem » {1}. In this historically significant text, Turing proposes the « Universal Machine », a machine capable of executing an arbitrary series of instructions, and which, through various twists and turns, would more or less become the blueprint for what today we call the computer. If feedback introduced the concept of temporal circularity and what is today often called « interactivity », the Turing Machine would have to be the introduction of temporal indeterminacy and the endless cycle of « repurposing » proper to all algorithmic machines. But oddly enough, this temporal indeterminacy grows out of a strict adherence to a very peculiar form of linearity.

One of the stranger designs of Turing's machine is its iterative execution strand: an infinite ribbon holds a series of discrete instructions that are acted upon by a single moving read/write head, or cursor. While the initial states of the instructions can be placed in advance upon the ribbon, one can only know their final state after the cursor has run through them one at a time and modified them. We are free to record the original state of the machine and the resulting state after it has run through its algorithm, but none of the intermediary states that lead from one to the other can be established until each step has actually been executed.
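This step-by-step contingency can be made concrete with a minimal sketch in Python. The function names and the toy rule table below are my own illustration, not Turing's original notation: a tape of symbols, a single read/write head, and a table of rules, where every intermediary configuration comes into existence only once its step has actually been executed.

```python
def run(tape, rules, state="start", head=0, max_steps=100):
    """Execute one rule per step until the machine halts.

    rules maps (state, symbol) -> (new_symbol, move, new_state).
    The trace of intermediary configurations is produced only by
    stepping; it cannot be read off the tape in advance.
    """
    tape = list(tape)
    trace = []  # record each configuration as it is reached
    for _ in range(max_steps):
        if state == "halt":
            break
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol       # the head modifies the ribbon
        head += 1 if move == "R" else -1
        trace.append("".join(tape))   # a state we now know, having run it
    return "".join(tape), trace

# Toy rule table: rewrite every 0 as 1, halt on the blank symbol "_".
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

final, trace = run("0010_", rules)
# final is "1111_"; trace holds the five intermediary configurations.
```

The initial tape and the final tape are both recordable, but the contents of `trace` exist only as a by-product of running the loop.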
In other words, there is no temporally transcendent perspective that would allow us to grasp the « proof » of the algorithm without taking every step into account and running it through the machine. It is precisely at this point that the Turing Machine stops being an exercise in resolving computability and becomes the blueprint for contemporary modular machines: by creating a series of linear steps that are entirely contingent on one another temporally, it becomes possible to construct a potentially infinite number of machines within the Turing Machine simply by adjusting the state of each individual (future) step. If the machine does not fully know what it must do at step X, Y or Z until it actually gets there, nothing stops us from transforming the states (and therefore the « role » or « purpose ») of those steps before the machine arrives. The only constraint, and a significant one, is the requirement to respect the protocol of what the machine considers valid (or « legal ») instructions in order to function. Within this far-from-benign limitation, however, an infinite number of algorithmic structures can be constructed, including non-linear structures that simulate massive parallelism, such as the « multitasking » computers of today. The Turing Machine frees machines from ontological determinism, and yet it does so within an entirely determined (i.e. purposeful) design.