Almost all of our machines are characterised by components connected together to produce behaviour not characteristic of the components alone.
In a heater, the thermostat turns the heating coil on and off.
In an aircraft, the engine pulls the wing along and the wing lifts the engine.
Machines are characterised by connections among their components - connections that are either strongly directional, like the thermostat-to-coil connection in the heater, or bi-directional, like the engine-wing connection. There are also implicit connections - it is assumed that turning the heating coil on will heat up the room, leading the thermostat to turn the heater off - so the heating coil is "connected" to the thermostat through the air.
Connection is an iron discipline - not connected, no effect.
Not all machines have a static structure:
- An aircraft lowers its undercarriage - this may sound trivial until you try to apply the plane's weight to its folded undercarriage.
- A reverse cycle air conditioner changes its plumbing and its control system connections to change from one mode to the other.
- A mobile crane travels to a site, jumps off its wheels onto its feet, then uses itself to erect itself.
- A computer sets up its memory latches to access one address out of millions.
All these changes of machine topology we take for granted.
The only exception to this machine paradigm is the sequential instruction computer - the one you are using to read this document. It operates by executing a stream of instructions. So what's the problem?
After each instruction, the state of the machine or the world may have changed - all the computer knows to do is to execute the next instruction. If we suddenly decide we want it to do something else, executing the next instruction may not be the best policy. One way of overcoming this is to interrupt it and make it do something else. When it returns from the interruption, it will execute the next instruction from where it left off.

The interruption may have lasted a tenth of a second or an hour, and the states on which it is implicitly relying to carry out the next instruction may have changed in the meantime, so it may do something trivially or disastrously wrong. We could put checking before and after every instruction, but this still wouldn't work, because the interrupt can occur anywhere - within the checking itself, or immediately after it. We could make the checking and the instruction 'atomic' - a minimal packet of activity that cannot be interrupted - but as states become increasingly complex, the checking would expand to enormous proportions, and this is not a practical solution.
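The check-then-act hazard and the 'atomic packet' fix can be sketched in a few lines of Python (the account balance and the withdraw functions are illustrative, not from the original):

```python
import threading

# A hypothetical shared state that the "next instruction" implicitly
# relies on.
balance = 100
lock = threading.Lock()

def withdraw_unsafe(amount):
    # Check-then-act: a thread switch (an "interruption") can occur
    # between the check and the action, so the state the check
    # validated may no longer hold when the action runs.
    global balance
    if balance >= amount:
        balance = balance - amount

def withdraw_atomic(amount):
    # Bundling the check and the action into one uninterruptible
    # packet restores correctness - but only for this one state;
    # every implicitly relied-on state would need the same treatment.
    global balance
    with lock:
        if balance >= amount:
            balance = balance - amount

# Three concurrent attempts to withdraw 60 from 100: with the lock,
# exactly one succeeds, whatever the interleaving.
threads = [threading.Thread(target=withdraw_atomic, args=(60,))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock makes the outcome deterministic; without it, two overlapping checks could both pass and drive the balance negative - exactly the "interrupted between check and instruction" failure described above.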
The other thing that a sequential computer does, or at least its program does, is reach out and change things - put a value in an address in storage. What happens if other parts of the program want to know about that change? The program needs to be written so that other parts are activated on the basis of the change, but we may not know about the connection when the program is written, or the other parts may only want to know conditionally, and we are back to the same problem - we have to add so much around every instruction to prevent something going wrong that it becomes unworkable in complex situations. But isn't OO programming supposed to get around this problem? In the main, it assumes static classes and directed behaviour, and changes in the classes can easily bring the whole structure down.
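The conventional workaround is the observer pattern: every part that wants to know about a change must be explicitly wired to the storage location in advance, which is exactly the "we must know the connection when the program is written" limitation. A minimal Python sketch (the class and method names are illustrative):

```python
class Cell:
    """A storage location that notifies other parts of the program
    when its value changes - a minimal observer sketch."""
    def __init__(self, value=None):
        self.value = value
        self.observers = []        # every connection must be wired by hand

    def connect(self, callback):
        self.observers.append(callback)

    def put(self, value):
        self.value = value
        for notify in self.observers:
            notify(value)          # only pre-registered parts ever hear of it

temperature = Cell(20)
log = []
temperature.connect(log.append)    # this wiring must exist at write time
temperature.put(25)                # the observer fires
```

Any part of the program that was not wired up before the change simply never learns of it - the connection discipline has to be rebuilt by hand, case by case.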
Our brains operate as a huge finite state machine with self-modifying connections, through which ensemble messages are sent. 'Finite' doesn't mean too much here - we have a hundred billion neuronal cells, each of which can modify its own connections, so let's leave 'finite' out of the description. 'Ensemble' means that the messages are complex and changing. Each cell operates atomically - once it is activated, it will fire without further outside control. By dint of back-connections, structures can self-excite, making the directional properties of the neuronal cells irrelevant - the knee reflex being a simple example.
If we encapsulate atomic operations in little boxes, and connect those boxes together with pipes that can carry ensemble messages and arrange to mask directionality, we begin to emulate some of the properties of our own mental apparatus.
One example is a PLUS operator, uncommitted as to direction, and uncommitted as to number of connections. Values can come in any connection, and flow out on any connection.
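Such an undirected PLUS node can be sketched as a small constraint element in Python (a minimal sketch; the class and method names are assumptions, not from the original):

```python
class Plus:
    """An undirected PLUS node: terminals t[0] + ... + t[n-2] = t[n-1].
    A value may arrive on any terminal; when only one terminal is still
    unknown, the node solves for it - no terminal is fixed as 'input'
    or 'output'."""
    def __init__(self, n):
        self.values = [None] * n

    def set(self, i, value):
        self.values[i] = value
        self.propagate()

    def propagate(self):
        unknown = [i for i, v in enumerate(self.values) if v is None]
        if len(unknown) != 1:
            return                     # not enough information yet
        i = unknown[0]
        *addends, total = range(len(self.values))
        if i == total:
            # the 'result' terminal is the unknown one
            self.values[i] = sum(self.values[j] for j in addends)
        else:
            # an 'addend' terminal is the unknown one - flow reverses
            self.values[i] = self.values[total] - sum(
                self.values[j] for j in addends if j != i)

p = Plus(3)      # a + b = c, three terminals
p.set(2, 10)     # the 'result' happens to arrive first
p.set(0, 4)      # then one addend - the node infers the other (6)
```

The same node computes 4 + 6 = 10 or 10 - 4 = 6 depending purely on which values arrive; direction is a consequence of the data, not of the node.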
Here is an example of a FOR loop, with GET and PUT operations occurring within the loop cycle - a pattern typical in programming.
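In conventional sequential form, the pattern being modelled looks something like this minimal Python sketch (the array contents and the doubling operation are illustrative):

```python
values = [3, 1, 4, 1, 5]
results = []
for i in range(len(values)):   # the FOR loop cycles an index
    x = values[i]              # GET - read a value out of storage
    results.append(x * 2)      # PUT - write a derived value back
```

In the structural form, the loop counter, the GET and the PUT become connected operators, and the cycle is signalling around the connections rather than a sequence of instructions.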
An example of the structure surrounding an alternative activity in a project plan.
This is typical of a machine approach - build and add structure to create a complex environment. The ACTIVITY operator only needs to maintain consistency of the values on its connections, and so do all the other operators that surround it. Attempting to handle this directly in a program, the programmer becomes lost in an increasingly complex environment, not knowing which requirement to allow for or to respond to next.
These examples are of static structure - the state machine paradigm would be rather boring if it could not change its structure during its operations, as a consequence of those operations.
A simple example of self-modification:
X = SUM(List)
It looks trivial, but there are two different machine elements here - one that connects things, and one that adds numbers. Keeping them separate but connected has significant advantages.
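That separation can be sketched in Python (the class names Adder and Connection are illustrative): the connecting element carries values and knows nothing about arithmetic, while the adding element knows nothing about where its values come from or go to.

```python
class Adder:
    """The element that adds numbers - it knows nothing about
    where its values come from or go to."""
    def fire(self, inputs):
        return sum(inputs)

class Connection:
    """The element that connects things - it carries a value
    and knows nothing about addition."""
    def __init__(self, value=None):
        self.value = value

# X = SUM(List), built from the two kinds of element kept separate
List = [1, 2, 3]
inputs = [Connection(v) for v in List]
adder = Adder()
X = Connection(adder.fire([c.value for c in inputs]))
```

Because the connections are elements in their own right, the structure can be rewired - elements added to the list, the adder swapped for another operator - without touching the part that does the arithmetic.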
See Presentation - Simple Self-Modification
Information Extraction from free text is a good example of the flexibility and range of the state machine paradigm - something we do so easily, and yet with which sequential machines and their programs struggle so badly.
See Presentation - Words to Knowledge
By changing from the execution of sequential instructions to signalling in a "soft" structure, we can change sequential computers in a way that allows us to enforce the same discipline that physics enforces on all our other machines - influences flow only through connections.