### Active Structure - What Does It Mean?

The picture of an ape inside a person's head building a structure to represent what it sees in the world is how we visualise Active Structure. Of course, if you look closely at the ape, it too is made of structure...

We don't believe there is some giant algorithm in our heads determining everything we do. Someone can speak a few words to us, and our behaviour can be changed (for better or worse) for the rest of our lives. The words caused a change in structure, and that structure is active in modifying and controlling our behaviour. We don't question that what we learn can be folded into what we already know, and immediately put into practice. This is obviously not algorithmic.

For almost all our computer systems, we use an entirely different approach: there is data, which we can dynamically modify, and an algorithm, which we can't. We can resort to metadata, but it is still data, requiring something in the algorithm to know what to do with it. Active structure isn't like that - the structure is in control, rather than an algorithm. An algorithm is fine if the world never changes - working out prime numbers, for example - but is fundamentally unsuited to interacting with a changing world. When computers had 4K of memory, a simple algorithm was the only way to go, but people persist with this approach long past the point where it was reasonable.

Taking an extremely simple example:

A = B + C

Here is a piece of knowledge which can be used four ways - finding A, finding B, finding C, or asking whether it is true - and five if we count existence. How will we need to use it? Who knows - so why not let the piece of knowledge itself decide, becoming activated by its inputs, wherever they come from.
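These ways of using A = B + C can be sketched as an undirected node that fires in whichever direction its inputs allow. This is only a minimal illustration - the class and method names are hypothetical, not the actual implementation:

```python
class Plus:
    """A = B + C as a piece of active structure: the node itself
    decides how it is used, based on which values have arrived."""

    def __init__(self):
        self.values = {"A": None, "B": None, "C": None}

    def set(self, name, value):
        self.values[name] = value
        self.propagate()

    def propagate(self):
        a, b, c = self.values["A"], self.values["B"], self.values["C"]
        if a is None and b is not None and c is not None:
            self.values["A"] = b + c      # used one way
        elif b is None and a is not None and c is not None:
            self.values["B"] = a - c      # used another way
        elif c is None and a is not None and b is not None:
            self.values["C"] = a - b      # and another
        elif None not in (a, b, c) and a != b + c:
            # the fourth use: who said it was true?
            raise ValueError("A = B + C does not hold for these inputs")

eq = Plus()
eq.set("A", 10)
eq.set("C", 4)
print(eq.values["B"])  # the structure deduced B = 6
```

Nothing outside the node decides which way the equation is used - the arrival of values determines that, which is the point of letting the structure, not an algorithm, be in control.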

Active structure means building structures and propagating messages in those structures, including messages to change the structure. Another simple example:

A = SUM(List)

List resolves to {B,C}

The structure ends up the same as the first example, but it was built by one element of the structure, and only when the list was resolved. This example is simple but important, because it introduces self-modification - see Knowledge Networks for more.
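The self-modification step can be sketched in the same style - again with hypothetical names, as an assumption about how such an element might behave: a SUM element that builds its own connections only once the list resolves.

```python
class Sum:
    """A = SUM(List): when List resolves, this element of the
    structure builds its connections itself - self-modification."""

    def __init__(self, network):
        self.network = network   # shared name -> value store
        self.terms = None        # unknown until List resolves

    def resolve_list(self, names):
        # List has resolved to {B, C, ...}: build the structure now.
        self.terms = names
        for n in names:
            self.network.setdefault(n, None)
        self.propagate()

    def propagate(self):
        vals = [self.network.get(n) for n in self.terms]
        if None not in vals:
            self.network["A"] = sum(vals)

net = {}
s = Sum(net)
s.resolve_list(["B", "C"])   # List resolves to {B, C}
net["B"], net["C"] = 2, 3
s.propagate()
print(net["A"])  # 5 - the same structure as A = B + C, built at run time
```

The connections from B and C to A did not exist when the program started; one element of the structure created them in response to a message.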

This last example demonstrates the transformation required to convert messages about knowledge into active structure. The knowledge that people have in their heads is active. They can create a static message about that knowledge by transforming the active knowledge structure - by writing it down, like A = SUM(List). They presume the message will be taken up by another active structure (another person), so all of the work involved in the transformation back to active structure goes without saying (and is actually very difficult to describe). If we want to automate the deployment of knowledge, we should be using active devices that carry out the transformation - see Knowledge Management.

Some people build structure that is directed, in the mistaken belief that because a neuron is directed, systems built from it must also be directed (and yet they never question the feedback in superhet radios, engine management, the humble room heater, you name it). Real biological neuronal circuits have dense back-connections, which overwhelm the directional property. If notional wires with their bi-directional conduction are used in a computer-based active structure, many of these back-connections can be avoided.

The examples may have sounded mathematical, but Active Structure is about building structure to accurately represent (and simulate, and therefore predict the behaviour of) anything in the world - see NLP for a wider viewpoint.

But what about "hugely parallel" - how is that going to work on a computer with one or a few processors?

We can't process on a machine in a hugely parallel way, but we can take very small ("atomic") steps and immediately broadcast each result to everything else that is interested - a densely connected structure takes the place of a hugely parallel one, which is also densely connected. It turns out that in reading text, a very fast processor building an active structure in computer memory to represent what the text means runs at about the same speed as an attentive human - but the human will forget most of what the text says shortly after reading it, whereas the computer-built structure remains. In some areas we have no choice but to directly simulate massively parallel activity, and there we use constructionist and diffuse operator techniques - see Extensions to Active Structure.
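The atomic-step-and-broadcast substitute for parallelism can be sketched as follows - a minimal illustration with hypothetical names, not the actual machinery:

```python
log = []   # records each small step as it is broadcast

class Node:
    """One atomic step: change a value, then immediately broadcast
    the result to every connected node that registered interest."""

    def __init__(self, name):
        self.name = name
        self.value = None
        self.listeners = []   # dense connection stands in for parallelism

    def connect(self, other):
        self.listeners.append(other)

    def set(self, value):
        self.value = value
        for node in self.listeners:
            node.notify(self)   # broadcast, one small step at a time

    def notify(self, source):
        # each interested node reacts in its own atomic step
        log.append((self.name, source.name, source.value))

a, b, c = Node("A"), Node("B"), Node("C")
a.connect(b)
a.connect(c)
a.set(7)
print(log)  # [('B', 'A', 7), ('C', 'A', 7)]
```

On a single processor the broadcasts run one after another, but because each step is tiny and every interested node hears about it immediately, the overall behaviour approximates that of a hugely parallel, densely connected structure.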