Systemic complexity often exceeds what a single person can understand. Because of this, modern teaching methods use abstraction and modularity to bring the subject matter closer to the student as painlessly as possible. For example, a student of anatomy is first introduced to the various systems comprising the human body, and then each of those systems is explained in more detail. It is a well-tested and general learning method, independent of the subject matter.
By observing and analyzing a given system, we gradually form a perception of it. How close that perception is to reality depends on the amount of information available to us. In Simulacra and Simulation, Jean Baudrillard examines (among other things) the impossibility of envisioning reality, or rather, the perception of possibility which masks the impossibility of such envisioning.
Inspired by this book and by my ever-growing fascination with complexity, I’m writing down my thoughts on what I perceive as the different orders of perception of systems and system-like phenomena. The concepts described here form the basis of my next project.
First order
By having direct insight into the functioning of a system we gain first-order information about it. Direct insight is impossible.
The illusion of first-order perception is based on having a sufficient number of sources credible enough to make our brain classify the information as true. This is, unfortunately, the definition of second-order perception.
In the process of observing a system there are invariably layers of perception which are implicit and thus ignored. In practical terms, we say that we have direct knowledge of a system only if we can completely observe it using tools and mechanisms which we trust unconditionally. Trust is the key here, and it is trust alone that enables the illusion of first-order perception.
Baudrillard says that the subjective mental models we form of an event or system are only simulations built around the data available to us. Crucially, all simulations are true, and absolute truth doesn’t exist because it is masked by countless different and simultaneously true simulations. Each simulation depends on the multitude of parameters enabling it, and the parameters themselves vary wildly based on the sources of information and the individual characteristics of the person “performing” the simulation.
The problem gets more complicated if we consider systems for which we have no credible information sources whatsoever. For example, let’s consider a few questions:
- How many servers were put to work by my web search query?
- What other data (besides my query string) was used to generate the search results?
- How many times has my search history been used to feed the machine learning algorithms?
There are no conclusive answers. Even if we were to somehow gain access to these bits of information, they still wouldn’t be of the first order. For example, even if someone were to hand us the full source code of the given search engine, it would still be second-order information, because our (false) first-order perception would rely on trust in the person giving us the information.
As a mental exercise, let’s say that first-order perception can be had by an engineer who is the author of the code in question, and who personally compiles, packages, and deploys the code to the production server. Even in this simplified scenario, the engineer’s trust in her tools is crucial – her text editor, her compiler, her deployment toolchain, and so on.
We could go arbitrarily deep into the levels of trust the engineer has in her tools, her operating system, and so on, but the example is illustrative enough without descending further into that rabbit hole. The best that even this “alpha engineer” can get is second-order perception, because her insight must end at some level of abstraction. It is important to distinguish knowledge from perception – knowledge can grow, but first-order perception remains impossible.
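To make this regress concrete, here is a minimal sketch, in Python, of a single link in that chain of trust: checking a tool’s binary against a published checksum. All paths and names are hypothetical; the point is that even a successful check only relocates the trust.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks using the (trusted) hashlib module."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_tool(path: str, published_digest: str) -> bool:
    # A passing check does not yield first-order knowledge of the tool;
    # it merely moves the trust one level down, onto:
    #   (a) the hashlib implementation doing the hashing,
    #   (b) the operating system reading the file, and
    #   (c) whoever published `published_digest` in the first place.
    return sha256_of(path) == published_digest

# Hypothetical usage:
# verify_tool("/usr/local/bin/cc", digest_from_vendor_website)
```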
Another common example is that of systems infected by viruses, or broken into and quietly compromised by malicious hackers. Don’t the engineers maintaining these systems have a “perfect” perception of them as systems working perfectly well, without any problems?
The procession of systems into which we cannot have direct insight cancels the possibility of first-order perception and brings with it the problem of a procession of trust. The illusion of first-order perception is possible only by establishing absolute trust starting from the agreed-upon level of abstraction.
Second order
Second-order perception is established by examining the outcomes of a system and subsequently forming quasi-knowledge about it.
In our everyday interaction with systems, the best we can hope for is second-order perception. The number of systems we can interact with directly (“directly” is a very problematic term, but let’s not go there) is not as large as one might intuitively expect. However, even for these “directly” interactive systems, there is the problem of an incomplete feedback loop.
A complete feedback loop is an idealized process which would give the user complete insight into a system solely by examining its outputs in relation to the given inputs. A complete feedback loop is impossible, for all the reasons mentioned in the preceding section on first-order perception.
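As a toy illustration of why the loop can never close, consider probing black boxes purely through their outputs. The functions below are invented for this sketch; the point is that any finite set of input/output pairs underdetermines what is inside.

```python
def box_a(x: int) -> int:
    return x * x                    # computes the square directly

def box_b(x: int) -> int:
    total = 0
    for _ in range(abs(x)):         # computes the same square by
        total += abs(x)             # repeated addition
    return total

def box_c(x: int) -> int:
    return x * x if -100 <= x <= 100 else 0   # agrees only where probed

# The feedback loop: feed in inputs, compare outputs.
probes = range(-100, 101)
assert all(box_a(x) == box_b(x) == box_c(x) for x in probes)
# All probes agree, yet the three "systems" are internally different -
# and box_c diverges entirely outside the probed range.
```

No additional finite batch of probes changes the situation; it only widens the range on which yet other implementations can silently agree.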
In addition to this inherent impossibility of a complete feedback loop, there is a practical problem – the user’s perception of a system is quite often intentionally limited by the system’s creators. The user is only allowed to scratch the surface, while the bulk of the algorithm plays out below, inside the black box.
All man-made systems – not only algorithmic ones – have this “intentional perception limiting” problem. The creation of successively more complex systems has always implied raising the level of abstraction so that the user is isolated from the complexity of the black box. In day-to-day work, this is not a problem – it’s a convenience.
By going to the bank, you have “touched” the Great Banking Algorithm™. You brought some papers with you, interacted with the clerk, got some papers back. What can that tell you about the inner functioning of the bank?
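In software terms, this intentional perception limiting is simply encapsulation. A toy sketch, with an invented class standing in for the bank counter:

```python
class BankCounter:
    """The only surface the user is allowed to touch."""

    def submit(self, papers: list[str]) -> list[str]:
        # The user sees papers go in and papers come out - nothing more.
        return self._great_banking_algorithm(papers)

    def _great_banking_algorithm(self, papers: list[str]) -> list[str]:
        # Deliberately hidden: arbitrarily deep machinery the user
        # never observes lives behind this private method.
        return [f"stamped: {p}" for p in papers]

counter = BankCounter()
print(counter.submit(["application form"]))   # ['stamped: application form']
```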
Second-order perception is impossible, and its illusion is omnipresent.
Third order
Third-order perception is formed by observing the far-reaching effects of the system. We’re not in direct contact with the system in question and we cannot observe its immediate outputs, but we are aware of its consequences.
A good example is the 2008 global financial crisis. Most of us haven’t the slightest clue as to how this globally distributed financial algorithm functions, nor do we have any direct contact with it, and yet everyone was aware of its catastrophic consequences.
The systems we reason about only through third-order perception are often of such gigantic proportions that an individual or a small group cannot have any influence on their functioning. These systems are deeply entrenched in existing power structures (nations, corporations) and are susceptible to change only under tremendous external pressure – revolutions, financial crises, class action lawsuits, and so on.
Fourth order
There are systems out there which are so abstract and divorced from any possibility of insight that their existence is brought into question. This impossibility of perception only confirms the system’s existence.
If this sounds too abstract, that’s because at this point the very definition of a system/algorithm begins to fade. If we define an algorithm as a series of rules meant to be followed in a predefined order, then the preceding paragraph makes little sense. If, however, we observe the algorithm from a different angle, the definition gets more complicated.
Through observation and analysis we can abstract an existing sequence of actions into strictly defined rules. By defining those rules and their ordering, we have created an algorithm “out of thin air”. By describing a system we bring it into the world of formal perception, gradually, starting from third-order perception and moving towards the nearer orders.
The perception of an algorithm is not necessary to confirm its existence, because all algorithms already exist – it is only the perspective of observation that brings it into the world of understanding. By arbitrarily choosing the perspective of observation and analysis, we can deduce arbitrary rules and thus define arbitrary algorithms out of those rules. It is the perspective that defines an algorithm, and the number of perspectives is infinite. As a practical exercise, compare the perspectives of an astrologer and an astronomer.
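Here is that claim as a toy sketch: the same four observations support two different rules – two “algorithms” – which agree on everything observed and disagree on the very next step. Both rules, and the observations themselves, are invented for illustration.

```python
observations = [1, 2, 4, 8]

def rule_doubling(n: int) -> int:
    """Perspective 1: each term is a power of two."""
    return 2 ** n

def rule_cubic(n: int) -> int:
    """Perspective 2: terms follow the cubic (n^3 + 5n + 6) / 6."""
    return (n**3 + 5 * n + 6) // 6

# Both perspectives reproduce every observation made so far...
assert all(rule_doubling(n) == obs for n, obs in enumerate(observations))
assert all(rule_cubic(n) == obs for n, obs in enumerate(observations))

# ...and then define different algorithms from the same data:
print(rule_doubling(4), rule_cubic(4))   # 16 vs. 15
```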
Do we, then, bring an algorithm into existence by the very attempt of perceiving it? Is this initial spark of curiosity actually triggering an infinite number of observation perspectives, which then decrease in number until only one “true” perspective, and thus only one “true” algorithm, remains?
Is the very act of observation – creation?
Fifth order / anti-perception
While the fourth order of perception implies a certain curiosity and the hint of closer levels of perception (no matter their distance), fifth-order perception is the self-induced illusion of first-order perception. It is based on the false premise of one’s own presence inside the algorithm and on the apparent (but false) ability to influence the algorithm’s functioning from within.
This blindness by the “first” order of perception is all-encompassing. The unlucky soul hasn’t got a clue as to how much “digging” they need to do in order to reach even fourth-order perception. Voting in elections, “healthy” food, regular physical exercise, donating to charity – all examples of grotesque fifth-order perception, anti-perception, quasi-first-order perception, an everyday illusion.
Like all the ones before it, fifth-order perception is – true.