Often lay people and philosophers talk about effecting change as proof of free agency or the will to act. It’s a mess of interacting parts. Our brains. Our society. The atoms bashing around in our sun. It’s unpredictable, and none of us, even sitting right in the middle of it, know for sure what will happen next. Your mind anthropomorphizes this unpredictability and inserts agency. It picks out singularities and characters of the story for the sake of an explanatory narrative.

Autonomy is the concept muddying the waters of responsibility and influence here. It’s a shorthand math we do. Autonomy is a vague, absolutist concept that can’t be carried very far without dissolving. Like infinity. It’s a hand-waving approximation about something far outside the scope of what you’re talking about (‘Let’s just assume this keeps going forever; back to the problem at hand’). The problem with autonomy is that nothing is actually autonomous. You peel back layers of cause-and-effect in hopes of finding a core of autonomy, but it doesn’t exist. There is no mini-god presented with an infinite array of choices hidden somewhere in the core. Willfulness has more to do with awareness of what will happen than with any real autonomy.

Well, applying the purest concept of willfulness is a conceptual paradox akin to ultimate power (the classic being: can God make a weight so heavy that he cannot lift it himself?). Absolute free will is exactly that. A non-existent, paradoxical concept. The reality is that we have laws that assign culpability and prosecute people who violate the liberties of others because that works better than not having those laws in place. The agents in our social system, willful or not, will take into account these very likely costs of acting that way, costs our society has added in the form of a legal system, and modify their behavior and strategies.

If that’s not clear, imagine trying to do paperwork outside in the park. You use paper clips and any weights you can find to keep the pages from blowing around. You can assume willful intent on the wind’s or the paperwork’s part, but the wind is just kicking them around and you’re just doing your best to keep them from flying off. Whatever narrative you impose in the form of motivations and autonomy doesn’t change the reality of the actions or events around you that you’re doing your best to handle.

When you try to train your dog and you’re confronted by its sporadic, random behavior, you could infer all sorts of free will out of it. In your mind, the fact that it’s not perfectly predictable is what implies a will to do anything it wants. Although I guess philosophers and theologians might argue that pets have free will as well. I don’t know where they stand on that one. Try a swarm of insects instead. Or a hurricane.

Autonomy is just a rough concept. It emphasizes that there remain inaccessible, unpredictable behaviors in our daily experiences. How true is this? How theoretical do you want to get? This is how philosophical discussions get inane. Certainly we can extend predictability with more understanding, brain scanning, etc. Philosophers might think it’s not an inane quest to dig deep into word definitions, but it is. Without fresh evidence, or relative definitions, it’s pointless.

What is the ultimate implication of ‘free will’? As we peel back the mechanisms of the brain, somewhere, will there be dice? A random number generator? Yeah, sure. That idea might explain something. But just as plausibly, it could simply be that complicated underlying neural calculations, inaccessible to monitoring, produce behavior that is not easily guessed. In this way, the same thing that has been explaining everything else we know, deterministic mechanisms, continues to explain people, too. To some extent. As all explanations can only hope to do.

To further beat this philosophical donkey, a very human concern is that somehow, human behavior becomes perfectly predictable if we don’t have ‘free will.’ That a universe of determinism is just playing out. That we have no autonomy.

Well, that is true. But it’s also true that we don’t have the capacity to simulate the universe, or even a person. Nor do we have perfect knowledge of all the factors. From our points of view, there’s plenty that remains unpredictable in a universe that meticulously follows patterns.

Social behavior is motivated by legal consequence, by slow changes in cultural values, by peer pressure, by a ton of things. These are all causes and effects. Thin-slicing the outcomes by effectiveness and costs, in all their forms, is what leads us to better systems. These concepts and narratives are just broad brushstrokes that only work in certain contexts at certain resolutions. They are word shortcuts, and we should be aware of that.

Your mind does its calculus on all these inputs. Willful intent works insofar as it describes a concept, as shorthand, for legal purposes of culpability. Not as an actual indictment of any real autonomy, but because that’s how our social behavior works. We think it works better with enforced disincentives. Again, only to alter the calculus of the automaton brain of a potential criminal. Or to remove from society, or limit the actions of, a potential repeat offender.

The fact of the matter is that our information processing organs are taking in information and optimizing their priorities, with limitations. Any behavior that falls outside of clear, obvious causation, to us, as outside observers, we narratize as ‘free will’. Why did that guy get drunk and streak past the police officer? Free will. Yeah, it’s a ballpark explanation for sure.

Instead of asking ‘why?’ or even ‘how?’ in a serial chain of questioning about the nature of the universe, let me suggest an alternative contemplative exercise and ask ‘so what?’

Consider, if you can: just because someone is able to guess or predict what will happen (and it happens), why would that imply they have any sort of control or will over it? They knew it was going to happen. So what? Say they think they even planned it. So what?

In the past, once in a while, the shaman got the rain report right. He may even have shaken a bone in a particular way at the sky in all those cases.

Autonomy is the dismissive, shorthand approximation about the agency of something, really because you can’t figure out how something or someone is going to act. This makes sense when nations worry about other nations. Or when you worry about the guy two seats over on the subway. But, if you were able to look hard enough, more and more of their behavior is explainable. Whether or not all their behavior is explainable, every single bit of it is caused by something. They cease to be an impenetrable encapsulation, snowflake-like in their uniqueness, and start to be broken down into parts that are more common and identifiable. That are less autonomous. Deconstructed. Driven by common priorities. Sometimes you can do this a little, sometimes you can do this a lot.

Sometimes you don’t care.

I’m flinching at the thought that the last statement will only bring on an onslaught of witless criticism. I can hear the philosophers telling me there’s autonomy implied in the decision not to care. I guess I will assert my free will this one time and parse this a little deeper. To care a little more, if you will. How does that work?

Caring. Interest. Boredom. This happens all the time. Our exploratory capabilities have limitations of time and energy. I was half-tempted to not even write anything. Just call out the loops of logic for the waste of effort they are and refuse to engage in any of this. Why do you not care sometimes? Perhaps you were actually physically incapable of engaging and generated a self-protective narrative. Perhaps your mental algorithms just decided not to. Why did they decide not to?

Well, let’s go deeper into this hole and deconstruct us some rabbits. Your mental algorithms weren’t capable of assessing it at some level. Or perhaps an assessment could be done, but the algorithms monitoring those algorithms, with their back-of-the-envelope calculus, did the math on the effort in terms of resources and time against the potential informational and resource reward, and said screw it. These meta-algorithms run in the background and float up into perception space as feelings. Interest. Not caring. It’s a shorthand assessment your brain does to itself.

Say you’re reading this right now on a subway, noticing that slightly shady character two seats over. Your brain not only assesses them with complex calculus, it also assesses how much assessing is worthwhile. Your consciousness, you, your perception space (pick one), with its expectant attitude, gets the abbreviated memo on these things as an assortment of salient features that stand out to you. Unkempt hair, dirty pants, darting eyes. Or, time and resources not permitting, your consciousness only gets the subject line of the memo, in the form of a wordless gut feeling: Meh, he’s okay.
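
If it helps to see that cost-benefit shorthand laid out, here is a minimal toy sketch (nothing like an actual brain model) of a meta-assessment deciding whether a detailed look is worth the effort, then passing up either the full memo of salient features or just the gut-feeling subject line. Every function name, threshold, and number in it is an invented illustration, not a claim about neural machinery.

```python
# Toy sketch: one process does the assessing, another does the math
# on whether the assessing is worth it. All values are made up.

def meta_assess(estimated_reward, estimated_cost, budget):
    """Back-of-the-envelope calculus: is a closer look worth the effort?"""
    return estimated_reward > estimated_cost and estimated_cost <= budget

def assess_stranger(features, budget=1.0):
    # Rough guesses at informational payoff vs. the effort of a close look.
    estimated_reward = 0.8 if any(f["salience"] > 0.5 for f in features) else 0.2
    estimated_cost = 0.2 * len(features)

    if meta_assess(estimated_reward, estimated_cost, budget):
        # Full memo: the salient features that "stand out to you."
        return [f["label"] for f in features if f["salience"] > 0.5]
    # Subject line only: a wordless gut feeling.
    return "meh, he's okay"

observations = [
    {"label": "unkempt hair", "salience": 0.6},
    {"label": "dirty pants", "salience": 0.7},
    {"label": "darting eyes", "salience": 0.9},
]
print(assess_stranger(observations))              # detailed memo of salient features
print(assess_stranger(observations, budget=0.5))  # time short: just the gut feeling
```

The only point of the sketch is the structure: the assessment and the assessment of the assessment are two separate, cheap calculations.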

In fact, the grey mass telling your eyes to dart over momentarily to collect a bit more information, or again a third time to really get a good stare, is making assessments the whole way. The quick scan you did over the occupants of the train car when you first entered, while mostly trying to find a non-sticky seat and not drop your stuff, may have been relatively unconscious.

These assessment-of-assessment recursions do not go on infinitely. There may be many layers of them, yes. These layers may be software in nature rather than hardwired as actual loops of neurons, which gives them a seemingly limitless looping ability. For sure. Manifested in neuroses and revealing insights alike.

They don’t just assess environmental information. They assess the output of their own assessment, a self-assessment of the assessors if you will. A lot of human conversation takes place at multi-layered levels like that.

Any complex information system does this. Disciplines write reviews, meta-reviews, and editorials. Corporations do internal assessments and market assessments, and bring in process consultants. People are perceptive, introspective, and contemplative. The recursive possibilities may be vast, but the part of your mind assessing whether all that assessing is worthwhile is doing its own math, too (it’s why you stop listening to, watching, or reading something). And at some point, like all information systems, an assessment process goes: close enough, moving on. Your brain, an information processing organ working purely in informational currency, tries to estimate the subsequent value of doing something, whether that’s having a new experience, reviewing something old, or sitting and waiting.
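
As a cartoon of that ‘close enough, moving on’ cutoff, here is a small sketch in which each extra layer of assessment refines an estimate a little but costs effort, and the loop stops once the expected gain of another pass no longer beats its cost. The halving rate, the cost per pass, and the names are all assumptions made up for the illustration.

```python
# Toy sketch of "close enough, moving on": recursion bottoms out when
# the marginal value of another assessment pass drops below its cost.

def assess_recursively(initial_uncertainty, cost_per_pass=0.1, max_layers=20):
    uncertainty = initial_uncertainty
    layers = 0
    while layers < max_layers:
        expected_gain = uncertainty * 0.5   # assume each pass halves the remaining uncertainty
        if expected_gain <= cost_per_pass:  # marginal value no longer worth the effort
            break                           # close enough, moving on
        uncertainty -= expected_gain
        layers += 1
    return layers, uncertainty

layers, leftover = assess_recursively(initial_uncertainty=1.0)
print(f"stopped after {layers} layers with {leftover:.2f} uncertainty left")
```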

It’s in these recursive assessments that people confuse complexity for the emergence of some kind of mystical ether, a ghost spirit or a mini-god with the autonomy to act however he or she wants. This is ignorance retreating into whatever terrain we haven’t yet characterized at our current resolution. We are characterizing the behaviors of individuals, societies, families, and cultures in our psychological, sociological, and brain sciences. Trying to characterize mass behavior, ethical behavior, genetic behavior.

New-age spirituality thrives on painting this deconstruction as a reduction of the complexity of who we are. A removal of spirit. It’s not. I’m not saying that I see anything but tremendous complexity when I look at another individual. Hell, I’ll even grant you that they can have a spirited nature.

Our previous observations about human behavior don’t suddenly go away. Our explanations, however, can be completely rewritten.

Instead of a soul, essentially a mini-god inhabiting a body (as aggrandizing and comforting a concept as that is), we explain people with layered motivations. Other people’s behavior ceases to look as erratic and mysterious. We may lose some of that old Iron Age allure (or rather, Information Era allure, given the pace of progress on this one), but trust that it will be replaced with other interesting things that provide the attractive mystery this idea once did. Other explanations, including a miniature ‘you’ behind the levers of your brain or some kind of universal ‘subjectivity’ renting out your brain space, have the same problems. If these explanations are unfamiliar to you, I apologize for exposing you to them.

As a side note, your ‘attention,’ a somewhat conflated concept in the field of neuroscience, speaks to a ‘spotlight’ of your mind being captured by a sudden bright color, or to a purposeful redirect of that mental spotlight. Of course, this redirect capability isn’t a monolith, nor does it involve the universal control that descriptions of it often imply.

It is the exercise of various neural systems recognizing differences. At many different levels. Either your memory retrieval system gets momentarily and involuntarily activated. Or your eyes are unconsciously redirected for a split second. Or a physiological reaction, such as a minor adrenaline response or an associated feeling, becomes remembered or generated by an environmental or a mental cue. These are the activations and combinations of different information systems in your head. Attention, like ‘consciousness,’ is a rough summary of these system interactions, but we should keep in mind that it’s a shortcut explanation and not an actual entity when looking at functional brain data.

Depending on the resolution of our explanation, we can explain all these complex behaviors of the mind differently. For instance, viewing the brain as a pure information system hooked up to a bunch of biological sensors and trying to increase its utility, you could say there’s a psychoinformational drive, balanced against all its other priorities, to acquire new, relevant information. Or, put another way, it’s drawn to potentially useful novelty.

It works from the lowest levels of your neural systems all the way up. Motion draws your attention against a visual field of stillness. A bright color against a background will not only be spotted but will cause your eyes to redirect towards it.

This works for higher-level features, too. An unusual face you see in a crowd on the morning commute. Even a single feature of a face, like unusual eyes. Or an unusual gait. Anything that stands out, at any level of abstraction, compared to past experience. Interesting concepts and ideas are ones that stand out as unusual against the background of what you’ve been exposed to.
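
A crude way to picture ‘standing out against past experience’ is a running baseline plus a deviation score: keep a short history of some feature, measure how far each new observation sits from that history, and let the outliers be what grabs attention. The sketch below does exactly that with made-up numbers; the class name, window size, and threshold are assumptions for illustration, not anything from the neuroscience literature.

```python
# Toy novelty detector: score each observation by how far it deviates
# from a running history, and flag the outliers as attention-grabbing.

from collections import deque

class NoveltyDetector:
    def __init__(self, window=50, threshold=2.0):
        self.history = deque(maxlen=window)   # recent past experience
        self.threshold = threshold            # how unusual is "unusual"

    def surprise(self, value):
        if len(self.history) < 2:
            return 0.0
        mean = sum(self.history) / len(self.history)
        var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
        std = var ** 0.5 or 1e-9
        return abs(value - mean) / std        # rough z-score against the past

    def observe(self, value):
        grabs_attention = self.surprise(value) > self.threshold
        self.history.append(value)
        return grabs_attention

detector = NoveltyDetector()
stream = [0.1, 0.12, 0.09, 0.11, 0.10, 0.95, 0.11]  # one value stands out
print([detector.observe(v) for v in stream])         # only the 0.95 is flagged
```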

It’s partially why you may go poking around a new environment or neighborhood, despite the cost in time and resources and the potential loss of safety or comfort, in pursuit of aimless exploration. It’s why you re-engage with people or situations you’ve already failed at once and are likely to fail at again. Retrying things carries the benefit of gaining deeper information about the environment and its systems. In this way, your underlying subconscious intelligence (really, your intelligence, or ‘you’) can take action or have impulses to explore your environments. It’s why you take on failure in seeming conflict with ‘conscious rationality.’

It’s also why you may not remember things you’ve experienced or seen a bunch of times before, except for the minor differences. Your memory, your perception that you’re conscious, relies on comparing your memory to whatever slight differences of experience are happening. Even if it’s purely internal thought, or a sense of passing time based on the season or a calendar.

Your brain system works at many levels at once and none of it requires you to be in charge.
