I’m not a philosopher. Well, at least not by degree. I might have a philosophical nature. Oh, also, I once took a philosophy of physics class and dropped it after being frustrated by it. Great story, right? A tale of quitting against modest odds? Let me paint you a picture before you levy your judgment against me.

There was not a single other scientifically-backgrounded student in the class. It was mostly graduate students of philosophy with a handful of history students. As a consequence, they often delved into scenarios that always seemed to involve Hitler, World War 2 and a bus stop. I swear. In a class that was supposed to be on the philosophy of physics. I understood what they were trying to get at (and what the books of the class, which I did read, were getting at). The crux of it was: how can the universe be continuous? How can causality work in a continuous universe? If you can subdivide cause and effect into smaller and smaller chunks, you can always take the ‘cause’ and divide it in two, with the earlier sub-event being the ‘cause’ and the later one the ‘effect.’ Their discussions of causality, for some odd reason, almost invariably involved waiting for a bus and the theoretical murder of Adolf Hitler, but that’s beside the point.

Beyond the problematic implication that time and matter can be divided infinitely, they were overburdening the word ‘continuous’ to its breaking point. It’s a word used to discriminate the keys of a piano from the strings of a violin (a favorite metaphor of physicists). It’s a line in the sand. In this case, the answer to the course’s two rambling books and its discussions was: stop trying to divide infinitely. Things may not be continuous; they may actually be discrete at a fundamental level. And there’s already reason to suspect that. In fact, there may be a recursion of discreteness as far as we’ve been able to explore. Cool. My point isn’t to dive into modern physics.

My point is that an exploration of the limits of word usage tires me out quickly. What I mean is that creating a self-collapsing construct isn’t that remarkable an achievement. Going outside the bounds of definitions, which are just word-arrows pointing at abstractions of an observation, and watching them dissolve might be worth a mental chuckle. Pursuing it gratuitously is boring. Using it for argument is linguistically viral behavior.

Like all systems of representation (including math), language is a rough approximation of the reality around us. An estimate. It’s akin to sharpening stones with stones. You start with some rough stones, eventually creating a seventh-generation stone tool which you can take back to the second-generation one to sharpen it up a bit more. It started as bashing. Now there’s etching and beveling. The collection sustains itself, tools within it are modified, and not all the old tools are tossed out, because the perfect or all-purpose stone tool doesn’t exist. Nor is any language or representational system going to represent anything with comprehensive perfection.

My point isn’t to regurgitate a cliche about absolute truth or to sneak in a platitude about respecting the old ways. My point is that out-of-context use of language is bound to happen in human conversation with these recursive jumps. You can take the time to trace back all the intervening generational layers between a concept and some other related-but-not-really-related concept to understand the seemingly discontinuous jump, or you can shorthand it. But recognize it for what it is.

Discreteness and continuity. Revolutions and incrementalism. These are continuums spanning many generations. Helicoptering in and out is bound to make different resolution patterns appear, just as dragging different tools into different contexts is bound to cause some mismatches. This isn’t beyond comprehension.

Efficient inefficiency. Stressing words and concepts, really simple ones, beyond what they can bear. Creating word paradoxes, then trying to resolve them awkwardly. Or just causing them to fail outright. It’s running up against the limits of simplistic concepts. Perhaps this is a novelty to some, but concepts, words and equations are tools of comprehension that can, in fact, be broken. Math, logic and language all have limits. Paradoxes, when they occur, should be a reminder of how easy it is to misuse these tools, not serve as some grandiose proof of the impossibility of comprehension or as extensions into invented transcendent realities.

A broken question isn’t worth pursuing and shouldn’t be rewarded. Does this have to be said? Apparently.

Take a commonly associated feature of our comprehension: free will. ‘How can free will coexist with determinism?’ Well, it’s a conceptual conflation. Free will is a stupid concept. I mean that in the most nuanced way possible. It requires clarification. When you say, “I could have done things differently,” well, in a physics sense, no, you couldn’t have. If you rewound the universe, you would do it the same way. Not that anyone can rewind the universe or call their past self up and have a conversation, so that context is stupid. Obviously, what is done is done. Perhaps that’s not so obvious.

What you precisely mean is: next time something similar to this happens, I think I will do something different from what I did. That’s what you mean by ‘could.’ Partially because your mind is now informed by additional actions and experiences, and by the conclusions from any simulated scenarios you ran through. Your decisional parameters have been modified. The implication that you could change your action in that past instance is a mental shortcut. What you mean, obviously, is that in a future instance resembling that one, you may respond differently. That’s what we mean when we go over the decisions of government leaders, businesses, or our personal lives.

But wherever your decisional parameters stand at the moment you are confronted with a situation will dictate your actions. This is true for all moments. There. I just summarized hundreds, if not thousands, of pages of yammering on the question of free will.

‘Theoretically.’ ‘Could.’ I ‘could’ do all sorts of things, but I did what I did. Our language is replete with theoretical redo language for processing past experiences or possible future scenarios. This isn’t bad, nor does it need to be changed. But awareness of it enables some clarity. For purposes of conversation, in our evolved language, battered into a honed, complex practicality, it’s fine to use a shorthand verb so that you can slightly alter your behavior or thinking the next time a similar situation comes up. I could have done this instead. It enables conversation.

All it requires is understanding that the language we use when we say we ‘can’ or ‘can’t’ do something refers to the future. The annoying thing to do is to inject physics into culpability as if we had the ability to reverse time. It’s a conflation of our language.

It’s been an annoying exercise, to say the least, to fully ascertain exactly where ‘compatibilists’, a self-labeled branch of philosophy, sit on free will. Implied in their self-given label is that free will is a working concept and is ‘compatible’ with the universe. But the implication buried in many of their statements seems to be that, yes, free will is not actually real, but we should allow people to think it is anyway. This realization has been profoundly more annoying than wading through their overwrought labeling and shoddy scenarios.

No matter what I say, they may say I’ve misunderstood their position, that they don’t think it is a working concept. Or that it is. But this non-explicit acknowledgement is unbelievably disingenuous given the thousands of hours and the hundreds of pages they’ve written on it. Not only do they seem to fret over the paradoxes of their invented definitions, but they speak about it with a high level of self-consciousness, as if their conclusions could undo language and social order. In the true spirit of the Philosopher’s Stone, I suppose. This is a pessimistic view, if not an irrelevantly arrogant, delusional attempt at sequestering human intelligence.

None of these self-important ditherings would seem to be relevant to anyone, and none of their blathering is revealing or insightful to anyone with a scientific education. You might ask why bother addressing it at all. Because these are old ideas mucking up the machinations of progress, akin to someone asking whether they can see their soul on an MRI scan. It’s a juxtaposition with the sophistication of our current times that begs to be addressed.

This isn’t a narrow call for empirical science as the only tool. Obviously. I make my entire case non-empirically. But the argument that past attempts at scientifically studying these old concepts have failed is not only a non-argument against forward progress, it isn’t true. The failure is in these concepts, not in our sciences. These failures should be an obvious call to advance our thinking past these old ideas, as linguistically well-fortified as they may be.

Formalized philosophy is a target of extremely critical attention because philosophers have taken it upon themselves to serve as apologists of medieval concepts that are mucking up the cogs of progress. People can have philosophical discussions and authors can write philosophical books. But formalized philosophy insists on digging deeper moats around the old crumbling castles of the dark ages.

Building layers of exclusive definitions distracts you into thinking an art project with words is equivalent to progress. Language evolves in complexity through new realities, new data, new experiences, and new scope. It branches out, covers new ground and forms a labyrinth of connections. Language emerges out of an ecosystem of changing context.

There is a tendency to project thinking onto the processes of thinking. Or thinking onto the workings of the universe. It’s a human tendency. But they have formalized the worst of these tendencies into a field that stacks a bunch of words on top of each other in a precarious pile that falls over frequently. Jargon-ization is supposed to serve as a shortcut for conceptual integration. Not as a cloak to obscure stagnation and confusion.

Other forms of philosophical stupidity take on the modern physics of a universe born of nothing. Lawrence Krauss, a physicist who studies the makeup and history of the universe, has run up against these old ideas. In addition to writing that modern philosophy does not influence the process of his work (a reasonable statement for which he’s taken flak), he wrote a book explaining how space and time can spring out of nothing. He responded on stage to a philosophical statement about the ‘philosophical nothing’ being different than the ‘physical nothing’ that modern physics studies:

“That’s true but I don’t exactly know that those words mean. It seems to me that nothing is a physical quantity just like something is and we need to look to the universe to ask what it is.  Now, from a physical point of view, one can get a little closer to that vague and I would say ill-defined notion of non-being that philosophers and theologians might have argued about for thousands of years by saying, well, in fact there are other versions of nothing and in fact, you can imagine no space and no time, which of course I think is a better version of non-being and non-existence.  But if we apply the laws of quantum mechanics to gravity, then in that theory, even space itself can pop into existence, spontaneously – space and time, where there were no time and space before.   And, boy, that’s pretty close to what I think anyone would argue is nothing but, you know, if it doesn’t satisfy those who think that that’s non-being, so be it.  But I’m much more interested in the nothing of the actual universe than some vague, a priori definition of what non-being is.”

In this case, they are using a definition from our Neolithic language that we have moved past. Like the ‘ether’ of the universe. Or the giant tortoise shell that our ancestors thought the planet rested on. They are simply talking about something that doesn’t exist. The metaphysical ‘nothing’ is a creation we generated when humankind didn’t have the creative imagination or knowledge to know about the quantum world or the plasticity of space and time. Similarly, Platonic ideals of perfect ‘circles’ literally existing out there in some metaphysical realm don’t explain any level of ‘circular-ness’ in real everyday objects.

Input from circular musing about ‘nothingness’ wasn’t required for physics to land where it has landed. Understanding how the brain works or finessing the best legal system possible doesn’t require running loops of paradoxical concepts like ‘free will’. It is a purely contemplative attempt to bypass the trial-and-error processes of progress, based on the aggrandized notion that a sufficiently intelligent brain in a dark room can conjure up progress through force of will.

What happens instead is that they are forced to borrow from a field or context and become interpreters of its results, basically applied philosophers. However, experimentalists interpret their own results all the time. Field-specific theoreticians also exist, but they usually go beyond mere definitions and try to diagram specific patterns in the form of schematics or equations. Staying away from context quickly renders definitions down to a useless level of abstraction. Objective versus subjective. Cause versus effect. Extracting concepts and isolating them in too austere a manner in an attempt at an absolute. It also results in a product that’s limited precisely because it has ventured too far from the relativism of our language and context.

Similarly, conclusions about manufactured concepts such as ‘absolute nothing’ or ‘absolute continuousness’ have no bearing on scientific conclusions and just aren’t useful for understanding. Don’t get me wrong. I have no problem with manufactured, outrageous concepts in the context of literature, poems, songs, movies or entertaining conversations. But their posturing as an academic field only serves to embarrassingly put their overreaching word pies on the same stage as sophisticated, observation-backed theory.

The frequent Freudian slips comparing philosophers to theologians can be understood. There are similar patterns to theology. The lame reverence for old material, for one: the thinking of our ancestors, grounded in their limited intuition, which was shaped by infantile levels of human knowledge relative to now. No one extracted the positions of the planets in the solar system from religious texts. And no one figured out how memory encoding works by mining lengthy treatises on what it really means to remember or experience something.

The flag-planting justification for the field, where they point to a recorded philosopher 20 years, 100 years or 1,000 years ago who said something similar to what observational science now confirms, draws more theological comparisons. The revisionism required to exclude the other countless idiotic statements and warp the context of one statement to fit modern evidence is akin to theology trying to cram its seven-day creation mythos into the 13-billion-year framework that we now know about. Democritus of ancient Greece coming up with a concept roughly similar to atoms is relevant only as a historical fact.

In addition to trying to preserve layers of old thought, as if bad thinking suddenly becomes valuable by virtue of being written down, there’s no fresh input of data. Any field builds on previous work. One problem of a field that is pure definitional collection is that it finds itself simultaneously stale and collapsing, standing against the evolution of language and knowledge. Either way, the preservation of previous unknowing speculation is a terrible foundation to build on in the context of explosive science.

But philosophy has the fine distinction of trying to create a field solely through what is one of the worst elements of science, medicine, law, engineering and fields in general: the over-formalization of language. Technicalization of words in any content area creates an illusion of intellectual synthesis. In this case, it is done without context and without data.

It’s a process of labeling that doesn’t create or discover new information. One can be ‘philosophical’ in thought. That’s fine. Even admirable. But a field that goes around putting awkward definitional limits on every thought-up concept, trying to isolate and embalm it, is akin to a serial taxidermist killing and stuffing every living creature on a nature preserve and sticking a label on it. There’s no value creation in this. There’s a serial killer’s pride in their eyes as they open a back door to reveal a whole warehouse of old ideas they’ve murdered, stuffed, dismembered and labeled.

To codify critical thinking in a field is bizarre. I can try to clarify a concept as succinctly, interestingly, and accurately as possible — all without the need for their formalized labels. They can swoop in, label it and claim it as their own if they wish. Positivism, relativism, materialism, whateverism. But critical thinking and clarity of language doesn’t belong to anyone. It’s yours and mine to do with what we wish.

Formalized philosophy is the game of running into the limitations of words. Seriously, this is a quote from Wikipedia on ‘category error’:

“A category mistake, or category error, is a semantic or ontological error in which “things of one kind are presented as if they belonged to another”, or, alternatively, a property is ascribed to a thing that could not possibly have that property. Thus the claim that “Most Americans are Christians” is not a category mistake, since most Americans are Christians. On the other hand, “Most bananas are Christians” is a category mistake. This is because bananas belong to a category of things that cannot be said to have beliefs.”

Okay then. Glad they could clear that up. I’m still stuck on bananas being Christians.

This isn’t a criticism of formalized attempts at understanding linguistics. Rigorously defining a concept may enable a more objective comparison of two existing languages, or a comparison of language use to human action, as in psycholinguistics. But the formalized products generated by cognitive philosophy speak more to examples of limited thinking than to insight on thinking.

Formalizing critical thought is not a substitute for critical thought nor does it give them ownership of critical thought. Critical thinking is the grammar school of human discourse. Logical rigor, speculative scenarios, and definitions are the apostrophes and commas in the communication of all fields. Not just scholarly fields but any area of human discourse.

But these areas have context. Actual content. The intuitive tendency of scientists to dismiss philosophy is not unfair. Its similarity to theology is evident. The contributions of its taxidermy approach to concepts are irrelevant. If philosophers try to claim any clarifying scientific narratives as their own, they may see themselves as conceptual explorers sticking a labeled flag in the ground, only to look up and realize civilization has sprung up all around them. And then claim they were helping direct traffic by naming a street.

Akin to the paradox-riddled statements of theology: in volumes where everything is said, you can construct any meaning out of them; the lack of clarity leaves nothing of value. As a field, if you say everything, you’re bound to be right eventually.

Well, you’d think. But as has been proven many times already, exhaustive attempts at comprehensive ideation don’t actually cover all possible ground. This is borne out whenever an unintuitive discovery is made. Evolutionary design, quantum action, neural algorithms and even a sun-centered universe are all examples of this. Somewhat obvious concepts now, but ones that required non-obvious perturbations of the system and a clever explanatory pattern.

I’m not interested in covering the formalized philosophy of the mind. At least not as it’s covered with ‘dualism’, ‘free will’, ‘universal consciousness’ and other shamanistic concepts. Only because of the prominent wreckage left in the wake of these shoddy, genetically inbred ideas do I feel compelled to address some of them for clarity’s sake, if for no other reason than to clear them out of the way.

Duality of mind, free will, sentience; these have resulted in exhaustive rules layered over the years in a self-collapsing field of definitions unable to bear its own weight. Akin to a board game a younger sibling may have invented and played with you, adding more, and eventually self-conflicting, rules in an attempt to keep the game from ending. Old tomes of arcane ideas and odd attempts at paradoxical, ethereal concepts are referred to with priestly reverence in an attempt to keep these dead ideas on life support, in the same way that theology’s apologists exhaustively twist the old statements of its pre-printing-press-era texts to salvage some credibility.

Perhaps some of what is reflected here exists in some version and array of definitions a philosopher has already described and labeled. So be it. Good for them. Claim the entirety of this book in the name of some philosophical sub-branch. Minus the need for obfuscating formalized language or for any deep familiarity with all the historical arguments.

Bizarrely, to me at least, when a scientifically-minded individual accidentally lets slip a negative statement about a philosophical stupidity, or about the field, they are often asked to gingerly restate themselves in response to a simpering request for an apology. I’m not sure if this is because they share positions in academic environments. I would suggest that if philosophers have critical thinking to contribute, they move to adjacent departments and take on a discipline with context. Or write literature. Seriously.

The need for a field cataloging good and bad explanations escapes me, because explanations arise from a condensation of context and data, not from a historical cycling through every bad explanation. Exhaustive speculating merely produces an illusion of comprehensiveness.

Language and explanations are the end product. That’s what this is. An explanation. Trying to capture that in a historically-reverent field only results in obscure, self-referencing and increasingly narrow explanations. A definitionally-reverent field only results in stifling axiomatic logic carried out in oversimplified, overwrought scenarios in lieu of observational data. Problems with over-reverence for historical explanations and stale definitions apply to any field.

Be offended and defensive. For good reason. Philosophic ideas stood where astronomy, biology and chemistry were yet to be born. Alchemists of pure speculation. It had to be speculation, because once they start interpreting data, they find themselves in a field. Unfortunately for them, those fields have encroached all over them. Free will, divinity, dualism, intent, purpose, continuity, nothingness have all fallen before the demystifying revelations of observational progress. They may think they are retreating into areas and fields still unclarified by observation, but often it’s speculation in gaps of their own making. The self-proclaimed full-time champions of critical thinking, a secondary function many of us take on without holding up the badge, have found themselves remarkably behind on the consequences of what we know to be true of cognition.

Philosophers might think digging deep into word definitions isn’t an inane quest, but it is. Without fresh evidence, discovery or in-context definitions, it’s pointless. Lengthy expositions on the levels and sub-levels of hell, or serial questioning about the specific gravity of an angel’s soul, are stupid questions deserving to be ignored. Similarly, philosophical apologism trying to cram the mythological speculations of the past into the neural, physical and biological world we now understand is an assault on clarity.
