There is a group of people who construe human intelligence as an ethereal phenomenon. They will criticize any attempt at artificial intelligence by arguing that a ‘ghost in the machine’ will still be missing from even the most sophisticated machine intelligence we could ever build. I bring this up not because I’m that interested in artificial intelligence, but because it gets at a core misunderstanding of what it means to understand something.

These statements stem from a sense that “understanding” is grander than it really is. In actuality, having two methods to estimate or solve something will give you a sense of comprehensive understanding. A metaphor that works for two or more aspects of a situation or problem imbues people with a solid feeling of comprehension. More than that, it sometimes also gives them the sense that this comprehension is not easily replicated in another person, much less in a machine.

One principle of information representation, whether in an equation, a sentence or a neural network inhaling its environment through sensors, is to communicate the most information in the most efficient way. This is done by noticing redundancies and summarizing them as patterns. Summarizing large amounts of information into efficient patterns without losing meaning is the real power behind equations, language and brains. This is a definition of intelligence that works broadly (by definition making it, itself, an intelligent concept).
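As a loose, hands-on sketch of this idea (the compression library here is just a stand-in for any redundancy-summarizing process, not a claim about how brains do it), a patterned message shrinks dramatically under compression while patternless bytes barely shrink at all:

```python
import random
import zlib

# A highly patterned message: the redundancy can be summarized away.
patterned = b"the cat sat on the mat. " * 40

# Patternless bytes of the same length (seeded for reproducibility).
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(len(patterned)))

# Compression ratio: compressed size / original size.
# A lower ratio means more pattern was found and summarized.
patterned_ratio = len(zlib.compress(patterned)) / len(patterned)
noise_ratio = len(zlib.compress(noise)) / len(noise)

print(f"patterned: {patterned_ratio:.2f}, noise: {noise_ratio:.2f}")
```

The patterned text compresses to a small fraction of its size; the noise does not, because there is no redundancy to notice. That gap is the sense in which a good summary “understands” its input.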

Genius is a relatively large jump of this sort of progress. Looking at the incremental steps behind a large insight will usually demystify its genius reputation a bit. Understandably so, as the assessment is relative.

Intelligence, in the broadest definition, can be found in all sorts of things: legal systems, ant colonies, sentences, actions, behaviors. It shouldn’t be conflated with consciousness. Also, keep in mind that the breadth of any intelligent process is critical in evaluating it. In other words, there’s no reason to think an intelligence extends beyond the confines of its actions or environment. There is no reason to think that a well-designed conversational artificial intelligence, created to simulate a human conversational partner, could also paint, design a zen garden or even write a book. Those functions are outside its capabilities.

There’s also no reason to think a human analogue of intelligence, with all its broad capabilities, can be found in the processes forming the stars, shaping evolution or controlling the complex camouflaging action of an octopus.

When evaluating intelligence, behavioral output is all we have to go by. All we can measure or look at. Evaluating whether a government’s action, or a child’s aptitude, or your neighbor’s behavior is intelligent can only work from the available inputs and the responsive actions, comparing these to what we can figure to be the most optimal behavior. Of course, assessing the most optimal behavior involves many possible criteria (short-term vs long-term, well-being of self vs other, etc.) as well as the limitations of the assessor’s own intelligence. Additionally, to further complicate things, elegance (a sort of intuitive gut assessment of intelligent action) is characterized by well-concealed, simplified or condensed intelligence. Finally, the ability of an intelligence to react to a wide range of potential scenarios it has never been, and may never be, exposed to remains hidden until it’s specifically probed.

Intelligence is not a sacred cow belonging only to humanity. Artificial intelligence has many naysayers. I don’t mean about how it’s done; I mean about whether it’s even possible. One example of such criticism targets chatbots that mimic human conversation. Couple one with a Chinese translation app and you have a giant group of algorithms capable of carrying out a conversation in Chinese. The critics of artificial intelligence will argue that these algorithms don’t actually understand anything. However, the process of these chatty, translating algorithms does imply understanding. Limited, and only in that context, though.

Those are just algorithms. Sure. But you are just neuronal connections. Implied in those neural weights, or in the procedural machine code, is understanding. Your understanding can only be judged by its processing ability as exhibited by some kind of output, whether a bodily action, a verbal response or a wordless reading of neurons by neurotechnology.

The chatbot does contain intelligence, in a limited way. Presumably, the argument is about meaningful intelligence, which is a relative line in the sand. A measure of intelligence should also include the specificity of the situation required for it to work. A healthy human being can be taught to do many different things and can try to extract generalities from them when encountering a new situation. An answer sheet or a simple computer cannot. A complex intelligence differentiates itself by the breadth of possible situations in which it can function, and how well.

Information theory is sort of a general characterization of information in any form: pretty much anything that potentially reorganizes some kind of input. It could be computer algorithms taking in information, the possible poker hands dealable from a shuffled deck of cards, or the amount of molecular destruction and reorganization a black hole inflicts on a comet.
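To make the poker example concrete (a minimal sketch; the numbers are standard combinatorics, not from the text), the information carried by one shuffled outcome is just the logarithm of the count of possibilities:

```python
import math

# Number of distinct 5-card poker hands dealable from a 52-card deck.
hands = math.comb(52, 5)   # 2,598,960

# Bits of information needed to specify one uniformly random hand.
bits = math.log2(hands)    # about 21.3 bits

# A full shuffled-deck ordering carries far more information: log2(52!).
deck_bits = math.log2(math.factorial(52))   # about 225.6 bits

print(hands, round(bits, 1), round(deck_bits, 1))
```

The same log-of-possibilities measure applies to any reorganizing process, which is what makes the characterization so general.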

I have no problem extending an information-theory viewpoint from characterizing the intelligence of a brain to characterizing the intelligence of a protein switching its shape in response to a chemical shift in its environment. Or a bacterium propelling itself toward a gradient. Or an electron ‘acting’ on and responding to the local field. Yes, these are all inputs and outputs and may have a universal characterization; a mathematical way to quantify ‘complexity’ is an interesting thought. These may demonstrate some low-level intelligence by that informational definition. Like, really low-level.

The term ‘intelligent design’ is a lightning rod in this country. I will go ahead and say that biology is intelligently designed and has an intelligent design. But there’s no evidence that its design was blueprinted (nor suddenly manifested) by a god as we imagine one. In other words, there is no argument requiring some unlimited supraintelligence to guide the design. But clearly there’s an incredible amount of intelligence in the design: intelligent design shaped by an environment (our planet containing more organization and complexity than, say, an orbiting mass of ice or gas) that allowed complexity to accrue over time. Enormous eons of trial and error, very roughly stored in a molecular memory system, the DNA of whatever fragments of the microbial, plant and animal ecosystems happened to survive. We can also point to parts we could have designed better (using the intelligence that emerged out of the design of our own brains). The intelligence generated by billions of years of trial and error is vast and not yet completely captured by our knowledge banks, but the intelligence in biological design also clearly has limitations we can already see, even by our current standards.

The obscurity enters when people apply a definition of intelligence to the ‘universe’, the atoms and us, because the false implication is precisely this: that simply aggregating enough ‘atomic-sized intelligence’ explains away our intelligence. Well, it simply doesn’t. A star has a ton of atoms but is just a giant fusion reactor. It exhibits complex behaviors, yes, but not at the level of complexity we mean when even a clever crow adapts around a set of obstacles, such as accessing food submerged in a bucket of water by using stones to raise the water level, or using a stick to retrieve food from behind a chain-link fence.

Competitively, absolutes and principles fall to the power of more complex input and output. Contexts, situations and behaviors can find the caveats, the failures, the fly in the ointment of any rule of thumb or absolute. This is where intelligence steps in. Where the ball finds its way around the wall. The Chinese chatbot has its limits. The chess novice given a sheet of paper listing clever moves will run into the more intelligent, more sophisticated, more graceful action of an experienced chess player.

Ethics is sometimes defined as a contrast to rational behavior. It is not. It’s an extension of intelligent action. To put these in conflict never made sense to me. Irrational behavior is essentially behavior that we haven’t rationalized yet.

Characterizing ethics as an attempt to reach theoretical optimums of behavior, measured by long-term outcome: this I can deal with. But definitionally putting it at odds with intelligent action, in the sense that selfish or rational action sits on the opposite side of ethical action, is ridiculous. It is a children’s game masquerading as self-importance.

Intelligence can only be judged by action: comparisons to theoretical optimums of behavior, measured by long-term outcome. Actions can be interpreted as courageous or faith-based by those who fail to comprehend the deeper logic: the subtlety of communal action, the cautionary wisdom of non-confrontation, the sociological power of compassion. Presumably, higher levels of feature extraction and more powerful simulatory capability will grant awareness of more possibilities, in the form of a more complex intelligence that could produce more optimal long-term behavior.

The reason for having a broader, less-principled definition of ethics is that this frees us up to transcend the boiled down nuggets of wisdom we already have. Many of which are already too limiting for us.

Moral relativism isn’t a phrase I’m sure how to interpret. If it means moral valuation is dependent on context, then that’s certainly true. Intelligent action is wholly dependent on context. Consider possible life-saving drug programs in parts of the world. There are serious human-value considerations in favor of higher-risk drug trials in a region losing a significant number of people, or much of its functioning health, to a disease at a high rate. Having non-absolute ethical principles doesn’t automatically imply some kind of recklessness.

There is a moral universality in the sense that intelligence, as a concept of optimal information-gathering and action, is somewhat universal. The optimal way to handle something will differ by situation. Similar situations will have similar or identical moralities.

Codes of action and honor have well-known morale impacts on human psychology. Take humans out of it and imagine intelligent systems. The principle remains: each agent is more emboldened to set aside some aspects of selfish intent knowing how rigorously ethical action is codified among the other agents. These codes are enforced legally, culturally, socially and psychologically. This is the reason for legal definitions of culpability. This is the reason for ethics. Nothing mysterious. It’s a settled-upon, optimal group action.

Even when people can get away with something (and many studies show they will then act more selfishly, which is unsurprising), the reason some still act ethically with no witnesses has partly to do with the degradation of the value of social binding, even if only in the mental awareness of a previously unbreakable boundary, now violated. This impacts the underlying intuitive experiential system. It’s why one bad incident can send someone into paranoia for years even though the chance of it happening twice to one person is low. That is the limitation of a system, one remarkable on many other levels, trying to extract probabilities and outcome conditions from only a few million hours of experience.

Of particular annoyance are the technology ‘ethics experts’ whose only suggested course of action is obstruction. Development of fundamental technologies, they suggest, should be banned or stopped. There are at least two problems with this. One is the assumption that no one else will pursue the technology. The other is that obstruction seems to be their only move; without it they would have no influence on the process at all, which makes me suspect a strong bias toward wielding ethical righteousness in its only available form, their options as uncreative as the points they raise: obstructionism.

Of course they never have the conviction of sophisticated arguments to just call for it. They timidly suggest a pause on progress after hiding behind a laundry list of grade-school-level “what if” questions. Shuffling around in the priestly shrouds of goodness mumbling vaguely about possible repercussions of our progress.

Potentially, as we put our own minds under microscopes or even nanoscopes, and build simulations of our own minds, there will be a fear of learning the operating system of our minds too well.  Or predicting too well the commonplace actions of our snowflake minds, varying slightly in unique neural weightings and connections but really not that much. Or uncovering all of the calculatory machinations at last only to be disappointed by the relative simplicity of our wiring compared to what can be built artificially.

Well, get over it.

Mincing around in speech, hiding behind professional titles, making profoundly unbold and progress-halting suggestions through rhetorical questioning is an affectation mimicking intelligence and sophisticated decision making. Perhaps I’ve had the bad luck of only catching the un-nuanced discussions. But it would seem that not only are these future technologies technologically inevitable; the subsequent technologies that may help, enhance or control them are there in the future as well.

I’ve been on ethics committees in hospitals, where we reviewed cases such as an expectant mother with stage 4 cancer. I’ve also submitted research protocols to institutional review boards for research on human volunteer participants. In both cases, the committee stands as a dispassionate patient or participant advocate and is largely legal in nature. Usually, it serves as a double-check to make sure participants aren’t being forced into a decision, being misinformed in some significant way or having their privacy violated. These ‘ethical’ principles are already codified in law. The boards essentially serve as a review mechanism for the hospital or university to avoid lawsuits. The proceedings were as dry and undramatic as you could imagine. Blanket obstruction of research in entire areas (like neurotechnology or ‘bioenhancing’ technologies) based on theoretical apocalyptic futures just seems like an enormous lack of imagination to me.

Civilizing progress is a project of human defiance. Conflicts of interest, privacy, human-testing regulations: contrary to what you’ve been made to believe, these are not complex concepts. They sometimes have complex contexts; but for ethics experts and their articles to take ownership of these concepts is an embarrassing affectation of civilizing intelligence.

Ethical decision making in science is given way too much credit and automatic leeway. Top-tier science journals throw down a red carpet and clear the way for poorly written op-ed pieces using platitudes such as ‘no such thing as a free lunch’ and asking a series of half-formed questions about potential futures that may not contain a utopian equality. This is asinine. The authors flash their ethics badge when speaking to scientists and their scientific credentials when dealing with lawmakers. This sleight of hand distracts from the fact that ethics serves as a megaphone amplifying the most profoundly vacuous thoughts of the most witless people.

Ethics discussions are all about the dramatics. It’s as if our mastery over the world were so great that the only thing stopping us from god-like powers is self-restraint. It’s hubris. It ignores an inherent progress, a progress that works at a grand scale and is undeniable.

In medicine, there exist layers of laws at the federal, state and local levels. Additionally, if you work with a group, in any profession, you are held to your community’s social standards of practice.

Decisions have consequences on multiple levels, at multiple timepoints in the future. This is true of all policy and lawmaking. Lawmakers stand as formalized philosophers of policy.

Ideologies can be similarly deconstructed. For instance, ideologies about top-down or bottom-up control in management or government may be a good way to analyze a problem or come up with initial action. Ideological debates are attempts at figuring out the best outcome, until they become too grounded in simplistic reductions and stagnate outside context.

Even subtle behaviors not quite captured by morality, such as politeness, civility and discretion, are obviously intelligent behavior. Recently, the evolutionary sociology and psychology sciences have, thankfully, attempted to generate nuanced rationalizations of this behavior: updated explanations of why we act the way we do.

Communication can be rationalized as information sharing or diplomatic manipulation, with indirect speech saving the self from the consequences of a loss of standing, and nonspecific speech limiting what you give away in an information exchange. Not only is this true of us; these are natural consequences of communication for any sophisticated intelligence.

There is a definition of elegance I like. Elegance is a subjective assessment dealing with grace and style. It refers to something pleasingly ingenious and simple. By this definition, it is the outward manifestation of intelligence: a subjective reaction to the well-arranged, the intelligently designed, the ingeniously organized.

In the same vein, I have a subjective reaction to elegance and manifestations of intelligence. In form. In action. In process.

I briefly mentioned narratives as explanations in a previous chapter, along with the causal structure and motivational summaries embedded within them. Future computer structures will be able to generate and modify narratives in this way. It’s a natural consequence of intelligence, in this case artificial intelligence.

Beyond its explicitly cooperative social implications, communication may subtly reveal underlying intelligence in action or cautious information handling: eye contact, pausing and ‘ums’ in dialogue, hedging of any direct implication, even the way someone walks through a crowd or a public space. Before you think this concept too high-minded, consider how gingerly two dogs, highly social and intelligent animals, approach each other when they are still strangers.

Personal priorities, including the friends and kindred spirits who share them, are extracted from action: organizations, movements or cultures you identify with; writing, music or art whose intelligent structures are identifiable and agreeable to yours. Solidarity with intelligences of a similar ilk.

To those who settle on simpler structures because they work for them, the dismissiveness that follows may not just be a social exclusionary tactic, as it’s often accused of being. It comes from the intuition that those ideas won’t contribute to the forward momentum of human progress. The dismissal may be read as arrogance, but it may also come from the resignation that those ideas will die a fading death, one generation at a time, as the utility of better ideas is slowly borne out in tangible progress.

Better ideas will update those conserved by the old fields, becoming more sophisticated and accurate. More useful. A tremendous amount of organized complexity built from base components, layered on top of each other, creates an incredible processing ability and flexibility of response. None of this is enabled by ghosts inhabiting our minds or machines, mystically linking physics and psychology, or by conflated language building moats around ethereal concepts.

None of these are reasons to stop, really. The inevitability of intelligent progress should be comforting to all involved.
