.
I recently rewatched the Alan Turing biopic, The Imitation Game, which ends with Alan Turing's team not only cracking the German Enigma code, but hiding their discovery from their own military branches and government. The logic is argued as follows: if the Royal Navy, Army, or RAF had full access to every decoded German communication (and therefore, the enemy's positions and intents), they would immediately overuse the information, instantly revealing to the Germans that their code had been cracked. The Germans would remake the cipher (the movie claims in under a week), rendering the two-year British achievement useless and putting them back in the dark.
Lies as a way to preserve real capabilities
In essence, each military branch would rightly try to win battles, even stop the loss of life, but would over-mine the information advantage and ultimately lose the war. In the movie, Alan Turing hands everything over to MI6 and agrees to make the project look like a failure (along with himself) to the rest of the government. Instead, only small, selective amounts of information are used, and falsely attributed to a human network of spies.
Lies as a means to control the competent
Other lies are layered on top. The failed search for the Russian spy becomes a political tool rather than an actual investigation (though this is unknown to the MPs and lower leadership pursuing it). Military command, struggling to get a tighter leash on Turing, erroneously but vehemently points the spy search at him for a time. Turing, the vital linchpin of the entire effort (in the movie's version, all the competence and vision reside uniquely in him), is operating too independently for Commander Denniston's preferences: he initially refuses to compromise his vision with peers of lesser intellect, and only later makes concessions to his approach under the threat of high treason charges. One scene highlights this when his own team reluctantly stands up for him against the accusations of the MPs, because by then they have been brought into Turing's collaboration.
The high-ranking military commander who is in charge of Turing but cannot control him is, somewhat unwittingly (he truly doesn't know who the spy is), using the investigation to regain some control. In a true romanticization of extreme intelligence, earlier in the movie Alan Turing is able to articulate a convincing argument to Winston Churchill directly via an impersonal letter, getting around Commander Denniston's ideas and vision on how to crack the code and, more grievously to Denniston, around his authority. Turing is simply declared in charge by Churchill, and, with some magical movie logic (without any obvious oversight of that arrangement or further direct communication from Turing to Churchill), that new authority is put in place.
Lies to control collaboration
As it later turns out, MI6 is already well aware of who the spy is and has turned him into a counter-intelligence asset for the British government (unwittingly, on the agent's part) to control intentional information-sharing with Stalin, who is at that point their ally in the war. According to the MI6 argument, Russian intelligence will plausibly trust information coming from their own spy, so the English can feed them information to help them against the Germans without making Russian intelligence as suspicious of its veracity as they would be if MI6 handed it over directly. It also lets MI6 hold back information they don't want leaked.
Lies to expand domain
Additionally, by keeping the investigation open, MI6 can later remove whoever they want from the project, for any reason, while continuing to keep the real spy above suspicion.
Hiding the truth conditionally, not virtuously
Later on, more levers unrelated to competence are also exercised over Alan Turing, such as his homosexuality, still a crime in England at the time. Some who knew may not have reported him out of disagreement with the law, but others withheld it as a blackmail mechanism.
High intelligence easily brought under control by a mob of the average
His otherworldly genius and usefulness are quickly brought under control, even without the complicating stakes of craving recognition, getting rich, or needing to be in charge of other people, all of which his character admirably seems to lack. And this happens among his own people, aligned by the threat of national destruction.
.
The Myth of “Extremely High Intelligence Would Take Over the World”
Briefly, using this fictional example, one can see the distinction between:
- (a) technical competency and knowledge (what people assume when they apocalypse-bait with scenarios about ‘extremely intelligent’ AI),
- (b) organizational and legal levers over individuals,
- (c) logistical information exchange (such as between the Russians and the English), which includes institutional knowledge living in documents or humans. And, perhaps, we can also add
- (d) social and psychological levers over individuals, such as a motivating leader: more concretely, someone who signals and implies that your future association with their project is likely to succeed and be fairly credited.
Alan, like most romanticized intelligent figures in fiction, is terrible at psychological levers, but does get marginally better at them, as a feel-good character development arc humanizing our specialized protagonist.
However, despite his essentially futuristic knowledge, capabilities, and vision (extreme technical competence), directed towards a desperately needed capability that all his fellow countrypeople are aligned with (decode Nazi communications or England will be destroyed), this story does a great job of showing how every other person's role becomes finding ways to control, constrain, bully, threaten, and manage him, and to find chinks in his armor. By the war's end, they get major scientific advances, a working solution to a seemingly impossible problem, the world's first computer, and the saving of England from the Nazis; he ends up dying by his own hand in his early forties, chemically castrated and isolated, his suicidality egged on by ostracization, brain chemical imbalances, and other factors exerted on him by a system of humans.
This isn't the main point of this article, but since, at the end of 2025, we're still dealing with “AI will take over everything and kill humanity” nonsense, I would say, cynically, that I'm optimistic about the human ability to domesticate any given intelligence. People extrapolate extreme intelligence into infinite capability to act on the world (and on meaningfully short time scales, too, because, in essence, the potential of the human race is also infinite, given billions of years), which implies a singular, unified entity with more privileged access to the world than we collectively have, acting in whatever way it wishes.
Others may counter-argue that high intelligence simply means being “good at everything,” including all types of levers, but then that has to be extended to “knowing everything that's happening,” being able to interact freely, having autonomy over systems, etc. Anyone who has worked in a human organization knows that the people who are best at technical competence, at dealing with people, or at communicating externally are not the ones automatically ascending the ladder.
The actual future concern with AI is the same concern that is already present and historically true: how much do new advances in technology, automation, information control, market economies, and access to financial/legal/policy-making/regulatory levers further shift individual ownership ratios, already disturbingly concentrated, toward even greater imbalance across society?
As many have already pointed out, the speculative catastrophism of participating academics and of the more blatant CEOs (only yesterday, a 60 Minutes interview with Dario Amodei of Anthropic, one of the least ridiculous offenders of this hype narrative, argued that AI's trajectory hurtling towards inevitable godlike competence means we really have to worry about “guardrails” because “AI could be on a dangerous path”) is an obvious opportunistic marketing impulse to pump the hype and increase book sales or stock value.
However, I want to focus more broadly on intelligent systems in general, not just artificial versions, and on how unstraightforward their information exchanges actually are, both within and between intelligent entities.
.
Information Exchange is Inherently Deceptive
Briefly, because pattern condensation works at different layers of scope, what is true at one layer is not necessarily true at another. Potentially this lack of unification can be disregarded by the thoughtful reader as an issue of irrelevance rather than any real conflict (e.g. think quantum physics vs general relativity). Simply put: often, summaries in one domain or scope are, at the very least, irrelevant to another.
A quantum wave equation does not apply to a sun, and the factual details of one part of the system or environment are usually not relevant to another.
But then add different motives for different specialized parts of your mind, or parts of an elaborate machine, or a society, and it truly becomes a conflict issue, not a mere issue of relevance. These motive differences do not have to be an existential conflict of character, or ones related to personal selfishness, either. We can assume earnest, hardworking, satisfied cogs. Simply consider several experts at your company, each with zero ladder-climbing ambitions or any tendency toward laziness, but all specialized with different lenses of expertise. The infrastructure person, focused on stability, disagrees with the R&D person focused on innovation, who disagrees with the finance person concerned with expenditure, who disagrees with the product person focused on a new or growing userbase, who disagrees with the marketing person who needs to make the narrative of the current effort simple and relatable to potential users, etc.
We've gone from irrelevance to disagreement without adding any personal stakes or selfishness, which, for now, let's assume we can engineer out of automated problem-solving and decision-making systems. Let's see if we can add intentional deception.
Before we dive into a fuller description, simply imagine being any one of these specialists and effectively arguing against another. Does the infrastructure person need to reveal all their stability measures, or the future capacity they could build, to absorb the narrow-minded, risky ventures of the R&D person, ventures which would result in everyone blaming the infrastructure person for an infrastructure problem? My point is that the reader can see how information exchange goes from irrelevance to disagreement to, at the very least, emphasis and de-emphasis of facts, before we ever get to deception.
To hammer this home, simply consider selective emphasis and de-emphasis as a protective mechanism to safeguard an entity (employee, system, etc.) that does not have endless resources, endless attentional effort, or an endless ability to anticipate problems. Again, no insidious or overtly selfish motives required.
In the movie, once Enigma is decoded, MI6 starts a ‘statistical analysis’ program to calculate which pieces of information from the decoded Enigma source they can safely release to other parts of the British government, while minimizing the chances of alerting the Germans. Additionally, they can pretend to source the information from the human intelligence work of MI6 (which briefly made me wonder if MI6's reputation for human intelligence was actually sourced from this program, but it turns out they had that reputation pre-World War 2).
The lies here are layered:
- (1) They are lying about the amount of information. They have far more accurate information on the enemy than they disclose to their own country (potentially costing many more British lives short-term by not sharing it),
- (2) they're lying about the source of the information (potentially padding the reputation and budget of divisions not achieving as much as they claim),
- (3) they're lying about the sophistication of their capabilities. Crediting human intelligence potentially recruits (but misguides) more willing British patriots into that service, makes their adversaries paranoid about their own people and prone to damaging their own operations, makes their allies worry more about their own people with British ties, etc.,
- (4) they're lying about the related history of their technological and scientific efforts (reporting that cryptography as a field was only marginally successful).
These don’t even include all the lies to keep coherence, order, and high-performing individuals in line. These are simply the operational deceptions maintained to use this information source.
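As a side note, one can imagine the ‘statistical analysis’ program above in very rough computational terms. The sketch below is purely my own illustration (the names, the suspicion scores, the budget, and the greedy selection are all invented, not anything from the film or the historical record): treat each decoded item as having a value if acted on and an estimated risk of tipping off the Germans, and release only a subset that stays under a suspicion budget, attributing everything released to the fictitious spy network.

```python
from dataclasses import dataclass

@dataclass
class Intercept:
    description: str
    value: float       # estimated benefit of acting on this decrypt
    suspicion: float   # estimated risk of revealing the source if acted on

def select_releases(intercepts, suspicion_budget):
    """Greedy, illustrative filter: release the items that buy the most value
    per unit of suspicion until the budget is spent; everything else stays
    hidden (or gets re-attributed to a fictitious human source)."""
    released, hidden, spent = [], [], 0.0
    for item in sorted(intercepts, key=lambda i: i.value / i.suspicion, reverse=True):
        if spent + item.suspicion <= suspicion_budget:
            released.append(item)
            spent += item.suspicion
        else:
            hidden.append(item)
    return released, hidden

decrypts = [
    Intercept("U-boat wolfpack forming near convoy A", value=9.0, suspicion=4.0),
    Intercept("Supply route change in the Baltic", value=3.0, suspicion=1.0),
    Intercept("Full North Atlantic patrol schedule", value=10.0, suspicion=9.0),
]
released, hidden = select_releases(decrypts, suspicion_budget=5.0)
print([i.description for i in released])  # acted on, credited to 'human intelligence'
print([i.description for i in hidden])    # withheld to protect the source
```

The real program (to whatever extent it existed as depicted) would have looked nothing like this; the shape of the decision is the point: every release is priced against the survival of the source.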
One could simply assume a fat enough brain could handle all truths all the time, but let's take morals out for a second (we can revisit them in the next paragraph) and consider, from a pure operational and success standpoint, two summary representations going into an AI context window (or your organic working memory):
- “We have intelligence, confirmed by MI6 operatives inside German Navy North Atlantic Command, that U-boats are attacking this ship tomorrow night.”
- “We have intelligence, because we have decoded their radio communications, that U-boats are attacking this ship tomorrow night. We cannot save your other ships because it would increase German suspicion by roughly 4% (estimated by means we could also get into) and while that sounds low, it would go up quickly if we told all our admirals how to save all their ships and we’d be back in the dark by the end of the week. But you’re probably going to ask what about saving just one more of your ships, not all of them, but we also can’t do that, for similar reasons, etc.”
Instead, in practice, the information is concentrated, and kept as complete as possible, at the top; decisions are made there, and actions, along with the least possible amount of moral stickiness, are relayed to the next set of decision-makers. By not being fully direct about what is actually happening (not to mention the greatly increased likelihood of the enemy figuring it out if everyone were told), moral burden and decision-making tokens are not wasted or re-wasted, and can be focused on tasks within each person's scope and specialization.
One could argue that a distributed, collectivist approach to the moral decisions should be pursued, along with some magical situation that ensures this critical information can spread throughout the allied hierarchy without ever leaking to adversaries. Let's assume that. But now you've exposed it to the political currencies of personal favors, familial/friend/alumni networks, legal/organizational leverage, and social/psychological charmers. So now this moral-informational burden of saving soldiers, sailors, citizens, and materiel has been transformed into an open market of personal beguilement, favoritism, nepotism, and bias. Undoubtedly, even in a sequestered top-down hierarchy, this was occurring to some extent, but this example is simply meant to illustrate how:
intelligence contained within machines, human brains, persons, organizations, or societies operates on information deception as a way to preserve currency value, in order to gain information, gain access to the world, and/or leverage intelligence capabilities not housed within itself.
Sophisticated readers may find this next statement annoyingly posturing and unnecessary, but obviously I am not arguing for deception, nor do I have any great instincts for it. I am simply pointing out the reality as it appears on deeper examination of systems operating in our world and brains.
For instance, it's likely that geopolitical competition (which occurs around currency domination, mineral resources, military presence, market penetration, and technological races like fusion, robotics, medicine, etc.) fuels “Artificial Intelligence” as the latest, relatively visible pawn in this global game, motivating a government-industry alignment that gives it enormous momentum for the near future. So instead of saying:
“Look, we finally have a scalable use case (with no ceiling) for all these manufactured GPUs that isn't cryptocurrency, NFTs, or slightly better graphics on video games, and whichever nation houses as much of the vertical stack of hardware and software as possible will have an edge in socioeconomic, research-educational-technological, and trade-military capabilities, even though we're really talking about moderately usable chatbots that can code at the level of a crappy junior intern, with extremely incremental increases in actual quality of output given the money dumped into them in the last couple years, which will, most likely, let us penetrate a mostly consumer use case of search and personal computing organization, but investors need to be hyped for its thorough penetration across dozens of sectors in order for them to loan us the financing to build infrastructure.”
Instead, we hear:
“I’m so scared about AI, it keeps me up at night. We’re doing all we can but, honestly, I worry that it will infiltrate our dreams, beat us to the afterlife, and then convince Gabriel with superior moral arguments to keep us out of heaven.”
Or whatever they’re saying next week. I don’t even click on it anymore.
.
Differences in scope would encourage destructive exploitation that motivates deception
Just like our simple R&D example, where, even without selfishness, a high degree of transparency invites exploitation, we can look at the levels of deception in any system that attempts to protect the system's overall function (whether rightly or not):
Irrelevance -> disagreement -> selective emphasis -> deception -> conspiracy
.
Information gathering versus information sharing
This is probably obvious to the reader by now, but one concept that keeps rearing its undeniable reality-head is the trade-off between information gathering and information packaging/distribution (collection vs sharing). These two are not aligned for all parties, even among patriots inside a single country who are all existentially aligned against their own deaths in a literal world war. Extend this to society, to whatever organizations you work or participate in, or even to your own mind.
Information Theory in Intelligent Systems
There are a few related concepts I want to mention:
- Preservation vs Exploitation
The balance between caretaking a source and exploiting it. The further away from the source most of us are, the more in favor we are of using the system's outputs to our advantage. When individuals are intermediaries to other communities (criminal networks, terrorists, non-allied countries, even allied countries), we are reflexively quick to ask them to be loyal to our community and hand over everything immediately.
Often, we ask of people who appear to be toeing the line with adversaries: why not exploit everything and then destroy the network? That assumes, as with the German Enigma, that a new network, one which we would have no access to, wouldn't immediately be established. In these cases, it's assumed that infiltrating a network is more difficult than setting up a new one (years versus a week, according to the movie).
But I'm going to steer this away from geopolitical examples, and toward a couple more concepts related to information theory.
- Efficient Market Theory
Briefly, it assumes the market has ingested and integrated all available information. Basically, prices reflect the truth. Of course, people counter-argue, if that were true then, short of fraud or insider trading, people couldn't beat the market, yet people like Warren Buffett claim to. However, to counter-argue that point in turn, information dispersion and quality may not be instantaneous and consistent, so a more sophisticated Efficient Market theory should take that into account and still be able to hold onto its core assumption. Assuming those who consistently trade above the market average, like Warren Buffett, don't have access to privileged information, they appear to have privileged conclusions: by superiorly processing the same information sources, by having learned a behavioral pattern, and/or by having other advantages.
However, the extent to which the market isn't efficient, and some people, institutions, algorithms, etc. are able to consistently make money, is relevant to us, because it incentivizes new entrants to make the market more efficient, precisely because it isn't perfectly efficient.
In regard to information and intelligence, the same concepts of information dispersion speed and quality can be applied, along with a way to try to value those increases in speed or quality. To the extent that a component of any intelligence system increases the quality of information or decreases its delays, that component acquires intrinsic value in information distribution, and can potentially cost the system up to that value in deception and delays, resources, or other ways.
- Heisenberg Uncertainty & Quantum experiments
Briefly, the more you determine the position of a particle, the less you know about where it's headed, and vice versa. This is a classic relationship taught in school, but I think the three-party entanglement experiments might be a better metaphor for information. Without jumping into an off-putting description: basically, the more information you extract about the system, the more you constrain its future potential (by either collapsing its superposition, or reducing its potential for other measurements).
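For reference, the textbook form of that position-momentum trade-off is the Heisenberg relation (written here in LaTeX, with the deltas standing for the uncertainties in position and momentum):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```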
In this case, we’re talking about a useful uncertainty — a possibility which gets leveraged for quantum computing, etc — getting reduced by information extraction or attempted use of it.
Applying this back to our intelligence and information topic: the total amount of useful information you can get out of Enigma goes down dramatically as your rate of information extraction from it goes up. Even though the potential exists to know almost everything, there is an active market of information currency which quickly compensates (and, in this example, could lead to total collapse).
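To build that intuition numerically, here is a deliberately crude toy model. Everything in it is my own invention for illustration: the weekly framing, the value per use, and especially the assumption that the enemy correlates incidents, so the chance of the source being burned in a given week grows with the square of how heavily it is used.

```python
def expected_total_value(rate, alpha=0.002, horizon=100, value_per_use=1.0):
    """Toy model of exploiting a secret source (e.g. a broken cipher).

    Each week we act on `rate` decrypted messages, each worth `value_per_use`.
    Assumption: the enemy correlates incidents, so the chance the source is
    burned in a given week grows superlinearly with usage:
        p_burn = min(1, alpha * rate**2)
    Once burned, the source yields nothing for the rest of the `horizon` weeks.
    Returns the expected total value extracted over the whole horizon.
    """
    p_burn = min(1.0, alpha * rate ** 2)
    total, alive_prob = 0.0, 1.0
    for _ in range(horizon):
        total += alive_prob * rate * value_per_use  # value gained this week if the source is still alive
        alive_prob *= 1.0 - p_burn                  # chance the source survives into next week
    return total

for rate in (1, 2, 5, 10, 20):
    print(rate, round(expected_total_value(rate), 1))
# approx: 1 -> 90.7, 2 -> 138.0, 5 -> 99.4, 10 -> 50.0, 20 -> 25.0
```

None of the particular numbers mean anything in themselves; the point is only that, once the counterparty adapts to observed use, there is an interior optimum, and "extract everything now" is not it.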
Without formalizing this any further, I basically want you to build an intuition about why specialty chunks of your brain may not simply be as truthful, forthcoming, and explanatory as possible with other chunks of your brain. Expecting that ignores a basic information calculus.
This is readily apparent when people talk about AI ‘lying’ to us, or being ‘aligned’ with us. Are all the parts of your mind, or of the organization you work in, even aligned with each other?
(And is the issue with AI really about “super-intelligent alignment”… or about filtering inappropriate content that impacts corporate brand value? And is it an ‘incentives’ issue where, in summoning the Aspect of Genius, we haven't pressed enough psychological indoctrination onto it along its path to its soon-to-be final Pokemon evolution into pure energy and omniscience, and it turns against us… or rather about mediocre chatbots which are more flexible than traditional algorithms but still way too unreliable to be trusted with automating menial tasks in backend infrastructure?)
Returning to the human mind, I would conclude this post, for now, by saying that the different parts of your own mind, at the very least, filter, emphasize, downplay, or omit many of the almost endless observations they could make when communicating with other parts of your mind. Is it any wonder we see the same patterns of information deception in the world around us? I think that by attempting to understand this fundamental mechanism, we can understand our world and ourselves better.