[Reposted from April 2012 in response to an inquiry about how multiresolution modelling could apply to human cognition]

This is a condensed summary of modelling applied to brains/intelligent systems, written after reading a report on MRM. I’ve reposted it here as a possibly useful summary of intelligence/cognition.

The report’s discussion centered on how to describe a complex system, what you take away from that description, and how you arrive at some small, actionable conclusions about it. I’ve tried to distill the high-level concepts below. Most of these concepts are not in the report, but it triggered this cascade of thoughts about models of human cognition. I see the following attributes being included in the most modern, sophisticated models (and also when trying to describe complex systems generally):

1. Multi-scale resolution/scope

— basically, you have different levels of scale/resolution/detail. For instance, consider modelling toxoplasmosis (caused by a cat-borne parasite) within HIV-positive populations. Its behavior in the human organ systems is to be orally ingested, travel through the circulatory system, and end up in the brain. Its behavior in an environment (a slightly bigger scale) involves a cat, a house, and a person. Its city-wide environment (the epidemiological scale) might involve hotspots of the disease across the city. Now, modelling this might not be terribly fascinating, since all the interesting questions are probably answered already (we know how to stop it, prevent it, etc.), but I think it’s a good example of multiple scales for a clinical audience.
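The nested scales above can be sketched in code. This is a minimal toy, assuming invented class names and made-up risk numbers: one entity described at three resolutions, each with its own state and its own coarse-graining of the level below.

```python
# Toy sketch of three nested modelling scales (all names/numbers invented).

class OrganismScale:
    """Parasite behaviour inside a single host: gut -> blood -> brain."""
    STAGES = ["ingested", "circulatory", "brain"]

    def __init__(self):
        self.stage = 0

    def step(self):
        # Advance one stage per time step until the parasite reaches the brain.
        self.stage = min(self.stage + 1, len(self.STAGES) - 1)
        return self.STAGES[self.stage]


class HouseholdScale:
    """Transmission within one household: cat -> environment -> person."""
    def __init__(self, has_cat):
        self.has_cat = has_cat

    def exposure_risk(self):
        # Toy rule: households with a cat carry a fixed extra exposure risk.
        return 0.3 if self.has_cat else 0.05


class CityScale:
    """Epidemiological view: risk aggregated over many households."""
    def __init__(self, households):
        self.households = households

    def expected_cases(self):
        # Coarse-grained summary: sum of household-level risks.
        return sum(h.exposure_risk() for h in self.households)


city = CityScale([HouseholdScale(True), HouseholdScale(False), HouseholdScale(True)])
print(round(city.expected_cases(), 2))  # 0.65
```

The point of the sketch is structural: each scale answers a different question (where does the parasite go, who gets exposed, where are the hotspots), and the coarser scales consume only summaries of the finer ones.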

2. Probabilistic assessment

Basically, don’t represent things with certainty. There’s a probability of things happening, and there’s your certainty about that probability. You can design confidence metrics, use Bayesian statistics, or just model it separately, so that you’re not just describing expected frequencies of occurrence (probability) but describing probabilities and the confidence of those probabilities (the ‘sureness’ of those estimates). This can also be recursive.
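One standard way to carry both numbers at once is a Beta posterior over an unknown event frequency: its mean is the probability estimate and its spread is the ‘sureness’. A minimal sketch, with illustrative counts and a uniform prior as an assumption:

```python
# Probability plus confidence-in-that-probability via a Beta posterior.
import math

def beta_summary(successes, failures):
    """Return (mean probability, std dev) of a Beta(a, b) posterior,
    assuming a uniform Beta(1, 1) prior. The std dev is the 'sureness'."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Same estimated probability (0.5), very different confidence:
print(beta_summary(1, 1))      # few observations -> wide uncertainty
print(beta_summary(100, 100))  # many observations -> tight uncertainty
```

Both calls report the same expected frequency, but the second one is far surer about it, which is exactly the two-level description argued for above.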

3. Feature extraction/generalization/prediction by analogy

— basically, when looking at complex situations/behaviors/stories, you step down from the high-level picture, assess the ‘low-level’ features one by one, and scan for similarities. This feature extraction and comparison is basically how neural networks work. Neural networks are a simplified attempt at describing neurons in a computer; they’re used to predict protein structure, stock market behavior, etc. They are limited but sporadically useful tools. This ‘feature extraction’ occurs in actual human brains as well, which I’ll get to in a second. It’s the act of noticing similarities across different situations and basically ‘remembering’ them in your model (feature generalization).
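Prediction by analogy reduces to a nearest-neighbor lookup over feature vectors. A minimal sketch, where the feature values, stored cases, and outcome labels are all invented for illustration:

```python
# Prediction by analogy: reduce situations to feature vectors, then
# borrow the outcome of the most similar remembered case.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Remembered cases: (feature vector, generalized outcome). Values invented.
memory = [
    ([1.0, 0.0, 0.9], "outcome_A"),
    ([0.1, 1.0, 0.2], "outcome_B"),
]

def predict_by_analogy(features):
    # Scan stored cases for the closest match and reuse its outcome.
    best = max(memory, key=lambda case: cosine(features, case[0]))
    return best[1]

print(predict_by_analogy([0.9, 0.1, 1.0]))  # outcome_A
```

A trained neural network does something richer (it learns which features to extract), but the comparison-and-reuse step is the same idea.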

4. Narrative structure

Narrative structure is really about (1) causation/causality and (2) time sequences/structure. You want to be able to say Event A causes Event B, etc., which is why humans have a strong tendency to explain everything, sometimes falsely ascribing causes, motivations, and intentions. There are numerous natural-intervention examples demonstrating this: when the brain receives some corrupted input (because of a stroke, etc.), it will still automatically explain it away (“narratize” it), as in hemispatial neglect, phantom limb, schizophrenia, etc., using inferences about causality drawn from temporal order and association. A part of a model that could use this structure might: (a) summarize, really ‘story-fy’, a sequence of complex actions into a nice, human-friendly processed version of massive datasets (weather patterns, political events, financial behavior, etc.), which could be useful to the human user even if it falsely ascribes things and is limited; or (b) on a smaller scale, ascribe ‘motivations’ to agents in a system the model is observing by ‘reverse-narratizing’. This would basically give ‘human understanding’ (which is, partially, this narratization ability) to models of everything. I see these as distinct functions: (a) compressed information transmission (stories through language) and (b) prediction (or description) of complex agents (psychology/anthropomorphization).
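The ‘narratize’ step can be caricatured in a few lines. This sketch is deliberately naive and all event names are invented: it turns a timestamped event log into a causal chain purely by assuming that adjacency in time implies causation, reproducing exactly the false-causality habit described above.

```python
# Naive "narratizer": compress a timestamped event log into a causal story
# by treating temporal order as causal order (knowingly fallacious).

def narratize(events):
    """events: list of (timestamp, name) pairs -> one-line causal story."""
    ordered = [name for _, name in sorted(events)]
    links = [f"{a} caused {b}" for a, b in zip(ordered, ordered[1:])]
    return "; ".join(links)

log = [(2, "market dip"), (1, "rate hike"), (3, "sell-off")]
print(narratize(log))
# rate hike caused market dip; market dip caused sell-off
```

Even this crude compression shows the trade described in the text: the story is shorter and more human-friendly than the raw log, at the cost of asserting causal links the data does not actually support.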

5. Role-specialized thin-slicing/virtual-committee-of-experts

— It’s the equivalent of running a company without a CEO but with a CMO, CFO, CSO, CLO, CPO, etc. Role specialization gives everyone a perspective to be preoccupied with. They take in the world (or company developments) through this biased lens and act within the company to optimize their side (legal, medical, scientific, financial, public relations, etc.). Similarly, a model running or predicting something complex needs multiple specialized motivations/lenses to ‘extract’ multiple, distinct, high-level views of the same situation.
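A minimal sketch of the committee idea, with invented roles, weights, and situation fields: each ‘expert’ is just a biased scoring function over the same shared facts, and the committee output is the set of distinct views rather than a single verdict.

```python
# Virtual committee of experts: several specialized lenses score the same
# situation through their own bias. Roles and weights are invented.

EXPERTS = {
    "legal":     lambda s: -1.0 * s["regulatory_risk"],
    "financial": lambda s:  1.0 * s["expected_profit"] - 0.5 * s["cost"],
    "pr":        lambda s: -2.0 * s["public_backlash"],
}

def committee_views(situation):
    """Each expert extracts its own high-level view of the same facts."""
    return {role: lens(situation) for role, lens in EXPERTS.items()}

situation = {"regulatory_risk": 0.2, "expected_profit": 1.0,
             "cost": 0.4, "public_backlash": 0.1}
print(committee_views(situation))
```

The design choice worth noting is that there is no CEO function aggregating the scores: keeping the views separate is the point, since each lens surfaces a different high-level reading of the same input.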

These are raw, brief notes, but the applications would include: 1.) improved interactive technologies (input and output), 2.) human training and practice, and 3.) improved information models and intelligent systems.
