brain.
Third, neural activity is patterned. The cumulative effect of many incoming kicks may push the target neuron over the brink, causing it to fire. Because their individual contributions are typically very small, spikes from many source neurons must converge on the target neuron within a short time of each other for it to fire. Moreover, because each spike’s kick is regulated by the efficacy or “weight” of the synapse through which it is delivered, only certain specific coalitions of other neurons can set off a given neuron. Thus, neural activity is highly structured both in time and in space.
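The summation mechanism just described can be sketched in a few lines of code. This is only a toy model of the idea, not a biophysical simulation: the window length, synaptic weights, and firing threshold below are illustrative assumptions.

```python
# Toy model of spatial and temporal summation: a target neuron fires
# only when enough weighted spikes arrive within a short time window.

def fires(spike_times, weights, window=5.0, threshold=1.0):
    """Return True if the synaptic weights of spikes falling inside
    any `window`-millisecond interval add up to the firing threshold."""
    events = sorted(zip(spike_times, weights))
    for t0, _ in events:
        total = sum(w for t, w in events if t0 <= t < t0 + window)
        if total >= threshold:
            return True
    return False

# Three weak inputs arriving nearly together set the neuron off;
# the same three inputs spread out in time do not.
coincident = fires([1.0, 2.0, 3.0], [0.4, 0.4, 0.4])
dispersed = fires([1.0, 20.0, 40.0], [0.4, 0.4, 0.4])
```

The point of the sketch is that firing depends jointly on timing (spikes must coincide) and on weights (only a coalition with enough combined efficacy crosses the threshold).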
Fourth, neurons learn from experience. Much of this learning takes the form of activity-dependent modification of the hundreds of billions of synapses connecting neurons to each other. In this type of learning, a synapse is made a bit stronger every time it delivers a kick just before its target neuron fires and a bit weaker every time the kick arrives slightly too late to make a difference. Mathematical analysis and empirical investigations show that this simple rule for synaptic modification can cause ensembles of neurons to self-organize to perform certain kinds of statistical inference on their inputs, thereby learning representations that support Bayesian decision making.
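The timing-sensitive rule described above is known in neuroscience as spike-timing-dependent plasticity. Here is a minimal sketch of it; the learning rate and decay constant are illustrative assumptions, not measured biological values.

```python
# Toy spike-timing-dependent plasticity (STDP): a synapse strengthens
# when its spike arrives just before the target neuron fires, and
# weakens when it arrives just after (too late to make a difference).
import math

def stdp_update(weight, dt, rate=0.05, tau=20.0):
    """dt = t_fire - t_spike in milliseconds. Positive dt means the
    incoming spike preceded the target's firing (strengthen); negative
    dt means it arrived too late (weaken). Closer timing, bigger change."""
    if dt > 0:
        return weight + rate * math.exp(-dt / tau)  # potentiation
    return weight - rate * math.exp(dt / tau)       # depression

w = 0.5
w_before = stdp_update(w, dt=5.0)    # spike 5 ms before firing: grows
w_after = stdp_update(w, dt=-5.0)    # spike 5 ms after firing: shrinks
```

The exponential factor captures the "slightly" in the text: a kick delivered far from the firing event, on either side, changes the synapse hardly at all.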
To see how statistical computation can be carried out by brains, think of a neuron in the brain of a mole rat that comes to represent the presence of a ditch within her tactile sensory range. If this neuron’s axon connects to another neuron, which has learned to represent the sensory quality of the echo that the mole rat experienced shortly beforehand, then the weight of the synapse between them can be seen to represent the conditional probability of the echo, given the presence of the ditch—one of the quantities that the Bayesian brain needs to exercise foresight.
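The reading of a synaptic weight as a conditional probability can be made concrete with Bayes' rule: given that weight as the likelihood P(echo | ditch), a prior probability of ditches, and the chance of hearing such an echo with no ditch present, the animal can invert the arrow of causation and judge how likely a ditch is, given the echo. All the numbers below are made up for illustration.

```python
# The synaptic weight of the passage, read as a likelihood:
p_echo_given_ditch = 0.9
# Assumed prior and false-alarm rate (illustrative values):
p_ditch = 0.1
p_echo_given_no_ditch = 0.2

# Total probability of the echo, over both possibilities:
p_echo = (p_echo_given_ditch * p_ditch
          + p_echo_given_no_ditch * (1 - p_ditch))

# Bayes' rule: probability of a ditch, given the echo.
p_ditch_given_echo = p_echo_given_ditch * p_ditch / p_echo
```

With these numbers the echo raises the probability of a ditch from a prior of 0.1 to one third: evidence flowing backward along the causal arrow, which is exactly the inference that foresight requires.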
It turns out that a network of neurons is a natural biological medium for representing a network of causal relationships. Within this medium, the activities of neurons stand for objects or events, and their connections represent the patterns of causation: conditional probabilities, context, exceptions, and so on. The currency that neurons trade with one another—the numbers of spikes they emit and their timing—is inherently numerical, which makes it the most versatile kind of physical symbol. By being able to learn and use numerical representations, neurons leave pebbles and anvils (considered as physical symbols) in the dust. Of course, any symbol, numerical or not, can stand for anything at all. However, numerical symbols are absolutely required if the representational system needs to deal with quantities that must be mathematically relatable to each other, as in the probabilistic computation of causal knowledge.
Thus, not only are networks of neurons exquisitely suitable for representing the world and making statistically grounded foresight possible, but they can learn to do so on their own, as synapses change in response to the relative strength and timing of the activities of the neurons they connect. Seeing neural computation in this light goes a long way toward demystifying the role of the brain in making its owner be mindful of the world at large. The collective doings of the brain’s multitudes of neurons may be mind-boggling to contemplate, but that’s only because explanatory value—that is, conceptual simplicity—is found in the principles, not the details, of what the brain does.
Minds Without Brains
One of my favorite concise descriptions of the nature of the human mind comes from mathematician and computer scientist Marvin Minsky, who once observed that the mind is what the brain does. Having gotten a glimpse of the principles of what the mind is (a bundle of computations in the service of forethought) and of what the brain does (carrying out those computations), we can appreciate Minsky’s quip, but also discern that it is open to a very intriguing