first hominid species actually started carrying tools to places where they knew the objects would be useful (as opposed to just using tools in the immediate area). This indicates that their brains had developed the capacity to represent the fact that at some point in the future they might want to eat, even though right at this moment they might not be hungry. So rather than being driven by their current state, i.e., hunger, early humans began to prepare for future states.
This necessarily requires more memory to represent the past and the future. The ability to plan for future states of hunger, cold, or thirst, as opposed to just reacting to immediate desires, is perhaps what began the rapid cultural advance of human beings.
It is interesting to muse about when the concept of work coalesced in human culture. Presumably it would have been after the evolution of language. It is doubtful chimpanzees have any concept of work, but they are very social and there is some evidence that they can plan for the future to a very limited degree.
Our hominid line broke with chimps about five to seven million years ago, and something starting to resemble human culture began about 1.8 million years ago. Language is more recent. So when did "work" as something onerous and obligatory replace just being active as a function of external or internal stimuli? There must be some higher-order conscious reflection that is necessary to be able to say that you are working as opposed to doing nothing, or just trying to satisfy your hunger.
The other side of the idleness-is-good-for-the-brain coin is that our brains come with design limitations. In much the same way that James Cameron could not have made Avatar on one normal computer, an individual human brain can handle only so much information.
Our brains took millions of years to evolve in very different types of environments than, for example, a modern office. Humans only began reading and writing about five thousand years ago. This is why it is still such a struggle for us to learn how to read. We lack genetically specified neuronal structures for reading, and our brains have to recycle other brain structures when we learn to read. Speaking, on the other hand, evolved much earlier and we normally do not have to struggle to learn how to speak. There are stages to language acquisition that happen whenever a healthy brain develops in a language community, e.g., English, Spanish, or Chinese.
We have specialized brain structures that are attuned to speech perception and speech production. By the time we reach adolescence, we have mastered our native language without any special instruction. In contrast, many otherwise healthy people with normally functioning brains reach adulthood not being able to read.
I point this out because our modern way of life and our work ethic are much more recent cultural inventions than reading. Swedish neuroscientist Torkel Klingberg calls this "the Stone Age brain meeting the Information Age." For example, we do not have genetically specified brain structures for multitasking, and studies now show that multitasking makes you worse at each thing you are simultaneously attempting to do.
In a famous series of studies, Stanford professor of communication Clifford Nass wanted to find out what gives multitaskers their proclaimed abilities. Professor Nass marveled at his colleagues and friends who claimed to be expert multitaskers, people who chat with three people at a time while answering emails and surfing the web.
In one experiment, Professor Nass showed a pair of red triangles surrounded by two, four, or six blue rectangles for a brief moment to both high multitaskers and low multitaskers (people who don't normally try to do more than one thing at a time). Then he showed the same picture again, sometimes altering the position of the red triangles.
The subjects were told to ignore the blue rectangles and to judge whether the red