I have read many articles about how the brain is the most power-hungry organ in any complex living organism, consuming about 20% of the body's oxygen supply in the resting state.
Since the usual medium of oxygen delivery is through respiration, this should mean that a lot of blood is supplied to (and hence present in) the brain.
Why then is the color of the brain almost always white? There is usually a little red tinting on the exterior, and you can see the interior is still white. Should it not be red, presuming that, given a high surface-area-to-volume ratio, most brain cells are in close contact with the bloodstream?
The brain is indeed stacked with blood vessels, as shown in a 3D model in Fig. 1.
Fig. 1. 3D-printed model of the brain's blood vasculature. Source: Biobots.
The blood supply on the surface of a live brain is readily seen during a craniotomy (Fig. 2.)
Fig. 2. Surface of the brain. Source: The Sterile Eye.
When freshly prepared, the interior of the brain appears pink, indicative of the presence of blood (Fig. 3):
Fig. 3. Coronal sections of a non-fixated brain. Source: Documenting Reality.
Most images of the brain are taken after fixation. The fixation process includes flushing the blood out of the brain, and the fixative itself markedly pales the tissue. Fixation therefore yields the more familiar pale-looking pictures of the brain, as shown in Fig. 4.
Fig. 4. Coronal section through a fixated brain. Source: Wikimedia.
Also note that the blood supply mainly targets the areas where the cell bodies are located, i.e., the gray matter. The white matter receives a much sparser blood supply and consists mainly of heavily myelinated, fatty tissue supporting axonal fiber tracts (Fig. 5).
Fig. 5. Arterial supply of the brain. Source: Human Physiology Academy.
The Human Brain
The following amazing facts about the human brain will demonstrate just how sophisticated this organ really is.
Physical Facts About the Brain
The adult human brain weighs about 3 pounds, and the cerebrum accounts for about 85% of it. The brain is composed of roughly 40% gray and 60% white matter. The gray matter contains the roughly 100 billion neurons that gather and transmit signals, while the white matter is made of the dendrites and axons the neurons use to transmit those signals. The brain is composed of about 75% water and is the fattiest organ in the body, consisting of a minimum of 60% fat. Humans have one of the largest brain-to-body mass ratios of any animal, and the blood vessels in the brain, if stretched end to end, would be about 100,000 miles long. The neocortex, the seat of higher functions such as language, comprises about 76% of the organ.
The Developing Brain
Brain development begins very early in the life of a fetus. The brain grows at a rate of about a quarter million neurons per minute in the first trimester, and brain growth continues through age 18. The brain of a newborn baby triples in size during the first year. The amount of stimulation a child receives can affect brain growth by as much as 25%, and with continued mental activity, new neurons can develop throughout life.
The brain uses about 20% of the blood and 20% of the oxygen circulating through the body at any given time. If the blood supply to the brain is cut off for more than 8 to 10 seconds, loss of consciousness occurs, yet the brain can survive for 4 to 6 minutes without any oxygen. Signals travel along the brain's nerve fibers at up to 120 meters per second, and in a waking state the brain generates about 10 to 23 watts of power.
The brain is all this and more, providing us with the ability to bond socially with other humans and giving us the facilities for the creativity that makes us the innovators of the animal kingdom.
White matter hyperintensities are spots in the brain that show up as bright white areas on magnetic resonance imaging (MRI) scans.
According to Charles DeCarli, the director of UC Davis Alzheimer's Disease Center, these areas may indicate some type of injury to the brain, perhaps due to decreased blood flow in that area.
The presence of white matter hyperintensities has been correlated with a higher risk of stroke, which can lead to vascular dementia.
White matter hyperintensities are often referred to as white matter disease.
Initially, white matter disease was thought to simply be related to aging. However, we now know there are other specific risk factors for white matter disease, which include:
- High blood pressure
- Cardiovascular disease
- High cholesterol.
While white matter disease has been associated with strokes, cognitive loss, and dementia, it also has physical and emotional symptoms such as balance problems, falls, depression, and difficulty multitasking (e.g., walking while talking).
The Science of Your Racist Brain
The amygdala. Image credit: CLIPAREA/Shutterstock.
When the audio of Los Angeles Clippers owner Donald Sterling telling a female friend not to “bring black people” to his team’s games hit the internet, the condemnations were immediate. It was clear to all that Sterling was a racist, and the punishment was swift: The NBA banned him for life. It was, you might say, a pretty straightforward case.
When you take a look at the emerging science of what motivates people to behave in a racist or prejudiced way, though, matters quickly grow complicated. In fact, if there’s one cornerstone finding when it comes to the psychological underpinnings of prejudice, it’s that out-and-out or “explicit” racists, like Sterling, are just one part of the story. Perhaps far more common are cases of so-called “implicit” prejudice, where people harbor subconscious biases, of which they may not even be aware, but that come out in controlled psychology experiments.
Much of the time, these are not the sort of people whom we would normally think of as racists. “They might say they think it’s wrong to be prejudiced,” explains New York University neuroscientist David Amodio, an expert on the psychology of intergroup bias. Amodio says that white participants in his studies “might write down on a questionnaire that they are positive in their attitudes towards black people... but when you give them a behavioral measure, of how they respond to pictures of black people, compared with white people, that’s when we start to see the effects come out.” You can listen to our interview with Amodio on the Inquiring Minds podcast.
Welcome to the world of implicit racial biases, which research suggests are all around us, and which can be very difficult for even the most well-intentioned person to control. But that doesn’t mean we can’t do anything about them: We can draw attention to the insidious nature of these subconscious influences, and we can work to prevent them from exerting harmful effects not only on interpersonal behavior, but also on policy, employment practices, and public life. That’s what Amodio’s research (and that of many other social psychologists and neuroscientists who study prejudice) is centrally aimed at achieving.
How do we know implicit biases exist? In a number of classic studies, research subjects are asked to complete a seemingly simple task, such as watching words pop up on a screen and quickly categorizing those words as either positive, like “happy,” or negative, like “fear.” But right before the word appears, a face, either black or white, flashes on the screen. “What we find over and over again in the literature,” explains Amodio, “is that if a black person’s face was shown really quickly, then people are quicker at categorizing negative words than positive words that follow it. Versus if a white face was shown really quickly, people are usually quicker to categorize the positive words, compared with the negative words.”
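The bias in such priming tasks is quantified from reaction times. Here is a toy scoring sketch; the trial data and the exact scoring rule are hypothetical illustrations, not Amodio's actual stimuli or analysis:

```python
from statistics import mean

# Hypothetical reaction times in milliseconds, keyed by (prime race, word valence).
trials = {
    ("black", "negative"): [520, 540, 510],
    ("black", "positive"): [580, 600, 570],
    ("white", "negative"): [575, 590, 585],
    ("white", "positive"): [525, 535, 515],
}

def priming_effect(trials, prime):
    """Mean RT for positive minus negative words after a given prime.

    A positive value means negative words were categorized faster
    after that prime; a negative value means positive words were faster.
    """
    return mean(trials[(prime, "positive")]) - mean(trials[(prime, "negative")])

# The implicit-bias pattern described above: negative words are faster
# after black primes, positive words faster after white primes.
print(priming_effect(trials, "black"))  # > 0
print(priming_effect(trials, "white"))  # < 0
```

A real analysis would average over many trials and participants and test the difference statistically, but the direction of the two numbers is what the studies compare.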
These types of biases are quite prevalent. According to a research summary by Stanford University’s Recruitment to Expand Diversity and Excellence program, “about 75% of whites and Asians demonstrated an implicit bias in favor of whites compared to blacks.” In other words, despite your best intentions, you might be a little bit racist. (Similar unconscious biases have been documented in people’s views of those of different genders, the elderly, and other groups.)
And why do these split-second negative responses exist? The underlying problem is that our brains have evolved to see patterns in things that are complex, and to categorize the world in order to simplify it. Thus, when we encounter another person, our brains rapidly and subconsciously try to figure out if he or she is friend or foe: in-group or out-group.
We make these calculations based on many factors, but if we know very little about the person, we often categorize her based on race. What tells us how to do so? The culture in which we live. According to Amodio, while a general categorizing tendency has been with us for “as long as there has been a human mind,” the specific categories that we use (Latino, black, white, Asian American, and so on) and how we feel about them are a social phenomenon. As such, they’re heavily shaped by the strong prevalence of stereotypes in our society, stereotypes so common that even children pick them up at a very young age.
And thus, while our society has made progress when it comes to matters of race, subliminal categorization tendencies still permeate human behavior. This much has been demonstrated in the lab multiple times, but here are some noteworthy findings on the psychology of implicit racial bias:
1. Associating skin color with physical, rather than mental, abilities: In a 2006 study of more than 150 white college students, Amodio and his colleague Patricia Devine asked them to categorize words as either pleasant (such as “peace,” “heaven,” and “honor”) or unpleasant (“cancer,” “vomit,” “poverty”) and as either mental (“math,” “brainy,” “scientist”) or physical (“basketball,” “agile,” “dance”). Before each categorization task, the subjects were shown black or white faces. The result? These largely liberal college students were faster at categorizing unpleasant and physical words when shown a black face, and faster at categorizing pleasant and mental words when they were preceded by a white face. Once again, implicit biases shone through in the results.
2. Keeping their distance: Amodio and Devine then went a step further, seeking to identify other ways in which a subtle bias against members of a different race might manifest itself. So in a new experiment, they told study participants that they were going to work with a partner to answer a variety of questions. When the subjects first arrived for the study, their name was called out along with that of their supposed partner (who had not yet arrived). The partner’s name was either “Tyrone Washington” or “Darnell Stewart.”
After this cue had been planted, the participants were then asked to decide which study tasks they would do and which their partner should do, after being informed that some of the tasks involved answering questions similar to those found in the math and verbal sections of the SATs, and others involved answering questions about sports and popular culture. Sure enough, study participants who had shown higher implicit bias were more likely to assign their (presumably black) partner to answer the questions about sports and popular culture, rather than academics.
But that was just the beginning: Then the study subjects were taken to a waiting room and told that their partner had arrived but had just gone to the bathroom. A coat and backpack, supposedly belonging to the partner, were placed on a chair, and the study participant was told to sit and wait. Once again, the unconscious bias came out: Subjects who had scored higher on implicit bias made different seating choices. “They’ll sit further, in a row of chairs, away from the jacket and backpack of what they think is a black person,” says Amodio.
3. Not voting for Obama: In a very different context, these tendencies also cropped up in the 2008 presidential election, pitting a white candidate (John McCain) against a black one (Barack Obama). In a 2009 study, B. Keith Payne of the University of North Carolina-Chapel Hill and his colleagues compared explicit and implicit bias scores for a large number of individuals with their self-reported voting behavior in the election. Not surprisingly, the conscious racists, those showing explicit anti-black bias, tended to vote against Obama and for McCain. But after controlling for explicit bias, the study found that the remaining implicit bias had a surprising effect. It didn’t push voters towards McCain, but it did take votes away from Obama, because these people either tended to favor a third-party candidate or were less likely to vote at all.
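"Controlling for explicit bias" here means fitting a model in which both bias scores predict the outcome, so the implicit coefficient reflects what implicit bias adds beyond explicit bias. A schematic sketch with synthetic data (the coefficients and the linear setup are illustrative assumptions, not Payne's dataset or model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic bias scores: the two measures are correlated, as in real data.
explicit = rng.normal(size=n)
implicit = 0.5 * explicit + rng.normal(size=n)

# Synthetic outcome: support for the black candidate drops with both biases.
support = 1.0 - 0.6 * explicit - 0.3 * implicit + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept and both predictors.
# The implicit coefficient is its effect holding explicit bias constant.
X = np.column_stack([np.ones(n), explicit, implicit])
coef, *_ = np.linalg.lstsq(X, support, rcond=None)
print(coef)  # roughly [1.0, -0.6, -0.3]: implicit bias still matters
             # after explicit bias is accounted for
```

Payne's actual analysis of voting used categorical outcomes (vote choice, abstention), but the logic of partialing out the explicit measure is the same.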
4. Racial bias at the doctor’s office: Implicit bias can also affect how white doctors treat black medical patients. In one disturbing 2007 study, 220 medical residents took an implicit association test, to detect subtle racial bias, and also read a medical history of a patient (either black or white) experiencing chest pain, with clinical details suggestive of a heart attack. The result was that among white doctors, as their implicit bias increased, their medical decision-making about black patients changed as well. In particular, their likelihood of treating a black patient with thrombolysis, a drug treatment to reduce blood clots (and prevent heart attacks), decreased. In other words, they were less likely to administer a potentially life-saving treatment.
And that’s just the beginning. Other studies have shown that doctors are more likely to recommend and perform unnecessary surgeries on racial and ethnic minority patients than on their white counterparts. They’ve also shown that Latina and Chinese women are less likely to receive hormone therapy (which decreases the risk of recurrence of breast cancer) than white women.
So what’s happening in the brain that’s causing these unconscious biases to affect behavior? It turns out that we can distinguish between brain activities that are associated with implicit racial biases, of the sort described above, and those associated with the self-regulation or cognitive control processes that kick in to prevent most of us from consciously behaving like bigots.
When we look at faces of individuals of a different race, a part of our brain called the amygdala often gets active. The amygdala is involved in learning and, specifically, in a type of learning called fear conditioning&mdashtracking what kinds of things predict bad outcomes, much like a rat learning that a specific tone will lead to an electric shock. Essentially, its job is to figure out what parts of the environment are threatening and remind us to stay away from them.
The problem is that because our culture is filled with racial stereotypes, many of us “learn” inaccurate and prejudicial information about those who look different. And the amygdala operates extremely rapidly, long before our conscious thoughts have time to react. Thus, the operations of this and related brain regions, “if left unchecked, they might lead to the expression of some bias in a way that you don’t intend,” says Amodio.
Fortunately, the amygdala alone doesn’t drive all of our behavior. Our brains have evolved such that we have a large and highly-complex frontal cortex, which allows us to inhibit impulses, make complicated decisions and behave in socially appropriate ways. It’s the frontal cortex that helps most of us tamp down our gut reactions and, in our conscious behaviors, strive to treat members of all races equally. “The human mind is extremely adept at control and regulation,” Amodio says, “and the fact that we have these biases should really be seen as an opportunity for us to be aware and do something about them.”
That’s why, in the end, Amodio doesn’t think that the mere existence of implicit biases provides any excuse for the display of overt or explicit racism. After all, stereotypes are ubiquitous. We all perceive them in our culture, but we do not all act upon them. In other words, we have the ability, and the responsibility, to regulate our own behavior.
“I don’t really think humans have any good excuses for acting on their automatic biases,” says Amodio.
This episode of Inquiring Minds, a podcast hosted by neuroscientist and musician Indre Viskontas and best-selling author Chris Mooney, also features a discussion of how scientists turned to a group of video gamers to help solve a complex problem involving how the human retina detects motion, and of the release of the groundbreaking National Climate Assessment.
To catch future shows right when they are released, subscribe to Inquiring Minds via iTunes or RSS. We are also available on Stitcher and on Swell. You can follow the show on Twitter at @inquiringshow and like us on Facebook. Inquiring Minds was also recently singled out as one of the “Best of 2013” on iTunes.
Neuroimaging White Matter
Although introduction of computerized axial tomography (the CAT scan) in the 1970s revolutionized how researchers could visualize and study the brain, the detailed interpretation of the structure of white matter awaited the advent of magnetic resonance imaging (MRI) in the 1980s. MRI became the method of choice not only for research on white matter and its disorders but for diagnosis of MS and many other problems. Older disorders were better understood, new ones recognized. It became possible to correlate a patient’s complex behavioral problems, particularly in the case of higher mental functions, with white matter pathology.
Still newer imaging technology has further reﬁned our visualization of white matter. Most exciting is diffusion tensor imaging, which measures the dispersion of water within white matter tracts. In normal white matter, water diffuses in the direction of the speciﬁc tract being imaged (called anisotropic diffusion). In damaged white matter, water diffusion is isotropic, meaning it is less directional and more chaotic. Because diffusion tensor imaging can detect these different types of diffusion, it offers the intriguing prospect of mapping the conﬁguration and connectivity of both abnormal and normal white matter regions.
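The degree of anisotropy that diffusion tensor imaging measures is commonly summarized as fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor at each voxel. A minimal sketch of the standard formula, with illustrative eigenvalues (not from any particular scan):

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy from the three eigenvalues of a diffusion tensor.

    FA ranges from 0 (isotropic diffusion, as in damaged white matter)
    to 1 (perfectly directional diffusion along an intact fiber tract).
    """
    lam = np.asarray(eigenvalues, dtype=float)
    mean_diff = lam.mean()
    num = np.sqrt(((lam - mean_diff) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Intact tract: diffusion strongly aligned with the fiber direction.
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # high FA, about 0.8

# Damaged white matter: diffusion nearly isotropic.
print(fractional_anisotropy([1.0e-3, 0.9e-3, 0.9e-3]))  # low FA
```

Mapping FA voxel by voxel is what lets tractography trace the configuration and connectivity of white matter regions.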
Using magnetic resonance spectroscopy, we can discover the chemical composition of white matter regions. Sometimes called a noninvasive biopsy, magnetic resonance spectroscopy can be used to measure certain products of cell energy use in selected regions, giving us a clue to the integrity of the myelin and axons. Evidence is growing that this technique can ﬁnd white matter alterations even in areas that appear normal on conventional MRI. Finally, magnetization transfer imaging (a technology based on the interactions of protons and macromolecules) generates data on the ﬁne structure of white matter, yielding still more information on myelin integrity and possible myelin injury, including in otherwise normal-appearing white matter.
Such neuroimaging techniques reveal structure, but there are equally impressive imaging studies of brain function. To date, these studies have focused primarily on the cerebral cortex because of gray matter’s obvious role in mental operations and its high use of energy, which can be detected by positron emission tomography (PET). A less expensive but also less elegant technique of this type is single photon emission computed tomography (SPECT). More recently, functional MRI (fMRI) was added to the available imaging methods.
The neuroimaging of function and structure are complementary in pursuing understanding of higher brain capacities. In particular, PET and fMRI can identify cortical regions involved in cognitive processing, whereas other MRI methods can establish the patterns of connections between these areas. Most scientists are persuaded that distributed neural networks, linked into arrays of both gray and white matter structures, may underlie our conscious mental operations. It should be possible to map these neural networks and link them to speciﬁc functions, such as memory and attention, by combining structural and functional neuroimaging methods. This exciting prospect is driving new research efforts.
The arbor vitae is the branching white matter at the center of the cerebellum, carrying sensory and motor information to and from the cerebellar cortex. The cerebellum uses this information for motor control: without the arbor vitae and cerebellum, a person could not coordinate the arms and legs, catch a ball, or perform other actions that require hand-eye coordination.
The Biology of Bigotry
Given the result of the recent US presidential election, minority groups worldwide are understandably worried about the future. Already social media platforms are awash with reports of emboldened bigots verbally, and in some disturbing cases physically, assaulting people of color, Muslims, and members of the LGBTQI+ community.
But where does this animosity come from? A popular adage is that racists are not born but rather they are made, and while this message is ultimately a hopeful one, it ignores a substantial portion of the story: that of biology. As I discuss in the Cambridge Handbook of the Psychology of Prejudice, there are multiple lines of evidence pointing to a genetic basis for prejudicial attitudes and behavior and if we hope to reduce net prejudice, understanding this is of critical importance.
The primary source of this evidence comes from the field of behavioral genetics. This endeavor relies on the natural experiment provided by contrasting identical and non-identical twin pairs. Whereas identical twins share all of their parents’ segregating genes (i.e., are 100% genetically identical), non-identical twins on average share only half of these genes. Based on this knowledge, one can infer that a trait shared more strongly by identical twins than by non-identical twins is influenced by genes. If, however, identical and non-identical twin pairs are equally similar, then presumably the environment the twins share has influenced the trait.
Combining this relatively simple premise with complex mathematical modeling gives researchers remarkable power to dissect variation (i.e., differences between individuals) into that caused by genetic effects, that caused by the environment the twins share (e.g., their household and their religious and political upbringing), and, finally, variation caused by individual experiences, chance biological effects, and any error in measurement of the trait.
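The simplest version of this decomposition is Falconer's method, which turns the two twin correlations directly into the three variance components. A sketch with made-up correlations (the numbers below are toy values, not results from the studies cited here):

```python
def ace_estimates(r_mz, r_dz):
    """Falconer-style ACE decomposition from twin-pair correlations.

    A (additive genetic variance)   = 2 * (r_MZ - r_DZ)
    C (shared-environment variance) = 2 * r_DZ - r_MZ
    E (unique environment + error)  = 1 - r_MZ
    """
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# Toy example: identical twins correlate 0.45 on an attitude scale,
# non-identical twins 0.25.
a2, c2, e2 = ace_estimates(0.45, 0.25)
print(a2, c2, e2)  # 0.40 genetic, 0.05 shared environment, 0.55 unique
```

Published twin studies fit the full structural model rather than these back-of-the-envelope formulas, but the logic of comparing the two correlations is the same.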
For instance, 30 years of research using the twin method has revealed that genes account for between 20 and 40% of the variation in political orientation (as I’ve covered previously). Similarly, twin studies have revealed substantial genetic effects on prejudicial attitudes.
One study from 1986 observed that identical twins were more similar in their attitudes towards white superiority, apartheid, and mixed marriage than non-identical twins. Further modeling revealed that between 30 and 40% of the variation in these attitudes was due to genes. Surprisingly, the shared environment of the twins, which includes shared aspects of the family household, accounted for less than 15% of the variation in the same attitudes.
A more recent study including attitudes to equal rights for gays and women found that a third of the variation in support for these ideas was due to genetic effects. From ethnocentrism (18%), negative attitudes to non-Europeans (32%), generalized prejudice (38%), and in-group favoritism (i.e., preferring one’s own religious, political, or ethnic group: 46%), twin data have consistently shown strong genetic effects influencing bigoted attitudes, generally with small influences of the home environment.
This seems counter-intuitive: how can genes influence acquired attitudes while shared environmental influences are small?
It’s important to understand that genes aren’t coding for specific attitudes: there is no gene for racism. What these results indicate is that genes are contributing to behavioral and psychological dispositions to regard out-group members, such as those that are ethnically or culturally different, negatively. Whether these genes are different for different types of out-groups, such as those of a different sexuality vs. those of a different religion, remains to be seen, though it seems likely that most prejudices stem from a similar mechanism, one that fosters fear and suspicion of out-group members. Despite these genetic influences, the specific attitudes (i.e. “immigrants get more than they deserve from the government”) are almost certainly derived from the environment, with genes influencing the degree to which they are endorsed. Consequently, in an environment densely populated with bigoted rhetoric (such as many experienced during the recent election campaign) individuals with a genetic predisposition to out-group hostility may find themselves more readily espousing prejudicial attitudes.
This is ultimately mixed news: unfortunately, it means that more heritable attitudes are more firmly entrenched, but by observing changes in prejudice over time, it is clear that the specific types and strengths of prejudice within a society are open to change. Key to this is the understanding that heritability refers only to causes of variation and says nothing about the average level of the trait itself. Take intelligence as an example: intelligence is highly heritable (around 85%), yet since IQ testing began, psychologists have consistently observed generational increases in the average intelligence of the population, referred to as the Flynn effect. The causes of these increases are largely unknown but could be attributed to better nutrition, more widely accessible education, or a lower burden of disease. Regardless of the cause, the heritability of intelligence has for the most part not changed, while the average IQ has continued to shift upwards.
The same is possible for prejudice. While genes may maintain some variation in how strongly prejudice is endorsed, overall prejudice within society can be reduced: a rising tide lifts all boats. Even across the few studies mentioned above, it is apparent that the focus of prejudice shifted rapidly, from racism (i.e., issues of segregation) to concerns over gay rights, within the space of 20 years.
However, it is incredibly important to consider these genetic effects when attempting to understand and intervene in prejudice. Multiple studies have shown that the family environment contributes relatively little to the maintenance of prejudicial attitudes over and above genes; racism likely clusters within families because of shared genes. As a result, social interventions should be tailored, taking into account the fact that genetic variation may limit their effectiveness for certain individuals.
Humans have made great strides in overcoming the burden of prejudice but it is often a lack of understanding that keeps us from truly succeeding: We can no more assume that all individuals are equally prone to bigotry based on their environment than we can assume that one’s genes entirely determine their future. Rather, to resolve complex social issues requires a nuanced understanding of the interaction between biology and behavior, one that starts at the level of a single gene and extends to the entirety of modern culture. In the uncertain days to come it will be crucial to bear this in mind.
1. Neale, M.C. and Cardon, L.R., Methodology for Genetic Studies of Twins and Families. 1992, Boston: Kluwer Academic Publishers.
2. Martin, N.G., et al., Transmission of social attitudes. Proceedings of the National Academy of Sciences of the United States of America, 1986. 83(12): p. 4364-4368.
3. Alford, J.R., Funk, C.L., and Hibbing, J.R., Are Political Orientations Genetically Transmitted? American Political Science Review, 2005. 99(2): p. 153-167.
4. Orey, B.D. and H. Park, Nature, nurture, and ethnocentrism in the Minnesota Twin Study. Twin Research and Human Genetics, 2012. 15(01): p. 71-73.
5. Kandler, C., et al., The genetic and environmental roots of variance in negativity toward foreign nationals. Behavior Genetics, 2015. 45(2): p. 181-199.
The Incredible Shrinking Human Brain
The human brain is big, and it's powerful, able to dream up innovative solutions to complex problems. Yet our brains don't age well: As we grow older, they tend to shrink and become increasingly vulnerable to cognitive dysfunctions such as memory loss and dementia. A new magnetic resonance imaging (MRI) study comparing humans and chimpanzees finds that chimp brains maintain their size as they age. Slowly losing our minds, it turns out, may be the evolutionary price we pay for having bigger brains and longer life spans.
As far as researchers can tell, humans are the only animals subject to specific brain maladies such as Alzheimer's disease, which in the United States afflicts nearly 50% of people over the age of 85. But even normal, apparently healthy human brains show the effects of aging, such as the buildup of amyloid-β plaque deposits and loss of neural connections, especially in regions linked to learning and memory. And previous studies of human brains have suggested that these brain regions, which include the frontal lobe and the hippocampus, are especially prone to shrinkage with age.
Although few similar studies of other primates have been conducted, recent research with rhesus monkeys has shown only very limited shrinkage with age. However, the evolutionary lineages leading to humans and rhesus monkeys diverged about 30 million years ago, leaving scientists in the dark about when the human pattern of brain aging began.
To get a better idea, a team led by Chet Sherwood, an evolutionary neuroanatomist at George Washington University in Washington, D.C., directly compared the brain-shrinking patterns of chimps and humans, which diverged only about 5 million to 7 million years ago. The study sample included 87 humans ranging from ages 22 to 88, and 69 chimps from ages 10 to 51. Since chimps rarely live longer than 45 years in the wild—although a few in captivity have survived into their 60s—the sample represents the normal life span of both species.
The team used MRI scanners to measure the sizes of a number of brain regions in both humans and chimps. The differences were striking: While chimps showed no significant age-related shrinkage in any of the regions measured, all of the human brain regions showed dramatic age effects, the team reports online this week in the Proceedings of the National Academy of Sciences. Some regions shrank as much as 25% by 80 years of age. Moreover, the pattern was somewhat different for human gray matter, which contains the nerve cell bodies and their nuclei, along with auxiliary cells such as microglia, and human white matter, which consists of the long neural axons and which makes the connections between different brain regions.
For example, the gray matter of the human frontal lobe shrank an average of about 14% between the ages of 30 and 80, and the gray matter of the hippocampus about 13% over the same period. But shrinkage of white matter was even more severe: The white matter of the frontal lobe shrank about 24%, similar to the white matter volume decrease in most other brain regions measured.
Moreover, unlike the gray matter, which showed a more gradual shrinkage over time, the decline in white matter was most precipitous between the ages of 70 and 80. So although the average white matter decline in the frontal lobe was 24% by age 80, it was only about 6% at age 70.
So why do chimpanzees make it through their entire normal life spans without significant brain shrinkage, whereas the human brain appears to wither with age?
"This is the million-dollar question," Sherwood says. In the paper, the team points out that the larger human brain, which is more than three times as big as that of a chimp, also has much higher energy demands. Thus, the human brain uses up to 25% of the body's total available energy when we're at rest, compared with no more than about 10% for other primates.
The toll of keeping up with that energy supply, the team argues, shows up on the cellular and molecular levels in the human brain. This includes a decline in the efficiency of the mitochondria, the energy storehouses of living cells, as well as damage from oxidative stress, the result of oxygen-containing molecules that are produced during cell metabolism.
"My guess is that our neurons basically do the best they can to maintain maximal functioning for as long as they can," Sherwood says. "But they have the odds really stacked against them after long years of high energy consumption."
Dean Falk, an anthropologist at the School for Advanced Research in Santa Fe, finds the differences between gray matter and white matter shrinkage patterns particularly interesting. White matter, which humans have relatively more of than chimps and other primates, "is particularly important for complex cognition in Homo sapiens," Falk says, because it makes the connections between brain regions involved in transmission of information during problem solving and other complicated tasks.
Nevertheless, Peter Rapp, a neurobiologist at the National Institutes of Health's Laboratory of Experimental Gerontology in Baltimore, Maryland, who carried out some of the earlier brain-imaging studies in rhesus monkeys, says that the new study does not distinguish between brain shrinkage as a consequence of normal aging in humans and shrinkage that might be due to neurodegenerative brain disease in a subset of the subjects. "Is what distinguishes chimps and humans susceptibility to disease, or a qualitative difference in the process of healthy brain aging?"
Bruce Yankner, a neurologist at Harvard Medical School in Boston, agrees. To test the authors' hypothesis that human brain shrinkage is a result of greater longevity, Yankner says, "it would be interesting" to see if similar brain shrinkage occurs in other species with extreme longevity, "such as tortoises and turtles that live for well over 100 years, elephants that can live for 70 years, and parrots that can live for 80 years."
How gender stereotypes led brain science astray
Research so far has failed to challenge deep prejudice, says Gina Rippon
Several things went wrong in the early days of sex differences and brain imaging research. With respect to sex differences, there was a frustrating backward focus on historical beliefs in stereotypes (termed "neurosexism" by psychologist Cordelia Fine). Studies were designed based on the go-to list of "robust" differences between females and males, generated over the centuries, or the data were interpreted in terms of stereotypical female/male characteristics that may not even have been measured in the scanner. If a difference was found, it was much more likely to be published than a finding of no difference, and it would also breathlessly be hailed as an "at last the truth" moment by an enthusiastic media. Finally, the evidence that women are hard-wired to be rubbish at map reading and that men can't multi-task! So the advent of brain imaging at the end of the 20th century did not do much to advance our understanding of alleged links between sex and the brain. Here in the 21st century, are we doing any better?
One major breakthrough in recent years has been the realisation that, even in adulthood, our brains are continually being changed, not just by the education we receive, but also by the jobs we do, the hobbies we have, the sports we play. The brain of a working London taxi driver will be different from that of a trainee and from that of a retired taxi driver; we can track differences among people who play videogames or who are learning origami or the violin. Supposing these brain-changing experiences are different for different people, or groups of people? If, for example, being male means that you have much greater experience of constructing things or manipulating complex 3D representations (such as playing with Lego), it is very likely that this will show up in your brain. Brains reflect the lives they have lived, not just the sex of their owners.
Seeing the life-long impressions made on our plastic brains by the experiences and attitudes they encounter makes us realise that we need to take a really close look at what is going on outside our heads as well as inside. We can no longer cast the sex differences debate as nature versus nurture – we need to acknowledge that the relationship between a brain and its world is not a one-way street, but a constant two-way flow of traffic.
Once we acknowledge that our brains are plastic and mouldable, then the power of gender stereotypes becomes evident. If we could follow the brain journey of a baby girl or a baby boy, we could see that right from the moment of birth, or even before, these brains may be set on different roads. Toys, clothes, books, parents, families, teachers, schools, universities, employers, social and cultural norms – and, of course, gender stereotypes – all can signpost different directions for different brains.
Resolving arguments about differences in the brain really matters. Understanding where such differences come from is important for everyone who has a brain and everyone who has a sex or a gender of some kind. Beliefs about sex differences (even if ill-founded) inform stereotypes, which commonly provide just two labels – girl or boy, female or male – which, in turn, historically carry with them huge amounts of “contents assured” information and save us having to judge each individual on their own merits or idiosyncrasies.
With input from exciting breakthroughs in neuroscience, the neat, binary distinctiveness of these labels is being challenged: we are coming to realise that nature is inextricably entangled with nurture. What used to be thought fixed and inevitable is being shown to be plastic and flexible, and the powerful biology-changing effects of our physical and social worlds are being revealed.
The 21st century is not just challenging the old answers – it is challenging the question itself.