
Is the information in the brain stored in the connections rather than neurons?




Can I think of the difference between the grandmother-neuron model and the interconnected-neuron-network model as follows: the information isn't primarily stored in the neurons (that is, in their states) but in the connections between them?


Is the information in the brain stored in the connections rather than in the neurons?

It depends on what information you are referring to.

The brain does not store just one type of information. The grandmother-cell model and the interconnected-neuron-network model are two subsystems of a greater system, working together in a perhaps competitive or complementary arrangement. The purpose of this system is to extract information from the environment and identify it again for the organism's benefit (1).

So let's say we see something that will benefit us in the future, or that we might want to avoid. This is a simplification, but as you see that something, neurons in your neocortex light up, specifically in region V1 at the back of your head. The signals arrive there after a lot of computation along the way from your retina, but they also project to other parts of your brain, where they are perhaps processed consciously. The neurons fire in an orderly fashion, in a map, and this bouncing around of electrical signals represents your conscious and unconscious processing of the image; so the information exists as a network of electrical signals (2).

During recollection (there are different types of memories, so the structure in charge could be the neocortex, basal ganglia, hippocampus, or even a muscle (3)), this same map (or a smaller subset of it) gets replayed. The mechanism(s) by which this information is first encoded and later retrieved is still not entirely understood, but it involves a map and a gateway or switchboard (4). This switchboard ties together different percepts (smaller networks of electrical signals), so in a way the information is represented at the switchboard level: if you disable this switchboard, you can't form new memories, or you lose previous ones.

A grandmother cell, in the scheme of what I just explained, is the result of another subsystem that helps recognize something. Once you successfully encode a percept (a thing we want to remember seeing), you might want to associate it with something (good, bad, delicious, etc.). At the top of a series of such associations lies a grandmother cell, which in turn is a handy evolutionary way of signaling back to other systems that some previous information has been experienced. So in this way, the information about whether a given subset of relationships between networks of electrical signals has been experienced and encoded lies here.

References/Sources:

  1. The idea of grandmother cells and object recognition in general is summarized in the Object Recognition chapter of Cognitive Neuroscience (Gazzaniga, 3rd ed., pp. 222-225), along with the problems a grandmother cell introduces in object recognition (i.e., if you lose the grandmother cell, you lose all the information about your grandmother).

  2. The vision chapters in Koch (The Quest for Consciousness, 2004) provide a very readable account of how the information goes from the retina to V1. Vision Science (Palmer) is a more thorough account.

  3. The chapter on learning and memory (Cognitive Neuroscience) gives an overview.

  4. Gluck (Gateway to Memory, 2001) presents different models of this arrangement.


The physiology of memory is still poorly understood, but there are some generalizations we can work with:

There are four "types" of memory which the human brain works with.

Sensory memory is very short-term (milliseconds), cannot be consciously accessed (it's used by the sensory processing centers of the brain for things like tracking moving objects), and is stored in the activation state of neurons (i.e., which neurons are firing at any given time).

Working memory is short-term (it lasts up to 30 minutes), and stores "what you're thinking about now". It is not yet fully understood exactly how it works. However, we know that drugs which interrupt neuron state can "delete" working memory to some degree; this suggests that working memory is stored at least partially in the activation state of neurons.

Long-term memory is persistent and lasts, in theory, until the end of a person's life. It has been well established that long-term memory is stored in the connections between neurons.

Intermediate memory is a relatively new theory of memory which operates somewhere between working memory and long-term memory. As far as I'm aware, we don't yet understand many of the mechanisms behind it.


Grandmother Cells vs Distributed Representations

The idea of a "grandmother cell" is effectively the ultimate in sparse representations: that there is one, single neuron in your brain that represents your grandmother, and that everything you know about your grandmother is associated with that cell through connections.

At the other end of the spectrum is a "distributed representation," where a network of simultaneously active cells represents a concept, and these cells are connected to each other and to related concepts. These networks could be highly overlapping, and could potentially overlap more when you are talking about related concepts: i.e., "dog" and "cat" might overlap more than "fish" and "squirrel," because the former are both four-legged animals that live on land and are kept as pets - lots of similarity.

Realistically, no one in modern neuroscience is truly arguing for the existence of "grandmother cells," but there are certainly arguments about sparse vs. distributed representations, about how large or small the cell assemblies or networks that reflect particular concepts are, and about the importance of context. The main arguments you will see today concern the extent of distribution across brain areas: that is, are there brain areas that contain somewhat-distributed representations of abstract concepts, or are those concepts distributed across modalities (where your concept of 'baseball' includes a synthesized visual representation in visual cortex, the texture of the seams in somatosensory cortex, the sound of ball-on-bat in auditory cortex, the motor plan for throwing the ball in motor cortex, etc.)?

Evidence for Grandmother cells?

Despite what I just said, some interesting papers on human subjects were published several years ago. These papers showed neurons in a particular area of the brain that had invariant responses to particular concepts, perhaps most famously "Jennifer Aniston" cells and "Halle Berry" cells, which responded not only to various pictures of each actress but also to more abstract representations such as her written name. Clearly, these are not the cells typically described in the primary visual cortex that respond to stimuli like bars and edges.

These papers were reported in the lay-science and general press as evidence or proof of the "grandmother cell" theory. Some reports were more responsible and included some of the authors' own words: essentially, what the authors are arguing is that cell assemblies might be pretty small (but not too small - having recorded from only a few cells, they found dozens that responded to Jennifer Aniston, not just one, for example).

Information in cells or connections

To get back to your original question: you asked about whether the difference between a grandmother cell and a neuronal network is whether the information is stored in the cells vs. the connections. I would argue that, although in the "grandmother cell" model there is a lot of importance placed on individual cells, in both cases the information is stored in the connections.

Even if you assume the extreme case of the "grandmother cell", where one cell is the hub that alone represents your grandmother, the information isn't stored as "grandma", it's stored as the associations you have with her: maybe smells of cookies, the shape of her face, her name signed on a birthday card. If you pulled someone's grandmother cell out of their brain network and interrogated it, you wouldn't be able to learn anything at all about their grandmother.

The difference between the "grandmother cell" and a more distributed network is that in the "grandmother cell" case, we would expect to find another cell that represented specifically the cookie smell, yet another that represented the shape of her face (and only her face), and so on. In a more distributed network, each of those related concepts would be encoded by their own network, and those networks might overlap somewhat where the concepts are closely related: that is, part of your representation of "grandmother" is the smell of her cookies.
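The contrast above can be sketched as sets of active neurons. Every neuron index and concept in this toy example is invented purely for illustration, not taken from any experiment:

```python
# Sketch: "grandmother cell" vs. distributed representation as activity patterns.

def overlap(a, b):
    """Jaccard similarity: fraction of shared neurons between two assemblies."""
    return len(a & b) / len(a | b)

# Grandmother-cell scheme: one dedicated neuron per concept, no overlap.
sparse = {"grandmother": {101}, "cookie_smell": {102}}

# Distributed scheme: each concept is an assembly; related concepts share cells.
distributed = {
    "dog":      {1, 2, 3, 4, 5},
    "cat":      {3, 4, 5, 6, 7},    # shares three neurons with "dog"
    "fish":     {8, 9, 10, 11},
    "squirrel": {11, 12, 13, 14},   # shares one neuron with "fish"
}

print(overlap(distributed["dog"], distributed["cat"]))        # ~0.43
print(overlap(distributed["fish"], distributed["squirrel"]))  # ~0.14
print(overlap(sparse["grandmother"], sparse["cookie_smell"])) # 0.0
```

The higher dog/cat overlap mirrors the point above: assemblies for related concepts share more members, while in the pure grandmother-cell scheme no concept shares anything with any other.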


From what I can gather, the short answer is that we don't have the full picture of how biological neural networks store information.

If you are willing to relax the constraints of your question to extend to artificial neural networks, then the question becomes significantly easier to answer, in part because we understand them much better. Artificial neural networks do store information in their synaptic weights, which are the only part that changes during training. The rest of the network, such as the topology and the activation function, remains unchanged.
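A minimal from-scratch sketch of that point (a toy single-unit perceptron, not any particular library's API): training modifies only the weights and bias, while the activation function and wiring stay fixed throughout.

```python
# Toy perceptron learning logical OR. Only `weights` and `bias` change
# during training; the topology and activation function are fixed.

def step(x):                 # fixed activation function
    return 1 if x > 0 else 0

weights = [0.0, 0.0]         # the only mutable parts of the "network"
bias = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table

for _ in range(10):          # a few passes of the perceptron learning rule
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        err = target - out
        weights[0] += 0.1 * err * x1
        weights[1] += 0.1 * err * x2
        bias += 0.1 * err

# Everything the network "knows" about OR now lives in these numbers.
print(weights, bias)
```

After training, the trained behavior is fully recoverable from the weights and bias alone, which is the sense in which the information is "stored in the weights."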

Image classifiers are a convenient example of information stored as synaptic weights, since we can see what the neural network 'sees' by finding the inputs that maximally activate its neurons. It is immediately obvious that there is a lot of information stored in the weights of a network. The image below is taken from Understanding Neural Networks Through Deep Visualization.


Storing a Memory Involves Distant Parts of the Brain

In studies with mice, Janelia researchers discovered that to maintain certain short-term memories, the brain’s cortex relies on connections with the thalamus.

New research from scientists at the Howard Hughes Medical Institute’s Janelia Research Campus shows that distant parts of the brain are called into action to store a single memory. In studies with mice, the researchers discovered that to maintain certain short-term memories, the brain’s cortex—the outer layer of tissue thought to be responsible for generating most thoughts and actions—relies on connections with a small region in the center of the brain called the thalamus.

The thalamus is best known as a relay center that passes incoming sensory information to other parts of the brain for processing. But clinical findings suggested that certain parts of the thalamus might also play a critical role in consciousness and cognitive function. The discovery that the thalamus is needed to briefly store information so that animals can act on a past experience demonstrates that the region has a powerful influence on the function of the cortex, says Janelia group leader Karel Svoboda, who led the study. “It really suggests that cortex by itself cannot maintain these memories,” he says. “Instead the thalamus is an important participant.”

Svoboda, Janelia group leader Shaul Druckmann, and their colleagues reported their findings in the May 11, 2017, issue of the journal Nature.

When a memory is formed in the brain, activity in the cells that store the information changes for the duration of the memory. Since individual neurons cannot remain active for more than a few milliseconds on their own, groups of cells work together to store the information. Neurons signaling back and forth can sustain one another’s activity for the seconds that it takes to store a short-term memory.
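The mutual-excitation idea above can be illustrated with a toy two-unit loop. The decay and coupling constants here are invented for illustration, not measured values:

```python
# Toy "ping-pong" loop between two units (think cortex and thalamus):
# a unit alone loses activity each step, but mutual excitation sustains it.

decay = 0.5      # fraction of a unit's activity that survives one step alone
coupling = 0.6   # how strongly each unit excites the other

a, b = 1.0, 0.0  # a brief input activates unit "a" once

total = []
for _ in range(20):
    # both units update simultaneously from the previous step's values
    a, b = decay * a + coupling * b, decay * b + coupling * a
    total.append(a + b)

# With the loop intact, total activity persists; a lone unit with the same
# decay would have faded to ~1e-6 of its starting value after 20 steps.
print(total[0], total[-1])
```

Because each unit's decay is more than offset by excitation from its partner, the activity pattern outlives any single neuron's ability to stay active on its own, which is the point of the paragraph above.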

Svoboda wants to understand exactly how such memories are formed and maintained, including where in the brain they are stored. In prior work, his team determined that in mice, a region of the cortex called the anterior lateral motor cortex (ALM) is critical for short-term memory. Activity in this area is necessary for mice to perform a memory-related task in which they experience a sensory cue that they must remember for several seconds before they are given an opportunity to act on the cue and earn a reward.

Svoboda and his colleagues wanted to understand if ALM stores these memories by itself, or if other parts of the brain work in concert with the ALM to store memories. ALM connects to several other brain regions via long-range connections. The next step was to investigate whether any of the region’s long-range communications were important for memory storage.

Zengcai Guo and Hidehiko Inagaki, postdoctoral researchers in Svoboda’s lab, tested those connections one by one, evaluating whether switching off neurons in various brain regions interfered with memory-associated activity in the ALM and impacted animals’ ability to remember their cues.

The results were clear. “The only player that perturbed the memory was the thalamus,” Svoboda says. “And it was an incredibly dramatic effect. If you turn off these thalamic neurons, activity and short-term memories completely disappear in the cortex. The cortex effectively becomes comatose.”

In further experiments, the team discovered that information flows both ways between the thalamus and the ALM portion of the cortex. “It’s like a game of ping-pong,” Svoboda says. “One excites the other, and the other then excites the first, and so on and so forth. This back and forth maintains these activity patterns that correspond to the memory.”

The finding highlights the functional importance of connections between distant parts of the brain, which Svoboda says are often neglected as neuroscientists focus their attention on activity within particular regions. “It was unexpected that these short-term memories are maintained in a thalamocortical loop,” he says. “This tells us that these memories are widely distributed across the brain.”


Neuroscientists Have Discovered a Phenomenon That They Can’t Explain

Carl Schoonover and Andrew Fink are confused. As neuroscientists, they know that the brain must be flexible but not too flexible. It must rewire itself in the face of new experiences, but must also consistently represent the features of the external world. How? The relatively simple explanation found in neuroscience textbooks is that specific groups of neurons reliably fire when their owner smells a rose, sees a sunset, or hears a bell. These representations—these patterns of neural firing—presumably stay the same from one moment to the next. But as Schoonover, Fink, and others have found, they sometimes don’t. They change—and to a confusing and unexpected extent.

Schoonover, Fink, and their colleagues from Columbia University allowed mice to sniff the same odors over several days and weeks, and recorded the activity of neurons in the rodents’ piriform cortex—a brain region involved in identifying smells. At a given moment, each odor caused a distinctive group of neurons in this region to fire. But as time went on, the makeup of these groups slowly changed. Some neurons stopped responding to the smells; others started. After a month, each group was almost completely different. Put it this way: The neurons that represented the smell of an apple in May and those that represented the same smell in June were as different from each other as those that represent the smells of apples and grass at any one time.

This is, of course, just one study, of one brain region, in mice. But other scientists have shown that the same phenomenon, called representational drift, occurs in a variety of brain regions besides the piriform cortex. Its existence is clear; everything else is a mystery. Schoonover and Fink told me that they don’t know why it happens, what it means, how the brain copes, or how much of the brain behaves in this way. How can animals possibly make any lasting sense of the world if their neural responses to that world are constantly in flux? If such flux is common, “there must be mechanisms in the brain that are undiscovered and even unimagined that allow it to keep up,” Schoonover said. “Scientists are meant to know what’s going on, but in this particular case, we are deeply confused. We expect it to take many years to iron out.”

It had already taken years for Schoonover and Fink to even confirm that representational drift exists in the piriform cortex. They needed to develop surgical techniques for implanting electrodes into a mouse’s brain and, crucially, keeping them in place for many weeks. Only then could they be sure that the drift they witnessed was really due to changes in the neurons, and not small movements of the electrodes themselves. They started working on this in 2014. By 2018, they were confident that they could get stable recordings. They then allowed implant-carrying mice to periodically inhale different odors.

The team showed that if a neuron in the piriform cortex reacts to a specific smell, the odds that it will still do so after a month are just one in 15. At any one time, the same number of neurons fires in response to each odor, but the identity of those neurons changes. Daily sniffs can slow the speed of that drift, but they don’t eliminate it. Nor, bizarrely, does learning: If the mice associated a smell with a mild electric shock, the neurons representing that scent would still completely change even though the mice continued to avoid it. “The prevailing notion in the field has been that neuronal responses in sensory areas are stable over time,” says Yaniv Ziv, a neurobiologist at the Weizmann Institute of Science who was not involved in the new study. “This shows that’s not the case.”
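One way to picture this kind of drift measurement is a toy simulation in which responders are slowly replaced while the assembly size stays constant. The population size, assembly size, and turnover rate below are invented for illustration, not the paper's numbers:

```python
# Toy representational-drift simulation: how many of the neurons that
# responded to an odor at day 0 still respond a month later?

import random
random.seed(0)        # deterministic toy run

n_neurons = 1000
assembly_size = 100   # same number of neurons fires each "day"
turnover = 10         # responders replaced per day (invented rate)

responders = set(random.sample(range(n_neurons), assembly_size))
initial = set(responders)

for _ in range(30):   # one month of daily drift
    dropped = set(random.sample(sorted(responders), turnover))
    remaining = responders - dropped
    pool = sorted(set(range(n_neurons)) - remaining)
    recruits = set(random.sample(pool, turnover))
    responders = remaining | recruits      # size stays constant, identity drifts

survival = len(initial & responders) / assembly_size
print(f"fraction of original responders still active after a month: {survival:.2f}")
```

As in the study, the size of the responding population is unchanged at every time point; it is only the identity of the responders that turns over, so the overlap with the original assembly shrinks month over month.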

“There have been hints of this for at least 15 years,” across many parts of the brain, Schoonover told me. The hippocampus, for example, helps animals navigate their surroundings. It contains place cells—neurons that selectively fire when their owner enters specific locations. Walk from your bed to your front door, and different place cells will fire. But these preferences aren’t fixed: Ziv and others have now shown that the locations to which these cells are tuned can also drift over time.

In another experiment, Laura Driscoll, a neuroscientist who is now at Stanford, placed mice in a virtual T-shaped maze, and trained them to go either left or right. This simple task depends on the posterior parietal cortex, a brain region involved in spatial reasoning. Driscoll and her colleagues found that activity in this area also drifted: The neurons that fired when the mice ran the maze gradually changed, even though the rodents’ choices remained the same.

These results were surprising, but not overly so. The hippocampus is also involved in learning and short-term memory. You’d expect it to overwrite itself, and thus to continuously drift. “Up until now, observations of representational drift were confined to brain regions where we could tolerate it,” Schoonover said. The piriform cortex is different. It’s a sensory hub—a region that allows the brain to make sense of the stimuli around it. It ought to be stable: How else would smells ever be familiar? If representational drift can happen in the piriform cortex, it may be common throughout the brain.

It might be less common in other sensory hubs, such as the visual cortex, which processes information from the eyes. The neurons that respond to the smell of grass might change from month to month, but the ones that respond to the sight of grass seem to mostly stay the same. That might be because the visual cortex is highly organized. Adjacent groups of neurons tend to represent adjacent parts of the visual space in front of us, and this orderly mapping could constrain neural responses from drifting too far. But that might be true only for simple visual stimuli, such as lines or bars. Even in the visual cortex, Ziv found evidence of representational drift when mice watched the same movies over many days.

“We have a hunch that this should be the rule rather than the exception,” Schoonover said. “The onus now becomes finding the places where it doesn’t happen.” And in places where it does happen, “it’s the three F’s,” Fink added. “How fast does it go? How far does it get? And … how bad is it?”

How does the brain know what the nose is smelling or what the eyes are seeing, if the neural responses to smells and sights are continuously changing? One possibility is that it somehow corrects for drift. For example, parts of the brain that are connected to the piriform cortex might be able to gradually update their understanding of what the piriform’s neural activity means. The whole system changes, but it does so together.

Another possibility is that some high-level feature of the firing neurons stays the same, even as the specific active neurons change. As a simple analogy, “individuals in a population can change their mind while maintaining an overall consensus,” Timothy O’Leary, a neuroscientist at the University of Cambridge, told me. “The number of ways of representing the same signal in a large population is also large, so there’s room for the neural code to move.” Although some researchers have found signs of these stable, high-level patterns in other drifty parts of the brain, when Schoonover and Fink tried to do so in the piriform cortex, they couldn’t. Neither they nor their colleagues can conclusively say how the brain copes with representational drift. They’re also unsure why it happens at all.
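O'Leary's population-consensus analogy can be sketched numerically. The rates and the update rule here are made up purely for illustration:

```python
# Toy "population consensus": individual firing rates change, but a
# population-level readout (here, the plain sum) is conserved.

import random
random.seed(1)

rates = [1.0] * 10   # ten neurons, all starting at the same rate

for _ in range(200):
    i, j = random.sample(range(10), 2)                # two distinct neurons
    delta = min(random.uniform(0.0, 0.1), rates[i])   # keep rates non-negative
    rates[i] -= delta    # one neuron "changes its mind"...
    rates[j] += delta    # ...and another compensates

# Individual rates have drifted apart, but the population readout is intact.
print(round(sum(rates), 6))  # -> 10.0
```

Many different rate vectors encode the same summed signal, which is the sense in which "the number of ways of representing the same signal in a large population is also large" leaves room for the code to move.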

Drift might simply be a nervous-system bug—a problem to be addressed. “The connections in many parts of the brain are being formed and broken down continually, and each neuron is itself continually recycling cellular material,” O’Leary said. Perhaps a system like this—a gray, goopy version of the ship of Theseus—is destined to drift over time. But that idea “is a little weak,” O’Leary told me. The nervous system can maintain precise and targeted connections, such as those between muscles and the nerves that control them. Drift doesn’t seem inevitable.

Alternatively, drift might be beneficial. By constantly changing how existing information is stored, the nervous system might be better able to incorporate new material. “Information that’s not continuously useful is forgotten, while information that continues to be useful is updated with the drift,” says Driscoll, who is now testing this idea using artificial networks. “The more I’ve thought about drift, the more it makes sense that it’s something we would see in the brain.” Schoonover likes this idea too: “Our favored interpretation is that drift is a manifestation of learning,” he told me. “It’s not learning itself; it’s the smoke that comes out of learning.”

Schoonover and Fink compare the discovery of representational drift with the work of the astronomer Vera Rubin. In the 1970s, Rubin and her colleague Kent Ford noticed that some galaxies were spinning in unexpected ways that seemed to violate Newton’s laws of motion. Her analysis of that data provided the first direct evidence for dark matter, which makes up most of the matter in the universe, but has never been observed. Similarly, drift indicates “that there’s something else going on under the hood, and we don’t know what that is yet,” Schoonover said.

But the comparison between drift and Rubin’s spinning galaxies fails in one important way. Rubin knew that she was onto something odd because she could compare her data against Newtonian mechanics—a solid and thoroughly described theory of physics. No such theory exists in neuroscience. The field has a very clear idea of how individual neurons work, but it gets much fuzzier when it comes to neuronal networks, entire brains, or the behavior of whole animals.

Consider the very idea that specific patterns of firing neurons can represent different smells, sights, or sounds. That connection seems simple enough—from the perspective of the experimenter, who exposed an animal to a stimulus and then looked for active neurons in its brain. But the brain itself has to work with just half of that equation, a bunch of active neurons, to make sense of what might have triggered that activity. “Just because we can decode that information doesn’t mean the brain is doing that,” says John Krakauer, a neuroscientist at Johns Hopkins University.

For that reason, Krakauer says that Schoonover and Fink’s study, though “a technical tour de force,” is also “very slightly straw-mannish.” The idea of drift, he says, is surprising and exciting only when contrasted with the unsophisticated textbook idea of representations, which was never theoretically sound and was already being questioned. And that’s a broader problem for the entire field, he told me. “Mainstream neuroscience relies on taking very specific methods and results and packaging them in a vague cloud of concepts that are only barely agreed upon by the field,” he said. “In a lot of neuroscience, the premises remain unexamined, but everything else is impeccable.”

Fink agrees that the idea of stable representations was never a theory—more “a tacit assumption,” he said, and one that held “because it’s simple.” How could it not be that way? Well, it isn’t. So now what?

“There’s a real hunger in the field for new ideas,” Fink told me, which is why, he thinks, he and Schoonover haven’t yet faced the kind of vicious pushback that scientists with dogma-busting data tend to encounter. “People are really desperate for theories. The field is so immature conceptually that we’re still at the point of collecting factlets, and we’re not really in a position to rule anything out.” Neuroscience’s own representations of the brain still have plenty of room to drift.


Introduction

The view of the nervous system as a linear, computer-like machine performing classical, deterministic input-output or stimulus-response computations is still very popular in neuroscience. However, this view is challenged by experimental findings and theoretical analyses indicating that the nervous system is a non-linear dynamical complex system (Deco et al., 2008; Singer, 2009; Tognoli and Scott Kelso, 2014; Wolf et al., 2014) exhibiting highly stochastic activity (Deco et al., 2009). There is a need for a “paradigm shift from behaviorist stimulus-response concepts toward notions of predictive coding in self-organizing recurrent networks with high dimensional dynamics” (Singer, 2015). Neural networks consist of nerve cells that are linked by many reciprocal connections (Markov and Kennedy, 2013; Singer, 2013) and are capable of non-linear computations (London and Häusser, 2005). Neuronal networks with non-linear neurons and densely connected feedback loops can generate dynamics that are more complex, variable and rich than expected (Deco and Jirsa, 2012; Singer, 2013; Singer and Lazar, 2016).

The complexity of neuronal dynamics and its capability for fast parallel processing of information is thought to arise from powerful but purely classical computing strategies. The high diversity of neural network dynamics is considered to emerge primarily from non-linearities in the behavior of network nodes and from high variability in the strength and conduction delays of network connections. Conventional wisdom holds that in macroscopic objects such as the brain, quantum fluctuations are self-averaging and thus cannot contribute to its rich dynamics. Indeed, it is very likely that the nervous system cannot display macroscopic quantum (classically impossible) behaviors such as quantum entanglement, superposition or tunneling (Koch and Hepp, 2006). Therefore, the prevailing view has been that quantum processes are irrelevant to brain function. However, in contrast to quantum brain proposals based on implausible quantum mechanisms, there is an alternative, more realistic and subtle way in which quantum events might influence brain activity and increase its computational power and information-coding abilities. Part of this article has been published as part of a book chapter and is adapted here with permission (Jedlicka, 2014).


A brain in the head, and one in the gut

Two brains are better than one. At least that is the rationale for the close - sometimes too close - relationship between the human body's two brains, the one at the top of the spinal cord and the hidden but powerful brain in the gut known as the enteric nervous system.

For Dr. Michael Gershon, the author of "The Second Brain" and the chairman of the department of anatomy and cell biology at Columbia University, the connection between the two can be unpleasantly clear.

"Every time I call the National Institutes of Health to check on a grant proposal," Gershon said, "I become painfully aware of the influence the brain has on the gut."

In fact, anyone who has ever felt butterflies in the stomach before giving a speech, a gut feeling that flies in the face of fact or a bout of intestinal urgency the night before an examination has experienced the actions of the dual nervous systems.

The connection between the brains lies at the heart of many woes, physical and psychiatric. Ailments like anxiety, depression, irritable bowel syndrome, ulcers and Parkinson's disease manifest symptoms at the brain and the gut level.

"The majority of patients with anxiety and depression will also have alterations of their GI function," said Dr. Emeran Mayer, professor of medicine, physiology and psychiatry at the University of California, Los Angeles.

A study in 1902 showed changes in the movement of food through the gastrointestinal tract in cats confronted by growling dogs.

One system's symptoms - and cures - may affect the other.

Antidepressants, for example, cause gastric distress in up to a quarter of the people who take them. Butterflies in the stomach are caused by a surge of stress hormones released by the body in a "fight or flight" situation. Stress can also overstimulate nerves in the esophagus, causing a feeling of choking.

Gershon, who coined the term "second brain" in 1996, is one of a number of researchers who are studying brain-gut connections in the relatively new field of neurogastroenterology. New understandings of the way the second brain works, and the interactions between the two, are helping to treat disorders like constipation, ulcers and Hirschsprung's disease.

The role of the enteric nervous system is to manage every aspect of digestion, from the esophagus to the stomach, small intestine and colon. The second brain, or little brain, accomplishes all that with the same tools as the big brain, a sophisticated nearly self-contained network of neural circuitry, neurotransmitters and proteins.

The independence is a function of the enteric nervous system's complexity.

"Rather than Mother Nature's trying to pack 100 million neurons someplace in the brain or spinal cord and then sending long connections to the GI tract, the circuitry is right next to the systems that require control," said Jackie Wood, professor of physiology, cell biology and internal medicine at Ohio State.

Two brains may seem like the stuff of science fiction, but they make literal and evolutionary sense.

"What brains do is control behavior," Wood said. "The brain in your gut has stored within its neural networks a variety of behavioral programs, like a library. The digestive state determines which program your gut calls up from its library and runs."

When someone skips lunch, the gut is more or less silent. Eat a pastrami sandwich, and contractions all along the small intestines mix the food with enzymes and move it toward the lining for absorption to begin. If the pastrami is rotten, reverse contractions will force it - and everything else in the gut - into the stomach and back out through the esophagus at high speed.

In each situation, the gut must assess conditions, decide on a course of action and initiate a reflex.

"The gut monitors pressure," Gershon said. "It monitors the progress of digestion. It detects nutrients, and it measures acid and salts. It's a little chemical lab."

The enteric system does all this on its own, with little help from the central nervous system.

By the early 1990s, scientists had accepted the idea of the enteric nervous system and the role of neurotransmitters like serotonin in the gut.


Stored in Synapses: How Scientists Completed a Map of the Roundworm’s Brain

Redrawing neural connections led to new clues about sex differences in scientists’ favorite model organism.

The tiny, transparent roundworm known as Caenorhabditis elegans is roughly the size of a comma. Its entire body is made up of just about 1,000 cells. A third are brain cells, or neurons, that govern how the worm wriggles and when it searches for food — or abandons a meal to mate. It is one of the simplest organisms with a nervous system.

The circuitry of C. elegans has made it a common test subject among scientists wanting to understand how the nervous system works in other animals. Now, a team of researchers has completed a map of all the neurons, as well as all 7,000 or so connections between those neurons, in both sexes of the worm.
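In data terms, a connectome of this kind is a directed graph: neurons are nodes and synapses are weighted edges. A minimal sketch of that representation (the neuron names and synapse counts below are illustrative placeholders, not cells from the published map):

```python
from collections import defaultdict

# Toy connectome: a directed graph mapping each neuron to the neurons
# it synapses onto, with edge weights standing in for synapse counts.
# Neuron names are illustrative placeholders, not real C. elegans cells.
connectome = defaultdict(dict)

def add_synapse(pre, post, count=1):
    """Record `count` synapses from neuron `pre` to neuron `post`."""
    connectome[pre][post] = connectome[pre].get(post, 0) + count

add_synapse("sensoryA", "interB", 3)
add_synapse("interB", "motorC", 5)
add_synapse("sensoryA", "motorC", 1)

# Total number of distinct connections in this toy map:
n_connections = sum(len(posts) for posts in connectome.values())
print(n_connections)  # 3
```

The real map's roughly 7,000 connections are just this structure at scale, which is why software, rather than hand tracing, made the complete version possible.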

“It’s a major step toward understanding how neurons interact with each other to give rise to different behaviors,” said Scott Emmons, a developmental biologist at the Albert Einstein College of Medicine in New York who led the research.

Structure dictates function in several areas of biology, Dr. Emmons said. The shape of wings provided insight into flight, the helical form of DNA revealed how genes are coded, and the structure of proteins suggested how enzymes bind to targets in the body.

It was this concept that led biologist Sydney Brenner to start cataloging the neural wiring of worms in the 1970s. He and his colleagues preserved C. elegans in agar and osmium fixative, sliced up their bodies like salami and photographed their cells with a powerful electron microscope.

Then the researchers began the painstaking work of manually tracing individual neurons and synapses that join them, coloring the links as if it were an intense paint-by-numbers project. The resulting diagram, known as a connectome, was published in 1986 and inspired neurobiologists around the world to attempt to understand how animal behavior, learning and memory were governed.

But the connectome was not complete. Because it was made by hand, it skipped over some parts of the worm’s body. And it only mapped one sex — the hermaphrodite worm, which can self-fertilize and is considered the equivalent of a female in the C. elegans species.

“We just had fragments of the worm’s wiring,” Dr. Emmons said.

In 2012, after spending years developing software that could map neurons more accurately, Dr. Emmons and his colleagues published the connectome of a male worm’s tail.

The researchers kept on drawing maps of the rest of the male’s nervous system, including the region in the head that is similar to the hermaphrodite’s and where most of the worm’s decision-making takes place.

The scientists also decided to reanalyze Dr. Brenner’s original images, so that they could compare the male and female nervous systems. The sensitivity of the new software allowed them to identify previously overlooked neural links in the hermaphrodite and areas where nerves were communicating in the intestine, epidermis and various muscles.

The scientists report their findings in a new paper, published Wednesday in Nature.

“The new connectomes provide much more comprehensive information than the old data sets did,” said Aakanksha Singhvi, a biologist at the Fred Hutchinson Cancer Research Center in Seattle, who was not involved in the study. “They refine our ability to tell who is talking to whom in the nervous system and will inspire new ideas about how this communication translates to worm behavior.”

The research also provides clues to surprising sex differences in worms, Dr. Singhvi said. Scientists already knew of primary sex differences: neurons that controlled uterine muscles in the hermaphrodite or copulation in the male.

But the new diagrams show that a significant number of synapses are different in pathways shared by both sexes.

Interpreting exactly how these variations regulate the worm’s decision-making and behavior will require more work.

“A connectome is just one snapshot image,” Dr. Singhvi said. “It doesn’t tell you what the neurons are saying to each other or when, whether they change with age or what the extent of variation between individuals is.”

Researchers are already working on more connectomes. A team based at Harvard University is studying C. elegans larvae to see how neurons and synapses change as the worms develop.

The FlyEM project at the Howard Hughes Medical Institute’s Janelia Research Campus is focusing on the fruit fly brain, while another group at the Argonne National Laboratory maps mouse neurons.

In 2010, the National Institutes of Health also allocated $40 million to map the human brain's connections. But the human brain has billions of neurons and trillions of synapses, making it far more complex than a worm's.

For now, Dr. Emmons is hoping that a few wriggling worms are just sophisticated enough to help scientists understand the human nervous system.


Understanding How the Brain Thinks

For 21st century success, now more than ever, students will need a skill set far beyond the current mandated standards that are evaluated on standardized tests. The qualifications for success in today's ever-changing world will demand the ability to think critically, communicate clearly, use continually changing technology, be culturally aware and adaptive, and possess the judgment and open-mindedness to make complex decisions based on accurate analysis of information. The most rewarding jobs of this century will be those that cannot be done by computers.

For students to be best prepared for the opportunities and challenges awaiting them, they need to develop their highest thinking skills -- the brain's executive functions. These higher-order neural networks are undergoing their most rapid development during the school years, and teachers are in the best position to promote the activation of these circuits. With the help of their teachers, students can develop the skillsets needed to solve problems that have not yet been recognized, analyze information as it becomes rapidly available in globalized communication systems, and skillfully and creatively take advantage of evolving technological advances as they become available.

Factory Model of Education Prepares for "Assembly Line" Jobs

Automation and computerization are exceeding human ability for doing repetitive tasks and calculations, but the educational model has not changed. The factory model of education, still in place today, was designed for producing assembly line workers to do assigned tasks correctly. These workers did not need to analyze, create, or question.

Ironically, in response to more information, many educators are mandated to teach more rote facts and procedures, and students are given bigger books with more to memorize. In every country where I've given presentations and workshops, the problem is the same: overstuffed curriculum.

Even in countries where high-stakes standardized testing is not a dominant factor, school curriculum and emphasis have changed to provide more time for this additional rote memorization. Creative opportunities -- the arts, debate, general P.E., collaborative work, and inquiry -- are sacrificed at the altar of more predigested facts to be passively memorized. These students have fewer opportunities to discover the connections between isolated facts and to build neural networks of concepts that are needed to transfer learning to applications beyond the contexts in which the information is learned and practiced.

The High Costs of Maintaining the Factory Model

If students do not have opportunities to develop their higher-order cognitive skillsets, they won't develop the reasoning, logic, creative problem solving, concept development, media literacy, and communication skills best suited for the daily complexities of life or the professional jobs of their future. Without these skills, they won't be able to compete on the global employment market with students currently developing their executive functions.

Instead, the best jobs will go to applicants who analyze information as it becomes available, adapt when new information makes facts obsolete, and collaborate with other experts on a global playing field. All these skills require tolerance, willingness to consider alternative perspectives, and the ability to articulate one's ideas successfully.

As educators, it is our challenge to see that all students have opportunities to stimulate their developing executive function networks so when they leave school they have the critical skillsets to choose the career and life paths that will give them the most satisfaction.

Executive Function = Critical Thinking

What my field of neurology has called "executive functions" for over 100 years are these highest cognitive processes. These executive functions have been given a variety of less specific names in education terminology such as higher order thinking or critical thinking. These are skillsets beyond those computers can do because they allow for flexible, interpretive, creative, and multidimensional thinking -- suitable for current and future challenges and opportunities. Executive functions can be thought of as the skills that would make a corporate executive successful. These include planning, flexibility, tolerance, risk assessment, informed decision-making, reasoning, analysis, and delay of immediate gratification to achieve long-term goals. These executive functions further allow for organizing, sorting, connecting, prioritizing, self-monitoring, self-correcting, self-assessing, abstracting, and focusing.

The Prefrontal Cortex: Home to Critical Thinking

The executive function control centers develop in the prefrontal cortex (PFC). The PFC gives us the potential to consider and voluntarily control our thinking, emotional responses, and behavior. It is the reflective "higher brain" compared to the reactive "lower brain". This prime real estate makes up a higher percentage of brain volume in humans than in any other animal: roughly 20% of the human brain.

Animals, compared to humans, are more dependent on their reactive lower brains to survive in their unpredictable environments where it is appropriate that automatic responses not be delayed by complex analysis. As man developed more control of his environment, the luxury of a bigger reflective brain correlated with the evolution of the PFC to its current proportions.

The prefrontal cortex is the last part of the brain to mature. This maturation is a process of neuroplasticity that includes 1) the pruning of unused cells to better provide for the metabolic needs of more frequently used neurons and 2) strengthening the connections in the circuits that are most used. Another aspect of neuroplasticity is the growth of stronger and more numerous connections among neurons. Each of the brain's roughly 86 billion neurons holds only a tiny bit of information. It is only when multiple neurons connect through their branches (axons and dendrites) that a memory is stored and retrievable.

This prefrontal cortex maturation, the pruning and strengthening process, continues into the twenties, with the most rapid changes in the age range of 8-16. Electricity flows from neuron to neuron through the axons and dendrites. This electrical flow carries information and also provides the stimulus that promotes the growth of these connections. Each time a network is activated -- the information recalled for review or use -- the connections become stronger and faster (speed through a circuit is largely determined by the layers of myelin coating that are built up around the axons -- this is also in response to the flow of the electric current of information transport when the circuit is activated). The stimulation of these networks during the ages of their rapid development strongly influences the development of the executive functions -- the social-emotional control and the highest thinking skillsets that today's students will carry with them as they leave school and become adults.

Preparing Students for the Challenges and Opportunities of the 21st Century

We have the obligation to provide our students with opportunities to learn the required foundational information and procedures through experiences that stimulate their developing neural networks of executive functions. We activate these networks through active learning experiences that involve students' prefrontal cortex circuits of judgment, critical analysis, induction, deduction, relational thinking with prior knowledge activation, and prediction. These experiences promote creative information processing as students recognize relationships between what they learn and what they already know. This is when neuroplasticity steps in and new connections (dendrites, synapses, myelinated axons) physically grow between formerly separate memory circuits when they are activated together. This is the physical manifestation of the "neurons that fire together, wire together" phenomenon.
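The "fire together, wire together" phenomenon can be caricatured in a few lines of code. This is a toy Hebbian update, not a model of real cortical learning: connections between co-active units grow with each joint activation, and every weight slowly decays, standing in for disuse pruning.

```python
import numpy as np

def hebbian_step(weights, activity, lr=0.1, decay=0.01):
    """One toy Hebbian update: strengthen weights between co-active
    units (outer product of the activity vector with itself), then let
    every weight decay slightly, a stand-in for disuse pruning."""
    weights = weights + lr * np.outer(activity, activity)
    weights = weights * (1 - decay)
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights

w = np.zeros((3, 3))
pattern = np.array([1.0, 1.0, 0.0])  # units 0 and 1 repeatedly fire together
for _ in range(10):
    w = hebbian_step(w, pattern)

print(w[0, 1] > w[0, 2])  # True: the co-active pair is now strongly linked
```

After repeated co-activation, the link between units 0 and 1 dominates, while the never-exercised link to unit 2 stays at zero, a crude analogue of an isolated fact that never gets tied into a larger network.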

Unless new rote memories are incorporated into larger, relational networks, they remain isolated bits of data in small, unconnected circuits. It is through active mental manipulation with prior knowledge that new information becomes incorporated into the already established neural network of previously acquired related memory.

Teaching that Strengthens Executive Function Networks

Making the switch from memorization to mental manipulation is about applying, communicating, and supporting what one already knows. The incorporation of rote memorization into the sturdy existing networks of long-term memory takes place when students recognize relationships to the prior knowledge stored in those networks.

When you provide students with opportunities to apply learning, especially through authentic, personally meaningful activities with formative assessments and corrective feedback throughout a unit, facts move from rote memory to become consolidated into related memory banks, instead of being pruned away through disuse.

The disuse pruning is another aspect of the brain's neuroplasticity. To best support the frequently used networks, the brain essentially dissolves isolated small neural networks of "unincorporated" facts and procedures that are rarely activated beyond drills and tests.

In contrast, opportunities to process new learning through executive functions promote its linkage to existing related memory banks through the growth of linking dendrites and synapses.

Students need to be explicitly taught and given opportunities to practice using executive functions to organize, prioritize, compare, contrast, connect to prior knowledge, give new examples of a concept, participate in open-ended discussions, synthesize new learning into concise summaries, and symbolize new learning into new mental constructs, such as through the arts or writing across the curriculum.

How to Engage Students' Developing Neural Networks to Promote Executive Function

The recommendations here are a few of the ways to engage students' developing networks of executive functions while they are undergoing their most rapid phase of maturation during the school years. Part 2 of this blog will delve more deeply into the mental manipulation strategies that promote consolidation of new input into existing memory circuits.

Judgment: This executive function, when developed, promotes a student's ability to monitor the accuracy of his or her work. Guidance, experiences, and feedback in estimation, in editing and revising one's own written work, and in class discussions for conflict resolution can activate the circuitry that builds judgment.

Prioritizing: This executive function helps students to separate low relevance details from the main ideas of a text, lecture, math word problem, or complete units of study. Prioritizing skills are also used when students are guided to see how new facts fit into broader concepts, to plan ahead for long-term projects/reports, and to keep records of their most successful strategies that make the most efficient use of their time.

Setting goals, providing self-feedback, monitoring progress: Until students fully develop this PFC executive function, they are limited in their capacity to set and stick to realistic and manageable goals. They need support in recognizing the incremental progress they make as they apply effort towards their larger goals (see my previous two blogs about the "video game" model: How to Plan Instruction Using the Video Game Model and A Neurologist Makes the Case for the Video Game Model as a Learning Tool).

Model Metacognition Development Yourself

Planning learning opportunities to activate executive function often means going beyond the curriculum provided in textbooks. This is a hefty burden when you are also under the mandate of teaching a body of information that exceeds the time needed for successful mental manipulation.

When you do provide these executive function-activating opportunities, students will recognize their own changing attitudes and achievements. Students will begin to experience and comment on these insights: "I thought ... would be boring, but it was pretty interesting," "This is the first time I really understood ...," or simply, "Thanks" and "That was cool."

These student responses are teachable moments to promote metacognition. Consider sharing the processes you use to create the instruction that they respond to positively. These discussions will help students recognize their abilities to extend their horizons and focus beyond simply getting by with satisfactory grades. They can build their executive functions of long-term goal-directed behavior, advance planning, and delay of immediate gratification. In this way, they can take advantage of opportunities to review and revise work -- even when it has been completed -- rather than be satisfied with "getting it done." Your input can help students see the link between taking responsibility for class participation, collaboration, and setting high self-standards for all classwork and homework, such that they can say, "I did my best and am proud of my efforts."

As written on the gate of my college, the message we can send our students is:


Seeing Memories Form at the Molecular Level

It happens at a microscopic level, but learning and processing memories impacts the structure of neurons in the brain. Important synapses get stronger, while ones that are less necessary get weaker. The process is called structural plasticity, and it's vital to learning as well as retaining memories. New research from scientists in Ryohei Yasuda's laboratory at the Max Planck Florida Institute for Neuroscience (MPFI) focuses on the mechanics of individual molecules within the neurons. It's this process that changes the structure of the neurons to accommodate new information in the brain. The most common way this happens is called long-term potentiation (LTP). It doesn't just happen randomly; it's a complex process of molecular signaling along a specific part of the neuron, the dendritic spines. Dendrites are the spindly appendages on neurons over which information in the brain is carried, and along them are spines that grow stronger each time a signal is passed through.

In the Yasuda Lab, one particular protein molecule is being investigated. Calcium/calmodulin-dependent protein kinase II (CaMKII) is important not only in the signaling process but also in keeping the dendritic spines strong. Previous research hasn't been able to nail down a definite time period for how long this protein needs to be active in order to produce LTP, but the researchers at the Yasuda Lab have new data pointing to a more definite length of time. While it has been argued that the protein stays active for about an hour, the work at the MPFI suggests it's only active for about a minute.

The key to getting this information has been the use of optogenetics, in which molecules are manipulated with beams of light so that their behavior can be "seen" and recorded accurately. To this end, Dr. Myung Shin, a postdoctoral researcher in the Yasuda Lab, worked with Dr. Hideji Murakoshi, a former postdoctoral researcher in the Yasuda Lab and now an associate professor at the National Institute for Physiological Sciences, to create a light-activated CaMKII inhibitor, photoactivatable autocamtide inhibitory peptide 2 (paAIP2). By using this inhibitor and controlling it via the biosensors they developed, the window of time in which CaMKII activity is needed to produce a strong connection can be measured precisely rather than simply estimated.

Their results showed that CaMKII activation persists for approximately one minute. Inhibiting CaMKII during the first minute after stimulation limited spine growth and synapse strengthening during LTP. However, when CaMKII was inhibited after a one-minute delay, the neuron showed normal LTP, confirming the earlier finding that CaMKII activation lasts for about one minute. The team also extended the work to an animal model. They placed a mouse in a brightly lit cage connected to a dark cage. Every time the mouse entered the dark area, it was given a small electrical shock, inducing fear learning. When they inhibited CaMKII activity in the amygdala during this training, the mouse did not learn to avoid the dark room, despite the negative reinforcement. Once a mouse had learned to fear the dark, however, inhibiting CaMKII made no difference: the mouse retained the memory and kept away from the dark area.
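The logic of the timing experiment can be expressed as a toy rule: LTP succeeds only if CaMKII runs uninhibited through its active window, roughly one minute after stimulation according to the study. The function below is a sketch of that reasoning, not of the lab's actual analysis.

```python
CAMKII_WINDOW_S = 60  # approximate CaMKII active period after stimulation, per the study

def ltp_occurs(inhibition_start_s, inhibition_end_s):
    """Toy rule: LTP fails if the inhibitor is active at any point
    during the CaMKII window (0 to ~60 s after stimulation);
    otherwise LTP proceeds normally. Illustrative logic only."""
    overlaps_window = inhibition_start_s < CAMKII_WINDOW_S and inhibition_end_s > 0
    return not overlaps_window

print(ltp_occurs(0, 60))    # False: inhibition during the window blocks LTP
print(ltp_occurs(60, 120))  # True: inhibiting after a 1-minute delay spares LTP
```

The two calls mirror the two experimental conditions: immediate inhibition blocks spine growth, while delayed inhibition leaves LTP intact.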

In a press release, Dr. Shin stated that the hope is that other labs will be able to use this light-activated inhibition process for a range of different studies. "This new tool has many potential applications in research," she explained. Dr. Yasuda, the lab's director, added, "One of our potential future directions is to combine this inhibitor with signaling biosensors. Combining these approaches, we should be able to determine the temporal requirements of CaMKII activation for various downstream signaling molecules." The video below has more information on this new research.


The bottom-most part of the brain is the brain stem. The brain stem is attached to the spinal cord. It relays information between parts of the brain or between the brain and body and regulates basic body function. It is made up of the midbrain, medulla and the pons.

Midbrain : The midbrain contains the major motor supply to the muscles controlling eye movements and relays information for some visual and auditory reflexes.

Pons : The pons is a mass of nerve fibers that serves as a bridge between the medulla and midbrain above it. The pons is associated with face sensation and movement.

Medulla : The medulla (also known as the medulla oblongata) is located at the base of the brain stem and controls many of the mechanisms necessary for life, such as heartbeat, blood pressure and breathing.


  • Episodes and events that we experience become intimately associated with emotion as they are stored as memories in the brain.
  • Contextual information about these events, where and when they happened, is recorded in the brain's hippocampus.
  • The emotional component of the memory is stored separately, in a brain region called the amygdala.
  • New research shows it is possible to manipulate neural circuits in the brain of mice and alter the emotional associations of specific memories.

By manipulating neural circuits in the brain of mice, scientists have altered the emotional associations of specific memories. The research, led by Howard Hughes Medical Institute investigator Susumu Tonegawa at the Massachusetts Institute of Technology (MIT), reveals that the connections between the part of the brain that stores contextual information about an experience and the part of the brain that stores the emotional memory of that experience are malleable.

Altering those connections can transform a negative memory into a positive one, Tonegawa and his MIT colleagues report in the August 28, 2014, issue of the journal Nature.

"There is some evidence from psychotherapy that positive memory can suppress memories of negative experience," Tonegawa says, referring to treatments that reduce clinical depression by helping patients recall positive memories. "We have shown how the emotional valence of memories can be switched on the cellular level."

The episodes and events that we experience become intimately associated with emotion as they are stored as memories in the brain. Recalling a favorite vacation may summon pleasure for years to come, whereas the fear that accompanies a memory of assault might cause a victim to never return to the scene of the crime. Tonegawa explains that the contextual information about these events, where and when they happened, is recorded in the brain's hippocampus, whereas the emotional component of the memory is stored separately, in a brain region called the amygdala. "The amygdala can store information with either a positive or negative valence, and associate it with a memory," Tonegawa explains.

Last year, Tonegawa and his colleagues reported that by artificially activating the small set of cells that stored a specific memory in a mouse, they could create a new, false memory. In that study, the team made the cells that stored a memory of a safe environment sensitive to light, so that they could be manipulated by the researchers. Switching on those cells while subjecting the animal to a mild shock in a new environment caused the mouse to fear the original environment, even though it had had no unpleasant experiences there.

In those experiments, the scientists had caused the mice to associate a neutral setting with fear. Now Tonegawa and his colleagues wanted to see if he could alter a memory that was already associated with emotion. Once an animal had developed fear of a place, could the memory of that place be made pleasurable instead?

To find out, the scientists began by placing male mice in a chamber that delivered a mild shock. As the mouse formed a memory of this dangerous place, Tonegawa's team used a method it had previously developed to introduce a light-sensitive protein into the cells that stored the information. By linking the production of the light-sensitive protein to the activation of a gene that is switched on as memories are encoded, they targeted light-sensitivity to the cells that stored the newly formed memory.

The mice were removed from the chamber and a few days later, the scientists artificially reactivated the memory by shining a light into the cells holding the memory of the original place. The animals responded by stopping their explorations and freezing in place, indicating they were afraid.

Now the scientists wanted to see if they could overwrite the fear and give the mice positive associations to the chamber, despite their negative experience there. So they placed the mice in a new environment, where instead of a shock they had the opportunity to interact with female mice. As they did so, the researchers activated their fear memory-storing neurons with light. The scientists activated only one subset of memory-storing neurons at a time, either those in the context-storing hippocampus or those in the emotion-storing amygdala. They then tested the emotional association of the memory of the original chamber by giving mice the opportunity to move away from an environment in which the memory was artificially triggered.

Reactivating the amygdala component of the memory while the male mice had the pleasurable experience of interacting with females failed to change the fear response driven by those amygdala neurons. Consequently, mice retained their fear. When the researchers reactivated the memory-storing cells in the hippocampus while the mice interacted with females, however, the memory cells in the hippocampus acquired a new emotional association. Now the mice sought out environments that triggered the memory.

"So the animal acquired a pleasure memory," Tonegawa says. "But what happened to the original fear memory? Is it still there or is it gone?" When they put the animals back in the original chamber, where they had experienced the unpleasant shock, the animals showed less fear and more exploratory and reward-seeking behaviors. "The original fear memory is significantly changed," Tonegawa concludes.

The researchers had similar results in experiments where they switched the emotion of a memory in the opposite direction: allowing mice to first develop a pleasurable memory of the chamber, then artificially activating the memory-storing cells in the hippocampus while the animals experienced a shock. In those mice, the pleasurable response linked to the hippocampal memory cells was replaced with a fear response.

The experiments indicate that the cells that store the contextual components of a memory form impermanent, malleable connections to the emotional components of that memory. Tonegawa explains that while a single set of neurons in the hippocampus stores the contextual information about a memory, there are two distinct sets of neurons in the amygdala to which they can connect: one set responsible for positive memory, the other responsible for negative memory. Circuits connect the hippocampal cells to each of the two populations of cells in the amygdala. "There is a competition between these circuits that dictates the overall emotional value and [positive or negative] direction of a memory," Tonegawa says.
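The circuit arrangement Tonegawa describes can be caricatured as a data structure: one hippocampal context node wired to two competing amygdala populations, with the stronger connection setting the memory's recalled valence. Everything here, class, names, and weights, is an illustrative simplification, not the lab's model.

```python
class MemoryTrace:
    """Toy model of one contextual memory in the hippocampus, wired to
    two competing amygdala populations ("positive" and "negative").
    The stronger pathway wins and sets the recalled emotion.
    Purely illustrative."""

    def __init__(self, context):
        self.context = context
        self.weights = {"positive": 0.0, "negative": 0.0}

    def pair_with(self, valence, strength=1.0):
        """Reactivate this context during an emotional experience,
        strengthening the corresponding amygdala pathway."""
        self.weights[valence] += strength

    def recalled_emotion(self):
        return max(self.weights, key=self.weights.get)

chamber = MemoryTrace("shock chamber")
chamber.pair_with("negative", 2.0)  # original fear conditioning
print(chamber.recalled_emotion())   # negative

chamber.pair_with("positive", 3.0)  # reactivation during a rewarding experience
print(chamber.recalled_emotion())   # positive
```

The second pairing mirrors the experiment's key move: re-exciting the hippocampal context cells during a pleasurable experience tips the competition toward the positive pathway without erasing the context itself.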

In an accompanying News & Views article in Nature, Tomonori Takeuchi and Richard G.M. Morris of the University of Edinburgh, state, &ldquoWhat is so intriguing about this study is that the memory representations associated with a place are dissected into their network components and, rather than re-exposing the animals to the training situation to achieve a change, light is used to selectively reactivate the representation of the &lsquowhere&rsquo component of a memory and then change its &lsquowhat&rsquo association.&rdquo

Tonegawa emphasizes that their success in switching the emotion of a memory in mice does not translate to an immediate therapy for patients. There is no existing technology to manipulate neurons in people as they did in their mouse experiments. However, he says, the findings suggest that neural circuits connecting the hippocampus and the amygdala might be targeted for the development of new drugs to treat mental illness.


Watch the video: How we can grow new neurons in the brain. TED (July 2022).

