January 30, 2017

Why is it again that we yawn?

This question is probably as old as yawning itself and still remains to be answered, although there are a lot of interesting hypotheses out there:

1. Respiratory and circulatory hypothesis
The oldest and most widely believed hypothesis is that yawning functions to regulate levels of oxygen and carbon dioxide. However, studies by Provine and colleagues showed that, when measured in a controlled environment, yawning frequency was not affected by manipulating levels of oxygen and carbon dioxide [1].

2. Mental attribution hypothesis and social/communication hypothesis
This hypothesis claims that just thinking about yawning makes you yawn [2] and is closely linked to the social/communication hypothesis. Yawning is understood across cultures and frequently occurs in social contexts. Moreover, everyone has experienced a yawn being "contagious". Interestingly, susceptibility to contagious yawning correlates with empathic skills and is reduced, for example, in patients with autism or schizophrenia. However, experimental data are lacking and highly debated [3,4].

3. Arousal hypothesis
This hypothesis proposes that yawning facilitates arousal. The experimental data suggest that yawning indeed occurs during progressive drowsiness or sleepiness, which would be compatible with the hypothesis. However, EEG data could not show specific arousing effects of yawning on the brain or the autonomic nervous system [4].

Macaca fuscata juvenile yawning via Wikimedia Commons
4. Thermoregulation hypothesis
This is one of the most recent hypotheses and puts forward that yawning functions as a brain-cooling mechanism. The proposed mechanism is that an increase in brain temperature triggers a yawn, and the physiological reactions that follow promote a return to brain thermal homeostasis. However, again, experimental data remain sparse and highly debated [3,4].

Nevertheless, here are a few facts that we know for sure: yawning is a phylogenetically old trait and occurs in four vertebrate classes: mammals, reptiles, birds, and fish. A yawn is characterized by a standard cascade of movements over a 5- to 10-second period. The mouth opens widely for 4 to 6 seconds with simultaneous retraction of the tongue, usually combined with retroflexion of the head and sometimes elevation of the arms. Numerous neurotransmitters, neuropeptides, and hormones have been found to be implicated in the control of yawning; among others, dopamine, acetylcholine, glutamate, serotonin, nitric oxide, adrenocorticotropic hormone-related peptides, oxytocin, and steroid hormones facilitate yawning, whereas opioid peptides have an inhibitory effect [4].

What is there to be concluded? 
Ladies and gentlemen, there's research to be done!

Do you also sometimes wonder about the simple neuroscientific questions in everyday life, but don't really feel like looking them up right away? For questions like this, just mail us your question (cns-newsletter@charite.de) and Dr. Harebrained will give us his explanation!


[1] Provine et al., Behav Neural Biol, 1987
[2] Provine, Ethology, 1986
[3] Gallup, Neurosci Biobehav Rev, 2011
[4] Guggisberg et al., Neurosci Biobehav Rev, 2010


By Veronika Lang, originally published June 2012 – Volume 05, Issue 2, "Open Access"

January 27, 2017

Timeless Women in Science

In a recent survey for our next issue, I realized that many young female scientists do not know much about famous women in history. Time to change that!



AD 415: Hypatia of Alexandria (died in AD 415) - the first notable woman in mathematics. She also taught philosophy and astronomy. Hypatia's contributions to science include the invention of the hydrometer and charting of celestial bodies.

1754: Dorothea Christiana Erxleben (1715-1762) - the first female medical doctor in Germany.

1811: Mary Anning (1799-1847) - known for discovering important fossil finds of the Jurassic era at Lyme Regis. Mary Anning's first discovery was an ichthyosaur in 1811. Her findings had a major impact on the contemporary scientific understanding of prehistoric life and the history of the world.

1816: Sophie Germain (1776-1831) - a mathematician, physicist, and philosopher. Sophie Germain was the first female winner of the grand prize of the Paris Academy of Sciences, awarded in 1816 for her work on elasticity theory.

1842/43: Augusta Ada King, Countess of Lovelace (1815-1852) - the world's first computer programmer. In 1842/43, Ada Lovelace wrote the first algorithm to be processed by a machine while translating Charles Babbage's work on the analytical engine, an early mechanical computer.

1848: Maria Mitchell (1818-1889) - known for her work in astronomy and calculating the positions of Venus. In 1848, Maria Mitchell became the first woman member of the American Academy of Arts and Sciences, and two years later of the American Association for the Advancement of Science.

1858: Rosa Smith Eigenmann (1858-1947) - the first woman ichthyologist. Together with her husband, Carl H. Eigenmann, Rosa Smith Eigenmann discovered around 150 species of fish.

1870: Ellen Henrietta Swallow Richards (1842-1911) - chemist and ecologist. Ellen Swallow was one of the founders of environmental hygiene, a precursor of the modern science of ecology. She was the first woman admitted to MIT in Cambridge in 1870.

1903/1911: Marie Skłodowska Curie (1867-1934) - the first person honored with two Nobel Prizes. In 1903, Marie and Pierre Curie were awarded half the Nobel Prize in Physics for their research on the radiation phenomena. The second Nobel Prize was awarded to Marie Curie in Chemistry for the discovery of the elements radium and polonium and the isolation of radium. She was also the first female professor at the University of Paris.

1912: Annie Jump Cannon (1863-1941) - an astronomer who developed the Harvard Classification Scheme, together with Edward Charles Pickering, in 1912. This stellar classification based on the temperatures of the stars is still in use today.

1913: Rahel Hirsch (1870-1953) - the first woman to become a professor in medicine in the Kingdom of Prussia in 1913 and thus she was also the first woman to work at Charité. Rahel Hirsch discovered that solid particles could penetrate from the veins and arteries into the kidneys and get harmlessly expelled through the urine, which was acknowledged only after her death and retroactively named "Hirsch Effect" by Gerhard Volkheimer. In 1919, she had to quit science and work as a medical doctor because she was not paid at the Charité. She finally went to London in 1938 because Jews could no longer work under the Nazi regime.

1914: Lillian Moller Gilbreth (1878-1972) - one of the first female engineers holding a PhD and considered to be the mother of modern management. Together with her husband, Frank Bunker Gilbreth, Sr., Lillian laid the groundwork for industrial engineering. In her book "The Psychology of Management", published in 1914, she demanded that psychology should be part of management.

1917: Florence Rena Sabin (1871-1953) - the first woman to become a full professor at a medical college when she was appointed professor of histology at the Johns Hopkins School of Medicine in 1917. Florence Sabin was also the first woman elected to the National Academy of Sciences (1924) and the first woman to head a department at the Rockefeller Institute for Medical Research (1925).

1918: Amalie "Emmy" Noether (1882-1935) - considered the most important woman in the history of mathematics by Albert Einstein and David Hilbert. Emmy Noether contributed considerably to the fields of abstract algebra and theoretical physics. Noether's theorem, published in 1918, proved the connection between symmetries and conservation laws.

1926: May Edward Chinn (1896-1980) - a physician and, despite discrimination on the basis of her sex and race, the first African-American woman to graduate from Bellevue Hospital Medical College and to intern at Harlem Hospital. May Edward Chinn is also known for her work in cancer research, helping to develop the Pap smear test for cervical cancer.

1928: Margaret Mead (1901-1978) - a cultural anthropologist and one of the most important representatives of cultural relativism in the 20th century. She also paved the way for the sexual revolution of the 1960s and 1970s. The first of her 23 books, Coming of Age in Samoa, was published in 1928, reflecting her strong belief in cultural determinism.

1934: Dame Jane Morris Goodall (born in 1934) - a primatologist who is considered to be the world's foremost expert on chimpanzees. She studied the social and family interactions of wild chimpanzees for 45 years.

1935: Irène Joliot-Curie (1897-1956) - the daughter of Marie and Pierre Curie. In 1935, Irène Joliot-Curie was awarded the Nobel Prize in Chemistry, together with her husband Frédéric Joliot-Curie, "for their synthesis of new radioactive elements". To date, the Curie family counts more Nobel laureates than any other.

1940: Roger Arliner Young (1899-1964) - the first African-American woman to receive a doctoral degree in zoology, in 1940. Her contributions to science include research on the effects of radiation on sea urchin eggs, the mechanisms of hydration and dehydration of living cells, and the control of salt concentration in Paramecium.

1944: Lise Meitner (1878-1968) - the second woman to receive a doctoral degree in physics from the University of Vienna, in 1905. Lise Meitner is known for her work on the discovery of nuclear fission, for which her colleague Otto Hahn was awarded the Nobel Prize in Chemistry in 1944. Five years later, she received the Max Planck Medal of the German Physical Society. In 1997, nearly three decades after her death, element 109 was named meitnerium in her honour.
 
1944: Helen Brooke Taussig (1898-1986) - the founder of the field of pediatric cardiology. Helen Taussig developed a concept first applied in the Blalock-Taussig shunt in 1944. This procedure extended the lives of children born with blue baby syndrome (anoxemia).

1947: Gerty Theresa Cori (1896-1957) - the first woman to win a Nobel Prize in Physiology or Medicine and the third woman to receive a Nobel Prize in science. In 1947, Gerty Cori was awarded one half of the Nobel Prize together with her husband, Carl Ferdinand Cori, "for their discovery of the course of the catalytic conversion of glycogen". They shared the prize with Bernardo Houssay.

1947: Mária Telkes (1900-1995) - a chemist who created the first thermoelectric power generator in 1947 and the first thermoelectric refrigerator in 1953 using the principles of semiconductor thermoelectricity.

1951: Rózsa Péter (1905-1977) - the first Hungarian female mathematician to become an Academic Doctor of Mathematics and to be elected to the Hungarian Academy of Sciences. Rózsa Péter is credited for her work on recursive function theory, published in 1951. She also wrote "Playing with Infinity" (1957), which was translated into at least 14 languages.

1951/2: Grace Murray Hopper (1906-1992) - a pioneering computer scientist who was one of the first programmers of the Harvard Mark I computer. Grace Hopper also developed the first compiler for a computer programming language (the A-0 System) in 1951/52, as well as COBOL, one of the first programming languages.

1953: Rosalind Elsie Franklin (1920-1958) - a pioneering molecular biologist. Rosalind Franklin contributed critically to the model of the structure of DNA published in 1953 by Francis Crick and James D. Watson, who, together with Maurice Wilkins, were awarded the Nobel Prize in Physiology or Medicine in 1962. She is also known for her work on the structures of the tobacco mosaic and polio viruses.

1957/58: Brenda Milner (born in 1918) - the first to study the role of the hippocampus in the formation of memory. In 1957/58, Brenda Milner became famous for her work on the patient H.M., who showed severe memory deficits after damage to the medial temporal lobe.

1962: Rachel Louise Carson (1907-1964) - a marine biologist and nature writer. Rachel Carson's book "Silent Spring" (1962) laid the foundation for the US and worldwide environmental movements and is considered one of the most influential books of the 20th century.

1963: Maria Goeppert-Mayer (1906-1972) - in 1963, the second female Nobel laureate in Physics after Marie Curie. Maria Goeppert-Mayer was honored, together with Johannes Hans Daniel Jensen and Eugene Paul "E. P." Wigner, "for their discoveries concerning nuclear shell structure" of the atomic nucleus.

1964: Dorothy Mary Crowfoot Hodgkin (1910-1994) - a chemist who worked on the development of protein crystallography. Dorothy Hodgkin discovered the three-dimensional structures of cholesterol (1937), penicillin (1945), vitamin B12 (1954), and insulin (1969). In 1964, she was awarded the Nobel Prize in Chemistry "for her determinations by X-ray techniques of the structures of important biochemical substances".

1967: Helen Battles Sawyer Hogg (1905-1993) - credited for her outstanding achievements in astronomy. Helen Sawyer Hogg advanced research on globular clusters, for which she received the Rittenhouse Astronomical Society Silver Medal Award in 1967.

1975: Chien-Shiung Wu (1912-1997) - the first woman elected president of the American Physical Society, in 1975. During the Second World War, Chien-Shiung Wu worked on the Manhattan Project. In the 1950s, Tsung-Dao Lee and Chen Ning Yang showed in their theoretical studies that the "law of conservation of parity" was invalid, which Chien-Shiung Wu confirmed experimentally. For their work, Tsung-Dao Lee and Chen Ning Yang received the Nobel Prize in Physics in 1957.

1977: Rosalyn Sussman Yalow (born in 1921) - the second woman to win a Nobel Prize in Physiology or Medicine. Rosalyn Yalow developed the radioimmunoassay (RIA) technique for measuring peptide hormones together with her co-worker Solomon Berson, for which she was awarded the Nobel Prize in 1977. She shared the prize with Roger Guillemin and Andrew Schally.

1983: Barbara McClintock (1902-1992) - awarded the Nobel Prize for Physiology or Medicine in 1983 "for her discovery of mobile genetic elements". She is the only woman to receive an unshared Nobel Prize in this category.

1986: Rita Levi-Montalcini (born in 1909) - the oldest living Nobel laureate and the first to reach a 100th birthday. Rita Levi-Montalcini is a neurologist who received the Nobel Prize in Physiology or Medicine, together with her colleague Stanley Cohen, for the discovery of the Nerve Growth Factor (NGF). Read a previous blog entry dedicated to Rita Levi-Montalcini here.

1988: Gertrude Belle Elion (1918-1999) - a biochemist and pharmacologist. Together with George Herbert Hitchings and Sir James Whyte Black, Gertrude Elion received the Nobel Prize in Physiology or Medicine "for their discoveries of important principles for drug treatment" in 1988. She developed the first immuno-suppressive agent used for organ transplants as well as drugs to treat, among others, leukemia, malaria, viral herpes, and meningitis.

1992: Mae Carol Jemison (born in 1956) - the first African American woman to travel in space. Mae Jemison is a physician and NASA astronaut who went into orbit aboard the Space Shuttle Endeavour on September 12, 1992.

1995: Christiane Nüsslein-Volhard (born in 1942) - a biologist who was awarded the Nobel Prize in Physiology or Medicine, along with Eric Wieschaus and Edward B. Lewis, for their work on "the genetic control of embryonic development" in 1995. Christiane Nüsslein-Volhard is the director of the Max Planck Institute for Developmental Biology and of its Genetics Department in Tuebingen.

2002: Elizabeth Loftus (born in 1944) - a psychologist known for her research on false memories and the misinformation effect. Elizabeth Loftus was ranked 58th of the 100 most influential psychologists of the 20th century and the highest ranked woman on the list in 2002.

2004: Linda B. Buck (born in 1947) - a biologist credited for her work on the olfactory system. "For their discoveries of odorant receptors and the organization of the olfactory system", Linda Buck and Richard Axel received the Nobel Prize in Physiology or Medicine in 2004.

2008: Françoise Barré-Sinoussi (born in 1947) - a virologist known for her work on the human immunodeficiency virus (HIV). The discovery of this virus resulted in one half of the Nobel Prize in Physiology or Medicine in 2008 which Françoise Barré-Sinoussi received along with Luc Montagnier.

2009: Elizabeth Helen Blackburn (born in 1948) and Carolyn Widney "Carol" Greider (born in 1961) - two biologists who, together with Jack William Szostak, were awarded the Nobel Prize in Physiology or Medicine in 2009 "for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase". The enzyme was discovered in 1984 when Carol Greider was still a graduate student of Elizabeth Blackburn.

2009: Ada E. Yonath (born in 1939) - the first Israeli woman to be awarded the Nobel Prize. In 2009, Ada Yonath was credited for her groundbreaking work on the structure and function of the ribosome, together with Venkatraman Ramakrishnan and Thomas Arthur Steitz.


 ...and the list goes on!
Further Reading: Women in Science - A Selection of 16 Significant Contributors: http://www.sdsc.edu/ScienceWomen/GWIS.pdf

this article originally appeared in our November 2010 newsletter, Volume 03, Issue 3

January 26, 2017

Why is light reflected off animal eyes at night, but not humans?


While on my way home yesterday, two eyes were staring at me out of the bushes. What is the reason for this scary phenomenon?


The phenomenon of 'eye-shine' is seen in a variety of species, and is thought to be due to the presence of an intraocular reflecting structure, the tapetum lucidum. This structure was described as early as 1854 in the Medical Lexicon by Robley Dunglison:

"TAPETUM, Ta'petry, Tape'tum choroidea. A shiny spot, on the outside of the optic nerve, in the eyes of certain animals, which is owing to the absence of the pigmentum nigrum, occasioning the reflection of a portion of the light from the membrana Ruyschiana."




Raccoons in a tree with their tapeta lucida reflecting camera flash


The Tapetum Lucidum
And he wasn't too far off back then. The tapetum lucidum is essentially a biological reflector system that is a common feature in the eyes of vertebrates. Some species, usually diurnal animals (primates, squirrels, pigs), lack this structure. Its location and composition differ depending on the species: the tapetum lucidum can be located either within the retina or within the choroid behind the retina. The reflecting material in the tapetal cells also varies, from guanine crystals (in some teleosts and reptiles) and lipids (in some teleosts and mammals) to zinc cysteine, riboflavin, or collagen (most mammals). Therefore, not all animals' eyes glow the same color; the shine ranges from yellow to green to blue and everything in between. Age and the angle of the light can also affect the color of the eye-shine. The tapetum functions like a mirror and gives the light-sensitive retinal cells a second opportunity for photon-photoreceptor stimulation, thereby enhancing visual sensitivity at low light levels.
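To get a feel for the size of this second-pass advantage, here is a back-of-the-envelope sketch; the absorption and reflection probabilities are invented for illustration, not measured values:

```python
# Rough estimate of the sensitivity gain from a tapetum lucidum.
# Both probabilities below are illustrative assumptions, not measured values.
p_absorb = 0.3   # chance a photoreceptor catches a photon on one pass (assumed)
r_tapetum = 0.8  # fraction of unabsorbed light the tapetum reflects back (assumed)

single_pass = p_absorb
# Second chance: light that was missed (1 - p_absorb), then reflected
# (r_tapetum), can be absorbed on the way back out (p_absorb again).
double_pass = p_absorb + (1 - p_absorb) * r_tapetum * p_absorb

print(f"capture without tapetum: {single_pass:.2f}")                 # 0.30
print(f"capture with tapetum:    {double_pass:.2f}")                 # 0.47
print(f"relative gain:           {double_pass / single_pass:.2f}x")  # 1.56x
```

Under these assumed numbers, the mirror buys roughly a 50% increase in captured photons, exactly the kind of edge that matters at low light levels.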

The Red Eye Effect
But wait a second... what about the 'human' red-eye effect in pictures? Interestingly, this effect rarely occurs in animals that have a tapetum lucidum; however, as we all know, it often does in humans. It occurs because the camera flash is too fast for the pupil to constrict, allowing much of the very bright light from the flash to pass through the pupil into the eye. The light then reflects off the fundus at the back of the eyeball and back out through the pupil. The red to orange color that we then observe in photographs is due to the reflection of light from choroidal blood vessels.


Do you also sometimes wonder about the simple neuroscientific questions in everyday life, but don't really feel like looking them up right away? For questions like this, just mail us your question (cns-newsletter@charite.de) and Dr. Harebrained will give us his explanation!

Dunglison, Medical Lexicon, p. 843, 1854
Ollivier, Vet Ophthalmol, 2004

By Veronika Lang, originally published March 2012 in Volume 05, Issue 1, "Mental Health Disorders"

January 24, 2017

“Good Scientific Practice” – Ideals vs. Reality

Is science corrupted? Who is the culprit? What can we do to change that?

The more corrupt science gets, the more universities, funding agencies and journals inform us about how to maintain research integrity and good scientific practice. In their guidelines we learn, for instance, how to prevent misconduct, what qualifies for authorship, and how to establish proper research procedures. So theoretically, everyone should know how proper science ideally works, right?
Why are we then still faced with retractions, fraud, and questionable research practices?

"Piled Higher and Deeper" by Jorge Cham
www.phdcomics.com
As important as such policies may be, it is at least equally important to ask how realistic the implementation of such lofty goals is, given the current scientific system. It is naïve to assume that the whole system could be healed by the formulation of some guiding rules without also putting basic scientific principles up for discussion. The scientific publication system, for instance, is always deemed a cornerstone of science and has rarely been questioned. But it is also inevitably linked to the well-known pressure to publish, meaning that a scientist’s reputation depends on their number of publications, preferably in “high-impact” journals. However, such high-profile journals usually favor novel and interesting findings and are simply not interested in so-called “null findings” or replications, independent of the scientific accuracy and rigor of the work.
It is thus highly insincere when journals, on the one hand, demand proper research procedures but, on the other hand, base their decisions about the acceptance of manuscripts on completely different criteria. The enforcement of good scientific practice has to address the foundations of the scientific system, for example by limiting the power of publishing companies or by developing alternative models of publishing.
It's time to take action!
There are dozens of other examples – like hierarchical structures that hamper scientists (especially young ones) from enforcing their rights, for instance in authorship disputes – showing that the mere formulation of codes of conduct is in itself not much more than lip service. It is far more critical to establish the structures for their realization, even though this often implies striking new paths and throwing old structures overboard. The motto for the future should thus be: facta, non verba!

Anonymous, originally published March 2016 in Volume 09, Issue 1 "The Aging Brain"

January 22, 2017

Critical neuroscience as an approach to neuroscientific practice

Happy Birthday Sir Francis Bacon! (22 Jan 1561 - 9 Apr 1626).
The English philosopher is remembered as the father of the scientific method [1]. He maintained that scientists should concentrate on certain important kinds of experimentally reproducible situations. The investigator should aim to make a gradual ascent to more and more comprehensive laws, acquiring greater and greater certainty as he or she moves up the pyramid of laws.


As a lab collaboration, we wrote a science-themed adult film script when one of the authors of this article was working as a research assistant in the summer of 2006. The opening scene takes place between Angela, the 30-something jaded post-doc, and Tammie, the eager undergrad research assistant. In the dialogue preceding their first intimate scene, Angela asks Tammie how much she wants to be a scientist, to which Tammie responds with glowing eyes: "I want to be a really good scientist real, real bad." Which of course begs the question:


 What does it mean to be a good scientist?
In the case of Tammie and Angela, the answer emerged in the form of unspeakable acts with a pipette and numerous Eppendorf tubes. But at a moment in which neuroscience has so much cultural authority, one could interpret being a 'good neuroscientist' as developing a deeper awareness of the reciprocal interactions between culture and neuroscience. How does culture shape neuroscientific ideas and applications, and what are the implications of neuroscientific findings for culture? Just as a neuron does not exist in isolation in the brain, the brain is situated in a world of meanings.
In the last several decades, cognitive neuroscience experiments have provided striking insights into the neural underpinnings of memory, language, vision, emotion and social interaction. Neuroscience has, in this way, changed our common sense about ourselves. In more recent years, non-invasive brain-imaging tools, coupled with academic and public fascination with themes such as the quest for the neural correlates of moral reasoning, the brain basis of revenge, cooperation and empathy or imaging-based lie detectors have seemed to render the limits of neuroscience almost boundless.
How can we, as neuroscientists, incorporate the epistemological and ethical issues that arise from the widening gap between our tools and our research questions — relating to self, subjective experience and the understanding of others — into our research practice? As the gap between what the tools can do and the questions to which they are applied widens, the experimental process involves making more inferences and interpretations. Perhaps we could here benefit from thinking about the social and cultural context of the way we study the brain.
The new focus on brain-centered explanations has immediate implications for how we understand, and deal with, normal and abnormal behavior in everyday life. The consequences are exemplified in psychiatry, in which biological reductionism brings about explanations of human problems that tend to be based in the body alone rather than in our social circumstances and histories. Such biological models tend to reduce personhood, the subjective and contextualized self, to 'brainhood' (Vidal, 2005). The critical neuroscience approach traces the journey of a neuroscience research topic, for example antisocial behavior, empathy, deception, volition or political orientation, from its entry into the laboratory as a question for neuroscience, to its translation into the language and practices of neuroscience that produce a neuroscientific fact and the subsequent application outside the laboratory. This is not necessarily a linear process; indeed the facts about the mind produced by neuroscience can direct new questions and even give rise to new behavioral phenomena. For example, Hacking has suggested this 'looping effect' in the case of autism and multiple personality disorder, which he calls 'interactive kinds' (Hacking, 1995). He argues that scientific classification of people interacts with the experience of those being classified and may create new ways of being. As such, the classification itself changes and may in turn shift the target of study over time.

What is 'critical neuroscience'?
Central to 'critical neuroscience' is the idea that analysis of the social and cultural structure of neuroscience may inform the design of experiments in a meaningful way, particularly studies of the social or cultural brain. The idea was born out of shared interests between junior researchers working in neuroscience, philosophy and anthropology, who were becoming increasingly aware of the social and ethical issues within the brain sciences, as well as the gulf between the disciplines that deal with them and experimental neuroscience itself.
The shared objective of critical neuroscience is to search for ways to incorporate an understanding of the social structure of neuroscience into the practice of empirical work. It is particularly timely as the pace of cognitive neuroscience research steps up and its outcomes materialize in daily life, through new ways of being, new categories of normal and abnormal, and various new forms of intervention.

What is 'critical' about 'critical neuroscience'?
The brain sciences, particularly social neuroscience, are producing neuropsychiatric theories of behavior that have recently been imported into clinical contexts, law courts, commercial marketing, psychoanalysis and thus the popular consciousness about our minds, our selves and the way we live. Given that cognitive neuroscience produces not only theories of subjective experience, but also interventions directed towards our selves, this field of science seems to hold a special status in terms of its potential for self-understanding, the understanding of social behavior as well as the cultural influence on behavior. As such, our research also warrants critical analysis.
Critical neuroscience aims to assess some of the basic assumptions upon which cognitive neuroscience is founded, and to evaluate the social and political implications of the experimental research. Ultimately, it seeks to find ways to incorporate theoretical critique into practice in the laboratory, in particular, to apply to empirical approaches to the study of the 'cultural' and 'social brain'. In order for this to happen, critical neuroscience is concerned with examining the philosophical foundations, cultural practices and economic imperatives in neuroscience which may shape the experimental and explanatory frameworks produced through our research.


 For further information about Critical Neuroscience visit www.critical-neuroscience.org



By Daniel S. Margulies and Suparna Choudhury, Max Planck Institute for the History of Science, Berlin, Berlin School of Mind & Brain

This article originally appeared June 2011 in Volume 04, Issue 02 "Good & Bad Scientific Practice"

January 20, 2017

Cyborgs, Brain Highways, and Memory Erasure: The Future of Neuroscience

What will our future look like? Imagine how amazing it would be to move things around or turn on lights with our minds alone. Or if we could download our memories onto a disk and retrieve them later! Sounds like something out of ancient sci-fi movies, right? But these things might not be that far from reality!


In the last few decades, neuroscience has made considerable strides in unraveling the mysteries of the human brain. With advances in genetics and physics, such as human genome sequencing, optogenetics, and high-resolution microscopy, scientists can now manipulate specific areas within the brain and see how they affect behavior. Large-scale projects such as the US BRAIN Initiative have been launched, bringing together scientists from all over the world with the aim of developing next-generation tools to explore how neural connections lead to thoughts, emotions, or movements. Here’s a list of ongoing projects which could profoundly improve our understanding of the brain:

Connectomics
Mapping the whole human brain will be one of the biggest scientific challenges of the 21st century. The connectome project was launched in 2005 with the aim of determining a comprehensive map of every neuronal connection among the roughly 70 million neurons of a mouse brain - what is referred to as the ‘wiring diagram’ - and ultimately of mapping the trillions of connections in the human brain. In the years to come, the circuitry of the whole brain will be known, and this can help us answer how brain circuitry changes during development, aging, and disease, or with experience. Maybe one day we could leave behind our connectome, with our memories and experiences [1]!
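A ‘wiring diagram’ is naturally represented as a directed graph in which nodes are neurons and weighted edges are synapses. Here is a minimal sketch in Python; the neuron names and synapse counts are invented for illustration:

```python
import networkx as nx

# Toy "wiring diagram": neurons as nodes, synapses as weighted directed edges.
# All names and counts below are made up for illustration.
connectome = nx.DiGraph()
connectome.add_weighted_edges_from([
    ("sensory_1", "inter_1", 5),   # 5 synapses from sensory_1 to inter_1
    ("sensory_2", "inter_1", 3),
    ("inter_1",   "motor_1", 7),
    ("inter_1",   "motor_2", 2),
])

# The kinds of questions a connectome answers: which neurons does inter_1
# drive, and how much synaptic input converges onto it?
print(list(connectome.successors("inter_1")))            # ['motor_1', 'motor_2']
print(connectome.in_degree("inter_1", weight="weight"))  # 8 synapses converge
```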

Blue Brain Project
Another ambitious project, the Blue Brain Project, was initiated in 2005 with the goal of simulating the whole human brain. Scientists at EPFL, Switzerland, have already made progress in modeling microcolumns of the mouse brain to answer how a network of neurons processes sensory information. The computer model uses an overwhelming amount of information on the types of neurons and their electrical properties, shapes, and connectivity to simulate thousands of neurons, allowing scientists to understand how the brain processes information and how brain waves are generated [2].
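The Blue Brain models are vastly more detailed, but the core idea of simulating a neuron's electrical behavior can be illustrated with a minimal leaky integrate-and-fire unit; the parameters below are illustrative textbook-style values, not Blue Brain parameters:

```python
# Minimal leaky integrate-and-fire neuron: a toy version of the kind of unit
# that large-scale simulations wire together by the thousands.
dt, duration = 0.1, 100.0                        # time step and total time (ms)
tau = 10.0                                       # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV), assumed
r_m, i_in = 10.0, 2.0                            # resistance (MOhm), input (nA)

v, spike_times = v_rest, []
for step in range(int(duration / dt)):
    # Euler step of dV/dt = (-(V - V_rest) + R*I) / tau
    v += dt * (-(v - v_rest) + r_m * i_in) / tau
    if v >= v_thresh:                            # threshold crossing: spike, reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {duration:.0f} ms")
```

With this constant input, the membrane charges toward threshold roughly every 14 ms, so the toy neuron fires about seven times in the simulated 100 ms.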

Source: Grau et al., PLoS ONE, 2014
Brain-Machine Interface
Many of us might remember the kick-off of the football World Cup in Brazil, delivered by a paraplegic man using a mind-controlled exoskeleton. This was made possible by advances in brain-machine interface (BMI) technology by scientists at Duke University. BMI technology provides a direct electronic interface and can convey messages and commands directly from the human brain to a computer. The electrical activity of the conscious brain is monitored using electroencephalogram (EEG) signals, with detected patterns being digitized and sent to a computer or, in the case of neuroprosthetics, to the control unit of a robotic arm or leg. Scientists have made progress in developing neuroprosthetics for paralyzed patients, enabling them to grasp things, even with up to six degrees of freedom [3]. So there is certainly hope ahead for paralyzed patients to walk and perform day-to-day activities.
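The decoding step at the heart of such a system can be sketched in a few lines; the sampling rate, frequency band, and threshold below are assumptions that a real BMI would calibrate per user and per electrode:

```python
import numpy as np
from scipy.signal import welch

# Sketch of the core BMI decoding step: turn a window of EEG samples into a
# discrete command via band power. All constants are illustrative assumptions.
FS = 250            # Hz, a typical EEG sampling rate
BAND = (8.0, 13.0)  # alpha band, Hz
THRESHOLD = 0.05    # band-power threshold (arbitrary units, to be calibrated)

def decode_command(eeg_window: np.ndarray) -> str:
    """Map one second of single-channel EEG to a command."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    band_power = np.trapz(psd[in_band], freqs[in_band])
    return "move" if band_power > THRESHOLD else "rest"

# One second of fake EEG (white noise); a real system would stream amplifier
# samples here and forward the command to a prosthetic controller.
print(decode_command(np.random.randn(FS)))
```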



Deep Brain Stimulation
Since 1987, deep brain stimulation (DBS) has become a widely recommended treatment option for movement and neuropsychiatric disorders such as Parkinson’s disease, chronic pain, major depression, and obsessive-compulsive disorder. DBS involves the implantation of a medical device called a neurostimulator, which sends electrical impulses through implanted electrodes to specific targets in the brain. This treatment has proven effective in some patients, but it can also cause neuropsychiatric side effects such as hallucinations, euphoria, cognitive dysfunction, or depression. Clinical and technological advances in DBS will need to continue in order to offer a better quality of life to patients with debilitating disorders in the years to come [4].

Future possibilities
The future of neuroscience looks exciting and promising [5]. A few years ago, scientists at MIT successfully implanted false memories in mice simply by reactivating the cell assembly encoding the memory of foot shocks [6]. The prospect of erasing bad memories, or of implanting memories, in cases of post-traumatic stress disorder represents an exciting possibility for humans. Recently, scientists managed to transmit a message into the mind of a colleague 5,000 miles away using brain waves [7]. Maybe the day will come when we can send emails telepathically!
We have only reached the tip of the iceberg in unlocking the clockwork of the brain. This is evident from the fact that we have not yet been able to fully understand even the seemingly simple 302-neuron circuitry of C. elegans. Fundamental questions, such as how we perceive with our senses and how we navigate through the world, or bigger ones, such as how our thought processes work and what makes us conscious beings, remain unanswered. The sheer complexity of the human brain will surely keep neuroscientists around the world busy for decades to come.
  1. braininitiative.nih.gov
  2. École Polytechnique Fédérale de Lausanne
  3. Collinger et al., Lancet, 2013
  4. ninds.nih.gov
  5. Wall Street Journal
  6. Ramirez et al., Science, 2013
  7. dailymail.co.uk
by Aarti Swaminathan, PhD Student AG Schmitz

January 18, 2017

The Future Is Now… And It Grants PhDs

What are your expectations and hopes for 2017? Thinking about the future can at times be intimidating and scary. Did you know that trying to predict the future is a real science?


Tell me, what will the weather be like tomorrow? Do you think your next experiment will go well? How will food supply chain systems influence South American elections 20 years from now? The future can be both tangible and remarkably remote. Trying to predict it may at times seem hopeless, yet at other times, may be as simple as sticking your head out the window and deciding to take an umbrella to work. 

Future Studies: The Academic Discipline of Today?
Source: Vintageprintable1, via Flickr 

To many, a “professional futurist” may conjure the image of someone in a tin-foil hat reading a lot of science fiction, and in fact, the discipline does have its roots in literature. For many, the birth of futurism came with Samuel Madden’s Memoirs of the 20th Century (written 1733), which tried to predict geopolitical trends but had little to say about the development of science or technology [1]. H.G. Wells took another (and doubtlessly flashier) approach by imagining a world in which aliens, gadgets, and time travel defined human possibilities. Real 20th-century problems, like the possibility of thermonuclear war or the benefits of a planned economy, also helped boost the discipline, as did the rise of advanced computing and simulation.
Today, Future Studies is a full-fledged academic discipline, available for study (including PhDs) around the world. Some schools focus on sub-specialties, such as predicting business trends at the Turku School of Economics [2], while others take a more general approach. Their main concern may be summarized by “three Ps and a W”: futures that are possible, probable or preferable, plus so-called “wildcard” futures [3]. At face value, the whole concept of futurism seems a little wacky, but the more you think about it, the more it seems to make sense. More traditional fields such as history and sociology attempt to reverse engineer complex systems (i.e. society and culture) to make sense of what has happened. Why not try the whole thing the other way around?

Not Just Science Fiction
It may be true that futurism tries to make sense of a great many non-static, complexly interrelated factors, but it can be broken down into more tangible components. This is done every day not just by professional futurists, but by people in more mundane professions such as stock traders, meteorologists, and insurance assessors. One common predictive method in all of these disciplines is predictive modelling: building a simulation that describes possible outcomes given possible constraints. Systems engineering, too, builds projects around estimations of future states, for example, building a “smart” supply chain that can respond to up-to-the-minute demands.
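To make “predictive modelling” concrete, here is a minimal Monte Carlo sketch; the growth and volatility figures are invented purely for illustration:

```python
import random

# Minimal Monte Carlo sketch of predictive modelling: simulate many possible
# futures under assumed constraints, then summarize the outcomes.
# The 2% mean growth and 5% volatility are invented for illustration.
def one_future(demand=100.0, years=20):
    for _ in range(years):
        demand *= 1 + random.gauss(0.02, 0.05)
    return demand

futures = sorted(one_future() for _ in range(10_000))
print("probable future (median):", round(futures[len(futures) // 2], 1))
print("90% of simulated futures fall between",
      round(futures[500], 1), "and", round(futures[9_500], 1))
```

The single "probable" value matters less than the spread: the simulation yields a whole distribution of possible futures, which is exactly how futurists distinguish the possible from the probable.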
Other methods employed by futurists have been adopted from the social sciences. For example, social network analysis uses the nature of connections within a community to predict how that community will respond to changes in the environment [4]. It was originally used in anthropology and sociology but is becoming more and more common in other “futuristic” fields like epidemiology and marketing. Another example is the Delphi Method, which uses structured interviews and feedback to combine and extrapolate from the opinions of experts [5].
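The Delphi Method itself is a human process of structured interviews, but its core loop (estimate, see the group's view, revise) can be caricatured in a few lines; the number of experts, the rounds, and the halfway revision rule are all assumptions made for this sketch:

```python
import random
import statistics

# Caricature of a Delphi study: experts estimate a quantity, see the group
# median, and revise toward it over several rounds. All parameters assumed.
random.seed(1)
estimates = [random.uniform(10, 90) for _ in range(12)]  # initial expert guesses

for round_no in range(1, 4):
    group_median = statistics.median(estimates)
    # Each expert moves halfway toward the group median (assumed behavior).
    estimates = [e + 0.5 * (group_median - e) for e in estimates]
    spread = max(estimates) - min(estimates)
    print(f"round {round_no}: median={group_median:.1f}, spread={spread:.1f}")
```

The printed spread shrinks round by round, mirroring how Delphi panels converge toward a consensus forecast.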
Yet for every interdisciplinary success, there are methods that appear very puzzling, at least to an outsider. For example, most people are familiar with trend analysis, which identifies factors that strongly influence the present and uses them to project future scenarios. However, future studies also employs Emerging Issues Analysis [6], which does the exact opposite: identifying unimportant forces or events in the present and predicting what could happen if they were to become important in the future. There are also a host of other methods based on “visioning” and “future biographies” which are significantly heavier on imagination than on math or statistics. But if they end up being accurate (see, for example, H.G. Wells’ accurate predictions about modern warfare [7]), are they any less valid?

Think Like a Futurist
Would it surprise you to learn that you are likely also a futurist? While your day-to-day may not involve planning for the rise and fall of geopolitical powers, most readers of this newsletter make their living by making and testing predictions, in the form of scientific hypotheses. After all, having semi-reasonable expectations about your experiments means that you will most likely make efficient use of your time and resources in the lab. The role of prediction in biomedicine has been extensively studied and is beginning to be rolled out in concrete practice. Less is known about how skillfully biologists can tell the future.

Thinking about the future, especially in an academic context, can have profound implications for the way we envision and plan for the consequences of our actions. But one side effect is that it also calls practitioners to question their assumptions about the present. If you fervently believe that, say, a certain candidate winning the American election would result in disastrous foreign policy decisions, what does that say about foreign policy at present? Or the electoral system that would get them there in the first place? At the end of the day, predicting the future calls for a profound understanding of the present. Whether or not you choose to make future studies a full-time occupation, that should be a universal priority.




[1] Alkon, Sci Fict Stud 1985
[2] University of Turku
[3] Wikipedia
[4] Otte and Rousseau, J Inf Sci, 2002
[5] RAND Corporation
[6] Inayatullah, Foresight 2008
[7] The Telegraph

by Constance Holman, PhD Student AG Schmitz
this article originally appeared September 2016 in Volume 09, Issue 3 "Happy Anniversary MedNeuro!"