Showing posts with label reproduction. Show all posts

February 12, 2018

You have Beautiful Eyes, Hundreds of Them!

What do people look for in a partner? Many of us would love to know which aspects of our appearance are of most interest to potential mates. Well, we could start by posing the question 'What do people look at in a partner?' After all, what the eye does not see, the heart does not grieve over, or in this case, throb for.

Avian Eyetracking Shows Peahens Checking Out Males' Train Feathers
This is one of those research questions, however, where straightforward questionnaire data are likely to raise suspicions. How many people are going to admit that they look straight at someone's buttocks or cleavage? Eyetracking, on the other hand, can reveal a great deal about what people actually attend to, and has delivered such edifying conclusions as: men seem to like to assess each other's crotches [1]; women check each other out as much as they do men [2]; and men look longer at larger breasts (even when controlling for the larger area of the visual field they occupy) [3].


Picture reproduced with permission from Yorzinski et al, J Exp Biol, 2013

A recent novelty, however, is the application of eyetracking to the romantic interests of birds. And what better bird to begin with than the peacock, famous for its eye-catching train of iridescent feathers, rattled in mating displays. Of course, in the animal kingdom it tends to be the females who do the ogling, so a recent study tracked peahens' eye movements while the males strutted their stuff [4].
The peahens were not especially impressed, spending less than a third of their time looking at the male at all. Nor were they interested in everything he had to offer. The upper train, where most of the eyespots are located, was of relatively little interest. Instead, the females' gaze lingered on the lower train, which they scanned from side to side in a way that suggests they were assessing its symmetry, an important feature in sexual selection [5].
So how can we make sure our next date results in love at first saccade? The authors offer a somewhat disheartening speculation. Briefer viewing times may simply indicate that a trait is easier to assess. Peahens may look less at train eyespots simply because it is very easy to see whether a male has fewer than required, and he may then be rejected without further ado [6].

[1] http://bit.ly/NCem7I
[2] Rupp and Wallen, Horm Behav, 2007
[3] Gervais et al, Sex Roles, 2013
[4] Yorzinski et al, J Exp Biol, 2013
[5] Moller and Thornhill, Amer Nat, 1998
[6] Dakin and Montgomerie, Anim Behav, 2011

by Luke Tudge,
This article originally appeared in 2014 in CNS, Volume 7, Issue 2, Neuroscience of Love

January 26, 2018

On the Importance of Publishing Negative Results


I had a rough time during my PhD, with many experiments that did not support a common hypothesis in my field of research. However, I was able to publish a manuscript describing my negative data. Recently, I even won a prize for doing so.

When scientists embark on a new study, they formulate a hypothesis that they want to test. Sometimes the experiments do not support this hypothesis. If the obtained data cannot confirm a hypothesis or replicate previous results, they are called negative results. They are sometimes also called NULL results, because the null hypothesis H0 (the hypothesis that there is no difference between the experimental and control groups) was not rejected. Often, negative results are more accurate and more informative than results that support a new hypothesis.
If a test of experimental data comes up significant with p < 0.05, we reject H0 in favour of H1 (the hypothesis that the results show an effect). Notably, we only tested H0, and the p-value says nothing about the probability of H1 being true. Conversely, a non-significant p-value does not mean that H0 is true; it means only that the data could not reject it (perhaps because they lacked statistical power). In a Bayesian sense, however, the data underlying a non-significant p-value can be strong evidence for H0.
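To make this concrete, here is a minimal sketch with made-up data: the group values, sample sizes, and the BIC-based Bayes factor approximation are illustrative assumptions, not taken from any study discussed here. It pairs an ordinary t-test with a rough Bayes factor that, unlike the p-value, can quantify evidence in favour of H0:

```python
# Illustration only: two hypothetical groups with no true difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=10.0, scale=2.0, size=50)  # no true effect

t, p = stats.ttest_ind(control, treatment)
print(f"p = {p:.3f}")  # with no true effect, p is usually non-significant

# Rough BIC approximation to the Bayes factor BF01 = exp((BIC1 - BIC0) / 2),
# comparing a one-mean model (H0) against a two-means model (H1).
n = len(control) + len(treatment)
pooled = np.concatenate([control, treatment])
rss0 = np.sum((pooled - pooled.mean()) ** 2)                # H0: one shared mean
rss1 = (np.sum((control - control.mean()) ** 2)
        + np.sum((treatment - treatment.mean()) ** 2))      # H1: separate means
bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
bf01 = np.exp((bic1 - bic0) / 2)
print(f"BF01 ~ {bf01:.1f}")  # BF01 > 1 means the data favour H0 over H1
```

A p-value above 0.05 here only says H0 was not rejected, whereas a Bayes factor above 1 actively supports H0, which is the distinction the paragraph above makes.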
Negative data are obviously not very spectacular, because we want to find out what is true, not what isn't. Positive results seem more interesting and more important than NULL results. The latter are often not even submitted for publication, because they are believed to be of less value to scientists and academic publishers. Indeed, they are less likely to open new avenues of research that generate funding opportunities. Manuscripts reporting negative data are also more likely to be rejected, because they appear less exciting. Traditionally, it is difficult to publish negative data unless they refute a spectacular claim. Studies that do not confirm a new hypothesis often end up, quite literally, filed away in a drawer, which is why this is known as the "file drawer phenomenon".
PUBLISH ALL RESULTS TO FIGHT THE PUBLICATION BIAS! 
Unfortunately, these negative data are then lost to the scientific community. If another group of researchers has a similar hypothesis, they are likely to run into the same dead end. Because such negative data are rarely published, other scientists waste time and effort unnecessarily repeating experiments. It is estimated [1] that this costs the US economy alone $28bn each year, similar in scale to the entire $35bn annual budget of the National Institutes of Health [2]. Moreover, the bias towards positive results can lead to an overestimation of biological phenomena or of the efficacy of drugs. It is devastating and frustrating when a biased representation of preclinical work compromises the outcome of drug trials. Thus, publishing more negative results will have a positive impact on the development of new drugs and healthcare solutions.
by Maklay 62 via pixabay

Another current problem is reproducibility. Even though it is fundamental to scientific progress, the replication of studies carries little prestige in academic research. In neuroscience especially, reproducibility has come under particular scrutiny due to some spectacular cases in which data could not be reproduced [3]. Recently, systematic studies demonstrated that current biomedicine has a serious replication problem: shockingly, more than half of published biomedical data could not be reproduced [1]. This led to the declaration of a reproducibility crisis. It is therefore necessary to value the effort to reproduce studies and to publish them regardless of their outcome.
SCIENCE IS MOST EFFECTIVE WHEN BOTH POSITIVE AND NEGATIVE RESULTS ARE PUBLISHED
Fortunately, many journals now publish replication studies and negative data, for example PeerJ, PLOS ONE, the Journal of Negative Results in BioMedicine, Scientific Reports, and others. Furthermore, the necessity to reproduce experiments and to publish negative results is now also recognized by funding agencies, which award prizes to publications that do not confirm the expected outcome or original hypothesis. These prizes aim to emphasize the value of publishing all results, as science is most effective when both positive and negative results are published. Another way to fight publication bias and to focus on the scientific process and its soundness are "Registered Reports". For this type of journal article, the methods and proposed analyses are pre-registered before the research is conducted. The results are thereby accepted for publication before data collection commences, regardless of their positive or negative outcome.
These efforts show that the recognition of negative results and replication studies is growing. Hopefully, this will contribute to the soundness of science and help rescue research from the reproducibility crisis.

QUEST is giving away 15 awards of €1,000 to first/last/corresponding authors (BIH, MDC or Charité affiliation) of preclinical or clinical research papers in which the main result is a NULL or 'negative' result, or in which the replication of their own results or the results of others is attempted. Further information can be found here.

The ECNP's Preclinical Data Forum has created the "ECNP Preclinical Network Data Prize", a €10,000 prize for published "negative" scientific results. Aimed initially at neuroscience research, it encourages the publication of data whose results do not confirm the expected outcome or original hypothesis. The Preclinical Data Forum is a mixed industry and academic group that aims to improve the replicability and reliability of scientific data, especially in drug development. Further information can be found here.
by Claudia Willmes, PhD Alumna AG Eickholt / AG Schmitz  

[1] sciencemag, 2015 http://bit.ly/2E5ho01
[2] sciencemag, 2017 http://bit.ly/2uWuFTt
[3] nature news, 2014 http://go.nature.com/2rAME4b