|Source: R. Wiseman's Blog|
Beyond episodes of outright misconduct, several recent newspaper articles and blog posts have discussed how error-prone these research fields are, given the validation mechanisms commonly adopted. For instance, working papers are usually made public while much work remains to be done.
From a broader perspective, it is a problem of incentives, with too many trade-offs at work: for instance, the importance of fast publication versus giving peer review enough time to be accurate, or a researcher's need for visibility (and funding) from new, innovative research versus society's need for replications of previous work. These incentives, particularly in the social sciences, seem to lead to an increasing number of errors (or at least of error unveilings) and to serious doubts about the effectiveness of established publication procedures, as in the famous "social contagion" case.
Fortunately, much can be learned from mistakes, from examples in other disciplines, and from technology. That matters especially for innovative sub-fields such as Computational Social Science, where the lack of standards is a daily challenge for practitioners.
However, some problems are typical of the social sciences, and for these the solutions are even harder to find. For instance, in the social sciences you often cannot replicate data collection and must rely on secondary data (i.e., data collected by somebody else). A very good example of the problems that can emerge with secondary data is described in "Rising regional inequality in China: Fact or artefact?" by Gibson & Li: if you do not account for the peculiarities of, and the changes over time in, how local data on GDP and residents were collected, you observe a rising regional inequality in China that is largely a statistical artefact.