Scientometrics DOI 10.1007/s11192-010-0233-5

Worsening file-drawer problem in the abstracts of natural, medical and social science databases

Marco Pautasso

Division of Biology, Imperial College London, Silwood Campus, Ascot SL5 7PY, UK
e-mail: [email protected]

Received: 16 November 2009
© Akadémiai Kiadó, Budapest, Hungary 2010

Abstract  The file-drawer problem is the tendency of journals to preferentially publish studies with statistically significant results. The problem is an old one and has been documented in various fields, but to the best of my knowledge there has been no quantitative attention to how the issue is developing over time. In the abstracts of various major scholarly databases (Science and Social Science Citation Index, 1991–2008; CAB Abstracts and Medline, 1970s–2008), the file-drawer problem is gradually getting worse, in spite of an increase in (1) the total number of publications and (2) the proportion of publications reporting both the presence and the absence of significant differences. The trend is confirmed for particular natural science topics such as biology, energy and environment, but not for papers retrieved with the keywords biodiversity, chemistry, computer, engineering, genetics, psychology and quantum (physics). A worsening file-drawer problem can be detected in various medical fields (infection, immunology, malaria, obesity, oncology and pharmacology), but not for papers indexed with strings such as AIDS/HIV, epidemiology, health and neurology. An increase in the selective publication of some results over others is worrying because it can lead to enhanced bias in meta-analysis and hence to a distorted picture of the evidence for or against a certain hypothesis. Long-term monitoring of the file-drawer problem is needed to ensure a sustainable and reliable production of (peer-reviewed) scientific knowledge.

Keywords  History of science · Meta-analysis · Publication explosion · Scientific knowledge · Significant differences · STM publishing

Introduction

The file-drawer problem is the relative lack of non-significant published results (e.g., Sterling 1959; Begg and Berlin 1988; Kennedy 2004). It follows from the tendency of journals to preferentially publish studies with statistically significant results.


[Fig. 1 The magnitude of the increase in the peer-reviewed scientific literature in the four databases of Fig. 2 (a Science Citation Index, b Medline, c Social Science Citation Index, d CAB Abstracts). Each panel plots the number of new yearly papers (in thousands) against publication year, with a fitted linear regression.]

Rosenthal famously argued in 1979 that journals are filled with the 5% of studies showing significant differences, whilst the drawers (or computer folders, nowadays) are full of the other 95% of studies. Since then, the problem has been documented in several fields, from medical research (Begg and Berlin 1988) to biology (Csada et al. 1996) and psychiatry (Gilbody et al. 2000), from oncology (Krzyzanowska et al. 2003) to sociology (Gerber and Malhotra 2008), from systematic reviews in medicine (Tricco et al. 2009) to research on drug addiction (Vecchi et al. 2009) and orthodontics (Koletsi et al. 2009). In the meantime, however, the yearly output of peer-reviewed scholarly publications has soared (Fig. 1). For example, compared with 1991, roughly 4, 5 and 6 times more papers on environment, cancer and health, respectively, were indexed in the Web of Science in 2008 (Tables 1, 2). Has this increase facilitated the appearance of non-significant results in the scientific literature? Or has it become even more difficult to get such results accepted, given the increasing rejection rates of many journals, driven by the mounting pressure to publish and the growing number of researchers worldwide? In this study, I investigate the development of the file-drawer problem in the abstracts of four major scholarly databases over the last decades. I also study the temporal development of the issue for various subtopics in the natural and medical sciences.

Materials and methods

I investigated temporal trends in the file-drawer problem in the abstracts of four databases of the peer-reviewed scientific literature. For the Science and Social Science Citation Indexes, the period studied was 1991–2008 (before 1991, only titles are searched, whereas from 1991 onwards abstracts are also included in these databases).


Table 1  Ratio of the proportion of papers reporting non-significant differences in the title/abstract to the proportion of papers reporting significant differences, as a function of the year to which the ratio refers (1991–2008), for 10 selected topics in the natural sciences in Web of Science (all Citation Indexes, as of April 2009)

Keyword        R²     Regression          Slope s.e.   p        Σ      Δ     Non   SD    Ratio
Biodiversity   0.00   y = -7 - 0.004x     0.013        0.77     28     40    5     2.2   0.6
Biolog*        0.56   y = 33 - 0.016x     0.003        <0.001   529    3.3   9     1.0   1.1
Chemist*       0.03   y = 12 - 0.006x     0.009        0.51     248    2.7   3     0.6   0.7
Computer       0.00   y = -5 + 0.003x     0.009        0.74     338    2.1   6     1.3   1.2
Engineer*      0.20   y = -34 + 0.017x    0.009        0.07     240    4.9   2     0.7   0.8
Environment*   0.45   y = 21 - 0.010x     0.003        0.002    910    4.4   6     0.4   0.8
Energy         0.30   y = 27 - 0.013x     0.005        0.02     1077   2.6   5     0.4   1.2
Genetic*       0.00   y = 0.2 + 0.001x    0.005        0.94     520    3.9   11    1.1   0.9
Psychol*       0.06   y = -6 + 0.004x     0.002        0.11     145    3.5   18    2.8   0.6
Quantum*       0.08   y = 22 - 0.011x     0.009        0.25     933    4.2   1     0.2   0.4

The sum (Σ, in 1000 papers) and the variation from 1991 to 2008 (Δ, 1991 = 1) in the total number of papers retrieved are also given, as well as the average proportion (with st. dev.) of papers indicating the absence of significant differences (Non, per 1000 papers) and the average ratio of papers indicating the absence versus the presence of significant differences (Ratio) in the title/abstract. p values in bold are significant at p < 0.05

Table 2  Ratio of the proportion of papers reporting non-significant differences in the title/abstract to the proportion of papers reporting significant differences, as a function of the year to which the ratio refers (1991–2008), for 10 selected topics in the medical sciences (AIDS, cancer, epidemics, health, infections, immunity, malaria, neurology, obesity, pharmacology) in Web of Science (all Citation Indexes, as of April 2009)

Keyword       R²     Regression           Slope s.e.   p        Σ      Δ     Non   SD    Ratio
Aids or Hiv   0.05   y = 22 - 0.010x      0.011        0.37     209    1.8   10    1.3   1.8
Cancer*       0.64   y = 55 - 0.027x      0.005        <0.001   755    4.7   18    1.6   1.7
Epidem*       0.05   y = 20 - 0.009x      0.010        0.36     203    3.5   13    1.2   1.4
Health        0.18   y = -15 + 0.008x     0.004        0.08     560    6.0   14    2.2   1.3
Infect*       0.47   y = 45 - 0.021x      0.006        <0.001   714    2.4   17    1.6   2.1
Immun*        0.51   y = 41 - 0.024x      0.006        0.001    1050   1.8   15    1.6   1.9
Malaria       0.30   y = 263 - 0.130x     0.050        0.02     33     3.0   13    3.5   2.6
Neuro*        0.13   y = 15 - 0.013x      0.009        0.28     844    2.3   13    1.1   1.8
Obesit*       0.54   y = 171 - 0.080x     0.020        <0.001   81     8.2   25    3.4   2.3
Pharmac*      0.36   y = 71 - 0.034x      0.011        0.008    314    2.8   20    1.9   2.3

The sum (Σ, in 1000 papers) and the variation from 1991 to 2008 (Δ, 1991 = 1) in the total number of papers retrieved are also given, as well as the average proportion (with st. dev.) of papers indicating the absence of significant differences (Non, per 1000 papers) and the average ratio of papers indicating the absence versus the presence of significant differences (Ratio) in the title/abstract. p values in bold are significant at p < 0.05

As abstracts are searched also before 1991 in Medline and CAB Abstracts, it was possible to analyze data for these two databases from the 1970s. For each year, I searched for papers with an explicit mention of "no significant difference/s" or "no statistically significant difference/s" in the title and/or abstract.



This is a conservative estimate of the number of publications with no statistical support for the hypothesis tested, as other papers may report no significant differences with other kinds of wording or without explicitly mentioning them in the title and/or abstract. There is currently no way to search the full text of papers in ISI Web of Knowledge, but it is reasonable to expect that most abstracts are representative of the paper as a whole. It is true that in the title and abstract authors tend to highlight the most important findings, but this (1) makes the analysis more conservative (it avoids investigating trivial results reported only in the full text and concentrates on the major results of the studies) and (2) is the case throughout the period studied (so that temporal comparability is guaranteed).

I then searched for papers which referred in the abstract to "significant difference/s", subtracted the first result from the second (searching for "significant difference/s" will also find results reporting "no (statistically) significant difference/s"), and used the result as a similarly conservative estimate of the number of publications which report significant differences. These two yearly estimates were standardized by dividing them by the total number of papers indexed in the same database for the same year.

Taking into account the number of papers reporting in the abstract a "lack (or absence) of significant difference/s" did not affect the results, as these papers were orders of magnitude less frequent than those using the wording "no significant difference/s". Similarly, findings with the strings "(no) significant difference*" appear to be much more frequent than with other wordings (e.g. "(no) significant effect"). For example, for ISI Web of Science (Science Citation Index, all years, as of January 2010) and the topic "biolog*", searching for "no significant difference*" gives about 4400 results, whilst "no significant increase*" only provides about 170 results. Analogously, for the Science Citation Index (all years) and the topic "health", searching for "no significant difference*" gives over 20,000 results, whilst "no significant association*" only provides about 1700 results. Similarly, for all Citation Indexes (all years) and the topic "cancer", searching for "significant difference*" returns about 24,000 results, whilst "significantly different" obtains about 6400 items. Apart from papers retrieved with wordings other than "(no) significant difference*" being fewer in number, there is no reason to expect that the trends reported for "significant differences" should behave differently for papers in which researchers express the absence or presence of significant results without the exact words "significant difference*". A comparative analysis of such trends for different wordings may well be the object of a later investigation, but it is not the aim of the present study.

The regression of the ratio of the number of publications reporting in the abstract the absence versus the presence of significant differences as a function of year of publication was analyzed in SAS 9.1. The same analysis was performed for papers retrieved with 20 general keywords in Web of Science (all Citation Indexes) for the natural sciences (biodiversity, biology, chemistry, computer, engineering, energy, environment, genetics, psychology, quantum physics) and the medical sciences (AIDS/HIV, cancer, epidemiology, health, immunology, infection, malaria, neurology, obesity, pharmacology).
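To make the procedure concrete, the sketch below reproduces the ratio computation and the regression on publication year in Python rather than in SAS 9.1, which was used for the actual analysis; the yearly counts are invented placeholders standing in for the numbers returned by the database searches, not data from Web of Knowledge, Medline or CAB Abstracts.

```python
# Minimal sketch of the analysis described above, under assumed (synthetic)
# yearly search counts; the published analysis was run in SAS 9.1, not Python.
import numpy as np
from scipy import stats

years = np.arange(1991, 2009)

# Hypothetical counts for one database/topic (placeholders, not real data).
total_papers = 500_000 + 25_000 * (years - 1991)   # all papers indexed that year
hits_nonsig = 4_000 + 150 * (years - 1991)         # hits for "no significant difference*"
hits_sig_any = 7_000 + 400 * (years - 1991)        # hits for "significant difference*"

# "significant difference*" also matches the negated phrase, so subtract the
# first count from the second to estimate papers reporting significant differences.
hits_sig_only = hits_sig_any - hits_nonsig

# Standardize both estimates by the total number of papers indexed in that year.
prop_nonsig = hits_nonsig / total_papers * 1000    # per 1000 papers
prop_sig = hits_sig_only / total_papers * 1000

# Regress the non-significant/significant ratio on publication year.
ratio = prop_nonsig / prop_sig
fit = stats.linregress(years, ratio)
print(f"y = {fit.intercept:.2f} {fit.slope:+.4f}x, R2 = {fit.rvalue**2:.2f}, "
      f"slope s.e. = {fit.stderr:.4f}, p = {fit.pvalue:.3g}")
```

With the real counts for a given database or keyword, the fitted slope, its standard error, R² and p value correspond to the kind of entries reported in Tables 1 and 2.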

Results

In all databases (Science and Social Science Citation Indexes, Medline and CAB Abstracts), there was an increase through time both in the number of papers published per year (Fig. 1) and in the proportion of papers which mentioned in the abstract the presence as well as the absence of significant differences (Fig. 2). Encouragingly, the proportion of studies reporting in the abstract the absence of significant differences was generally higher than the proportion reporting their presence (Fig. 2).



[Fig. 2 Proportion of papers (per 1000) in (a) the Science Citation Index, (b) Medline, (c) the Social Science Citation Index, and (d) CAB Abstracts reporting the absence or presence of significant differences in the title/abstract, as of March 2009. The ratio between the two variables is provided with a regression line (secondary y-axis).]

In spite of these results, in all databases studies reporting significant differences in the abstract tended to increase faster than those which failed to do so. Hence, there was a decrease through time in the ratio of non-significant to significant results, which was consistent across the databases analyzed (Fig. 2). This decreasing ratio also applied to other databases with very little reporting of statistical differences in titles and/or abstracts, such as the ISI Conference Proceedings database (n = 18, R2 = 0.50, y = 55 - 0.027x, slope s.e. = 0.007, p < 0.001) and the Arts and Humanities Citation Index (n = 17, R2 = 0.24, y = 57 - 0.028x, slope s.e. = 0.013, p = 0.04).

These results were confirmed when searching in Web of Science (all Citation Indexes) for some fields/topics in the natural sciences such as biology, energy and environment (Table 1). There was instead no significant variation in the ratio of the proportion of papers reporting in the abstract the absence versus the presence of significant differences for papers dealing with biodiversity, chemistry, computers, engineering, genetics, psychology and quantum (physics).

In the medical sciences, the worsening of the file-drawer problem was more generalized, with a significant decrease in the ratio of the proportion of papers reporting in the abstract the absence versus the presence of significant differences for fields such as immunology, infection, research on malaria and obesity, as well as oncology and pharmacology (Table 2). Exceptions to this trend were papers on AIDS/HIV, epidemiology, neurology and those retrieved with the generic keyword 'health'. For the medical sciences studied, there was a generally higher proportion of papers reporting the absence of significant differences than in the non-medical fields investigated (Tables 1, 2).



Similarly, the ratio of papers reporting the absence versus the presence of significant differences was generally higher for the medical topics studied than for those in the natural sciences (Tables 1, 2).

Discussion

Bias in science can (but need not) occur in several ways, from the selective funding of some proposals over others to the preferential attention to and citation of some articles in comparison with others (Cicchetti 1991; Garfield 1997; Paris et al. 1998; Bensman 2007; Nieminen et al. 2007; Greenberg 2009; Marsh et al. 2009; Reinhart 2009; Taborsky 2009). In between funding and citation bias lies publication bias, i.e. the tendency to accept for publication submissions with certain features and to dismiss manuscripts with other features. Although it is possible that funding and citation bias are driving publication bias, this study does not address the temporal development of a potential bias of funding bodies towards proposals which are more likely to end up obtaining positive results. Nor does this study deal with whether the citation likelihood of studies reporting the absence versus the presence of significant differences has been changing through time.

This study shows that the number of studies reporting the presence of statistical differences in the title and/or abstract has recently been catching up with the number of studies reporting their absence. This result is evidence that, in the abstracts of four (six counting ISI Conference Proceedings and the Arts and Humanities Citation Index) scientific databases, the file-drawer problem has been getting worse during the last decades. This happened in spite of the progressive, generalized and unrelenting increase in the number of new publications indexed per year in ISI Web of Knowledge. The trend towards a worsening of the file-drawer problem could have different causes: (1) an increase through time in rejection rates (increasing the tendency of editors to decline submissions with negative results), and (2) a gradually increased general emphasis on the impact factor (pushing editors to try to publish more citable papers, thus, e.g., those reporting in the abstract the presence of significant differences). These two mechanisms (increased rejection rates and increased emphasis on the impact factor) are not mutually exclusive and could also work from the perspective of authors and funding bodies. If peer review selectivity and the importance of the standing of the journals where manuscripts appear both increase, authors might tend to preferentially perform analyses which are likely to result in big effect sizes, and thus in the presence of significant differences, or to preferentially write up and submit manuscripts about analyses which resulted in the presence of significant differences.

The increase in the number of indexed publications which report the presence or absence of significant differences in the title or abstract shows that publications are becoming more quantitative. This could imply that it is becoming more difficult to publish, as authors feel an increasing need to stress in the abstract that they found the presence or absence of significant differences. That peer review may have a role in the results reported here, and that these may not be entirely due to funding and authors' practices, is suggested by independent studies of the likelihood of abstracts being accepted at conferences in, e.g., oncology and research on drug addiction (Krzyzanowska et al. 2003; Vecchi et al. 2009).
Moreover, the downward trend of the ratio of reported non-significant versus significant results appears to be general, as it is happening across the databases studied, which span the natural, medical and social sciences (Fig. 2).



The trend is not universal, as there are topics which do not show any variation in the ratio of papers reporting in the abstract the presence versus the absence of significant differences over the time-span analyzed (e.g., chemistry, engineering, and genetics). This result confirms that the standards of peer review may well differ from one scientific field to another (e.g., Abt 1992; Guetzkow et al. 2004; Klein 2006). However, the absence of a worsening of the file-drawer problem in some fields does not rule out that subtopics within these fields might show a negative trend in the proportion of papers reporting in the abstract the absence versus the presence of significant differences. Conversely, for the general fields where a negative trend is manifest (e.g. biology), it is possible that sub-fields may not have experienced the same development in the file-drawer problem (e.g. biodiversity). For biodiversity, there has been a remarkable increase in the number of papers with that keyword over the period studied (about 40 times more papers in 2008 than in 1991), so that this explosive growth could have masked any trends in the reporting of significant differences. In informatics, reporting significant differences might not be seen as important as in other, more experimental sciences, although there is also no trend in genetics, where there is much more emphasis on reporting significant differences (Table 1).

One further example where there is no significant trend, in spite of a relatively good proportion of studies reporting significant differences in the abstract, is psychology. This is interesting given that Rosenthal originally pointed out the file-drawer problem for the psychological literature. Psychology also stands out from the other topics studied in that it shows a high proportion of papers reporting the absence of significant differences (comparable to the one generally observed in the medical sciences), but at the same time a low ratio of papers reporting the absence versus the presence of significant differences (common among the non-medical topics studied).

In spite of the negative temporal trend reported, the generally higher proportion of studies reporting in the abstract the absence rather than the presence of significant differences goes against previous speculations and reports (e.g. Rosenthal 1979; Csada et al. 1996; Gilbody et al. 2000) of very little publishing of negative results in the peer-reviewed literature. It is possible that this difference stems from the present analysis having been conducted on a large dataset (several million papers are indexed in Web of Knowledge), whereas previous studies may have tended to concentrate on selected issues of a few journals, which may or may not have been representative of the scientific literature in general. Although in the databases analyzed there does not seem to have been an overall manifest strong bias against publishing negative results during the last decades, this may still be the case for specific topics and fields. For example, the Social Science Citation Index shows a significantly lower ratio of the proportion of papers reporting in the abstract non-significant versus significant results than the other databases studied (Fig. 3).

[Fig. 3 Average ratio of the proportion of papers reporting in the title/abstract non-significant differences to the proportion of papers reporting significant differences in the four databases studied, for the four (or two, for the Science and Social Science Citation Indexes) decades analyzed. Different letters show significant differences at p < 0.05 in the mean ratios for different databases within a given decade (ANOVA).]



Moreover, if the expectation is that 95% of studies will yield no significant results, then there should be a ratio of papers reporting non-significant versus significant results of about 19 to one. However, the analyzed data are far from such a situation. There is an overall ratio (for all years studied) of 1.6, 0.9, 1.9 and 1.3 papers with a report of "non-significant differences" for each paper with a report of "significant differences" in the Science and Social Science Citation Indexes, Medline and CAB Abstracts, respectively.

The selective publication of some results over others is worrying because it can lead to bias in meta-analysis and hence to a distorted picture of the evidence for or against a certain hypothesis (Begg and Berlin 1988; Khoury et al. 2009; Levine et al. 2009; Song et al. 2009). Some scholars have expressed the feeling that the peer review system is in need of reform because of the worsening delays in obtaining constructive reports (Lawrence 2003; Hauser and Fehr 2007; Primack and Marrs 2008; Hochberg et al. 2009; Schwartz and Zamboanga 2009; de Mesnard 2010; Pautasso and Schäfer 2010). The trend documented here away from publishing studies that report the absence of significant differences is a further symptom of bad health and should be counteracted. One constructive suggestion is to focus on effect sizes rather than mere p values (Killeen 2005; Nakagawa and Cuthill 2007). In addition, it is important that guidelines for peer reviewers (e.g. Smith 1990; Provenzale and Stanley 2005; Bourne and Korngreen 2006; Pautasso and Pautasso 2010) explicitly discuss the file-drawer problem in the context of how to peer review scientific manuscripts, as the cumulative action and bias of the millions of individual peer reports provided each year on submissions certainly has the power to shape in a non-random way what gets through the sieve of peer review.

Further work is needed to assess the generality and pinpoint the causes of the negative trend reported. There is a need to check the results obtained with the wording "(no) significant difference*" against other, less frequently used wordings: the present analysis only studies trends in the file-drawer problem with a specific search string, not with all possible wordings conveying the presence or the absence of significant differences. Given that there are scientific fields where the proportion of studies reporting in the abstract the absence versus the presence of significant differences has not changed during the last decades, these fields could provide information about factors which facilitate the publication of negative results. It would be interesting to know whether variation in institutional factors (e.g. types of journals (for profit, scientific society, open-access) and of peer review (anonymous, double-blind, open)) can have an influence on the development of the file-drawer problem across the various scientific fields. Similarly, a fascinating question would be whether there are differences in the trend investigated here for cross-, mono-, multi-, inter- and transdisciplinary fields.

Acknowledgements  Many thanks to L. Ambrosino, R. Brown, T. Hirsch, O. Holdenrieder, M. Jeger, C. Pautasso, R. Russo and H. Schäfer for insight, discussion or support and to I. Cuthill, O. Holdenrieder, T. Matoni, P. Vineis, K. West and anonymous reviewers for helpful comments on a previous draft.

References

Abt, H. A. (1992). Publication practices in various sciences. Scientometrics, 24, 441–447.
Begg, C. B., & Berlin, J. A. (1988). Publication bias: A problem in interpreting medical data. Journal of the Royal Statistical Society A, 151, 419–463.
Bensman, S. J. (2007). Garfield and the impact factor. Annual Review of Information Science and Technology, 41, 93–155.


Bourne, P. E., & Korngreen, A. (2006). Ten simple rules for reviewers. PLoS Computational Biology, 2, e110.
Cicchetti, D. V. (1991). The reliability of peer-review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14, 119–134.
Csada, R. D., James, P. C., & Espie, R. H. M. (1996). The "file drawer problem" of non-significant results: Does it apply to biological research? Oikos, 76, 591–593.
de Mesnard, L. (2010). On Hochberg et al.'s "The tragedy of the reviewer commons". Scientometrics, in press. doi:10.1007/s11192-009-0141-8.
Garfield, E. (1997). A statistically valid definition of bias is needed to determine whether the Science Citation Index(R) discriminates against third world journals. Current Science, 73, 639–641.
Gerber, A. S., & Malhotra, N. (2008). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37, 3–30.
Gilbody, S. M., Song, F., Eastwood, A. J., & Sutton, A. (2000). The causes, consequences and detection of publication bias in psychiatry. Acta Psychiatrica Scandinavica, 102, 241–249.
Greenberg, S. A. (2009). How citation distortions create unfounded authority: Analysis of a citation network. British Medical Journal, 339, b2680.
Guetzkow, J., Lamont, M., & Mallard, G. (2004). What is originality in the humanities and the social sciences? American Sociological Review, 69, 190–212.
Hauser, M., & Fehr, E. (2007). An incentive solution to the peer review problem. PLoS Biology, 5, e107.
Hochberg, M. E., Chase, J. M., Gotelli, N. J., Hastings, A., & Naeem, S. (2009). The tragedy of the reviewer commons. Ecology Letters, 12, 2–4.
Kennedy, D. (2004). The old file-drawer problem. Science, 305, 451.
Khoury, M. J., Bertram, L., Boffetta, P., Butterworth, A. S., Chanock, S. J., Dolan, S. M., et al. (2009). Genome-wide association studies, field synopses, and the development of the knowledge base on genetic variation and human diseases. American Journal of Epidemiology, 170, 269–279.
Killeen, P. R. (2005). An alternative to null-hypothesis significance tests. Psychological Science, 16, 345–353.
Klein, J. T. (2006). Afterword: The emergent literature on interdisciplinary and transdisciplinary research evaluation. Research Evaluation, 15, 75–80.
Koletsi, D., Karagianni, A., Pandis, N., Makou, M., Polychronopolou, A., & Eliades, T. (2009). Are studies reporting significant results more likely to be published? American Journal of Orthodontics and Dentofacial Orthopedics, 136, 632e1.
Krzyzanowska, M. K., Pintilie, M., & Tannock, I. F. (2003). Factors associated with failure to publish large randomized trials presented at an oncology meeting. Journal of the American Medical Association, 290, 495–501.
Lawrence, P. A. (2003). The politics of publication. Nature, 422, 259–261.
Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76, 286–302.
Marsh, H. W., Bornmann, L., Mutz, R., Daniel, H. D., & O'Mara, A. (2009). Gender effects in the peer reviews of grant proposals: A comprehensive meta-analysis comparing traditional and multilevel approaches. Review of Educational Research, 79, 1290–1326.
Nakagawa, S., & Cuthill, I. C. (2007). Effect size, confidence interval and statistical significance: A practical guide for biologists. Biological Reviews, 82, 591–605.
Nieminen, P., Rucker, G., Miettunen, J., Carpenter, J., & Schumacher, M. (2007). Statistically significant papers in psychiatry were cited more often than others. Journal of Clinical Epidemiology, 60, 939–946.
Paris, G., De Leo, G., Menozzi, P., & Gatto, M. (1998). Region-based citation bias in science. Nature, 396, 6708.
Pautasso, M., & Pautasso, C. (2010). Peer reviewing interdisciplinary papers. European Review, 18, 227–237.
Pautasso, M., & Schäfer, H. (2010). Peer review delay and selectivity in ecology journals. Scientometrics, in press. doi:10.1007/s11192-009-0105-z.
Primack, R. B., & Marrs, R. (2008). Bias in the review process. Biological Conservation, 141, 2919–2920.
Provenzale, J. M., & Stanley, R. J. (2005). A systematic guide to reviewing a manuscript. American Journal of Radiology, 185, 848–854.
Reinhart, M. (2009). Peer review of grant applications in biology and medicine. Reliability, fairness, and validity. Scientometrics, 81, 789–809.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641.


Schwartz, S. J., & Zamboanga, B. L. (2009). The peer-review and editorial system: Ways to fix something that might be broken. Perspectives on Psychological Science, 4, 54–61.
Smith, A. J. (1990). The task of the referee. IEEE Computer, 23, 46–51.
Song, F. J., Parekh-Bhurke, S., Hooper, L., Loke, Y. K., Ryder, J. J., Sutton, A. J., et al. (2009). Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology, 9, 79.
Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—Or vice versa. Journal of the American Statistical Association, 54, 30–34.
Taborsky, M. (2009). Biased citation practice and taxonomic parochialism. Ethology, 115, 105–111.
Tricco, A. C., Tetzaff, J., Pham, B., Brehaut, J., & Moher, D. (2009). Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: Cross-sectional study. Journal of Clinical Epidemiology, 62, 380–386.
Vecchi, S., Belleudi, V., Amato, L., Davoli, M., & Peducci, C. A. (2009). Does direction of results of abstracts submitted to scientific conferences on drug addiction predict full publication? BMC Medical Research Methodology, 9, 23.

