Editors publish articles based on various factors, including originality of the research, clinical importance and usefulness of the findings, methodological quality, and readership interest of the journal.1,2,3 Selecting which manuscripts to publish from a large number of submissions is a difficult and complex process, and critics have argued that the editorial review process is arbitrary, slow and biased, and fails to prevent the publication of flawed studies.4
For example, it is not clear whether editors tend to publish studies with statistically significant results (positive) in preference to those with statistically non-significant or null results (negative), thereby contributing to the problem of publication bias.5,6,7 Publication bias raises the concern that statistically significant study results may dominate the research record and skew the results of systematic reviews and meta-analyses in favour of new treatments with positive results.8 The source of publication bias is unclear. Previous studies have concluded that authors often do not submit studies with statistically non-significant findings for publication because of a perceived lack of interest, methodological limitations,5,6,7,9 or the assumption that editors and reviewers are less likely to publish them.5,9,10
To our knowledge, only one other study has systematically evaluated manuscript characteristics that are associated with publication.11 Olson and colleagues assessed a prospective cohort of 745 manuscripts submitted to JAMA reporting controlled clinical trials. They found that higher quality studies and those enrolling participants in the United States were more likely to be published, but there was no difference in publication rates between studies with positive and negative results.
In our study, we identified characteristics of manuscripts submitted to three major biomedical journals across a wide range of study designs. We systematically evaluated manuscript characteristics that had been shown in other studies to be predictive of publication. We hypothesised that editors were more likely to publish studies with statistically significant results, higher methodological quality, and other characteristics found to be individually associated with publication, including study design, sample size, funding source and geographic region.2,5,6,7,10,11
We studied publication outcomes at three major peer-reviewed biomedical journals: BMJ and the Lancet (in the United Kingdom) and Annals of Internal Medicine (in the United States). We selected these three journals because they are among the top 10 journals in impact factor (range, 7.0–21.7) and immediacy index (the average number of times current articles in a specific journal are cited in the year they are published) (range, 3.0–5.8);12 have wide circulation and subscription rates throughout the world;13 represent the concerns of a general medical audience; and have highly competitive acceptance rates for original research (range, 5%–10%). In addition, each of the journals is published weekly or biweekly, ensuring a high volume of submitted and published articles. Each journal publishes clinical studies based on a variety of study designs, including randomised controlled trials (RCTs), observational studies, qualitative research and systematic reviews.
From January 2003 to April 2003, we consecutively enrolled manuscripts reporting original research submitted to each journal. For two of the journals, enrolment was re-opened from November 2003 to February 2004 to obtain the originally planned number of accepted manuscripts (about 70). We included RCTs; non-randomised trials; systematic reviews; meta-analyses; prospective and retrospective cohort studies; and case–control, case series, cross-sectional, qualitative and ethnographic studies. We excluded case reports on a single patient. Submitted manuscripts meeting these criteria were enrolled and randomly assigned a study number.
Data on manuscript characteristics were abstracted independently by two of us (K P L, E A B), who were unaware of the manuscript’s publication status. We developed a standardised data collection form to record manuscript characteristics, many of which have been examined individually by others.2,5,6,7,10,11 Our main predictor of publication was statistical significance of results. For studies conducting statistical analyses, primary results were classified as statistically significant (P < 0.05, or 95% CI for a difference excluding 0, or 95% CI for a ratio excluding 1) or not statistically significant (a sketch of this classification rule follows the list below). We also examined:
study design (eg, RCT, non-RCT, cohort, case–control, systematic review);
analytical methods used, classified as statistical/quantitative (eg, t tests, χ2 tests, analysis of variance, regressions, survival analysis, Kaplan–Meier curves, econometric analyses) or descriptive/qualitative (eg, proportions, frequencies, qualitative data or ethnography);
whether a hypothesis was clearly stated;
sample size (all study designs were included except systematic reviews that reported the number of studies but not the total number of subjects);
whether a description of the study subjects was provided;
whether the manuscript disclosed any funding source (eg, industry, private non-profit, government, no funding, or multiple sources);
authorship characteristics: apparent sex of the first and last authors; whether the corresponding author’s country was classified as high- or low-income based on World Bank classifications;14 and whether the corresponding author was from the same country as that of the publishing journal.
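The significance classification rule described above can be made concrete. The following is a minimal sketch, assuming a manuscript’s primary result is recorded as either a P value or a 95% CI together with its effect type; the function and field names are illustrative, not taken from the study’s data collection form:

```python
def is_significant(p=None, ci_low=None, ci_high=None, effect="difference"):
    """Classify a primary result as statistically significant under the
    rule used in the study: P < 0.05, or a 95% CI for a difference that
    excludes 0, or a 95% CI for a ratio that excludes 1."""
    if p is not None:
        return p < 0.05
    # The null value depends on the effect measure: 0 for differences,
    # 1 for ratio measures (eg, odds ratios, relative risks).
    null_value = 0.0 if effect == "difference" else 1.0
    return not (ci_low <= null_value <= ci_high)
```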
Our primary outcome was acceptance for publication. We classified rejection as either outright (with no external peer review) or after peer review.
Proportions of manuscripts accepted for publication were first analysed using univariate logistic regression, with odds ratios (ORs) estimated to identify associations between individual independent variables and publication. P values were not adjusted for multiple comparisons, and P < 0.05 was considered statistically significant.
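As an illustration of this step, here is a minimal sketch of a univariate logistic regression using Python’s statsmodels (the study’s analyses were run in SAS; the column names `accepted` and the predictor columns are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def univariate_or(df, predictor, outcome="accepted"):
    """Odds ratio, 95% CI and P value for a single binary predictor of
    acceptance, from an unadjusted logistic regression."""
    X = sm.add_constant(df[[predictor]].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    or_ = float(np.exp(fit.params[predictor]))
    ci_low, ci_high = np.exp(fit.conf_int().loc[predictor])
    return or_, (float(ci_low), float(ci_high)), float(fit.pvalues[predictor])
```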
To control for several variables simultaneously, we carried out multivariate logistic regression analysis and calculated ORs. For our primary analysis, we compared accepted manuscripts with all rejected manuscripts. Further sensitivity analyses compared accepted manuscripts with manuscripts that were rejected outright or rejected after peer review.
The number of manuscripts enrolled was targeted to produce about 70 acceptances, which we chose so that there would be at least 10 acceptances per predictor in a multivariate model with up to seven simultaneous predictors. Data were analysed using SAS software (version 9.1, SAS Institute Inc, Cary, NC, USA).
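Under the same assumptions (hypothetical column names; the published analyses used SAS, not Python), the adjusted model might look like the sketch below, which also checks the roughly 10-events-per-predictor rule that motivated the target of about 70 acceptances:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictor columns, one per manuscript characteristic.
PREDICTORS = ["rct_design", "descriptive_methods", "large_sample",
              "funding_disclosed", "author_same_country"]

def adjusted_ors(df, outcome="accepted"):
    """Adjusted ORs and 95% CIs from one multivariate logistic model."""
    events = int(df[outcome].sum())
    # Sample-size rationale: at least ~10 acceptances per predictor.
    assert events / len(PREDICTORS) >= 10, "fewer than ~10 events per predictor"
    X = sm.add_constant(df[PREDICTORS].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    table.columns = ["OR", "2.5%", "97.5%"]
    return table.drop(index="const")
```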
Accepted manuscripts were matched by journal and study design to rejected-outright manuscripts, which were selected at random if there were more rejected than accepted manuscripts in a journal-design stratum. Manuscripts rejected after peer review were not included in this analysis because the number of manuscripts in this group was inadequate for matching.
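A minimal sketch of this matching step, assuming the manuscripts are held in pandas DataFrames with `journal` and `design` columns (the layout and names are assumptions for illustration):

```python
import pandas as pd

def sample_matched_controls(accepted, rejected_outright, seed=0):
    """Within each (journal, design) stratum, draw as many rejected-outright
    controls as there are accepted manuscripts; if a stratum has fewer
    rejected than accepted manuscripts, take all that are available."""
    controls = []
    for (journal, design), cases in accepted.groupby(["journal", "design"]):
        pool = rejected_outright.query("journal == @journal and design == @design")
        controls.append(pool.sample(n=min(len(cases), len(pool)),
                                    random_state=seed))
    return pd.concat(controls) if controls else pd.DataFrame()
```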
Two reviewers (K P L, J M H-L) independently assessed the methodological quality of each manuscript using a validated instrument.15 Our quality assessment instrument includes 22 items designed to measure the minimisation of systematic bias for a wide range of study designs (including RCTs, non-RCTs and observational studies), regardless of study topic. This instrument compares favourably in terms of validity and reliability with other instruments assessing the quality of RCTs,16 and performs similarly to other well accepted instruments for scoring the quality of trials included in meta-analyses.17 For systematic reviews and meta-analyses, we used a slightly modified version of the Oxman instrument,18 which is a valid, reliable instrument for assessing these types of studies.19
The two reviewers were trained to use the instruments and given detailed written instructions. One reviewer (J M H-L) was blinded to manuscript publication status (accepted or rejected). Scores ranged on a continuous scale from 0 (lowest quality) to 2 (highest quality).15,20 The average of the two reviewers’ scores was used for the analyses. If the reviewers’ scores differed by more than 1 SD, the manuscript was discussed by both reviewers until consensus was achieved, and the consensus score was used in the analyses. About 1.3% of methodological quality scores required adjudication. Inter-rater reliability of overall scores measured by intraclass correlation was good (r = 0.78).
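A minimal sketch of the consensus-scoring step, under the assumption that “1 SD” refers to the standard deviation of the between-reviewer score differences (the rule’s exact referent is not spelled out above):

```python
import numpy as np

def combine_quality_scores(scores_a, scores_b):
    """Average two reviewers' quality scores (0-2 scale) and flag pairs
    whose disagreement exceeds 1 SD of the differences for adjudication."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    diff = a - b
    needs_consensus = np.abs(diff) > np.std(diff, ddof=1)
    return (a + b) / 2.0, needs_consensus
```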
Because quality scores have limitations in accurately assessing the reduction of bias in RCTs,17 we also evaluated the specific quality components of concealment of treatment allocation and double-blinding in a sub-sample of RCTs. When inadequate, these components are associated with exaggerated effect sizes in RCTs,21,22 although this cannot be generalised to all clinical areas.23 Individual components of study design for non-RCTs were not analysed because there is no empirical evidence to suggest which components are associated with exaggerated effect sizes.
In a nested case–control analysis, we assessed methodological quality as a predictor of publication. Matched conditional logistic regression was used to model the influence of methodological quality scores on odds of acceptance, stratified by journal and study design. ORs were scaled to correspond to a 0.1 point increase in quality score, as this is an interpretable degree of difference in quality. Additional models tested interactions between quality scores and study design, as the design of the study (RCT v non-RCT and observational studies) is known to influence the quality score.20 In a separate analysis of RCTs, the individual components of concealment of random allocation and double-blinding were dichotomised (adequate or inadequate), and ORs were calculated by matched conditional logistic regression.
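The rescaling of the OR is a simple transformation of the fitted coefficient: for a logistic coefficient β per 1.0 point of quality score, the OR per 0.1 point is exp(0.1β). The following sketch uses statsmodels’ conditional (matched) logistic regression with hypothetical column names; it illustrates the approach rather than reproducing the study’s SAS analysis:

```python
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

def quality_or_per_tenth(df):
    """OR for a 0.1-point increase in quality score from a conditional
    logistic regression stratified by journal-design stratum."""
    fit = ConditionalLogit(df["accepted"], df[["quality"]],
                           groups=df["stratum"]).fit(disp=0)
    beta = float(fit.params["quality"])   # log-odds per 1.0 point of quality
    return float(np.exp(0.1 * beta))      # rescaled to per 0.1 point
```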
During the study period, 1107 manuscripts meeting eligibility criteria were submitted to the three journals. Sixty-eight (6%) were accepted for publication, 777 (70%) were rejected outright and 262 (24%) were rejected after peer review (Box 1).
Box 1
Publication outcomes for the cohort of submitted manuscripts at the three biomedical journals during the study period
In a univariate analysis, there were significant associations between publication and study design (RCT v all other study designs), analytical methods (descriptive/qualitative v statistical/quantitative), funding source (any disclosure v no disclosure), and corresponding author’s country of residence (same country as publishing journal v other country) (Box 2).
Box 2
Association between characteristics of submitted manuscripts (MSs) and publication: univariate analysis (accepted [n = 68] v all rejected [n = 1039] MSs)
Sample size data were divided into quartiles, and the upper three quartiles (≥ 73 subjects) were compared with the lowest quartile (< 73 subjects).
Most of the submitted manuscripts that included statistical analyses (718 [87%]) reported statistically significant results. The proportion of accepted manuscripts reporting statistically significant results (35 [83%]) was slightly lower, but not significantly different from the proportion among submitted manuscripts (Box 2).
In multivariate logistic regression analyses comparing accepted with all rejected manuscripts, we included study design, analytical methods, sample size, funding source, and country of the corresponding author (total n = 969) (Box 3). Factors significantly associated with publication were an RCT or systematic review study design, use of descriptive/qualitative analytical methods, disclosure of any funding source, and a corresponding author residing in the same country as the publishing journal. There was a non-significant trend towards manuscripts with larger sample sizes being published. After controlling for these five variables, manuscripts with statistically significant results were no more likely to be published than those with non-significant results.
Box 3
Association between characteristics of submitted manuscripts and publication: multivariate analysis
In multivariate logistic regression analyses comparing accepted manuscripts with those rejected outright, we included the same five variables (total n = 734). We also compared accepted manuscripts with those rejected after peer review (total n = 294) (Box 3). Similar findings were observed in both models, although associations were not statistically significant when comparing accepted manuscripts with those rejected after peer review. In the latter analysis, the number of observations decreased because fewer manuscripts were rejected after peer review than rejected outright. In none of the sensitivity analyses did statistical significance of study results appear to increase the chance of publication.
Of the 68 accepted manuscripts, three basic research studies were excluded because our quality instrument was not designed to evaluate these types of studies. Two of the remaining accepted manuscripts did not contribute to the matched analysis because they were in a journal-design stratum that had no rejected manuscripts. In three strata, there was one fewer rejected manuscript than accepted manuscripts. Thus, our final sample for analysis consisted of 123 manuscripts (63 cases [accepted manuscripts] and 60 controls [rejected manuscripts]) distributed over 21 journal-design strata that each contained at least one accepted and one rejected manuscript. We also performed separate analyses for RCTs (n = 26; 13 cases, 13 controls), systematic reviews (n = 12; 6 cases, 6 controls), and “all other” study designs (n = 85; 44 cases, 41 controls).
Manuscripts with higher methodological quality scores were significantly more likely to be accepted for publication (OR, 1.39 per 0.1 point increase in quality score; 95% CI, 1.16–1.67; P < 0.001) (Box 4). Checking for non-linearity by adding a quadratic term for quality score did not substantially improve the model (P = 0.24). Box 4 also shows the estimated effect of quality on odds of acceptance separately for the three major study design categories. All estimated associations were positive, with considerably overlapping CIs, suggesting that the influence of quality score on the chance of acceptance was similar across study designs. Formal tests for interactions by design or journal had large P values (P = 0.43).
Box 4
Association between methodological quality score and publication: aggregate results* and results stratified by study design
Among the 26 RCTs, those with adequate concealment of treatment allocation appeared more likely to be published (9 accepted v 4 rejected; OR, 8.6; 95% CI, 0.91–80.9; P = 0.060), as did those with double-blinding (9 accepted v 5 rejected; OR, 3.4; 95% CI, 0.69–16.7; P = 0.13), although neither association was statistically significant in this small sample.
Manuscripts with higher methodological quality were more likely to be published, but those reporting statistically significant results were no more likely to be published than those without, suggesting that the source of publication bias is not at the editorial level. This confirms previous findings at a single, large, general biomedical journal with a high impact factor.11 In our study, submitted manuscripts reporting statistically significant results far outnumbered those reporting statistically non-significant results, corroborating previous findings that investigators may fail to submit negative studies.5,6,7,9 Furthermore, in none of the sensitivity analyses (accepted v rejected outright, accepted v rejected after peer review) did statistical significance of results appear to increase the chance of publication, suggesting that studies with statistically significant results are not more likely to be published, whether or not they have been peer reviewed.
Studies with an RCT or systematic review study design and (possibly) larger sample size were more likely to be published than smaller studies of other designs. Such studies may be less susceptible to methodological bias.21 This is also supported by our findings that manuscripts with higher methodological quality were more likely to be published. On the other hand, editors may have a tendency to publish systematic reviews and RCTs because they are cited more often than other study designs,24 thereby positively influencing their own journal’s impact factor.
We can suggest two reasons why descriptive/qualitative analytical methods were associated with higher publication rates in our study. Firstly, early examinations of new treatments are often conducted as observational studies or case series, and results from these studies may be novel, stimulating new areas of research or reassessment of current clinical practice and standards of care. Secondly, these major biomedical journals may attract the highest quality descriptive/qualitative submissions, making such studies more likely to be accepted.
Manuscripts disclosing any funding source were significantly more likely to be published than those with no disclosure. At each of the three journals surveyed, authors are required to disclose the funding source, describe the role of the funding source in the research process, and declare any conflicts of interest. Such disclosure helps editors and reviewers to assess potential bias associated with funding and research findings,25,26,27 and previous research shows that readers’ perceptions and reactions to research reports are influenced by statements of competing interests.28,29
There appears to be an editorial bias towards accepting manuscripts whose corresponding author lives in the same country as that of the publishing journal. Other studies have found a similar association, but did not control for differences in submissions to journals by nationality,30 or compared nationality of authors and reviewers only and did not adjust for other aspects of the submitted manuscripts.31 We did not observe an association between publication and income level of the corresponding author’s country or sex of the first or last author.
Our study is strengthened by its prospective design and large sample size. We included a variety of study designs, evaluated well defined, objective manuscript characteristics, abstracted data independently while blinded to publication status, and adjusted for confounding variables. Our study also has limitations. Firstly, our findings were based on large general medical journals and may not be generalisable to specialty journals or to journals with fewer editors, fewer submissions, or lower circulation.32 Secondly, the types of manuscripts submitted and accepted during the chosen time period may be unique. (However, by prospectively enrolling consecutive manuscripts submitted to three large, high-impact general medical journals over an 8-month period, we believe our sample was representative.) Finally, although we examined characteristics of submitted manuscripts associated with publication, we did not examine the editorial decision-making process. Many factors other than manuscript characteristics — such as novelty, clinical importance and usefulness, and readership interest of the journal — clearly influence the decision to publish.1,2,3
Received 9 January 2006, accepted 10 April 2006
Abstract
Objective: To identify characteristics of submitted manuscripts that are associated with acceptance for publication by major biomedical journals.
Design, setting and participants: A prospective cohort study of manuscripts reporting original research submitted to three major biomedical journals (BMJ and the Lancet [UK] and Annals of Internal Medicine [USA]) between January and April 2003 and between November 2003 and February 2004. Case reports on single patients were excluded.
Main outcome measures: Publication outcome, methodological quality, predictors of publication.
Results: Of 1107 manuscripts enrolled in the study, 68 (6%) were accepted, 777 (70%) were rejected outright, and 262 (24%) were rejected after peer review. Higher methodological quality scores were associated with an increased chance of acceptance (odds ratio [OR], 1.39 per 0.1 point increase in quality score; 95% CI, 1.16–1.67; P < 0.001), after controlling for study design and journal. In a multivariate logistic regression model, manuscripts were more likely to be published if they reported a randomised controlled trial (RCT) (OR, 2.40; 95% CI, 1.21–4.80); used descriptive or qualitative analytical methods (OR, 2.85; 95% CI, 1.51–5.37); disclosed any funding source (OR, 1.90; 95% CI, 1.01–3.60); or had a corresponding author living in the same country as that of the publishing journal (OR, 1.99; 95% CI, 1.14–3.46). There was a non-significant trend towards manuscripts with larger sample size (≥ 73) being published (OR, 2.01; 95% CI, 0.94–4.32). After adjustment for other study characteristics, having statistically significant results did not improve the chance of a study being published (OR, 0.83; 95% CI, 0.34–1.96).
Conclusions: Submitted manuscripts are more likely to be published if they have high methodological quality, RCT study design, descriptive or qualitative analytical methods and disclosure of any funding source, and if the corresponding author lives in the same country as that of the publishing journal. Larger sample size may also increase the chance of acceptance for publication.