Perhaps 16% of all published papers are bullshit. At least this is the result of my simulation.
The idea is the following: let's assume peer review works well. Reviewers are independent experts and they are able to judge the quality of a paper quite well.
In a single review process, this should lead to very good results. But what happens if every paper that is rejected is sent on to the next journal (and we all know this is what happens!)?
This little R simulation tries to measure these effects.
(Note for nerds: it is a cool project that runs in base R, without ANY additional package!)
## Simulating a peer review process.
## December 2015
# n defines the number of articles to review in the simulation
n <- 1000000
# q is the "real quality" of the articles:
# q has an ordinal scale of "strong reject", "reject", "revise and resubmit",
# "accept with revision", "accept" = 0,1,2,3,4
# To simulate the quality, a probability distribution is needed.
# I think a Poisson distribution with lambda 0.5 does fine. Why?
# Good journals reject more than 90% of the submitted papers. I suppose really good
# papers are rare.
# Of course, you can experiment with different lambda or different distributions.
l <- 0.5
q <- rpois(n, l)
# our scale ends at 4, so every number above is reduced to 4
q[which(q>=5)] <- 4
# Let's have a look at the quantiles of q:
quantile(q)
## 0% 25% 50% 75% 100%
## 0 0 0 1 4
# Good papers are rare!
boxplot(q, main="quality distribution")
smoothScatter(q, main="smooth scatter plot: q", xlab="")
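# A quick cross-check of the assumption above: under a Poisson distribution
# with lambda = 0.5, the theoretical share of papers below
# "revise and resubmit" (q < 2) is
ppois(1, l)
# ca. 0.91, so a journal facing this quality mix would indeed have to
# reject more than 90% of the submissions.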
# The paper now goes under review with 3 reviewers.
# Each reviewer is an expert in the field and is able to hit the
# real quality of a paper in ca. 90% of the cases. Errors are purely random and
# therefore follow a normal distribution with q as mean and a standard
# deviation SD, chosen so that ca. 90% of the rounded review scores are
# equal to the corresponding q.
# In principle, SD = 0.25 should result in 95.45% accuracy (an error of more
# than half a point would then be a 2-sigma event). But because the reviewers
# cannot go above 4 or below 0, the results are a little distorted.
SD <- 0.34
# Now we create the review results of the first reviewer b1:
b1 <- rnorm(n, q, sd=SD)
b1 <- round(b1)
# Again, we have to trim the scale.
b1[which(b1>=5)] <- 4
b1[which(b1<0)] <- 0
# Let's see in how many cases (as a proportion) the reviewer misses
# the real quality:
length(which(q!=b1))/n
## [1] 0.098589
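# This ca. 10% miss rate is roughly what one would expect for SD = 0.34,
# given the rounding and the clipping at 0 and 4. A rough analytic cross-check:
hit_edge <- pnorm(0.5/SD)                      # q = 0 or 4: errors beyond the edge are clipped back
hit_mid  <- pnorm(0.5/SD) - pnorm(-0.5/SD)     # q = 1, 2, 3: the score has to round to q itself
p_q      <- c(dpois(0:3, l), 1 - ppois(3, l))  # P(q = 0, ..., 4) after trimming at 4
sum(p_q * c(hit_edge, hit_mid, hit_mid, hit_mid, hit_edge))
# ca. 0.90, i.e. a miss rate of ca. 10%.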
# The same review process two more times, for reviewers b2 and b3:
b2 <- rnorm(n, q, sd=SD)
b2 <- round(b2)
b2[which(b2>=5)] <- 4
b2[which(b2<0)] <- 0
length(which(q!=b2))/n
## [1] 0.098433
b3 <- rnorm(n, q, sd=SD)
b3 <- round(b3)
b3[which(b3>=5)] <- 4
b3[which(b3<0)] <- 0
length(which(q!=b3))/n
## [1] 0.098341
# The reviewers miss q in ca. 10% of the cases.
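# (Aside: the same scoring, rounding and trimming is repeated for every
#  reviewer here and again further below; it could be wrapped in a small
#  helper function -- just a sketch, review() is a made-up name:)
review <- function(quality, sd = SD) {
  score <- round(rnorm(length(quality), quality, sd = sd))
  score[score >= 5] <- 4   # trim the scale at the top...
  score[score < 0]  <- 0   # ...and at the bottom
  score
}
# e.g. b1 <- review(q); b2 <- review(q); b3 <- review(q)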
# Let's combine the results in a data frame, which is handier.
df <- cbind.data.frame(q,b1,b2,b3)
# How often do we expect different opinions of the reviewers?
length(which((b1!=b2) | (b1!=b3) | (b2!=b3)))/n
## [1] 0.263775
# In every fourth case, the reviewers do not agree.
# But because the errors are independent and normally distributed, the mean
# of the three reviews should be a powerful instrument to get rid of the
# errors (its error has a standard deviation of roughly SD/sqrt(3), ca. 0.2).
df$mean <- round(apply(df[,2:4], 1, mean))
length(which(df$q!=df$mean))/n
## [1] 0.018926
# Only 2% of the papers are misjudged by the team of reviewers!
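# To see where these remaining errors sit, true and assigned quality can be
# cross-tabulated:
table(true = df$q, review = df$mean)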
# How many papers are rejected?
length(which(df$mean<2))/n
## [1] 0.906613
# How many good papers are rejected?
length(which(df$mean<2 & df$q>=2))/n
## [1] 0.001031
# Out of 1000 papers, only 1 good paper is falsely rejected!
# How many papers are falsely accepted?
length(which(df$mean>=2 & df$q<2))/n
## [1] 0.004025
# Only 4 out of 1000 papers are falsely accepted.
# Review seems to work great.
# But wait a minute. What happens if all rejected papers are sent to another
# journal for a second try?
# We now simply copy the above process, but we only take the rejected papers
# as data.
df2 <- df[which(df$mean<2),]
n2 <- nrow(df2)
# Now we start a second review process on these papers.
df2$b1 <- rnorm(n2, q, sd=SD)
df2$b1 <- round(df2$b1)
df2$b1[which(df2$b1>=5)] <- 4
df2$b1[which(df2$b1<0)] <- 0
df2$b2 <- rnorm(n2, q, sd=SD)
df2$b2 <- round(df2$b2)
df2$b2[which(df2$b2>=5)] <- 4
df2$b2[which(df2$b2<0)] <- 0
df2$b3 <- rnorm(n2, q, sd=SD)
df2$b3 <- round(df2$b3)
df2$b3[which(df2$b3>=5)] <- 4
df2$b3[which(df2$b3<0)] <- 0
df2$mean <- round(apply(df2[,2:4], 1, mean))
length(which(df2$q!=df2$mean))/n2
## [1] 0.4958499
# Now, 50% are misjudged by the team of reviewers!!!
# Why? The true quality of these papers is almost always 0 or 1, but the
# reviewers still hand out verdicts across the whole scale from 0 to 4.
# How many papers are rejected?
length(which(df2$mean<2))/n2
## [1] 0.9066327
# Again, 90% of the papers are rejected. But while this was quite accurate
# in the first round, it is a catastrophe now:
# How many papers are falsely accepted?
length(which(df2$mean>=2 & df2$q<2))/n2
## [1] 0.0932603
# Of course, it is nearly 10%.
# How many papers are falsely accepted in the first and second round combined?
(length(which(df$mean>=2 & df$q<2)) + length(which(df2$mean>=2 & df2$q<2)))/n
## [1] 0.088576
# Now nearly 9 out of 100 papers get published even though their quality is poor.
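# (This is simply the 0.4% falsely accepted in the first round plus the
#  ca. 91% of papers that were resubmitted times the ca. 9.3% of them
#  that were falsely accepted in the second round:)
0.004025 + 0.906613 * 0.0932603
# ca. 0.0886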
# You can guess what happens if we send all papers that are still
# rejected into review once again:
df3 <- df2[which(df2$mean<2),]
n3 <- nrow(df3)
df3$b1 <- rnorm(n3, q, sd=SD)
df3$b1 <- round(df3$b1)
df3$b1[which(df3$b1>=5)] <- 4
df3$b1[which(df3$b1<0)] <- 0
df3$b2 <- rnorm(n3, q, sd=SD)
df3$b2 <- round(df3$b2)
df3$b2[which(df3$b2>=5)] <- 4
df3$b2[which(df3$b2<0)] <- 0
df3$b3 <- rnorm(n3, q, sd=SD)
df3$b3 <- round(df3$b3)
df3$b3[which(df3$b3>=5)] <- 4
df3$b3[which(df3$b3<0)] <- 0
df3$mean <- round(apply(df3[,2:4], 1, mean))
length(which(df3$q!=df3$mean))/n3
## [1] 0.4970978
length(which(df3$mean<2))/n3
## [1] 0.9068415
length(which(df3$mean>=2 & df3$q<2))/n3
## [1] 0.09303924
# As you can see, the results are more or less identical.
# How many false positives (accepted poor papers) do we have in the end?
(length(which(df$mean>=2 & df$q<2)) +
length(which(df2$mean>=2 & df2$q<2)) +
length(which(df3$mean>=2 & df3$q<2)))/n
## [1] 0.165051
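# (As above, this is just the sum over the three rounds: the falsely accepted
#  papers from round one, plus the resubmitted share times the round-two rate,
#  plus the twice-resubmitted share times the round-three rate:)
0.004025 + 0.906613*0.0932603 + 0.906613*0.9066327*0.09303924
# ca. 0.165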
# 16 out of 100 published papers are bullshit!