Perhaps 16% of all published papers are bullshit. At least, that is the result of my simulation. The idea is the following: let's assume peer review works well. Reviewers are independent experts, and they are able to judge the quality of papers quite well. A single review process should then lead to very good results. But what happens if all papers that are rejected are simply sent on to the next journal (and we all know this is what happens!)? This little R simulation tries to measure these effects. (Note for nerds: it is a cool project without ANY additional package!)

```r
## Simulating a peer review process
## December 2015

# n defines the number of articles to review in the simulation
n <- 1000000

# q is the "real quality" of the articles:
# q has an ordinal scale of "strong reject", "reject", "revise and resubmit",
# "accept with revision", "accept" = 0, 1, 2, 3, 4
# To simulate the quality, a probability density
```
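The script breaks off before the quality distribution is defined, so here is a minimal sketch of how one round-robin resubmission simulation could look. The quality probabilities, the reviewer noise, the acceptance threshold, and the maximum number of journals below are my assumptions for illustration, not the original script's values:

```r
## Minimal sketch of the resubmission effect (assumed parameters)

set.seed(1)
n <- 1000000

# Assumed quality distribution over the ordinal scale 0..4
q <- sample(0:4, n, replace = TRUE,
            prob = c(0.1, 0.2, 0.4, 0.2, 0.1))

# One review round: the reviewer sees the true quality plus
# normally distributed error, clamped back onto the 0..4 scale
# (the noise SD of 1 is an assumption)
review <- function(quality) {
  pmin(pmax(round(quality + rnorm(length(quality), sd = 1)), 0), 4)
}

# A paper is accepted if its judged score is at least 3
# ("accept with revision"); rejected papers move on to the next journal
quality_of_accepted <- integer(0)
pool <- q
for (journal in 1:5) {   # assume each paper tries up to 5 journals
  score <- review(pool)
  ok <- score >= 3
  quality_of_accepted <- c(quality_of_accepted, pool[ok])
  pool <- pool[!ok]      # the rejected papers are resubmitted
}

# Share of published papers whose true quality was "reject" or worse
mean(quality_of_accepted <= 1)
```

With these made-up parameters, papers of low true quality keep drawing new, noisy reviews until one journal lets them through, which is exactly the mechanism the prose describes; the exact share of bad papers that slip in depends on the distribution and threshold the original script defines.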