In a recent incident, a number of fake academic papers - computer-generated gibberish - were published, and then withdrawn when the problem was spotted. Nature's account is here:
It is all very entertaining. It is also not surprising that there should be occasional incidents like this, when so much gets published and appropriate peer reviewers may be in short supply. At least we can hope that when a paper is gibberish, as opposed to merely bad, even a first-year student would be able to see that it was gibberish. That is, there should be an effective control at the point of reading.
But is this a problem that could be solved, at least partially, by the purest form of the open access approach to academic publishing, an approach that is in any case gaining ground?
I have in mind the free publication of papers on websites, in ways that make them freely accessible, rather than in traditional journals. This is happening anyway. The arXiv, http://arxiv.org/, sets a particularly high standard of organization and ease of use, including the tracking of revisions of papers, but there are other worthy sites too. Such sites do need funding: the arXiv is funded by Cornell University Library and others. But the expense must be modest, compared to the immense value of the service.
Often, a paper published in this way is a draft, not the final version, which will typically appear in a traditional journal and will be the version of record, the version for others to cite. Publication in a traditional journal creates expense, either for the author, or for libraries or individual readers. Some of that expense is justified, because the selection of papers, the organization of peer review and the checking of formatting take effort - although reviewers themselves are not likely to be paid, and the rise of LaTeX and stylesheets to use with it means that formatting, and the checking of it, involves much less work than it used to require. But some publishers exploit the prestige of their journals, and the ability to deny online access to past years' editions if one stops paying the subscription, in order to make very large profits, at the expense of university libraries, which could otherwise use the money elsewhere.
The obvious solution would be to move to free online publication of the version of record of each paper, and to buy permanent open access to past years' editions of journals with one-off payments to publishers.
But what would then happen to quality control?
It could be imposed by all who read papers - and open access would greatly increase the pool of potential readers, improving the effectiveness of this form of quality control. A well-organized website could allow readers to rate papers on a simple scale, and to leave comments. Ideally, the raters and commentators should be identified, and should be allowed to rate one another, so that expert commentators (as identified by the ratings of others) found that their ratings and comments carried more weight than those of people without such recognition. Indeed, the ratings given by highly rated commentators should themselves carry more weight than ratings given by those who were not highly rated.
There would be interesting problems here in the design of voting systems. How could the risk of conspiracies to distort the voting be minimized? (The risk could not be eliminated, unless an authority imposed weights by reference to a list of approved academics.) But something that was good enough should be possible.
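To make the idea of mutually weighted ratings concrete, here is a minimal sketch of how such a reputation calculation might work. Everything in it is hypothetical - the function name, the 0-to-1 rating scale, and the fixed-point iteration are illustrative assumptions, not a proposal for a specific site. The core idea matches the text: a rating counts for more when the rater is, by the same measure, well regarded, so reputations must be computed recursively until they settle.

```python
def reputations(ratings, n, iters=50):
    """Illustrative sketch: ratings is a dict mapping (rater, ratee)
    pairs to scores in [0, 1], for n commentators numbered 0..n-1.
    Returns one reputation weight per commentator, computed as a fixed
    point: each round, a commentator's new standing is the sum of the
    ratings received, each weighted by the rater's current standing."""
    rep = [1.0 / n] * n  # start everyone with equal standing
    for _ in range(iters):
        new = [0.0] * n
        for (rater, ratee), score in ratings.items():
            new[ratee] += rep[rater] * score
        total = sum(new) or 1.0   # guard against all-zero rounds
        rep = [r / total for r in new]  # normalise so weights sum to 1
    return rep

# Example: commentators 0 and 1 rate each other highly; both rate 2 low.
rep = reputations({(0, 1): 1.0, (1, 0): 1.0,
                   (0, 2): 0.1, (1, 2): 0.1}, n=3)
```

In this toy example, commentator 2 ends up with a much lower weight than 0 and 1, so 2's ratings of papers and of other commentators would count for correspondingly less. A real system would need the safeguards the text mentions - in particular, resistance to cliques that rate each other up, which this naive version does not provide.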
But what about the academic job market, the allocation of research grants, and so on? Lists of publications in prestigious journals currently form important parts of applications. Here, I think there is not much difficulty. An application would include links to papers, and to the comments on them. Those who allocated jobs and grants would know whose comments were most worthy of note. They would then have the advantage of often having more than one opinion on the applicant's work. And one would also eliminate the unfairness that springs from the current element of luck in publication. Luck arises because each journal can publish far fewer papers than are submitted, and is therefore likely to reject papers that were just as worthy of publication as the accepted ones. Finally, any papers by an applicant that had not been exposed to open comment in this way would be treated with suspicion by the allocators of jobs and grants, encouraging a fairly rapid shift to the system.
So how would this solve the problem of fake papers? The first people to look at such a paper would simply leave comments to the effect that it was gibberish. It would then be ignored by the academic community.
Finally, why should we cut out commercial publishers? Why is the purest form of open access publishing the most desirable form in this context, apart from the obvious point that it would allow universities to make better use of their funds? The answer is that if there is no commercial interest, there is no reason to design a system that would be anything other than the best system for the academic purposes of honest appraisal and the encouragement of good work. A commercial publisher would always have an urge to control adverse comment on papers that it had published, and to solicit favourable comment from prestigious commentators. Good publishers would resist the urge, but it would be safer to take money out of the system altogether.