Who is Actually Harmed by Predatory Publishers?

Martin Paul Eve* and Ernesto Priego**

*Birkbeck, University of London, London, UK, martin.eve@bbk.ac.uk, https://www.martineve.com

**City, University of London, London, UK, Ernesto.Priego.1@city.ac.uk, https://epriego.wordpress.com

Abstract: ‘Predatory publishing’ refers to conditions under which gold open access academic publishers claim to conduct peer review and charge for their publishing services but do not, in fact, perform such reviews. Most prominently exposed in recent years by Jeffrey Beall, the phenomenon garners much media attention. In this article, we acknowledge that such practices are deceptive but then examine, across a variety of stakeholder groups, what harm such actions do to each group of actors. We find that established publishers have a strong motivation to hype claims of predation as damaging to the scholarly and scientific endeavour, while noting that systems of peer review are themselves already acknowledged as deeply flawed.

Keywords: Open Access, Scholarly Communications, Predatory Publishing, Evaluative Cultures, Academia

Acknowledgement: The authors wish to thank Ross Mounce and David Prosser for helpful comments on the manuscript of this article. Parts of this article on the problems of peer review are derived from and share a narrative with a chapter by Eve that is currently under submission. 

1.   Open Access and Predatory Publishing

Over the past twenty years, the open access movement has sought to make academic research freely available to read and to re-use online (Suber 2012; Willinsky 2006; Eve 2014). Although hardly unanimously accepted as a good thing (Golumbia 2016), this has begun a reconfiguration of publisher business models, particularly under the 'gold' route, where a publisher makes work openly accessible (Jisc 2016). Indeed, the logic often runs that if one cannot sell a product to libraries, because it is already free, one should instead sell a service to academic authors, institutions, and funders. This is the origin of the article processing charge (APC) business model for gold open access (OA), although APCs are not the only business model for gold OA (see Look and Pinter 2010; Eve 2015).

Alongside the rise of article processing charges there has been a prominent argument that the specifics of this business model cause a type of 'predatory' publisher to emerge. Such publishers, it is argued, profit from APCs while failing to conduct the basic quality checks that are expected from systems of academic peer review. While Suber has argued, to the contrary, that there is a potential for open access publications using the APC model to be more selective (since they do not have to fill an issue for sale to subscribers), the general logic of such predatory behaviour is claimed to be one of cashing in (Suber 2010, 117).

The most famous denouncer of these predatory practices is undoubtedly the librarian Jeffrey Beall, whose list of questionable publishers was vigorously maintained until its disappearance from the Internet in early 2017. From his writing, we can infer that Beall's motivation was a political dislike of certain aspects of open access. In a 2013 article, he wrote: "While the open-access (OA) movement purports to be about making scholarly content open-access, its true motives are much different. The OA movement is an anti-corporatist movement that wants to deny the freedom of the press to companies it disagrees with" (Beall 2013). Such a stance separated Beall even from the usually conservative Scholarly Kitchen website and also prompted a number of counter-studies (Esposito 2013; Berger and Cirasella 2015). There have also, since this time, been two 'sting' operations on publishers to determine how fraudulent their behaviour actually is, as well as some critiques of these studies (Bohannon 2013; Buckland et al. 2013; Sorokowski et al. 2017). Despite criticisms of Beall's list spanning many years (Crawford 2014), and its subsequent closure, his work lives on in a new subscription database, launched in 2017, that can be purchased by academic libraries (Silver 2017).

In this article, we want to examine the question of who is actually harmed by the practices of predatory publishers, for the matter is not so straightforward as a mere 'predatory' vs. 'reputable' divide might imply. The question is ensconced within broader thinking about the evaluative cultures for research in academia and the ways in which the academy uses proxies of prestige to denote quality (Oswald 2006; Suber 2008; Moore et al. 2017). Indeed, we contend in this article that if the label of 'predation' is predicated upon a non-provision of peer review, then the soundness of peer review itself must be questioned. Examining the literature on the general efficacy of pre-publication peer review does not lead us to believe that such practices are good predictors of academic quality. Nonetheless, peer review remains valued by the academy, despite its inefficacy, since it performs a labour-saving function that is tightly coupled to academic reputations.

1.1.     Basic Predicates and Suppositions

A number of basic predicates and suppositions underpin our argument here. The first of these is that predatory publishers are real, even if the terminology in question comes with a strongly accusatory tone. For instance, others have instead used the term "pseudojournals" or noted that predation is not one-way (McGlynn 2013; Shamseer and Moher 2017). Nonetheless, there are publishers that purport to conduct peer review and to have academic oversight in place, but whose true purpose is merely to extract an APC from authors and/or institutions, and we will continue to use the extant language of predation (Shen and Björk 2015). Such publications may have editorial boards composed of fake (that is, non-existent) academics, academics who are unaware that they are being listed, or academics who have consented to be listed but are unaware that the journals are predatory. Such practices are deceptive and possibly fraudulent under the sales and advertising laws of many jurisdictions.

Conversely, such publishers do also provide a service to the authors who choose to publish with them. Often, these services are not commensurate with the level of service provided by traditional academic publishers, even when it is deceptively claimed that this is the case. For instance, it is unclear what the digital preservation practices of these entities may be. However, in the basic sense of publishing as 'making public', such journals do offer a venue in which one may 'publish'. This involves running an infrastructure for digital publication and being able/willing to perform some basic labour functions. In this sense, even predatory publishing requires labour that must, under the current paradigm, be remunerated. However, it has also been shown that these publication venues are extremely low cost and "can be assumed to have modest annual incomes" (Xia 2015).

In light of the above, it is not controversial to claim that there is harm to authors who are 'duped' into publishing with such predatory actors. Certainly, those paying for a service that they believe includes peer review are being conned. The main demographic of authors in such venues is, "for the most part, young and inexperienced researchers from developing countries" (Xia et al. 2015). On the other hand, there have been claims that the entire economic system of much academic publishing - even under non-open access models - is one that hoodwinks academics into writing works that nobody can afford to read (The Guardian 2015; Reilly 2015).

There is, however, a set of broader arguments about the damage done by these entities that we wish to contest here. For instance, when speaking of the harm done to authors through publishing in such venues, we argue that this is often based upon an implicit, indirect reputational harm; that is, that such venues will not fare well when scrutinised by hiring, promotion, and tenure panels in the global North. There are also assertions in the afore-cited literature that the scientific record and/or public understanding of science/truth is damaged by the actions of these publishers. We here argue that these claims are contestable and that they say more about the inadequacies of those systems of review (and public understandings of science) than they do about the predatory publishers themselves. We argue this through reference to the harm done, or otherwise, to a range of stakeholder groups: academic authors; academic hiring, promotion, and tenure committees; general publics; funders; learned societies; librarians; and traditional academic publishers.

1.2.     Peer Review and Evaluative Labour Time

Leaving aside the dishonesty of claiming to conduct nonexistent peer review, to understand the difficulties with predatory publishing one must understand both the benefits and the limitations of peer review. For the two sides of predatory publishing consist of a fraud perpetrated upon authors (by falsely claiming to conduct review for a fee) and a fraud perpetrated upon readers (by falsely labelling work in the venue as reviewed). How serious the second act of fraud is depends upon how solid peer review actually is.

In terms of its limitations, peer review is very bad at predictively spotting excellent work, even when conducted by researchers within their own sub-fields (Smith 2006; Eyre-Walker and Stoletzki 2013; Moore et al. 2017). It should also be considered that there are significant differences between peer-review processes in different disciplines (Walker and Rocha da Silva 2015). Peer review is a heterogeneous term that is ill-defined and barely standardised. For instance, in much academic research-book publishing it is not uncommon for contracts to be issued on the basis of a proposal, with a lighter review of the full manuscript. This means that criteria for what constitutes 'excellence' vary not only across disciplinary boundaries but also within fields. Although almost every academic has an anecdote about how positive review comments or criticism have helped to improve work, as we have previously shown, when peer review is used as a gatekeeping process there are examples of both false negatives and false positives within this realm (Moore et al. 2017).

For an example of false negatives, consider that Campanario (2009) and Gans and Shepherd (1994) each examined instances of Nobel-prize winning work being rejected from elite journals. Further, Campanario and several others have shown that papers that were originally rejected have gone on to be among the most cited works in particular fields (Campanario 1993; 1996; Campanario and Acedo 2007; Siler et al. 2015). Given that most rejected manuscripts do end up being published elsewhere anyway, this is not surprising (Moore et al. 2017).

More distressingly, there are also examples of claimed false positives within the system. For example, Peters and Ceci (1982) conducted an experiment in which they disguised recently-accepted articles as new submissions and fed them back to the same journals. Only 8% of the editors and reviewers involved detected the re-submissions, and around 90% of the undetected papers were rejected for methodological and other reasons by the journals that had previously accepted them. 1982 is, of course, a long time ago. It is also unclear whether this pattern would be repeated across other disciplinary spaces were it tested.

However, given that these flaws in peer review are so evident, it is surprising that systems of double-blind or single-blind review remain in use, although there are reasons why changing systems of anonymity can also be problematic (Eve 2013). Indeed, it is certainly the case elsewhere that the value of non-peer-reviewed material has been recognised. For instance, the rise of preprints in many disciplines is indicative of new structures of value (Kiley 2017; The Economist 2017). It is strange, then, as David Wojick has noted, that anxieties about predatory publishing (that is, a system that forgoes peer review while deceptively claiming otherwise) have emerged so strongly at a time when others are bestowing fresh value upon non-peer-reviewed material (Burdick 2017; Wojick 2017). Others have argued that what is wrong here is the labelling of non-peer-reviewed material as though it has been through that process (Anderson 2017). Yet if we accept that peer review is deeply flawed as a gatekeeping mechanism, as above, then there are serious problems with such an idea.

We contend that the reasons that peer review does remain in use are to do with labour. (On how the growth in scientific production may threaten academics' capacity to handle the demand for peer review in the biomedical sciences, see Kovanis et al. 2016.) Specifically, there is a shortage of evaluative labour available to hiring, promotion, and tenure committees. Academic hiring panels often have a candidate pool of several hundred applicants to a single position. The labour of individually evaluating every single one of these applicants on their own research merits - for example, if they all had a research monograph in the humanities - would be extraordinary. Indeed, in the hypothetical situation that we here sketch, it could take a hiring panel an entire year just to read all of the candidates' research artefacts. To avoid this evaluative labour, panels often resort to proxy measures such as journal brand, press name, or Impact Factor (for a critique of the Impact Factor, see Brembs et al 2013).

Clearly, the use of these proxy measures is poor academic practice. In its aggregation, such a resort to proxies does not allow for variance between the container (journal/press) and the contents (article/book). Good publishers can publish bad work and bad publishers can publish good work. This is assured by the pragmatics of university publishing that currently allow academics to submit their work wherever they like. Indeed, statements such as the San Francisco Declaration on Research Assessment (DORA) have been formulated to disavow such flawed evaluation techniques and to give academics greater freedom to publish in open access venues (American Society for Cell Biology et al. 2013).

Yet these clearly flawed systems of peer review continue to be used, since there is a belief that publication brand, built through rigorous filtering, correlates with scarcity. That is to say, it is believed that peer review by other entities - a kind of "outsourcing" of academic evaluation (Smith 2013; Waters 2001) - can serve as a substitute in individual panel situations. For instance, if it is believed that only one in several hundred submissions will appear as articles in a certain journal, then that journal can act as a direct substitute for the exact hiring situation that we presented above. As a result, publications act as a type of currency within a symbolic exchange paradigm where they can be traded into a real-world material economy of jobs, salaries, benefits and so on by those who can accrue such symbolic capital.

Symbolic capital associated with scholarly journals is not exempt from the processes involved in the more general development of brand trust online. Younger and independent publishers and journals face significant challenges in building 'brand trust'. Marketing research on the factors that influence brand trust online could be applied to understand how legacy publishers might benefit from the assumption that container and contents do not vary independently (Ha 2004). Indeed, the diverse ways in which people build brands online have led to varied criteria for judging whether journals are 'predatory' or otherwise, with little centralised consensus (Shamseer et al. 2017).

These frames - the inefficacy of peer review and the labour-shortage paradigms of hiring panels in evaluative cultures - are the backdrop against which the debate about predatory publishing emerges. The question then becomes: if peer review is of questionable worth in the first place, but we are still using it to evaluate research work, then who is actually harmed by predatory publishing?

2.   Evaluating the Harm to Different Participant Groups

To recap: against the above context, we propose to evaluate the harm to the following stakeholder groups, in this order: academic authors; academic hiring, promotion, and tenure committees; general publics; funders; learned societies; librarians; and traditional academic publishers.

2.1.     Academic Authors

Academic authors choose to publish in specific venues for a variety of reasons. Some of these are to do with dissemination (e.g. is the venue accessible to those outside the academy?). Others have to do with prestige and reputation, often linked to discourses on quality (Starbuck 2014). As above, if a venue claims to conduct peer review but does not, there is an intrinsic harm of fraud at work that will not be mitigated by other arguments. This is both a financial and a reputational harm to the author, who will not have been provided with a service that has been advertised. That said, academics who publish in an open-access venue that achieves the former set of conditions (i.e. disseminates their work so that anybody can read it) but that does not fulfil the latter (i.e. its peer review practices are fraudulent) can see benefit or harm in different ways.

Through having their work available openly (and most predatory publishers are open access, even if most open access journals are not predatory), authors gain the range of advantages discussed in the aforementioned literatures on the histories of open access. This open dissemination can often be achieved through these predatory platforms at a cost level that is much lower than traditional academic publication channels. At the same time, if authors seek reputational credit for their work - one of the core drivers of academic productivity - they are likely to be harmed by publishing in predatory venues. In other words: the degree of harm depends upon the author's motivations in publishing. This likelihood of harm to an author is in large part due to the continued problematic use of tenuous measures for evaluation by hiring panels, and whether the harm is justified depends greatly upon the quality of the work.

If the author publishes sub-standard work, then it does not deserve institutional reward. Thus, publishing in a predatory venue does no undeserved harm to that academic (though there are difficulties in defining what has merit, as above). It is also presumably the case, according to the logic of those who believe that peer review works, despite the evidence to the contrary, that this material would not have found a home elsewhere anyway.

If the author or authors publish brilliant work that just so happens to appear in a predatory venue, then certainly they have been duped out of the institutional systems of reward that assume that this work should have appeared in a top journal. It is also the case that this work remains excellent, despite the venue, and that the actual reputational harm done here is perpetrated by those who brand the journal as predatory. If this does not occur and the work is assessed on its own merits, the problem does not arise. However, what does it say of our abilities to independently judge work and our systems of accreditation (which are actually systems to regulate the efficiency of scarce evaluative labour time) that we are unable to countenance good work appearing in bad places? It is, in the present authors' view, a damning indictment of those systems.

When it is said, then, that predatory publishers prey on early-career researchers or those from outside the global North who seek reward in the Anglo-US systems, what is actually being assumed is that our systems of accreditation possess inadequate discriminatory power to spot brilliance that occurs outside the parameters we have defined for containers. Such a view appears deeply condescending and also carries an imperial legacy upon its shoulders. For it says, just as did Beall when he called SciELO a "publication favela" (Scientific Electronic Library Online 2015): publish in the venues that we respect or we will not countenance the work's merit.

2.2.     Academic Hiring, Promotion, and Tenure Panels

Predatory publishers put academic hiring, promotion, and tenure panels in a difficult position. Such panels look for venues whose selectivity mirrors their own applicant-to-position ratio, so that the frequency with which research artefacts appear in those venues can serve as a labour-saving shorthand for a comparable measure of scarcity. However, to admit this is tantamount to logical suicide, since academics know that researchers' freedom to choose where to publish invalidates such logic; a top academic can choose to publish brilliant work on a blog if she desires. No panel would ever claim that it needed such proxy measures, then, unless its members truly did not understand the logical disconnect between artefact quality and container. Such panels are far more likely to claim that they might use them as a 'useful steer' in coming to a judgement.

Again, whether or not the work that appears in predatory journals is any good is the crux of the matter. If all work can universally be ruled to be bad unless it appears in a venue that panels deem worthwhile, then no harm is done to academic hiring panels since they can safely ignore predatory venues. However, it seems unlikely that this is true.

If there is a possibility of good work appearing in a predatory venue - which we believe is possible, given how frequently it is argued that early-career researchers can easily be duped by such publishers - then the main harm done to academic hiring panels is to obliterate the validity of their proxy measures and, therefore, their ability to contain evaluative labour time to a reasonable level. The logic here is the reason why we have seen a prominent move towards article-level metrics in the past few years; it is an attempt to undo the paradoxical double-bind within which panels find themselves: that is, a bind of knowing that the proxies are poor but at the same time depending upon them to save evaluative labour time.

Finally, if hiring panels seek peer-reviewed work, despite its flaws, then candidates competing against an author with an article in a predatory venue may be either disadvantaged (if the hiring panel is 'fooled' by the predatory journal) or advantaged (if the panel sees through the scam).

These are all fundamentally matters of online and digital abundance. If container brand does not act as a scarcity correlate then hiring panels find themselves in difficulty.

2.3.     General Publics and Scientific Truth

When speaking of general publics, we refer here not only to those outside the academy but also to those within it. While we have covered academic authors above, we here tackle harm done to academic readers as well as to other readerly groups.

The risk of nonsense contaminating the scientific record is both understated and overstated, in different ways. The most famous example of absolute nonsense making it into a predatory journal is perhaps Peter Vamplew's submission to the International Journal of Advanced Computer Technology of a paper consisting solely of the words "get me off your fucking mailing list" repeated over and over again in various humorous typographical layouts (Safi 2014). On the one hand, this clearly demonstrates an inadequate review process at that journal and it is distressing to see such material presented as evidence of scientific thought within an 'academic journal'. On the other hand, the idiocy of the piece is so patently obvious that it is hard to believe anybody would be fooled into taking it seriously.

Academics reading work that is less patently nonsensical, though, are in a different position of harm from the general public. Academics, as individuals, use container brands as labour-saving devices in order to avoid reading labour (a framing that we credit to Geoffrey Bilder in private correspondence). There are parallels with the hiring panels above because, essentially, there is always more to be read than it is possible to read. Shorthand evaluations are, therefore, needed in order to know where to focus one's scarce attention time. The aforementioned proxy measures usually serve as the required shorthands.

Assuming that academics could not fall back on such proxies and had to read material in predatory venues, what if the work being read was poor and inaccurate but sufficiently disguised as to appear plausible? It is possible that poor hypotheses and future work could be based upon fraudulent or unethical material, which would lead to harm. However, if individual academics are unable to spot the problem, it is unlikely that pre-publication peer review would have seen it in advance either (see the infamous Wakefield et al. 1998). The more likely harm is time wasted, which, as above, is a flaw in our systems of evaluation.

In terms of broader general publics, the harm is potentially that of science and humanities work being discredited in the popular and political view. However, this problem is manifesting itself across all forms of digital communication under the paradigm known as "fake news" (The Media Insight Project 2017). Indeed, it is often argued that the world wide web has triggered a degradation of trust and allowed a previously-unknown volume of false information to flow, in contrast to an earlier era of material print scarcity and cost (Taraborelli 2008). The usual argument goes that the proliferation of news sources has rendered our frames of value impotent. Stripped of the borders that demarcate cultural authority, we are told, the web has allowed conspiracy-theory sites to present themselves as peers to 'mainstream media' venues - and predatory publishers, the argument runs, allow the same within scholarly communications (see Cochran 2016).

This is a general part of a culture of infinite reproduction facilitated by the digital. Predatory publishers contribute to the loss of the demarcation of cultural authority within certain venues. Whether or not they contribute to a decline in the accuracy of the scientific record and its public understanding depends upon whether the material that they publish is true or false. It cannot even be claimed that such publications have a monopoly on publishing untrue material; The Lancet was equally guilty, albeit unknowingly, before it retracted the Wakefield article. One radical solution, of course, would be to fill predatory venues with reputable material; that is, to opt to publish in predatory venues. For if these venues are capable of such broad dissemination in the name of harm, as is claimed, then they certainly cannot be said to have no dissemination function. One of the solutions would be to use that dissemination power to broadcast science and humanities research that is, to the best of our flawed knowledge, accurate.

It is also a red herring to blame such publication practices for an entire loss of trust in science and research. At least partially, the closed, subscription nature of genuine research in traditional toll-access journals must also play a part in any attribution of fault here. For, if the public cannot read work, in an era of demanded transparency, such occlusion is likely to trigger mistrust, no matter how misplaced.

2.4.     Funders

As with the other groups, research funders are diverse in their approaches. They can range from small-scale to multi-million-dollar entities. Often, though, as with the Gates Foundation and the Wellcome Trust, they have adopted a hard line on the publication practices of those they fund, insisting on open access. In 2017, these funders even went so far as to launch their own open access publishing platforms (Butler 2017; Grove 2017).

Funders are in a strong position when it comes to dictating publishing terms. Quite simply, their argument is: take our money and you abide by our rules. Also, at the end of the day, the only detriment to a funder's ability to have research conducted would be a depletion of resources or a negative ethical standing (and even the latter does not always cause researchers to turn down funding).

Funders have taken a variety of approaches when it comes to predatory publishing and there is not necessarily a consistent message between them. One funder that has taken a firm line is the National Research Foundation of South Africa, which wrote in 2017 that:


The National Research Foundation (NRF) of South Africa's peer review and adjudication system has identified a number of instances where applications for research grants, scholarships and NRF rating include publications in predatory journals or cite invitations by deceptive publishers to serve on editorial boards of journals.

This practice is neither supported nor encouraged by the NRF as it challenges the integrity of the NRF's scientific peer review process. The use of predatory journals and deceptive publishers compromises the creation and dissemination of rigorous scientific and scholarly work within the Digital and Open Access movement.

(National Research Foundation of South Africa 2017)

There appears to be a twofold logic within such a statement. The first is that the use of journals that do not have external peer review causes harm for the labour time of reviewers at the NRF, as for academic readers in general. The second is that the NRF supports the broad dissemination of research work through open channels but worries that researcher perception of OA will be tarnished by such operators. In other words, there is a social consideration among funders that the best way of maximising dissemination (i.e. open access) will be corrupted by predatory publishers, a view once shared by none other than Jeffrey Beall himself (2012).

Likewise, in Mexico, the National Council for Science and Technology (CONACYT), the country's foremost scientific funder, explicitly bars Mexican researchers from submitting publications in "venues that do not guarantee rigorous peer review" (CONACYT 2017) as evidence of scholarly work when applying for funding and other financial stimuli. The examples provided in this category include links to Beall's now-deleted list as well as conference proceedings, "electronic books without thematic coherence", self-published books, and "simple translations" (Ibid.).

2.5.     Learned Societies

Learned and Professional Societies exist to promote disciplinary subject groups. As such, they are guardians of disciplinary tradition and they have often had a vexed relationship to open access, since they frequently derive revenue from subscription publishing (Jump 2013; Eve 2014, 38-40; although see Sutton et al. 2014 for a counter-narrative). In general, Societies encounter the same harms as academic readers and writers because they are composed of academics. However, such Societies also gain political capital through their accumulated tradition and past record of eminence.

Since the disciplines that Societies work within are self-contained cultures of evaluation, Societies are often deeply invested in peer review and its protection. They may, then, perceive a reputational harm from the deception undertaken by predatory publishers. At its heart, though, there is also a challenge here in that if Societies admit that they require peer review to know which work is good in the face of overwhelming evaluative labour demands, they cede their authority to judge to container brands and often anonymous evaluators.

2.6.     Academic Librarians

Academic libraries have taken an enormous economic hit over the past thirty years, with the total cost of ownership for all serials rising by an estimated 300% above inflation and actual budgets barely keeping pace with inflation (Association of Research Libraries 2014). Although there are, therefore, many reasons why people advocate for open access, among librarians there resides a core attempt to stem these costs without creating an even broader access gap.

There are multiple potential harms from predatory publishers to this group. For those librarians who have supported open access, predatory publishing is harmful since it degrades the general perception of open access among researchers who are not well-versed in debates around scholarly communications (and even among some who are). The drive towards traditional venues and/or away from open access as the answer to a crisis of legitimation also continues the concentration of economic power in the hands of a few large and extremely profitable publishers, which continues to harm library budgets. There is also an additional labour-time burden for librarians when they must advise researchers on whether journals are sound.

On the other hand, existing modes of transition to open access that incorporate hybrid APC payments have so far resulted only in a higher total cost of ownership for libraries (Pinfield et al. 2016). The low-cost options offered by predatory publishers look, in some perverse ways, to be enticing. They are an order of magnitude less expensive than traditional publishers, probably because the publisher labour time involved in extreme selectivity is extensive and costly (although there are also open access journals with no author-facing charges that do conduct review: there is no correlation between price and quality). While some new publishers such as Ubiquity Press have rigorous peer review procedures and a low-cost model, other venues such as PLOS ONE have found that even operating a less selective model of review requires a high APC. In other words, for libraries, predatory publishing is mostly harmful, but the cost benefits of non-selectivity, alongside the knowledge that peer review does not actually work that well, do introduce some doubts.

2.7.     Academic Publishers

The final group to which we turn are academic publishers. It is worth noting at the outset that this is not a homogeneous group. There are academic publishers making close to 40% profit on billions of dollars of revenue and there are academic publishers that are one lawsuit away from bankruptcy (Monbiot 2011; Fiormonte and Priego 2016). There are subscription, hybrid, and pure-open-access publishers. There are publishers who behave reputably, in line with their public statements of what they provide, and there are predatory publishers who do not. On the other hand, there are also publishers who are apparently reputable and who are not called "predatory" but who nonetheless take money for APCs while not making the work openly accessible (Mounce 2017a; 2017b; 2017c). Indeed, if the accusation against predatory publishers is that they charge for a service that they do not provide, many existing big-name academic publishers might also be found guilty.

As with other stakeholders, predatory publishing has different harmful or beneficial effects upon different sub-groups. Generally speaking, predatory publishers have a beneficial effect for traditional, large, historically subscription-based publishers who feel threatened by new open, digital models. This is because these entities can claim that they are the answer to the problems of cultural authority and the erosion of its demarcation for which predatory publishing seems to be responsible. Large corporate entities with a track record of peer review can denounce the practices of the predators and, if they can amplify the problem sufficiently, are likely to see researchers sticking to their tried-and-trusted approach. The problems with this are the same as the problems for the evaluative cultural labour above; using a broad entity such as a publisher or a journal as an evaluation tool simply does not make sense. However, it is not in the interest of corporate finance to make these problems widely known, and denouncing predatory publishing is a good way to reinforce market position.

On the other hand, new open access presses face much greater harm from predatory publishing than their established toll-access counterparts, since they have to establish their credentials and reputation within a climate of ongoing suspicion around new publishers. Nobody will ask whether Cambridge University Press is predatory; yet any new open-access press will have to face this question. Since the mechanisms of peer review in many disciplines remain hidden, despite experiments to change this (Fitzpatrick 2011), it is difficult for such entities to prove that their rigour (even if flawed) is equal to that found in existing publishers. Further, as stigma attaches to the APC business model for gold open access, new entities find themselves having to justify their request for payments in ways that older organisations do not. Finally, these new entities find themselves bound to existing (flawed) mechanisms of double- or single-blind peer review. To experiment with models of peer review as a young press, in a climate of fear about predatory publishing, is to be bold indeed. Yet such experimentation has happened, and the publication of full review reports in some venues, as under the discretionary policy at PeerJ, can go some way to legitimise new publishers. That said, the backlash against the deceptive practices of predatory publishers is more frequently used to legitimate the continuation of a damaged system of evaluation and toll-access dissemination than to encourage experimentation with new ways in which we might review (Teytelman 2017).

We would also note that there are valid broader critiques of the APC-driven gold open access model that are given unjust ammunition by claims of predation. The concentrational effects of this particular business model have long been a target for those in disciplinary spaces where such fees are not readily accessible, even where authors are generally amenable to open access (ROAPE Editors 2013). It is true that, without liberal waivers and other measures in place, APCs are as exclusionary on the supply (author) side as the toll-access/subscription model is on the demand (reader) side. What predatory publishing brings to the table is extra firepower for the argument that such a pay-to-publish model might inherently compromise the quality of academic work, as opposed to merely facilitating open dissemination. Once again, and to highlight this point, it is primarily open access that is damaged by the label of 'predatory' as here applied.

3.   Conclusion

The debate about predatory publishers is not going to disappear. We maintain that it is deceptive and wrong to claim to provide a service when such service is not provided, and predatory publishers should never be defended on those grounds.

There are many entities, though, with vested interests who stand to benefit from the existence of organisations that make traditional peer review and toll-access publishing seem the only viable future path for truth. However, the actual site of questioning that we need to focus on is the space of research evaluation. All the evidence indicates that we are not brilliant at evaluating work without some kind of frame and that peer review is deeply flawed. Yet at the same time we say that the main problem with predatory publishing is that it does not resort to peer review. It is likely that some readers will maintain a faith in peer review despite the above work - and that is fine. It is probable that peer review will catch some errors. But when we have become so dependent upon proxies for evaluation as a gatekeeping tool that we are willing, in the name of saving labour time, to exclude the possibility of good work appearing outside of known venues, there is something very wrong with our system of verification. Indeed, we would say that there is a necessary harm that predatory publishing inflicts upon our cultures of evaluation: forcing us to look at our own reflection and to dislike what we see. What we believe is needed is robust debate in the spirit of enhancing work, rather than supposedly robust but fallible standards used as a means of exclusion. This could be achieved through various types of post-publication review approaches.

To close with an anecdote: when one of the present authors was speaking about open access recently, a question came from the back of the audience. "How can we tell students which journals to read when some are predatory or just not part of our library catalogue? How will they know what is good?" The only possible response was that it is our job to make people able to read critically, to find ways of evaluating truth wherever it is found or published (Priego 2016), not to trust work simply because it appeared in a glamorous academic journal.

References

American Society for Cell Biology and others. 2013. San Francisco Declaration on Research Assessment: Putting Science into the Assessment of Research. San Francisco. Accessed 18 February 2017. http://www.ascb.org/files/SFDeclarationFINAL.pdf

Anderson, Rick. 2017. Nope. (Re: Contradictory Standards?). Personal communication, 27 March. https://groups.google.com/forum/#!topic/osi2016-25/NeNMDHqIVYY

Association of Research Libraries. 2014. ARL Statistics 2009-2011. Accessed 2017. http://www.arl.org/storage/documents/expenditure-trends.pdf

Beall, Jeffrey. 2012. Predatory Publishers Are Corrupting Open Access. Nature News 489 (7415): 179. doi:10.1038/489179a

Beall, Jeffrey. 2013. The Open-Access Movement Is Not Really about Open Access. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society. 11 (2): 589-597.

Beall, Jeffrey. 2017. List of Publishers. Scholarly Open Access [blog]. http://scholarlyoa.com/publishers/

Berger, Monica, and Jill Cirasella. 2015. Beyond Beall's List: Better Understanding Predatory Publishers. College & Research Libraries News 76(3): 132-135.

Bohannon, John. 2013. Who's Afraid of Peer Review? Science 342 (6154): 60-65. doi:10.1126/science.342.6154.60

Brembs, Björn, Katherine Button, and Marcus Munafò. 2013. Deep Impact: Unintended Consequences of Journal Rank. Frontiers in Human Neuroscience 7: 291. doi:10.3389/fnhum.2013.00291

Buckland, Amy, Martin Paul Eve, Graham Steel, Jennifer Gardy, and Dorothea Salo. 2013. On the Mark? Responses to a Sting. Journal of Librarianship and Scholarly Communication 2 (1): 1-6. doi:10.7710/2162-3309.1116

Burdick, Alan. 2017. 'Paging Dr. Fraud': The Fake Publishers That Are Ruining Science. The New Yorker, 22 March. Accessed 27 March. http://www.newyorker.com/tech/elements/paging-dr-fraud-the-fake-publishers-that-are-ruining-science

Butler, Declan. 2017. Wellcome Trust Launches Open-Access Publishing Venture. Nature [News], 6 July. doi:10.1038/nature.2016.20220

Campanario, Juan Miguel. 1993. Consolation for the Scientist: Sometimes It Is Hard to Publish Papers That Are Later Highly-Cited. Social Studies of Science 23 (2): 342-362.

Campanario, Juan Miguel. 1996. Have Referees Rejected Some of the Most-Cited Articles of All Times? Journal of the American Society for Information Science 47 (4): 302-310. doi:10.1002/(SICI)1097-4571(199604)47:4<302::AID-ASI6>3.0.CO;2-0

Campanario, Juan Miguel. 2009. Rejecting and Resisting Nobel Class Discoveries: Accounts by Nobel Laureates. Scientometrics 81 (2): 549-565. doi:10.1007/s11192-008-2141-5

Campanario, Juan Miguel, and Erika Acedo. 2007. Rejecting Highly Cited Papers: The Views of Scientists Who Encounter Resistance to Their Discoveries from Other Scientists. Journal of the American Society for Information Science and Technology 58 (5): 734-743. doi:10.1002/asi.20556

Cochran, Angela. 2016. What We Can Learn from Fake News. The Scholarly Kitchen, 15 November. Accessed 27 March. https://scholarlykitchen.sspnet.org/2016/11/15/what-we-can-learn-from-fake-news/

CONACYT. 2017. Convocatoria 2017 Para Investigadores. Sistema Nacional de Investigadores. Accessed 27 March. http://www.conacyt.gob.mx/index.php/sni/convocatorias-conacyt/convocatorias-sistema-nacional-de-investigadores-sni/convocatorias-abiertas-sni

Crawford, Walt. 2014. Ethics and Access 1: The Sad Case of Jeffrey Beall. Cites & Insights 14 (4): 1-14.

Esposito, Joseph. 2013. Parting Company with Jeffrey Beall. The Scholarly Kitchen, 16 December. Accessed 27 March. http://scholarlykitchen.sspnet.org/2013/12/16/parting-company-with-jeffrey-beall/

Eve, Martin Paul. 2013. Before the Law: Open Access, Quality Control and the Future of Peer Review. In Debating Open Access, edited by Nigel Vincent and Chris Wickham, 68-81. London: British Academy.

Eve, Martin Paul. 2014. Open Access and the Humanities: Contexts, Controversies and the Future. Cambridge: Cambridge University Press.

Eve, Martin Paul. 2015. Co-Operating for Gold Open Access without APCs. Insights 28 (1): 73-77. doi:10.1629/uksg.166

Eyre-Walker, Adam and Nina Stoletzki. 2013. The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations. PLoS Biol 11 (10): e1001675. doi:10.1371/journal.pbio.1001675

Fiormonte, Domenico and Ernesto Priego. 2016. Knowledge Monopolies and Global Academic Publishing. The Winnower 4: e147220.00404. doi:10.15200/winn.147220.00404

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press.

Gans, Joshua S. and George B. Shepherd. 1994. How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists. The Journal of Economic Perspectives 8 (1): 165-179.

Golumbia, David. 2016. Marxism and Open Access in the Humanities: Turning Academic Labor Against Itself. Workplace: A Journal for Academic Labor 28.

Grove, Jack. 2017. Gates Foundation Joins Shift towards Open Access Platforms. Times Higher Education, 23 March.

Ha, Hong-Youl. 2004. Factors Influencing Consumer Perceptions of Brand Trust Online. Journal of Product & Brand Management 13 (5): 329-342. doi:10.1108/10610420410554412

Jisc. 2016. An Introduction to Open Access. Jisc. Accessed 2017. https://www.jisc.ac.uk/guides/an-introduction-to-open-access

Jump, Paul. 2013. Open Access Will Cause Problems for Learned Societies' Journals, Accepts Finch. Times Higher Education, 15 January. Accessed 27 March 2017. http://www.timeshighereducation.co.uk/open-access-will-cause-problems-for-learned-societies-journals-accepts-finch/422395.article

Kiley, Robert. 2017. We Now Accept Preprints in Grant Applications. The Wellcome Trust, 10 January. Accessed 27 March. https://wellcome.ac.uk/news/we-now-accept-preprints-grant-applications

Kovanis, Michail, Raphaël Porcher, Philippe Ravaud, and Ludovic Trinquart. 2016. The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise. PLOS ONE 11 (11): e0166387. doi:10.1371/journal.pone.0166387

Look, Hugh, and Frances Pinter. 2010. Open Access and Humanities and Social Science Monograph Publishing. New Review of Academic Librarianship 16 (S1): 90-97. doi:10.1080/13614533.2010.512244

McGlynn, Terry. 2013. The Evolution of Pseudojournals. Small Pond Science, 14 February. Accessed 27 March 2017. https://smallpondscience.com/2013/02/14/the-evolution-of-pseudojournals/

Monbiot, George. 2011. Academic Publishers Make Murdoch Look like a Socialist. The Guardian, 29 August. Accessed 27 March 2017. http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist

Moore, Samuel, Cameron Neylon, Martin Paul Eve, Daniel O'Donnell, and Damian Pattinson. 2017. Excellence R Us: University Research and the Fetishisation of Excellence. Palgrave Communications 3: 1-13. doi:10.1057/palcomms.2016.105

Mounce, Ross. 2017a. Elsevier Selling Access to Open Access Again. Ross Mounce, 14 February. Accessed 27 March. http://rossmounce.co.uk/2017/02/14/elsevier-selling-access-to-open-access-again/

Mounce, Ross. 2017b. Hybrid Open Access is Unreliable. Ross Mounce, 20 February. Accessed 27 March. http://rossmounce.co.uk/2017/02/20/hybrid-open-access-is-unreliable/

Mounce, Ross. 2017c. Remarkable Ongoing Chaos at OUP. Ross Mounce, 21 February. Accessed 27 March. http://rossmounce.co.uk/2017/02/21/remarkable-ongoing-chaos-at-oup/

National Research Foundation of South Africa. 2017. NRF Statement on Predatory Journals and Deceptive Publishers. Accessed 27 March. http://www.nrf.ac.za/sites/default/files/documents/NRF%20Statement%20-%20Predatory.pdf

Oswald, Andrew J. 2006. An Examination of the Reliability of Prestigious Scholarly Journals: Evidence and Implications for Decision-Makers. Discussion Paper. University of Warwick: Warwick Economic Research Papers. Accessed 27 March 2017. http://www2.warwick.ac.uk/fac/soc/economics/research/workingpapers/publications/twerp_744.pdf

Peters, Douglas P. and Stephen J. Ceci. 1982. Peer-Review Practices of Psychological Journals: The Fate of Published Articles, Submitted Again. Behavioral and Brain Sciences 5 (2): 187-195. doi:10.1017/S0140525X00011183

Pinfield, Stephen, Jennifer Salter, and Peter A. Bath. 2016. The 'Total Cost of Publication' in a Hybrid Open-Access Environment: Institutional Approaches to Funding Journal Article-Processing Charges in Combination with Subscriptions. Journal of the Association for Information Science and Technology 67 (7): 1751-1766. doi:10.1002/asi.23446

Priego, Ernesto. 2016. Academic Publishing and the Word of the Year. Ernesto Priego, 18 November. Accessed 27 March 2017. https://epriego.wordpress.com/2016/11/18/academic-publishing-and-the-word-of-the-year/

Reilly, Susan. 2015. Are Publishers Really "Hoodwinking" Academics? The Guardian, 17 September. Accessed 27 March 2017. https://www.theguardian.com/higher-education-network/2015/sep/17/are-publishers-really-hoodwinking-academics

ROAPE Editors. 2013. Yes to Egalitarian 'Open Access', No to 'Pay to Publish': A ROAPE Position Statement on Open Access. Review of African Political Economy 40 (136): 177-178. doi:10.1080/03056244.2013.797757

Safi, Michael. 2014. Journal Accepts Bogus Paper Requesting Removal from Mailing List. The Guardian, 25 November. Accessed 27 March 2017. https://www.theguardian.com/australia-news/2014/nov/25/journal-accepts-paper-requesting-removal-from-mailing-list

Scientific Electronic Library Online. 2015. Rebuttal to the blog post "Is SciELO a Publication Favela?" authored by Jeffrey Beall [online]. SciELO in Perspective [blog], 25 August. Accessed 27 March 2017. http://blog.scielo.org/en/2015/08/25/rebuttal-to-the-blog-post-is-scielo-a-publication-favela-authored-by-jeffrey-beall/

Shamseer, Larissa and David Moher. 2017. Thirteen Ways to Spot a 'Predatory Journal' (And Why We Shouldn't Call Them That). Times Higher Education, 27 March. Accessed 27 March. https://www.timeshighereducation.com/blog/thirteen-ways-spot-predatory-journal-and-why-we-shouldnt-call-them

Shamseer, Larissa, David Moher, Onyi Maduekwe, Lucy Turner, Virginia Barbour, Rebecca Burch, Jocalyn Clark, James Galipeau, Jason Roberts, and Beverley J. Shea. 2017. Potential Predatory and Legitimate Biomedical Journals: Can You Tell the Difference? A Cross-Sectional Comparison. BMC Medicine 15 (1): 28. doi:10.1186/s12916-017-0785-9

Shen, Cenyu, and Bo-Christer Björk. 2015. 'Predatory' Open Access: A Longitudinal Study of Article Volumes and Market Characteristics. BMC Medicine 13: 230. doi:10.1186/s12916-015-0469-2

Siler, Kyle, Kirby Lee, and Lisa Bero. 2015. Measuring the Effectiveness of Scientific Gatekeeping. Proceedings of the National Academy of Sciences 112 (2): 360-65. doi:10.1073/pnas.1418218112

Silver, Andrew. 2017. Pay-to-View Blacklist of Predatory Journals Set to Launch. Nature [News]. doi:10.1038/nature.2017.22090

Smith, Richard. 2006. Peer Review: A Flawed Process at the Heart of Science and Journals. Journal of the Royal Society of Medicine 99 (4): 178-82.

Smith, Richard. 2013. The Irrationality of the REF. BMJ [blog], 7 May. Accessed 27 March 2017. http://blogs.bmj.com/bmj/2013/05/07/richard-smith-the-irrationality-of-the-ref/

Sorokowski, Piotr, Emanuel Kulczycki, Agnieszka Sorokowska, and Katarzyna Pisanski. 2017. Predatory Journals Recruit Fake Editor. Nature News 543 (7646): 481. doi:10.1038/543481a

Starbuck, William H. 2014. Why and Where Do Academics Publish? M@n@gement 16 (5): 707-718.

Suber, Peter. 2008. Thinking about Prestige, Quality, and Open Access. SPARC Open Access Newsletter. Accessed 27 March 2017. http://dash.harvard.edu/handle/1/4322577

Suber, Peter. 2010. Thoughts on Prestige, Quality, and Open Access. Logos 21 (1): 115-128. doi:10.1163/095796510X546959

Suber, Peter. 2012. Open Access. Essential Knowledge Series. Cambridge, MA: MIT Press. Accessed 27 March 2017. http://bit.ly/oa-book

Sutton, Caroline, Peter Suber, and Amanda Page. 2014. Societies and Open Access Research [Catalog]. Harvard Open Access Project. Accessed 27 March 2017. http://bit.ly/hoap-soar

Taraborelli, Dario. 2008. How the Web Is Changing the Way We Trust. In Proceedings of the 2008 Conference on Current Issues in Computing and Philosophy, 194-204. Amsterdam: IOS Press.

Teytelman, Lenny. 2017. How a Sustained Misinformation Campaign by Scholarly Kitchen Attacked @PLOSONE's Rigorous Peer Review. Protocols.io, 24 April. Accessed 27 March. https://www.protocols.io/groups/protocolsio/news/how-a-sustained-misinformation-campaign-by-publishers

The Economist. 2017. The Findings of Medical Research Are Disseminated Too Slowly, 25 March. Accessed 27 March. http://www.economist.com/news/science-and-technology/21719438-about-change-findings-medical-research-are-disseminated-too

The Guardian. 2015. Academics Are Being Hoodwinked into Writing Books Nobody Can Buy. Higher Education Network Section, 4 September. Accessed 27 March 2017. https://www.theguardian.com/higher-education-network/2015/sep/04/academics-are-being-hoodwinked-into-writing-books-nobody-can-buy

The Media Insight Project. 2017. 'Who Shared It?': How Americans Decide What News to Trust on Social Media. The American Press Institute and the Associated Press - NORC Center for Public Research. Accessed 27 March. http://mediainsight.org/Pages/%27Who-Shared-It%27-How-Americans-Decide-What-News-to-Trust-on-Social-Media.aspx

Wakefield, Andrew. J., Simon H. Murch, Andrew Anthony, John Linnell, David M. Casson, Mohsin Malik, Mark Berelowitz, Amar P. Dhillon, Michael A. Thomson, P. Harvey, Alan Valentine, Susan E. Davies, John A. Walker-Smith. 1998. RETRACTED: Ileal-Lymphoid-Nodular Hyperplasia, Non-Specific Colitis, and Pervasive Developmental Disorder in Children. The Lancet 351 (9103): 637-641. doi:10.1016/S0140-6736(97)11096-0

Walker, Richard, and Pascal Rocha da Silva. 2015. Emerging Trends in Peer Review: A Survey. Frontiers in Neuroscience 9: 169. doi:10.3389/fnins.2015.00169

Waters, Lindsay. 2001. Rescue Tenure From the Tyranny of the Monograph. The Chronicle of Higher Education, 20 April 2001. https://chronicle.com/article/Rescue-Tenure-From-the-Tyranny/9623

Willinsky, John. 2006. The Access Principle: The Case for Open Access to Research and Scholarship. Digital Libraries and Electronic Publishing. Cambridge, MA: MIT Press.

Wojick, David. 2017. Contradictory Standards? Personal communication, 27 March. https://groups.google.com/forum/#!topic/osi2016-25/01LRcOkVvG8

Xia, Jingfeng. 2015. Predatory Journals and Their Article Publishing Charges. Learned Publishing 28 (1): 69-74. doi:10.1087/20150111

Xia, Jingfeng, Jennifer L. Harmon, Kevin G. Connolly, Ryan M. Donnelly, Mary R. Anderson, and Heather A. Howard. 2015. Who Publishes in 'Predatory' Journals? Journal of the Association for Information Science and Technology 66 (7): 1406-1417. doi:10.1002/asi.23265

About the Authors

Martin Paul Eve

Martin Paul Eve is Professor of Literature, Technology and Publishing at Birkbeck, University of London. He is the author of Open Access and the Humanities: Contexts, Controversies and the Future (Cambridge University Press 2014) and a co-Founder of the Open Library of Humanities.

 

Ernesto Priego

Ernesto Priego is a Lecturer in the School of Mathematics, Computer Science & Engineering, Department of Computer Science at City, University of London. He is the editor-in-chief of The Comics Grid: Journal of Comics Scholarship, an open access journal dedicated to comics studies published by the Open Library of Humanities.