Some working in academia have a bone to grind about publication. Others could wave at least a couple at you, and there are those trying hard to fit skeletons into outsized mortars. The publication system sucks.
One major concern is the peer-review system. Many feel that it is vulnerable to hostile attacks by referees who have an interest in work not being published; that it allows work to be assessed by those who consider themselves above such things and don’t apply themselves to the task, so that it filters like a one-holed fishing net. Moreover, factors such as the sex and native language of the author affect publication likelihood, different referees show agreement “little greater than chance”, and the whole process is extremely conservative, making it hard for unconventional projects ever to see the light of day. You can see more of this, and one suggestion for an alternative, here (2-page pdf, quote taken from page 269, may not be available on all servers).
Another concern, one that I prioritise over the previous, is the whole ‘publish or perish’ culture of academia. This certainly affects some fields more than others, but mine – psychology – is steeped in the attitude, which rattles down from the top dogs with hefty grants to students like me. This paper highlights the concern as the everyday student experiences it. Note it was written in 1983 – to me, the good old days – and the problems have only become more systemic since. It’s still pretty timely, though. See this:
This pressure to publish–and to publish as often as possible–directly interferes with the kind of research advocated by Maddi, McGuire, and the others: research that is multi-dimensional, longitudinal, collaborative, and relevant to life in the real world. Indeed, the Publication Manual points out how “we become content with rapid, mediocre investigations where longer and more careful work is possible” (1974, p. 22).
Publish-or-perish as a guideline for untenured faculty has now become something of a mania even for graduate students. How else, we are told by our professors, will you find a job? It’s a jungle out there, and a list of publications to flesh out the curriculum vitae is supposed to be our first line of defense. It doesn’t much matter what we publish, or whether we actually write anything original or useful or thoughtful: what matters is how long a list of publications we can present to the Search Committee of whatever institution we hope to work at.
From everyone I’ve spoken to on the subject, this still appears to be the state of play. I’m lucky in having a relaxed and supportive supervisor: while he’d be very happy for me to accrue papers (as would I), that’s not considered the primary aim of what I’m doing – I’m here to complete a set of research and write a thorough thesis (and learn a few things along the way). For many other students I know, there is pressure to publish as much as possible; I’ve been introduced to the concept of salami-slicing your thesis – working out how many discrete components you can reduce your work to, in order to maximise your publication count. This is undoubtedly detrimental to the accessibility of academic information: five studies essentially establishing one phenomenon in various ways are far better presented together, allowing continuity of writing and far less redundancy, than as five different papers that have to prehash, hash, and rehash the same preambles, and may take slightly different tacks in their articulation, impeding a reader from getting their teeth into whatever story needs to be told. Moreover, as mentioned, it’s an impediment to certain types of research, which simply can’t be reduced to free-standing components.
It’s a disheartening climate, as it privileges publishing over all other pursuits. Teaching is what they make you do while you’re trying to get that paper into Nature. Research groups and collaborations become a means to an immediate end, rather than opportunities to wrangle with assumptions and open conceptual horizons. It’s quite appropriate to have indexes of performance based on visible outcomes – clans of researchers navel-gazing interminably without any goal is obviously of no consequence to anyone, except the odd piece of lint – but this focus entirely misses other ambitions the universities should keep in mind: furthering knowledge as well as creating it, and providing an environment in which research students, who are effectively apprentices, can learn, make mistakes and explore, rather than serve as a publication and citation engine for the upper echelons.
Worst of all, it produces work that is just good enough. Who would polish their work beyond what is asked of them, when they could be moving on to the next paper? Luckily, I think a lot of researchers do, as there are certainly other pressures and incentives than the publication imperative. As the Fox paper notes at the outset, those who are entrenched and secure, with tenure or masses of funding, are freed to explore other avenues at a pace of their own choosing. But the young researcher is in a squalid trap, not least because the candidates far outstrip the places available at every rung of the career ladder.
[EXTRA: this little baby on academic fraud, interesting in its own right, throws up something which is not hugely rare – “stamp of approval” authorships, where a big hitter puts their name on a paper they’ve had no real contact with, presumably only to massage their publication rate upwards. So I guess it’s a problem which envelops everyone on the ladder. For a less serious – although I hesitate to call it fun – situation, you can look at the whole Erdos-number obsession amongst the biggies.]
So what are the alternatives, if any?
We could prune away the bad points of the publication system, and here there is no shortage of ideas. The Della Salla paper (pdf above) suggests the use of sponsors, who assess your work and decide whether they’d be happy to attach their name (and reputation) to it, at which point it goes to a committee; there is the idea of referees taking on prosecution and defence roles, perhaps with an added incentive for performing those roles successfully (‘acquitting your client’ earns you brownie points in some way). The allocation of refereeing has been criticised, so perhaps a system that engaged in a bit of reciprocity – where researchers build up a bank of review-me credits by themselves reviewing others – would get everyone involved in the process with a fair incentive (a bit like some P2P file-sharing systems, where you have to keep your upload and download ratio within set boundaries).
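To make the reciprocity idea concrete, here’s a minimal sketch in Python. Everything in it – the ReviewBank name, the credit costs, the thresholds – is invented for illustration; it’s just the ratio mechanic from the P2P analogy, not a worked-out policy.

```python
# Sketch of a reciprocal reviewing ledger: you must "upload" (review
# others) before you may "download" (have your own work reviewed).
# All names and numbers here are placeholder assumptions.

SUBMIT_COST = 3    # credits a submission consumes
REVIEW_REWARD = 1  # credits earned per completed review

class ReviewBank:
    def __init__(self):
        self.credits = {}  # researcher id -> credit balance

    def record_review(self, reviewer):
        """Credit a researcher for completing a review."""
        self.credits[reviewer] = self.credits.get(reviewer, 0) + REVIEW_REWARD

    def submit(self, author):
        """Allow a submission only if enough reviewing has been banked,
        like a P2P upload/download ratio check."""
        if self.credits.get(author, 0) < SUBMIT_COST:
            return False
        self.credits[author] -= SUBMIT_COST
        return True

bank = ReviewBank()
for _ in range(3):
    bank.record_review("researcher_a")
print(bank.submit("researcher_a"))  # True: three reviews buy one submission
print(bank.submit("researcher_b"))  # False: no credits banked yet
```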
Many of these possibilities seem worth pursuing to me; the system as it stands is nowhere near optimal, and these are fairly pragmatic solutions. Most don’t require a universal decree to get rolling: you just need one or two respectable journals to modify their submission criteria, and then – crucially, for the system to work (and be seen to work) – for the practice to spread. I’m turning my head now, so someone silly can shout ‘Meme!’
There. But I’d like to engage in some fanciful discussion of long-term alternatives. Alterations to the peer-review process such as those above don’t really make much of a dent in the problem of publish-or-perish, although revisions to employment criteria, made in parallel, might help somewhat. I’m interested in whether we can imagine a system, say five, ten or thirty years from now, that would preserve the positive aspects of the publication system (access to scientific findings, filtration of poorly produced science, rewarding of research activities) while minimising the negative (quantity of output as a measure, a limited selection of individuals acting as the filter, biases against unconventional work).
I’m looking for answers, so I guess I should give my take on what might be a future direction.
Increasingly, the web is being used to publish scientific information. I’m convinced that in years to come we will see this as the preeminent way to get hold of all academic information. My hope is that journals will eventually be dispensed with altogether, with all information freely available and hosted by individuals, university sites, or research groups and special-interest communities. I would argue that all scientific information, irrespective of quality, should be available within the same milieu, with no sharp division between journal-published ‘good science’ and non-journal ‘questionable science’.
Without filters operating, this would leave us in the unenviable position of being trapped in a morass of innumerable studies, with no way of navigating between them and no way of checking the science was done correctly. To avoid this, a peer-review system would still exist, but one far more open and nuanced than the current one. Work to be published would be sent to research communities, in a manner akin to posting to a newsgroup: the suggested article would be posted there, alongside the raw data and possibly copies of the experimental programs themselves. The material could then be reviewed by everyone – probably after passing through some sort of gating system to ensure it is actually adequate to be considered (to avoid the waste of many people looking over unacceptable work) – and following this would be available for comments and ranking by peers: a genuine peer community rather than one or two anonymous figures. A key element is that the work gets published anyway (since it is hosted personally, not by the research community), but the ratings it accrues can be used in literature searches, such that good work is preferentially surfaced over poor.
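Purely as an illustration of that workflow, here’s a sketch; the Submission class, the gate rule and the quorum of three endorsements are all assumptions of mine, not part of any existing system.

```python
# Illustrative sketch of the open-review workflow described above.
# The class names, URLs and the endorsement quorum are invented.

from dataclasses import dataclass, field

@dataclass
class Submission:
    title: str
    article_url: str       # hosted by the author, not the community
    raw_data_url: str      # raw data posted alongside the article
    endorsements: int = 0  # gatekeepers deeming it worth reviewing
    ratings: list = field(default_factory=list)  # open peer ratings, 0-5

GATE_QUORUM = 3  # placeholder: endorsements needed before open review

def passes_gate(sub):
    """Cheap screening so the whole community isn't asked to look
    over clearly inadequate work."""
    return sub.endorsements >= GATE_QUORUM

def community_score(sub):
    """Mean peer rating, later used to weight literature searches.
    The work is published either way; the score only affects ranking."""
    return sum(sub.ratings) / len(sub.ratings) if sub.ratings else 0.0

sub = Submission("Salience effects in X",
                 "http://example.org/paper", "http://example.org/data")
sub.endorsements = 3
if passes_gate(sub):
    sub.ratings.extend([4, 5, 3])
print(community_score(sub))  # 4.0
```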
The bottom line with the ratings would be how useful the article was to the researcher. Usefulness can come in different forms: an article might put a persuasive case for something already fairly well established, or turn common understanding on its head; it may progress theory very little but contain an ingenious methodology that could be turned to other problems, or carefully reveal lacunae in dominant models whilst failing to offer an alternative. The rating system could represent this, meaning that a student looking for innovative techniques, or one trying to get a sense of the theoretical landscape, would have those tools at their disposal, as adjudged by the entirety of the relevant scientific community. How these communities should be composed isn’t entirely clear to me: should membership be essentially self-selecting, decided by consensus (like a club), or through a board with applications? Some of these options push the system back towards the closed methods it attempts to displace. Nonetheless, it would still provide judgment by a far broader range of individuals.
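As a toy example of how such multi-dimensional ratings could feed a literature search, here is a sketch; the dimension names and weights are mine, chosen only to show the mechanism of a reader privileging one kind of usefulness over another.

```python
# Toy multi-dimensional usefulness ratings. The dimensions and the
# example weights are invented for this sketch.

DIMENSIONS = ("theory", "methodology", "critique")

papers = {
    "paper_a": {"theory": 4.5, "methodology": 2.0, "critique": 3.0},
    "paper_b": {"theory": 2.0, "methodology": 4.8, "critique": 2.5},
}

def search_rank(ratings, weights):
    """Weighted sum over rating dimensions, so a reader can privilege
    whichever kind of usefulness they are after."""
    return sum(weights.get(d, 0.0) * ratings.get(d, 0.0) for d in DIMENSIONS)

# A student hunting for innovative techniques weights methodology heavily...
methods_first = {"theory": 0.1, "methodology": 0.8, "critique": 0.1}
# ...while one mapping the theoretical landscape weights theory instead.
theory_first = {"theory": 0.8, "methodology": 0.1, "critique": 0.1}

for weights in (methods_first, theory_first):
    ranked = sorted(papers, key=lambda p: search_rank(papers[p], weights),
                    reverse=True)
    print(ranked)  # ['paper_b', 'paper_a'], then ['paper_a', 'paper_b']
```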
I suppose one might ask why the community would bother to do this at all. I think that would be an odd criticism, given that the academic community is built on progress via understanding what other people are doing, and then doing it better (or differently). If you are presented with work by someone in your area, it would be strange indeed not to react by reading through it, sifting the useful from the useless and letting it inform your own work. This, after all, is why seminars and conference presentations are not played to empty chairs: scientists want to know more about their area, so they can do it better. This would simply bolt a formalised feedback component onto these existing practices – feedback that could later be reviewed against the individual paper’s “impact factor” (i.e. the number of citations that particular paper has accrued), thus giving a prestige incentive to be fair and not miserly. If everything is transparent, it is very dangerous to be biased.
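One way to sketch that prestige incentive: score each reviewer by how well their early ratings anticipated the papers’ eventual citation counts. The scoring rule below, and its citation-to-rating scale, are invented purely for illustration.

```python
# Hypothetical calibration score: did a reviewer's ratings track the
# later impact of the papers they judged? Scale and formula are
# placeholders, not a proposal.

def calibration(reviews):
    """reviews: list of (rating_given_0_to_5, citations_later_accrued).
    Rewards ratings that track later impact; being unfairly miserly
    (or generous) costs calibration points."""
    if not reviews:
        return 0.0
    score = 0.0
    for rating, citations in reviews:
        implied = min(citations / 20.0, 5.0)  # crude 0-5 proxy for impact
        score += 1.0 - abs(rating - implied) / 5.0
    return score / len(reviews)

fair = [(4.0, 90), (1.5, 10)]     # ratings roughly tracked later citations
miserly = [(1.0, 90), (1.0, 10)]  # panned everything, including a hit
print(calibration(fair) > calibration(miserly))  # True
```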
I should add that basing the rating of an article on how useful it seems to the peer group looks like a good way to ensure that basic/non-applied research is still of some definite service. It meets the criticism that certain types of research are privileged for their fashionability rather than any real contribution to understanding (as numerous critics of imaging work would put it – I’ll cover this soon): if no one can say that it has helped them progress their own ideas or techniques, the research will sink to the bottom.
That’s it in a nutshell – I might have had more to say, but it’s gone with my now certainly lamented journal… anyone care to raise me on this one?
I find the ideas for improving/replacing the peer-review system very interesting, and I’ll have to give them a proper read soon, as the system is both very important and quite flawed. However, I disagree with the contention that “publish or perish” is a bad thing. It’s not so much about the measuring of productivity – there are other ways to do this, and it would be absurd to argue that faculty who do more teaching than writing are unproductive – but that philosophers in caves do nothing to promote the progress of science. If work is not published it can’t be built on, and other people interested in the same areas are doomed to repeat work that has already been done.
Papers don’t have to reflect complete studies; they can be summaries of works-in-progress, which is the status of almost everything that comes out of my lab. But these work-in-progress reports are still of great importance, because they let others see what we are up to and participate. To give one example: in just the 5 months since my last paper (which is nothing more than a snapshot of ‘where we are with this programme right now’) was accepted for publication, we’ve been contacted by people I had never heard of, who have replicated parts of the work and done their own analysis. Apart from anything else, their work is likely to be quite helpful to me, which already justifies the time I spent writing the paper.
Agreed – I think “the literature” is a tremendously important part of the knowledge system, and measures that would slow its rate of development should be approached with caution. But the current system (accompanied by the overemphasis on research output vs teaching/apprenticing new researchers) does allow for some cynical abuse, and I think allowing the people on the ground – in your case the group who found your work important and useful enough to replicate – to judge its worth gives a more accurate reflection of what this really contributes to the literature. Of course, sometimes valuable contributions are not recognised as such until much further down the line, but that’s somewhat inevitable in all systems (perhaps the Journal for Precognitive Research sees it differently).
Just to pick a point, though – do you not think it is important for academics to be philosophers in caves from time to time? I think that, especially in the early stages of the game, there is something to be said for taking what you’ve read and what you’ve produced and retreating to digest it, to think about what it is you may be trying to say in the broader scheme of things. Obviously there is an effective iteration of produce-contemplate-produce-contemplate that characterises most research, but I think sometimes this results in a whole set of trees when you could really do with grasping the wood. You may disagree, as the argument can be made that ‘doing and thinking’ may produce better thinking than ‘thinking’ alone…
I think there is a potential use for working in isolation, but I’m generally very wary of people who disappear for some years and then resurface, because they have denied themselves a source of feedback. Your more recent post about ‘hunt the confound’ makes pretty much the argument I would make for publishing frequently – the scrutiny of others will find confounds and errors that a person working alone wouldn’t, and if these turn out not to be important, it’s only by having them pointed out to begin with that a researcher can make the case for a rebuttal.
I was thinking about that when I posted it up. I guess my position is that if there is an argument for work to be published, then we should be happy that it is, but that a researcher shouldn’t have to avoid big or messy problems, or scrap research that isn’t providing rapid results just because it won’t produce desperately needed publications. My ideal state would be one where publications were encouraged from, rather than demanded of, researchers.