Peer Review Week is an annual international event celebrating the essential role peer review plays in maintaining scientific quality.
For some authors, their submission to Veterinary Evidence is their first submission to any journal. So, for Peer Review Week 2019, we asked four of our reviewers to give authors and readers an insight into what it’s like on the other side of the Veterinary Evidence peer-review process, and why it’s important.
You can also read our Q&A with published authors.
Introducing the reviewers
Bruce A. Smith BVSc, MS, DACVS
Currently Director of the Small Animal Veterinary Teaching Hospital at the University of Queensland, I have worked in private and university referral practice in Australia since 1997. I have had a career interest in orthopaedic surgery, clinical career development for veterinary specialists and, more recently, evidence-based veterinary medicine (EBVM).
Constance White DVM, PhD
I graduated with honours in 1997 from Oregon State University’s College of Veterinary Medicine, after first earning a PhD in genetics. I have experience working in mixed large and small animal practice. My particular interests are in surgery/critical care and in diagnostic imaging. I’m currently working as a small animal emergency and critical care vet at Fremont Veterinary Clinic in Oregon.
Louise Buckley PhD, RVN
I am a long-standing veterinary nurse, with a background in research, clinical practice and lecturing in higher education. My particular enthusiasms are in the area of companion animal welfare and behaviour, EBVM and the evidence behind non-pharmaceutical products promoted in veterinary practice. Currently based at Edinburgh University in the postgraduate education team, I am joining the Bristol Vet School in September as Programme Director of Veterinary Nursing, where I will be supporting our undergraduate veterinary nurses to publish their dissertations and write Knowledge Summaries for Veterinary Evidence.
Jacqueline Cole BVetMed MRCVS
I qualified from the Royal Veterinary College in 2014 after completing a degree in Zoology in the USA. I am originally from New York, but have made England my home. I have worked in a multitude of practices around the country. Following a small animal rotating internship in 2016 at the University of Bristol, I locumed until making Sanctuary Veterinary Centre my home. I have a passion for internal medicine, and I am currently pursuing an advanced qualification in feline medicine via the University of Sydney in order to advance my knowledge and deliver the best care to my patients.
Is peer review worthwhile and important? What are its limitations? Does it help advance science?
A qualified ‘yes’ on both counts: worthwhile and important. The limitations lie very much in what the ‘expectation’ of peer review is. An explicit statement would provide clarity on definition, goals and expectations for authors, readers, reviewers and editorial staff. I am not convinced that peer review per se ‘advances science’; however, it does assist authors to hone their manuscript and present it in a way that maximises its message to the readers – to put it another way, it improves the quality of the communication.
Peer review is a bit like democracy: flawed and inefficient but superior to all of the other systems of governance that have been tried. In a perfect world, science is an objective, iterative process aimed at establishing the ‘truth’ by the consistent application of hypothesis testing and ruthless rejection of prior beliefs which are not supported by a well-designed study. Peer review is but one step in that process, in which the scientific product is scrutinised for any potential missteps in the testing method. Problematically, peer review is done by humans, with their own biases and blind spots. Veterinary medicine is a particularly thorny area, since paltry research funding constrains most of our published literature to observational and descriptive studies in which authors must do their best to sort through dirty and often incomplete data for kernels of evidence. Knowledge Summaries are especially difficult since heterogeneous outcome reporting, study design, and patient populations make evidence synthesis immensely challenging. Veterinary medicine lags behind human medicine in shifting from ‘eminence-based’ to ‘evidence-based’, and reviewers may sometimes disagree with the ‘result’ due to their own deeply-entrenched practice beliefs, or because the manuscript authors are not recognised specialists within the reviewer’s domain. Unlike our physician colleagues, most of us do not receive focused training in clinical study design and statistics adequate to the task; this reviewer frequently finds errors in both study classification and statistical methods in the veterinary literature. Though these may seem like pedantic pitfalls, proper analysis and evaluation is challenging in this context.
I am going to have to answer yes to this one, or I would have to claim that I spend hours of my time engaged in a pointless activity! For me it is an important part of both quality assurance and the process of refining paper content. Authors, particularly new-to-research ones, often get so up close and personal with their work that they forget to write in a way that is accessible to the reader, or they omit information that someone who does not know the research would need in order to make sense of the work. The reviewer can play a part in moderating this. The quality assurance aspect, though, is only really as good as the reviewer’s knowledge/experience of the area in which the research is situated, their knowledge of the research methodologies employed, and the time they have to review the papers. It can be tricky to find reviewers who can deliver on all three components, and a reviewer skill deficit in one or more of these areas will limit the value of peer review. Plus, reviewers are only human; we have our off days and there is an element of subjectivity in any reviewing – I have often (politely) disagreed with some of a reviewer’s comments on a draft manuscript. Even reviewers often disagree between themselves as to a paper’s quality! But imperfect quality assurance is better than no quality assurance at all, which would otherwise mean an increase in poor-quality papers entering the public domain to be reviewed there instead.
Peer review is very worthwhile. Instead of just a single person deciding whether an article has merit, multiple people can evaluate it. It also means more eyes to pick up errors and mistakes, or to give suggestions to improve the writing. This process advances science: bringing people together to look at questions means more innovative solutions can be found.
What motivates you to review a paper? How does reviewing a paper benefit you individually?
Motivation to review stems from my personal understanding of my professional responsibilities. I have found ‘joy’ in my work, and part of that is being of use and help to others. Reviewing, in my view, is a service that I can provide to others. It feels the ‘right’ thing to do; that is my personal reward.
These limitations combine to make peer review in veterinary medicine a bit more arduous than in other fields, particularly for articles which attempt systematic evidence synthesis. I believe that there is a crying need for first-opinion veterinarians to participate in this process. Our literature is primarily derived from referral centres, where the patient mix is substantially different from that which we see in primary care. Our specialists, who author the bulk of our primary research, are not always aware of the pitfalls of referral bias, nor are they necessarily well trained in clinical research. The movement towards establishing consensus guidelines is well-intentioned, but currently most guidelines are written without transparent evidence appraisal and without the input of first-opinion veterinarians on guideline panels. I believe that generalists are perhaps better equipped to evaluate the evidence objectively and, most importantly, to assess generalisability to first-opinion populations. Peer review is one area in which generalists can have a seat at the table in helping to determine best practice for our populations. For me personally, peer review offers an opportunity for exhaustive review of specific topics I have not refreshed in years. Recently, I reviewed a Knowledge Summary on a surgical procedure which I will never perform, yet my improved knowledge of the clinical questions surrounding this procedure will be helpful to my clients, who may need to know what options are available prior to their visit with a specialist.
Where to start? I think reviewing papers makes me better at understanding the scientific process and develops my critical evaluation skills, so it develops me as a professional. I also get a real buzz out of helping other veterinary professionals to develop their skills in this area, and I also really like being one of the first people to hear about new research that is being undertaken. I currently review for quite a wide range of journals so am reviewing work submitted by all sorts of people, from experienced researchers heading their own research group, to final-year student vets and nurses submitting their undergraduate dissertations. I would guess I spend about 250–300 hours a year reviewing papers, conference abstracts, etc. and this is all unpaid (as it usually is for most reviewers) so it must be pretty intrinsically motivating!
As I have published a paper in the past, and therefore had people donate their time to review my work, I believe that, in the name of science and collaboration, peer reviewing in turn is the way forward. It is also interesting to see the thought processes of others, which in turn will improve my own writing.
What would you say to authors unfamiliar with and daunted by the thought of the peer-review process?
All of our communications – bar none – can benefit from the feedback of others. After all, the core purpose of a manuscript is communication for the benefit of others. Keeping this perspective, ALL feedback (favourable, unfavourable or irrelevant) helps us communicate better. We naturally take criticism personally, and a new author is certainly more vulnerable. There is no place for incivility on the part of reviewers, however true the criticism may be. This is a role for the editor to police; personally, I would have zero tolerance for non-collegial communications – get another reviewer.
Though article review requires some effort, a good reviewer provides constructive feedback to make a paper better in a variety of ways. Ideally, a reviewer helps authors achieve clarity in their writing, so that any practitioner can understand the gist of what was done, and identifies possible alternative explanations for what is stated. I believe that practitioners are singularly adept at these tasks, since they encounter countless clinical conundrums and counterfactuals in their daily environment. Veterinarians are clever and creative people; any practitioner who spends a bit of time reviewing RCVS Knowledge toolkits has a great deal to contribute to the mission of EBVM. The review process often illuminates knowledge gaps that I, as a reviewer, was unaware of, and increases my own interest in research in that topic domain.
It is normal to feel daunted – find me an author who doesn’t have niggles and worries about what other people will think of something they have painstakingly laboured over and are now going to expose to potential criticism. I can remember getting a review back on a paper that was so awful it took me a week to read the second reviewer’s comments (they thought it was an ace paper!). The important thing to remember is that Veterinary Evidence reviewers all want to see one thing: you publishing, with a paper that will make you (and us) proud. I think it is really important to point out that it is VERY, VERY, VERY rare to get a paper accepted in a journal without any revisions first, so don’t be deflated when the decision comes back that changes need to be made (sometimes major ones). I always submit papers to journals expecting to have to make revisions, and expecting the reviewers to come back with some negative comments. If they didn’t, I might question the thoroughness of the reviewer – I want them to find the stuff I have not phrased well, etc. It makes my paper look better when it is finally accepted!
I would say that the only way to gain experience and to learn is by doing – so give it a go. The process is straightforward. Peer reviewers are not out there to put up insurmountable barriers to you publishing, but to catch errors, improve your work, and make it a better paper that will benefit readers.
What common mistakes do authors of Knowledge Summaries tend to make? What could authors look out for?
Not getting a ‘friendly’ peer review BEFORE submission. Having the first ‘round’ of review done by a friendly colleague – covering spelling, grammar and style – is far better than irritating a peer reviewer, and can help to avoid the issues discussed earlier.
For me, I think there are three issues that I see very commonly. Hell, I have been guilty of some of these myself! The first is not addressing the PICO/research question closely enough. Don’t let the paper summaries focus on populations, interventions, comparisons or outcomes that are not relevant. It is okay to state, for example, that other interventions or outcomes were also investigated in a study but have been omitted here as they are not relevant to addressing the PICO. The reader has picked up your Knowledge Summary because they want the answer to that PICO specifically, so focus on just answering this. Otherwise you will make a busy practitioner turn away without reading it, because the key information they need is not easy to find. You can always write another Knowledge Summary (or several) if you want to explore other aspects! The second issue is a lack of detail when it comes to reporting the outcome measures and what was found, which is often reflected in findings not being reported with sufficient precision. Finally, it really makes a Knowledge Summary easier to read if the author numbers the relevant outcomes and then ensures that the findings relating to those outcomes are also numbered (and match up).
The common mistakes I find tend to be that authors forget to include enough statistics, such as p-values and confidence intervals, to back up a paper’s main findings. I have also found that people tend to have an inherent bias towards the result they want to find, and this can be reflected in their summaries, rather than their being fully objective about the level of evidence presented.
What are your top tips for reviewing a Knowledge Summary? What general tips do you think a reviewer who is unfamiliar with a Knowledge Summary needs to know?
Read (and understand) the PICO question first. If the PICO and the summary don’t match, there is likely a fundamental misunderstanding of what a Knowledge Summary is, and further review is ‘editing’ only. Reject and let the authors know why.
Knowledge Summaries are substantial contributions to evidence synthesis, but are difficult to perform well, especially for a single author. I enjoy reviewing them but find that I often dedicate time and energy nearly equal to second authorship. One cannot find omissions in the literature search without performing an independent search; few vets, including specialists or specialist trainees, have been taught formal search strategies, and I find that papers are often missing, usually due to a failure to include all relevant terms in the search. The RCVS Knowledge toolkit provides good information, but I think that many Knowledge Summary authors would benefit from using the RCVS library staff to help with their search, at least during their first few Knowledge Summary projects. Additionally, study classification is a frequent area of confusion, and our literature excels in muddying the waters. Often, studies which are labelled only as ‘retrospective’ may contain a cohort, case-control, or prevalence (cross-sectional) analysis (or some mix of those for different outcomes), whilst a number of studies named as ‘prospective clinical trials’ are simply descriptive case series. Much of the toolkit, as well as most EBVM resources, is derived from human EBM, where the situation is a bit more straightforward.
Be kind, honest and fair. A Knowledge Summary author is unlikely to be a hardened academic used to getting harsh reviewer feedback. They might be, but it may just as easily be the veterinary nurse that you spayed a cat with last week, or a keen vet student that undertook extra-mural studies with you last summer. Try to make sure that any feedback that is given lets the author know, not only what needs improving, but how. Balance it out with the good. It can be really tempting to focus on just pointing out the aspects that need improving, but try to also let the author know what aspects you liked or thought they presented particularly well, etc. We don’t want their first Knowledge Summary to also be their last.
My top tips:
- Follow the template given. Look at the format: did they follow what was asked?
- Compare to other published papers that are similar in context. Is the level of detail the same?
- Ask the editor if you have any concerns, as they are experienced in looking at papers and can help direct you.
Interested in reviewing for Veterinary Evidence? Contact us.