Ethics Outside the Box: IRB decision-making, transparency, and publication
Institutional Review Boards (IRBs) are required by United States (US) law to review research proposals involving human subjects to ensure that they provide adequate protection of the subjects’ rights and welfare. In conducting this review, IRBs weigh the legal, scientific, and ethical aspects of a potential study to determine whether and how it should proceed, and continue to monitor the project until its completion.
Since their establishment in the mid-1970s, the processes and outcomes of IRB decision-making have been the subject of empirical research. Multiple commentators have reported evidence that the process is inefficient, expensive, and time-consuming; and that different IRBs often give different assessments of similar – or even identical – research proposals. The secrecy of the deliberations has also been criticized as preventing researchers and academics from evaluating the substance of the decisions. 
Klitzman has suggested that the publication of IRB decisions may ameliorate this situation. Publication would immediately make the process transparent and, over time, would establish a body of “common law” to assist IRBs in their deliberations, eventually achieving greater efficiency and consistency in decision-making. This essay evaluates the merits of this suggestion. It first describes the criticisms that have been made of IRBs, then sets out Klitzman’s proposal and several counter-arguments. It concludes that the publication of IRB decisions would be a valuable step in addressing the identified problems.
Criticisms of IRB decision-making
In 2011, Abbott and Grady conducted the first systematic evaluation of the studies of IRB decision-making in the US. Dissatisfaction with the process was evident from the outset, as many of the studies were motivated by the frustration of investigators and commentators, who described the system as “outdated”, “dysfunctional”, “overburdened”, and “overreaching.” Unsurprisingly, Abbott and Grady’s conclusions were overwhelmingly negative.
Variability in the process and outcomes was identified by many commentators as the most concerning feature of IRB decision-making. Abbott and Grady’s review found that IRBs differed in their interpretation of federal regulations, application of value judgments, and the time taken to review proposals. These differences amounted to fundamental variation in determinations of the level of review required (i.e. full or expedited), the level of risk faced by participants, and how participants should be recruited and compensated. Beyond these substantive contradictions, timelines diverged widely: one IRB would assess a protocol within a week, whereas another would take over 30 weeks. Similar results were reported by Anderson and DuBois, and by Silberman and Khan.
Pritchard (2011) has explained this variability as reflecting differences in the interpretation of the regulatory provisions, which in turn reflect the broad drafting of federal laws that do not, for example, define social or individual “risks” and “benefits” or explain how to balance them. Divergence is also seen at an individual level, with members perceiving risks differently depending on their education, experience, cultural norms, and individual fears. Linked to this is Anderson and DuBois’ finding that many IRB members admit to relying on their intuition rather than objective analysis, seeking “peace of mind” as to the riskiness of a study and using the “sniff test” to determine whether a protocol is ethically sound (see also Rosnow, 1993).
The publication of IRB decisions
The IRB review process has been described as occurring in a “black box”, into which researchers submit protocols and from which they receive an arbitrary response at an arbitrary time. Only researchers whose proposals are rejected receive an explanation, and even these explanations are not a matter of public record. Given this secrecy, the research described above has focused on external features, including written policies, IRB composition, workloads, timelines, and review outcomes. Many commentators consider that they cannot effectively evaluate IRB decisions without direct evidence of the actual decision-making processes.
One method of revealing this process would be the publication of IRB decisions. This was referred to in passing by Rosnow, who recommended that IRBs be provided with a “casebook” of research protocols; and by Mueller, who suggested that IRBs adopt the perspective of a court, or at least of a literature reviewer, and consider the processes followed by other IRBs. In his 2015 book The Ethics Police, Klitzman likened the IRB process to the court process, stating that the legal system “similarly confronts ambiguities and differences in interpretation, but avoids many pitfalls by being more transparent, and by seeking and drawing on documented precedents.” He recommended the publication of decisions with a view to developing a body of “case law.” Klitzman considered that this body would confer an institutional memory on decentralized IRBs, which would reduce “unnecessary idiosyncrasies” and duplication of work. It would also formalize the existing system of communication between IRBs and make these discussions available to those who were excluded or did not want to participate directly. Practically, Klitzman suggests that publication could be managed by an external organization, such as the Office for Human Research Protections.
Arguments against the publication of IRB decisions
This proposal raises three immediate concerns. First, it may be argued that the publication of decisions would add to the administrative burden of IRBs. This objection cannot be maintained in practice, however, as IRBs are already required to keep records of minutes, decisions, progress reports, correspondence, new findings, consent forms, and membership. As these records are all that would be needed to produce a decision, and as the publication of that decision could be externally managed, publication would not add to an IRB’s workload.
Secondly, there is a risk that publication may reduce the flexibility of decision-making. In response, it may be noted that the current degree of flexibility is not desirable, as it has resulted in the irrational variation described above. Further, there is no evidence that IRBs actually “import ‘community values’” reflecting the local legal and ethical climate, as their decisions vary widely within the same locality and instead display intra-IRB biases. In any event, the use of precedent would not freeze the system, as any defensible departure could be explained in the decision.
Third, there is a risk that the publication of IRB decisions could reveal sensitive information. Klitzman addressed this simply by noting that such information could be redacted. It would also be possible to publish a decision after the completion of the study, or to publish only successful applications.
The most dramatic alternative proposal is the centralized review of all studies, or the central accreditation and auditing of IRBs. This would almost certainly decrease variability and increase accountability. However, it would also render the process rigidly bureaucratic and unable to meet changing circumstances, and risk entrenching the perspectives of those employed in the central IRB or organization. Indeed, a centralized system might be more vulnerable to external influence than individual IRBs. Such a structure would also make it difficult for researchers to communicate directly with IRBs, which may increase the length of time taken for review.
Others have suggested the centralization of IRB oversight of multi-center studies. While this would, of course, reduce the variability within these studies, it would not affect the variability between different multi-center studies or between individual studies. In this way, it would simply mask the real issue, which is that IRBs make decisions differently, a concern that was directly demonstrated by the analysis of multi-center studies.
Commentators have also suggested targeted education or the adoption of evidence-based reasoning. While these would be welcome additions, they would require an investment of resources, would likely be applied unevenly across IRBs, and would not guarantee consistency of application. Further, the authors do not state how IRB members would access or identify the “best” evidence, nor do they demonstrate that this approach would itself ensure consistent determinations. However, the substance of this suggestion could be preserved without these costs if it were treated as an augmentation of Klitzman’s proposal, e.g. by making such materials available as templates within the database (see also Pritchard, 2011).
Finally, legal scholars have suggested revising the existing regulations to clarify the concepts that have caused confusion. This, however, would frustrate the purpose of those provisions. It is impossible to prospectively define concepts such as “risk” and “benefit” in a manner that would suit all conceivable situations; indeed, if this were possible, IRBs would be of little use. It is likely that the regulations were drafted in a deliberately vague way to allow them to be tailored to a diverse range of factual scenarios and to evolve alongside scientific practices. Therefore, it is both practical and in line with the apparent legislative intent to identify and define these concepts with respect to particular cases, rather than in the abstract in advance.
Decision publication and the role of the IRB
This leaves the question of whether the arguments in favor of publication are strong enough to prompt a change to the current position. This analysis is informed by the importance of the IRBs’ role. IRBs were established as independent entities to protect participants from unethical research practices. They often perform these duties on behalf of vulnerable individuals or populations, including children, terminally ill patients, and indigent persons. This role is considered sufficiently important that IRBs’ existence and composition are federally mandated. If IRBs do not fulfill their protective role, it is a matter of significant public concern.
Current empirical evidence suggests that IRB decision-making is inefficient and variable. At worst, this means that an error has allowed unacceptable research to be conducted (risking harm to the research subjects) or barred acceptable research (to the detriment of scientific progress). At best, erroneous or unnecessary modifications may delay or stall research progress; or simply complicate the procedure. This has been used to cogently argue that there is no evidence that IRBs are fulfilling their aim of protecting the public or, indeed, performing any worthwhile function; and that they are simply “self-serving” bureaucratic hoops through which researchers must jump.  These are grave allegations.
The publication of IRB decisions is a practical means of ensuring the fulfillment of this protective mandate. It would immediately provide commentators with the data required to investigate IRB decision-making, and IRB members with practical insights into the interplay of legal and ethical requirements. In the mid-term, it could establish templates for common issues, such as the language of consent forms, to increase consistency and decrease the burden on individual IRBs. In the long term, it could compile a body of cases that would define broad concepts such as “minimal risk” by reference to factual scenarios. At each level, this would minimize the variability observed and bring a measure of accountability to IRBs by requiring them to justify their actions. In addition, it may improve the efficiency of the entire procedure, for example if researchers use the templates to draft their proposals so that they require fewer modifications. On a practical note, while funding would be required immediately, such a database, once established, would be relatively inexpensive to maintain. If successful, its costs may even be offset by the savings made through streamlined decision-making.
Overall, the publication of IRB decisions offers direct benefits that are desperately needed in their decision-making, at a relatively small cost. As such, the arguments in favor of publication outweigh the presumption in favor of the status quo.
1. Abbott, L; Grady, C. (2011). A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn. Journal of Empirical Research on Human Research Ethics, 3-19.
2. Stoddard, D. (2010). Falling Short of Fundamental Fairness: Why Institutional Review Board Regulations Fail to Provide Procedural Due Process. Creighton Law Review 43, 1275-1327.
3. Code of Federal Regulations, Title 45, Public Welfare, Department of Health and Human Services, §46.109 IRB Review of Research and §46.115 IRB Records, 2009 (CFR).
4. Klitzman, R. (2015). The Ethics Police? The Struggle to Make Human Research Safe. New York, NY: Oxford University Press. Chapter 6.
5. Anderson, E; DuBois, J. (2012). IRB Decision-Making with Imperfect Knowledge: A Framework for Evidence-Based Research Ethics Review. Journal of Law, Medicine and Ethics, 951-969.
6. Silberman; Khan. (2011). Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform. The Milbank Quarterly 89(4), 599-627.
7. Pritchard, I. (2011). How Do IRB Members Make Decisions? A Review. Journal of Empirical Research on Human Research Ethics, 31-46.
8. Klitzman, R. (2013). How IRBs View and Make Decisions About Social Risks. Journal of Empirical Research on Human Research Ethics 8(3), 58-65.
9. Rosnow, R et al. (1993). The Institutional Review Board as a Mirror of Scientific and Ethical Standards. American Psychologist 48(7), 821-826.
10. Mueller, J. (2007). Censorship and Institutional Review Board: Ignorance is Neither Bliss Nor Ethical. Northwestern University Law Review 101, 809-836.
11. Sage, B. (2016). Lords of the Jumble: IRBs, Ethics, and the Common Law of the Common Rule. Health Affairs 35(5), 934-935.
12. Hoffman, S; Wilen Berg, J. (2005). The Suitability of IRB Liability. University of Pittsburgh Law Review 67, 365-427.