Special Call for Papers regarding Personality Assessments in Legal Contexts


We invite you to contribute to this special issue focusing on building a cohesive evidence base for legal admissibility considerations regarding commonly used psychological assessment instruments. A credibility revolution is occurring in various fields, including both psychology and law, with a sharpened focus on the tenability of claims made by experts. Recent projects have raised questions and concerns about the legal admissibility of various psychological assessment methods (e.g., DeMatteo et al., 2020; Edens & Boccaccini, 2017; Neal, Slobogin, Saks, Faigman, & Geisinger, 2019). The current call for papers represents a systematic effort for the field to answer some of these questions, respond to some of the concerns, and chart a path forward.

JPA is a prestigious journal in the field of psychological assessment and has a long history of publishing papers relevant to forensic psychology. This special issue will advance both psychological science and justice. We anticipate both organic and organized teams of authors, with psychologists and legal scholars collaborating on the writing of each paper. To increase the visibility of the papers across assessment and legal disciplines, the special issue will be widely disseminated across the fields of psychology and law, and we are actively negotiating to publish this issue as Open Access to further increase its visibility and impact. We anticipate it will be widely read, will be useful to both scholars and practitioners in psychology and in law, and will address a major gap in the literature.

Each paper will focus on scientific and legal issues for one particular psychological assessment instrument, summarizing its psychometric science to date and highlighting the research that is most urgently needed, as well as laying out the strengths and weaknesses of the tool for use in legal settings, with particular attention to the admissibility issues that both mental health practitioners and legal practitioners should be aware of. We invite papers focused on psychopathology and personality assessment instruments, including forensic assessment instruments.


If you are interested in participating, please submit your expression of interest by November 1, 2020. Further details are provided below.


JPA Call for Papers Details

The credibility revolution (sometimes called the “replicability crisis”) in psychology, and in science more broadly, highlights procedural and structural problems that have called the credibility of the scientific literature into question, as well as the steps needed to improve science and support strong scientific claims (e.g., Munafò et al., 2017; Vazire, 2018). Similarly, in law, there has been an increasing push toward stronger scientific claims and toward screening out claims based on low-quality methods (e.g., Daubert v. Merrell Dow Pharmaceuticals, 1993; Faigman, Cheng, Mnookin, Murphy, Sanders, & Slobogin, 2018; President’s Council of Advisors on Science and Technology, 2016).

In a prescient article, Grisso (1987) suggested that the development of strong scientific underpinnings for the field of forensic psychological assessment was threatened by economic forces in the legal system. Those concerns remain valid today and have in fact been borne out in some cases. Hundreds of thousands of psychological assessments are used in court every year to aid judges in making legal decisions that profoundly affect people’s lives, and a large number of mental health professionals offer forensic services. However, recent work has shed light on the limitations of assessment tools in field settings, including legal contexts (see e.g., Edens & Boccaccini, 2017). For example, the reliability and validity of findings from research-based normative samples do not necessarily extend to field samples involving forensic populations (e.g., Boccaccini, Murrie, Caperton, & Hawes, 2009; Eno Louden, Kang, Ricks, & Marquez, 2017; Harris, Boccaccini, & Rice, 2017; Hawes, Boccaccini, & Murrie, 2013; Miller, Kimonis, Otto, Kline, & Wasserman, 2012; Neal, Miller, & Shealy, 2015; van Heesch, Jeandarme, Pouls, & Vervaeke, 2016; Vincent, Guy, Fusco, & Gershenson, 2012; Wood, Anderson, & Glassmire, 2017).

Other examples include a body of work by various scholars highlighting serious questions about the use of the popular Hare Psychopathy Checklist-Revised in some forensic contexts (see e.g., DeMatteo et al., 2020). The Rorschach Inkblot Test, which continues to be widely used in forensic settings (Neal et al., 2019), remains a subject of debate (e.g., Mihura, Meyer, Dumitrascu, & Bombel, 2013 vs. Wood, Garb, Nezworski, Lilienfeld, & Duke, 2015; but see Board of Trustees of the Society for Personality Assessment, 2005). A final example is a recent two-part investigation of psychological assessments in legal contexts by Neal and colleagues (2019). They investigated 364 assessment tools used in legal cases, finding that many may not meet legal admissibility criteria. They also found that legal challenges to assessment evidence were rare. The authors ended with a call for research, encouraging psychological scientists to improve the state of knowledge in the field and to improve public access to information about psychological assessments. They also encouraged mental health practitioners to be more critical about the measures they use in forensic cases, and urged attorneys to better scrutinize and challenge psychological assessment evidence.

The current call for papers aims to meet some of these needs. We seek to publish a set of articles offering a high-level review of the psychological assessment measures practitioners are likely to use in legal settings, with attention to multiple audiences for each article: psychological scientists, mental health practitioners, attorneys and judges, and the public.

Each paper will follow a general outline:
  • data about how the psychological assessment tool is used in legal settings (including how it is sometimes inappropriately used, if applicable);
  • data on how commonly the tool is used, including comparative data across countries, if applicable;
  • a summary of any legal admissibility challenges the tool has faced and the outcomes of those challenges;
  • a summary of the psychometric findings about the tool’s performance, especially in forensic populations, if known;
  • identification of the data needed to increase the credibility of the tool in court;
  • suggestions for how to effectively cross-examine the use of the tool (useful both for mental health practitioners preparing for testimony and for educating attorneys about how to effectively demonstrate the limitations of the tool);
  • an expert opinion, with justification, about whether the tool is likely to meet legal admissibility criteria.

Here we outline some illustrative topics. These are examples intended to stimulate your imagination; submissions are not limited to these ideas. We imagine some papers in this special issue might concern instruments that are heavily used but controversial (e.g., the Rorschach Inkblot Test, Thematic Apperception Test), those that are newer but likely to be heavily used in forensic settings (e.g., Historical Clinical Risk Management-20 Version 3, Minnesota Multiphasic Personality Inventory-3, Structured Inventory of Reported Symptoms-2), those that are perhaps appropriate for some psycholegal questions but inappropriate for others (e.g., the Psychopathy Checklist-Revised), those frequently used in forensic settings for particular questions (e.g., Pain Patient Profile, Structured Inventory of Malingered Symptomatology, Test of Memory Malingering, Evaluation of Competency to Stand Trial-Revised), and those that are often used but have multiple versions with a disjointed or confusing empirical basis (e.g., the Static-99, Millon Clinical Multiaxial Inventory, Substance Abuse Subtle Screening Inventory, Personality Assessment Inventory, Trauma Symptom Inventory, Wechsler Adult Intelligence Scales). Other suggestions are encouraged.

We hope to have each paper authored by a team of psychologists and legal scholars, though this will not be required, and we understand it will not be possible in every circumstance. We hope each team includes people with a strong psychometric background. We also hope law students or legal scholars who are interested in expert evidence will participate in these collaborative teams, and we see these collaborations as one way to strengthen appreciation for and understanding of science in law (see e.g., Lawless, Robbennolt, & Ulen, 2016).

We also hope to stimulate some adversarial collaborations, especially with regard to assessment instruments for which there has been a healthy debate in the literature and for which a dedicated collaborative effort to hammer out points of agreement and disagreement would be useful for psychology and for the law (for a model see Cowan, Belletier et al., 2020; Kahneman, 2003; Kahneman & Klein, 2009).

Overall, we hope the special issue will allow us to highlight the general strengths and weaknesses of measurement science among forensic psychological measures and to help the field take stock of itself at the current moment. We also hope to summarize recent developments in psychological measurement from basic science and provide suggestions for applied scholars in forensic psychological science, in an effort to better link the basic and applied areas of the field and generate ideas for moving the field forward. We believe these papers will be important: they will advance psychological science by motivating the research that most needs to be done, and they will advance justice by educating mental health practitioners and lawyers about the strengths and weaknesses of commonly used tools in legal settings.


Submission Instructions

If you are interested in participating, please let us know by November 1, 2020 by filling out this form. If you have a detailed proposal, including the tool you want to write about, who you might want to write with, and your rationale, you can let us know through this form. If you are solo and do not have a strong preference about which tool to focus on but have skills in legal research or psychometrics, for instance, and want us to try to pair you with others interested in writing one of these papers, you can also let us know through this form. We are open to creative ideas, interested in helping to build interdisciplinary teams of collaborators, and hoping to offer opportunities for involvement to early career scholars. After the initial deadline, we will make decisions about which papers to solicit and their potential teams of authors before the work commences.

Invited papers will undergo the regular peer-review process. An invitation to submit a paper does not guarantee acceptance.

Submitted papers should follow the author guidelines of JPA. Papers should be as concise as possible. Exceptions to the word limits are possible upon request.

The anticipated schedule is:
  • November 1, 2020 – Submission of statement of interest via form above.
  • November 15, 2020 – Comments and decisions on proposals; papers and author teams invited.
  • April 1, 2021 – Full papers due.
  • April – October 2021 – Regular peer-review and revision process.
  • ~February 2022 – Issue published.
Questions regarding the special issue can be sent to any of the editors: