Statement to APA’s Commission on Accreditation

The Standards of Accreditation for Health Service Psychology, Master’s Degree Programs were approved by the APA Council of Representatives in February 2021. Following this approval, APA’s Commission on Accreditation began developing accompanying Implementation Regulations (IRs). IRs are official policy documents that elucidate, interpret, and operationally define the Commission on Accreditation’s policies and procedures. Eleven of these IRs were recently presented to the Commission on Accreditation and approved to be put forward for public comment. In response to this call, SPA’s Education and Training Interest Group organized a task force to draft a set of comments on behalf of SPA. A request to join the task force was sent to all SPA members, and over a dozen members joined the effort. Through meetings and email, the task force drafted a statement that was then presented to SPA’s Board of Trustees on September 23rd, 2021. The SPA Board of Trustees officially endorsed the statement, which was submitted to APA’s Commission on Accreditation on October 1st, 2021. The submitted statement read as follows:

To: The American Psychological Association Commission on Accreditation

The Society for Personality Assessment (SPA) has prepared the following comment on the first set of Section C Implementing Regulations for the Standards of Accreditation for Health Service Psychology, Master’s Degree Programs. SPA, established in 1938, has over 1,000 members from across the globe and is the largest organization in the world focused on the practice, science, impact, and advancement of personality assessment. Given the strong psychological assessment expertise of our membership, as well as SPA’s mission, we have prepared the following statement. This comment was prepared by SPA’s Education and Training Interest Group and was officially endorsed by the SPA Board of Trustees on September 23rd, 2021.

The proposed Implementing Regulations (IRs) concerning aspects of a program’s curriculum or training relevant to acquisition and demonstration of the Assessment Profession-Wide Competencies (PWCs) are practically identical for Master’s and Doctoral accreditation. There is a crucial need for explicit differentiation between these education levels with regard to the practice of assessment. We assert that a pragmatic differentiation should be made in terms of the scope and level of complexity of assessment training; there should be a clearly defined linkage between the training requirements of Master’s programs as described in the accreditation IRs and the relatively limited scope of Master’s-level assessment practice. Of particular note, level-appropriate training and level-appropriate expectations for Master’s degree programs are not clearly specified in C-8 M. PWCs. As written, these are effectively left to be determined by each program. This becomes problematic when practice settings call for a broader scope of practice and/or greater complexity of work than an individual’s training program provided. Because the current PWCs are much broader than what could reasonably be accomplished within the timeframe of a Master’s degree (i.e., 2-3 years), such situations would inevitably require Master’s-level practitioners to practice beyond their assessment training. Thus, the scope of training (and, ultimately, practice) for Master’s-level practitioners must be clarified, including what Master’s-level clinicians are qualified to administer and interpret. We propose the following for how the IRs should specify training:

The IRs should specify training students in the foundations of assessment. First and foremost, a thorough understanding of psychological theory is necessary to ensure the nuanced application of assessment and use of instruments. Thus, the IRs should further specify teaching students to rely on knowledge gained from other classes (e.g., psychopathology, cognition and learning, statistics, ethics) to guide clinical decision-making. In this respect, assessment data on human functioning should be tied to psychological theory in the context of case conceptualization and treatment planning, and in the use of assessment data in research, with due consideration of other individual characteristics (e.g., culture, sexuality, gender, age, socioeconomic status). Students should undoubtedly be trained in current diagnostic and coding practices, but given the movement away from categorical diagnostic schemes, courses should also include new and emerging models of personality and psychopathology (e.g., Zimmermann et al., 2019). These core components of assessment training cannot simply be items listed on a syllabus; rather, substantial proportions of the total reading material, class lecture, assignments, and discussion should be explicitly devoted to foundational areas (such as those enumerated below). Building upon these foundations, Master’s-level programs should then train students in the process(es) of assessment and in the skills necessary to use assessment as a tool in both clinical practice and research. These skills are central to differentiating psychology from other disciplines.
Accordingly, we recommend that the IRs specify training in (a) how to formulate an appropriate referral question; (b) the fundamentals of psychometrics and how to determine the appropriateness of a specific test for a specific need; (c) the basics of designing a test battery and preparing for test administration; (d) clinical interviewing to enable identification of primary concerns; (e) basic principles of interpreting and integrating test results using established methods and/or frameworks for doing so; and (f) how to communicate orally and/or in writing, as appropriate, the findings and implications of the assessment in an accurate and effective manner sensitive to a range of audiences, including the assessee who is receiving the services. The IRs should also specify training in a few, very circumscribed and specific tests and/or measures (i.e., administration, scoring, and interpretation at a level suitable for a psychometrist) that are appropriate to the scope of practice and the clear, specifically stated aims of the program. The specific tests included in a program’s assessment sequence must be taught through coursework and adequately repeated supervised practice to assure that every student reaches mastery in the mechanics of administration, scoring, and interpretation. Wright (2021) proposed a model for this stepwise approach to assessment training that might serve as a valuable reference. In brief, training would begin with a foundational course in assessment, which could be largely didactic and focus primarily on the process of assessment without focus on individual tests. The foundational course would be followed by a second course in assessment focusing on discipline-appropriate tests and measures (e.g., measures of cognitive and academic functioning for school psychology, measures of personality and psychopathology for clinical psychology, and measures of vocational interest and personality functioning for counseling psychology).
Optional additional courses in other areas of assessment could be offered to interested students. As noted, Master’s programs must pair the discipline-appropriate training in the classroom with supervised practice in order for Master’s-level students to continue learning, honing, and solidifying their assessment skills; the IRs should specify assessment training experiences with actual (or volunteer) clients (e.g., via practica or internship). The objective outcome of this model would be to train Master’s-level students in the foundations of assessment and in depth in a single area of assessment, rather than attempting to match the breadth and depth of assessment training possible in doctoral programs.

Inherent here is the belief that Master’s-level programs cannot provide the complete training necessary to independently practice assessment. This assertion should be clearly reflected in the PWCs and IRs.

The IRs should differentiate psychological testing and psychological assessment. Psychological testing “is a relatively straightforward process wherein a particular scale is administered to obtain a specific score” whereas “psychological assessment is concerned with the clinician who takes a variety of test scores, generally obtained from multiple test methods, and considers the data in the context of history, referral information, and observed behavior to understand the person being evaluated, to answer the referral questions, and then to communicate the findings to the patient, his or her significant others, and referral sources” (Meyer et al., 2001, p. 143). The IRs should specify training to involve distinguishing psychological testing and psychological assessment because a larger number of competencies are required before one can practice psychological assessment. The IRs should also specify that programs must detail how they are appropriately training students to conduct psychological assessment and not simply training students to conduct psychological testing. 

In light of the above and the program aims already required by the IRs, the lack of clearly defined acceptable program aims pertaining to assessment (which must be more limited relative to Doctoral programs) fails to safeguard against Master’s programs creating overly ambitious aims that cannot be fulfilled with the training experiences these programs can reliably provide to all students (e.g., number of courses, number and types of cases, diversity of settings and populations). To this end, we recommend the IRs specify that training experiences must be commensurate in size and scope with program aims, and that programs must justify their assessment-related aims by clearly enumerating how those aims are consistent with the size and scope of the assessment training required of all program students (i.e., provided via required courses and required clinical experiences, and not only via a dedicated track, elective courses, and/or external clinical experiences).

In specifying assessment training, it is necessary to acknowledge that minimum standards vary by area of assessment (e.g., forensic assessment requires knowledge of legal proceedings; knowledge of typical classroom structure and the ability to implement recommendations is necessary for school-based assessments). As some programs will position themselves to offer training in a specialized area(s) of assessment, the IRs should specify the size and scope of both didactic and applied training experiences necessary to train students in specialized areas of assessment. Similar to the above, we recommend the IRs specify that programs clearly outline how students will achieve a minimum level of competency in the specialty area(s) of assessment in addition to, not as a substitute for, gaining a minimum level of competency in the foundations of assessment listed above (e.g., “students will complete a separate course on forensic assessment and complete at least two assessments that meet the following definition of forensic assessment:...”).

IR C-8M #VI Assessment states that trainees should “have the skills required to engage in assessment methods designed to ascertain psychological concerns and functional behaviors,” which includes the abilities to “critically evaluate, select, and apply assessment methods” and “collect relevant data using multiple sources and methods.” These skills are germane to the role of Doctoral-level clinicians, and it is unclear whether they can be adequately developed in the span of a Master’s program. Rather than using the term “specialty area,” programs should provide training that is appropriate to the level of complexity that can be achieved in two to three years’ time. Given the time limitations of a Master’s program, Master's-level training should be limited to competency levels that would not prepare the clinician to engage in high-stakes testing practice.

Complementing the above, and in addition to the general guidelines about the minimum standards programs need to meet, the IRs should reiterate that individual practitioners have an ethical obligation to practice within their areas of competence and to maintain that competence throughout their careers.

Lastly, language directly or inherently emphasizing problematic dichotomous conceptualizations of the natural diversity of human behaviors and functioning is included throughout the IRs. For example, C-8M #VI Assessment contains the following: “...demonstrate current knowledge of diagnostic classification systems across different contexts and settings (e.g., schools), functional and dysfunctional behaviors,…” [italics added for emphasis]. The need to assign a categorical diagnosis in many instances of practice notwithstanding, there are myriad arguments against dichotomous and categorical models of functioning, mental health, and human behavior (e.g., Haslam et al., 2012; Kotov et al., 2017, 2018). Research has offered strong, reliable evidence demonstrating the increased validity (e.g., Zimmermann et al., 2019) and greater clinical utility (e.g., Bornstein & Natoli, 2019) of dimensional (and hierarchical) models, and contemporary literature and practice guidelines acknowledge the importance of a dimensional way of thinking (e.g., APA, 2017, 2020; Krishnamurthy et al., 2021; Ruggero et al., 2019). Thus, we recommend the IRs be amended to remove language implying or emphasizing unsubstantiated or unnecessary categories and/or dichotomies. This recommendation does not pertain to necessary categories and/or dichotomies, such as specifications for training students in the knowledge and skills needed to assign a categorical diagnosis based on criteria of the DSM-5 (American Psychiatric Association, 2013) or another recognized diagnostic system, as these are often necessary for practice (e.g., when billing insurance). Nevertheless, to be consistent with contemporary literature and multiple professional practice guidelines, we also recommend the IRs specify training in practices that move beyond simple categorical models of diagnosis and conceptualizations rooted in categorical assumptions.



References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Author.

American Psychological Association. (2017). Multicultural guidelines: An ecological approach to context, identity, and intersectionality. http://www.apa.org/about/policy/multicultural-guidelines.pdf

American Psychological Association, Board of Educational Affairs Task Force on Education and Training Guidelines for Psychological Assessment in Health Service Psychology. (2020). APA guidelines for education and training in psychological assessment in health service psychology. https://www.apa.org/about/policy/guidelines-assessment-health-service.pdf

Bornstein, R. F., & Natoli, A. P. (2019). Clinical utility of categorical and dimensional perspectives on personality pathology: A meta-analytic review. Personality Disorders: Theory, Research, and Treatment, 10, 479-490. https://doi.org/10.1037/per0000365

Haslam, N., Holland, E., & Kuppens, P. (2012). Categories versus dimensions in personality and psychopathology: A quantitative review of taxometric research. Psychological Medicine, 42(5), 903-920.

Kotov, R., Krueger, R. F., Watson, D., Achenbach, T. M., Althoff, R. R., Bagby, R. M., ... & Zimmerman, M. (2017). The Hierarchical Taxonomy of Psychopathology (HiTOP): A dimensional alternative to traditional nosologies. Journal of Abnormal Psychology, 126(4), 454.

Kotov, R., Krueger, R. F., & Watson, D. (2018). A paradigm shift in psychiatric classification: The Hierarchical Taxonomy Of Psychopathology (HiTOP). World Psychiatry, 17(1), 24.

Krishnamurthy, R., Hass, G., Natoli, A. P., Smith, B., Arbisi, P., & Gottfried, E. (2021). Professional practice guidelines for personality assessment. Journal of Personality Assessment. Advance online publication. https://doi.org/10.1080/00223891.2021.1942020

Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., Eisman, E. J., Kubiszyn, T. W., & Reed, G. M. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56(2), 128–165. https://doi.org/10.1037/0003-066X.56.2.128

Ruggero, C. J., Kotov, R., Hopwood, C. J., First, M., Clark, L. A., Skodol, A. E., ... & Zimmermann, J. (2019). Integrating the Hierarchical Taxonomy of Psychopathology (HiTOP) into clinical practice. Journal of Consulting and Clinical Psychology, 87(12), 1069.

Wright, A. J. (2021). Master’s-level psychological assessment competencies and training. Training and Education in Professional Psychology. Advance online publication. https://doi.org/10.1037/tep0000339

Zimmermann, J., Kerber, A., Rek, K., Hopwood, C. J., & Krueger, R. F. (2019). A brief but comprehensive review of research on the alternative DSM-5 model for personality disorders. Current Psychiatry Reports, 21, Article 92. https://doi.org/10.1007/s11920-019-1079-z