Removing Roadblocks and Clearing the Way for Accreditation Success: Part 2
By Cynthia Allen (SeaCrest), Susan Davis-Becker, Ph.D. (ACS Ventures), and Janice Moore (SeaCrest)
3.16.18
Disclaimer: The information provided in this article is based on our collective experience successfully preparing more than 250 applications for accreditation. This article does not represent the opinions of the NCCA Commission.
In part one of this article, we explored roadblocks to achieving NCCA accreditation in the Standards related to governance and operations of a certification program. In part two, we discuss common psychometric challenges and provide strategies to help clear the path to accreditation.
Overarching Psychometric Themes
As with the “administrative” Standards, overarching themes are evident in the psychometric requirements. Understanding these themes and incorporating the recommended strategies into exam development and maintenance supports accreditation and contributes to an organizational culture that values accreditation and quality improvement.
Fairness
Principles of fairness provide a common thread throughout the Standards. In the administrative Standards, the concept of fairness appears through the consistent implementation of policies, equal access to the exam, and transparency in published program information. Fairness carries through the psychometric Standards, which require that the policies and procedures used to develop, administer, and score the exam are fair for all candidates. As with the administrative Standards, certain information must be publicly available regarding the requirements to test, how the test is developed and scored, and how candidates should interpret exam results.
Exam Development is Ongoing
Exam development is not a one-time project. The NCCA expects and requires that an accredited program monitors performance on the exam and updates the content throughout the five-year accreditation period. While the content outline may not change until the next job analysis study, item and form development should be ongoing to increase security and ensure that content remains accurate and relevant. A program applying for accreditation should not plan to submit exam development reports used in a previous application.
Generally Accepted Psychometric Principles
Exam development is not a do-it-yourself project. To be successful, programs need expert consultation from a qualified psychometrician, one who is familiar with the NCCA Standards, in all steps of the exam development process. Program staff and volunteer leaders must be well-informed consumers of psychometric services to ensure that contracted services are the right match for the program and its accreditation goals. This is especially important for organizations that are new to certification and/or do not yet have in-depth expertise on staff.
Document, Document, Document
The NCCA also expects and requires sufficient documentation of each step in the exam development process. This includes, but is not limited to: rosters of subject matter experts (SMEs) who participated in each activity, how SME panels represent the certificant/practitioner population, the psychometric methods and rationales for each activity, data and results, the recommendations of the SME panel, and approval of the decision.
Exam Development Policies
As with other program areas discussed in part one, the program should develop and consistently implement test development policies that provide a framework for the process used. For example, a comprehensive exam development policy should provide a high-level summary of the required steps along with how frequently activities must occur.
No Excuses
Achieving accreditation requires full compliance with all of the Standards. Many programs face challenges related to small candidate volumes, vendor transitions, or limited staff resources (just to name a few). It may take some strategic planning and creative solutions, but these issues are not predestined to become roadblocks. Most organizations discover that addressing these challenges adds to the credibility and long-term strength of the program.
Eight Common Roadblocks
The eight common roadblocks identified below present the most significant exam-related challenges for programs applying for accreditation. As in part one, we discuss each challenge along with tips for clearing the path to accreditation.
1. SME Panels
The panels of SMEs engaged throughout the exam development process must represent the certificant/practitioner population. Rosters of SME panel members that include qualifications and demographic characteristics of the individuals who participated in each activity are necessary. The goal is to show the NCCA evidence that the program engages a diverse and representative group of qualified SMEs. While it is acceptable to have some crossover in the panel members, having the same group of individuals complete all steps in an exam development cycle is not, because it could constitute undue influence by that group. In addition to documenting SME names and qualifications, policies and procedures describing the selection and training of SMEs should also be available.
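One concrete way to support the representativeness argument is a simple composition check that compares the panel against the known certificant population. Below is a minimal sketch in Python; the characteristic ("practice setting"), categories, counts, and percentages are hypothetical placeholders, and a real review would cover every characteristic the program tracks.

```python
# A minimal sketch of a panel composition check against the
# certificant population. The characteristic ("practice setting"),
# categories, counts, and percentages are hypothetical placeholders.

population = {"hospital": 0.55, "ambulatory": 0.30, "academic": 0.15}
panel_counts = {"hospital": 7, "ambulatory": 2, "academic": 1}

panel_total = sum(panel_counts.values())
for setting, pop_share in population.items():
    panel_share = panel_counts.get(setting, 0) / panel_total
    print(f"{setting:10s} population {pop_share:5.0%}  "
          f"panel {panel_share:5.0%}  gap {panel_share - pop_share:+6.1%}")
```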
2. Job Analysis
The job analysis (which may also be referred to as the practice analysis or role delineation study) is often a resource-intensive project and serves as the foundation of an accredited certification program. With careful planning in advance, a well-designed job analysis study can provide benefits far beyond an exam content outline. Shortcuts should be avoided: though they may come with initial cost savings, skipping steps in this essential process reduces the chance of achieving, or maintaining, accreditation. For example, using only a panel and skipping a survey may save time and money, but this choice should be weighed carefully with the psychometrician if accreditation is a goal. The methodology chosen must be appropriate for the program and supported by a sound rationale. As another example, choosing SMEs based on their proximity to the office to reduce travel costs may prevent the panel from adequately representing the certificant/practitioner population.
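When a survey is used, its ratings ultimately drive the content outline. The sketch below shows one common way to combine mean ratings into content weights; the task names, rating values, and the multiplicative importance model are hypothetical, and the actual model should be chosen with the program's psychometrician.

```python
# A minimal sketch of turning job analysis survey ratings into
# content weights: combine the mean frequency and criticality
# rating for each task, then normalize. Tasks, ratings, and the
# multiplicative importance model are hypothetical.

survey_means = {
    "Assess client needs":    {"frequency": 4.2, "criticality": 4.6},
    "Develop care plan":      {"frequency": 3.8, "criticality": 4.4},
    "Document interventions": {"frequency": 4.5, "criticality": 3.1},
}

importance = {task: m["frequency"] * m["criticality"]
              for task, m in survey_means.items()}
total = sum(importance.values())

for task, score in importance.items():
    print(f"{task:25s} importance {score:5.1f}  weight {score / total:5.1%}")
```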
3. Exam Specifications
The 2016 NCCA Standards include a Standard focused solely on exam specifications, which goes well beyond the domains, tasks and weights typically included in a published exam content outline. The NCCA does not specifically require an exam specifications report; however, it can be a useful document to demonstrate compliance. It also has the added benefits of documenting essential test form information, preserving the rationale behind key exam design decisions, and serving as a training tool for new staff or volunteers. The program should customize the document in consultation with its psychometrician, but it might include: the target audience and level of the certification, the purpose of the program, exam information and associated rationales (such as number of items, pre-test items, types of items, time permitted to take the exam, delivery modality, etc.), number of forms, technical analysis and more.
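To make the contents concrete, here is a minimal sketch of the kinds of fields such a document might capture in structured form. Every value shown is a hypothetical placeholder, not an NCCA requirement; the actual contents should be customized with the program's psychometrician.

```python
# A minimal sketch of an exam specifications document in structured
# form. All values are hypothetical placeholders, not NCCA
# requirements.

exam_specifications = {
    "purpose": "Validate competence for independent entry-level practice",
    "target_audience": "Practitioners meeting the eligibility requirements",
    "item_counts": {"scored": 125, "pretest": 25},   # pretest items unscored
    "item_types": ["multiple-choice"],
    "time_limit_minutes": 180,
    "delivery_modalities": ["computer-based", "paper-and-pencil"],
    "active_forms": 2,
    "domain_weights": {"Domain 1": 0.40, "Domain 2": 0.35, "Domain 3": 0.25},
    "rationales": {
        "item_counts": "Supports the target reliability observed in pilot data",
        "time_limit_minutes": "95% of pilot candidates finished within 3 hours",
    },
}
```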
4. Exam Development
Accredited programs review and update the exam periodically throughout the accreditation period, but a common challenge programs encounter is the lack of a documented exam development plan. As in other areas of the Standards, specifically quality assurance (Standard 23), the NCCA is looking for the program’s systematic approach to the exam development process. The program should document how it completes the exam development cycle through policies and procedures and provide evidence in the accreditation application that it followed the plan. The rigor of the process used may depend on the program and the type of assessment, but generally accepted psychometric principles apply. We recommend a three-pronged approach: (1) a comprehensive exam development policy, (2) a project timeline detailing when planned activities will occur and (3) formal psychometric reports for each activity.
5. Exam Administration
Another common challenge programs face is providing evidence of standardized policies and procedures for all modalities of exam delivery. These policies and procedures help ensure a fair and equivalent testing experience for candidates. If, for example, a program delivers primarily through computer-based testing sites but offers occasional paper-and-pencil administration at an annual conference, documentation is needed to show that the administration of the exam in both modalities is standardized and comparable.
Certification programs frequently ask, “Can we deliver the exam using remote proctoring?” To date, the NCCA has not approved a program using remote proctoring. If remote proctoring is to be accepted, the program must provide evidence, likely through formal research, of the comparability of remote proctoring to other delivery options. Specific areas of comparability to be investigated include security of administration procedures and exam content, the use of accommodations and general administration conditions (e.g., exam time, response format, and reporting of any irregular or inappropriate behavior).
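One small piece of the comparability evidence described above, whether comparing test centers to remote proctoring or computer-based to paper-and-pencil delivery, is a comparison of score distributions across modalities. The sketch below computes a standardized mean difference on fabricated scores; a real study, designed with the psychometrician, would also address security, timing, accommodations, and administration conditions.

```python
# A minimal sketch of one piece of a comparability analysis: a
# standardized mean difference (Cohen's d) between score
# distributions from two delivery modalities. All scores are
# fabricated placeholders.

import statistics

test_center = [78, 82, 75, 88, 91, 73, 84, 79, 86, 80]
remote      = [80, 77, 85, 74, 89, 82, 76, 83, 81, 78]

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

print(f"test center mean: {statistics.mean(test_center):.1f}")
print(f"remote mean:      {statistics.mean(remote):.1f}")
print(f"Cohen's d:        {cohens_d(test_center, remote):+.2f}")
```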
As mentioned in part one of this series, program leadership must provide oversight of all essential aspects of the certification program. Tasks can, and often should, be delegated, but a mechanism is needed for the certification board to monitor performance. The same requirement is true for exam administration. The program likely utilizes a vendor’s testing network and relies on that vendor to establish and implement policies related to the security of the exam administration process, the protection of confidential information, and the selection of proctors. The program, however, should provide evidence of how the governing body monitors the vendor’s adherence to the established protocols. This could be done through periodic reporting from the vendor, “secret shops,” review of customer feedback, or other options.
6. Scoring
In addition to using a generally accepted scoring method appropriate to the design of the examination, programs must document how scoring procedures are applied in a consistent and fair manner, particularly when any elements of the exam are scored by raters. When reporting scores, the program must also provide failing candidates with information on their score in relation to the passing standard, a general summary of how scoring is done, and how to interpret the scoring information. Under the revised Standards, when domain-level scores are provided to failing candidates, the program must document the reliability of the domain-level information and communicate to candidates and other end-users how this information is intended to be interpreted.
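Because domains often contain only a handful of items, domain-level reliability deserves a direct check before such feedback is reported. A minimal sketch, assuming scored multiple-choice items and using Cronbach's alpha on a fabricated 0/1 response matrix:

```python
# A minimal sketch of checking whether domain-level feedback is
# reliable enough to report: Cronbach's alpha for the items within
# a single domain. The response matrix (rows = candidates,
# columns = items, 1 = correct) is fabricated; domains with few
# items often show low alpha, which is the documentation concern
# raised above.

import statistics

domain_responses = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
]

def cronbach_alpha(rows):
    """Internal consistency: k/(k-1) * (1 - sum(item var) / total var)."""
    k = len(rows[0])
    item_vars = [statistics.pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"domain alpha: {cronbach_alpha(domain_responses):.2f}")
```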
7. Technical Analysis
An essential step in the exam development process is to demonstrate that the program monitors exam performance through ongoing analysis. For a robust review of exam performance, programs are advised to include analysis and evaluation at the exam level (in total and by form) and at the item level. At the exam level, programs should evaluate and document candidate performance (score average and variability, pass rate), administration time, and appropriate measures of reliability (e.g., internal consistency, decision consistency, standard error of measurement). At the item level, programs should evaluate and document difficulty, discrimination, model fit (if appropriate), and response time. The program should also demonstrate how this information is used, for example, how data is shared with SME committees to inform ongoing item development activities. Again, a report including these data points for all active forms should be submitted with the application.
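The sketch below computes several of the statistics named above with classical test theory on a fabricated 0/1 response matrix; a real technical analysis would run per active form and feed SME committee review.

```python
# A minimal sketch of exam- and item-level statistics under
# classical test theory. The response matrix (rows = candidates,
# columns = items, 1 = correct) is fabricated.

import statistics

responses = [
    [1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1],
    [0, 0, 0, 1], [1, 0, 1, 1], [1, 1, 1, 0],
]
totals = [sum(r) for r in responses]
n_items = len(responses[0])

# Exam level: mean, spread, KR-20 reliability, standard error of measurement
mean, sd = statistics.mean(totals), statistics.pstdev(totals)
p = [statistics.mean(r[i] for r in responses) for i in range(n_items)]
kr20 = (n_items / (n_items - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / sd**2)
sem = sd * (1 - kr20) ** 0.5
print(f"mean {mean:.2f}  sd {sd:.2f}  KR-20 {kr20:.2f}  SEM {sem:.2f}")

# Item level: difficulty (proportion correct) and discrimination
# (point-biserial correlation between the item and the total score)
for i in range(n_items):
    item = [r[i] for r in responses]
    disc = statistics.correlation(item, totals)  # requires Python 3.10+
    print(f"item {i + 1}: difficulty {p[i]:.2f}  discrimination {disc:+.2f}")
```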
8. Equating
Accredited programs engaged in ongoing exam development activities administer multiple forms of the exam throughout the five-year accreditation cycle as forms are retired and replaced. The NCCA Standards require that the program evaluate and document how the exam content between forms adheres to the examination content specifications. In addition, the program must demonstrate how the scores/results reported from various forms are comparable based on the use of empirical equating procedures. The specific procedures to be applied should be determined based on program-specific factors (e.g., measurement model, exam form design, volume of data). A common challenge, particularly for small-volume programs, is the difficulty of equating new forms of the exam when candidate counts are low. This is an important issue to discuss with the program’s psychometric consultant, as methods may exist to allow for small-volume equating.
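To illustrate the basic idea, here is a minimal sketch of linear equating under a randomly equivalent groups design: new-form scores are placed on the reference form's scale by matching means and standard deviations. All scores are fabricated; the appropriate design and method (e.g., common-item equating, IRT-based approaches) depend on the program's data and should be chosen with the psychometric consultant.

```python
# A minimal sketch of linear equating between two forms under a
# randomly equivalent groups design. All scores are fabricated.

import statistics

reference_form = [72, 80, 85, 78, 90, 76, 83, 88, 74, 81]
new_form       = [68, 75, 79, 71, 84, 70, 77, 82, 69, 73]

mu_x, sd_x = statistics.mean(reference_form), statistics.stdev(reference_form)
mu_y, sd_y = statistics.mean(new_form), statistics.stdev(new_form)

def equate(y):
    """Place a new-form raw score on the reference form's scale."""
    return sd_x / sd_y * (y - mu_y) + mu_x

# Example: where a hypothetical new-form raw score lands on the old scale
print(f"new-form 74 equates to {equate(74):.1f} on the reference form")
```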
Related Learning: Upcoming 2018 Webinars
Setting and Communicating Your Cut Score Method
Thursday, April 12, 2018 | 1:00 - 2:00 pm ET
Presented by Dr. Tim Vansickle
Hear a complete overview of standard setting methods used in various assessment settings. Learn the practical aspects of selecting and training a group of subject matter experts, and how to communicate standard setting results to both the governing body and the candidates.
Remote Proctoring and Strategies for Implementation
Thursday, September 20, 2018 | 1:00 - 2:00 pm ET
Presented by Rory McCorkle, PhD, Senior Vice President of Certification for PSI
Remote proctoring continues to be discussed extensively in certification, and many questions abound: What is its place in professional certification? Is it appropriate? How secure is it? This webinar will examine remote proctoring from the perspective of a broader test security framework and lay out the breadth of remote proctoring options available.
Additional Resources
ICE Resources Related to Specific Roadblocks
ICE Resources for Increasing Your Understanding of Psychometrics