1.5.8 Assessing Program Assessment Systems

by Kelli Parmley (Director of OBIA, University of Northern Colorado) and
Daniel K. Apple (President and Founder, Pacific Crest)

The practice of continuous improvement applies not only to program performance, but also to the assessment systems used to assess programs. Assessment systems that are efficient and current are less time-consuming to employ and are more likely to yield reliable data. As strategic planning processes shape institutional vision, mission, and priorities, the assessment systems by which an institution’s programs gauge performance and direct improvement should align with these goals. It is therefore crucial to review assessment systems annually with the goal of continuously improving the process. This module identifies the characteristics of a quality assessment system, provides a tool that assessors can use in assessing a program assessment system, and describes how to use feedback effectively to develop an action plan for improving the assessment system.

What Makes a Program Assessment System a Quality Assessment System?

The process of assessing an assessment system begins with the same approach used to assess at other organizational levels or in other contexts: it requires a mindset focused not on judging the actual level of quality, but only on how to improve it (4.1.2 Distinctions Between Assessment and Evaluation). Regular assessment of a program’s assessment system, to ensure currency and alignment with institutional goals, is also consistent with the assessment standards of regional accrediting bodies (Middle States Commission on Higher Education, 2002; Higher Learning Commission, 2001) and many professional ones (Accreditation Board for Engineering and Technology, 2002).

Table 1 presents a tool for assessing a program assessment system. The columns in the table are structured for the assessor to record strengths, areas for improvement, and insights for each element of the system. The rows identify the key criteria to use in assessing quality (4.1.9 SII Method for Assessment Reporting).
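
For programs that record review feedback electronically (as tip 9 later in this module suggests), the structure of Table 1 maps naturally onto one record per criterion. Below is a minimal sketch, assuming a Python representation; the class and field names are illustrative, not part of the tool itself.

    # Minimal sketch of the Table 1 structure: each criterion of the
    # assessment system receives SII-style feedback -- strengths, areas
    # for improvement, and insights. (All names are illustrative.)
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CriterionFeedback:
        criterion: str                                  # e.g., "Essence statement"
        strengths: List[str] = field(default_factory=list)
        improvements: List[str] = field(default_factory=list)
        insights: List[str] = field(default_factory=list)

    review = [
        CriterionFeedback(
            criterion="Weights",
            strengths=["Each weight is aligned to a factor"],
            improvements=["Weights sum to 105%; rebalance to total 100%"],
            insights=["Reweighting prompted a useful discussion of priorities"],
        ),
    ]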

Essence Statement

In contrast to an inspirational mission or vision statement, which is often lengthy, an “essence statement” should provide an immediate sense of the core values of the current program. It should be concise yet comprehensive, and stated as a complete sentence.

Scope

The scope of a program is anchored in present performance and should clearly articulate the “core” of the program and its boundaries. Statements about what the program “is not” identify what is outside the scope of the program and why “gray areas” occur.

Current and Future Goals

The top five current and future goals should represent a three-year to five-year timeframe for the program and be specific enough to indicate direction and magnitude. A teaching and learning center might identify as a goal “collaborative initiatives among faculty within and across disciplines and schools.” Feedback, however, might suggest that the magnitude needs clarification: will the center maintain its current level of offerings, or increase them?

Processes and Systems

The top five processes and systems of a program (e.g., curriculum design) should contribute to the accomplishment of the current and future goals and should produce the program’s products, assets, and results. The distinction between processes and products is important for determining the performance criteria, which may be process-oriented or product-oriented.

Performance Criteria

Performance criteria should make it clear that, when they are present in a program, quality will result. Consider this example: “Faculty provide proactive and developmentally-based advising that is centered on students’ needs within a systematic framework.” This criterion is concise, specifies a context, and is understood and valued by multiple stakeholders in the program.

Attributes

Attributes (measurable characteristics) provide the means for helping to differentiate levels of performance for each criterion. There should be no more than three for each criterion, and they should be measurable and significant. Examples of attributes for the advising performance criterion include “timely interaction with students,” “students graduate within four or five years,” and “students can effectively create a semester course schedule.”

Weights

The attributes should be prioritized by the weights assigned to each, with the weights indicating the significance of each attribute to overall program performance. The weights should add up to one hundred percent, and no single attribute should account for less than five percent.
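
Because these two rules are easy to violate when individual weights are revised, they can be checked mechanically. Below is a minimal sketch, assuming weights are stored as simple percentages; the function and data are illustrative, not prescribed by this module.

    # Minimal sketch: verify that attribute weights sum to 100% and that
    # no single attribute carries less than 5%. (Illustrative only.)
    def validate_weights(weights):
        """weights: dict mapping attribute name -> percent (0-100)."""
        problems = []
        total = sum(weights.values())
        if abs(total - 100.0) > 1e-9:
            problems.append(f"Weights sum to {total}%, not 100%")
        for name, pct in weights.items():
            if pct < 5.0:
                problems.append(f"'{name}' carries {pct}%, below the 5% minimum")
        return problems

    # Example weights for the three advising attributes listed earlier
    # (the numbers themselves are hypothetical):
    advising = {
        "Timely interaction with students": 40,
        "Students graduate within four or five years": 35,
        "Students can effectively create a semester course schedule": 25,
    }
    assert validate_weights(advising) == []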

Means for Collecting Data

The means for collecting the data (e.g., a portfolio or survey) should be a reasonable, cost-effective vehicle for gathering the data in a timely manner.

Instrument

An instrument is used to measure the data that are collected. For example, a survey may be the means for collecting the data, while a satisfaction index is the instrument for measuring satisfaction. Similarly, a standardized test may be the vehicle for gathering data on student learning, but the test score is the instrument used to measure knowledge. Instruments should be appropriate, valid, reliable, and accurate.

Benchmarks

Benchmarks identify the current level of performance of a program, while future targets identify the level of performance the program is striving toward. Future targets should be clearly related to the performance criteria, based on performance (rather than effort), and attainable yet challenging.

Accountability

A specific individual (as opposed to a job title) should be assigned responsibility for each factor, and responsibility should be distributed among program participants.

Using the Tool Effectively

The tool for assessing an assessment system is the means by which an assessor and an assessee increase the quality of that system. It provides a framework for structuring feedback for both parties. The following tips suggest ways to use the tool more effectively.

  1. An interdisciplinary assessment review committee of five to fifteen people, drawn from disciplines across campus, should be established to assess program assessment systems.

  2. Set up a schedule in which assessment systems are reviewed on a monthly basis, with the full cycle of assessing program assessment systems completed over a three-year period.

  3. The program should identify areas in which it would most like feedback.

  4. Based on the feedback priorities identified by the program, the committee should use smaller review teams of two or three people to perform the assessment. While multiple teams can provide additional (and perhaps contrasting) feedback, be careful not to send mixed messages to the assessee.

  5. Before beginning the review, the team should read and analyze the criteria for assessing a program assessment system.

  6. The review team should strive for quality of feedback, not quantity.

  7. Be careful not to use evaluative statements; there are no standards (good or bad), because the emphasis is on how to improve.

  8. Provide an opportunity to complement written feedback with a face-to-face report.

  9. The form should be used as an electronic template, with feedback recorded directly into the chart (rather than as handwritten notes on a paper document).

  10. The feedback should be very explicit and directive about how to improve: it should give direction and assistance, not platitudes.

Turning Feedback into an Annual Plan of Action

Using the feedback provided by the review committee, the program participants can establish a course of action. Two to three percent of the program’s resources should be explicitly set aside for improvement. Within those parameters, the program participants need to “scope” the changes and address the basic question: what is a reasonable amount of change to make based on what was learned from the feedback?

  1. Prioritize and choose the changes to be made to the assessment system based on which changes will leverage the most improvement.

  2. Specify a detailed list of activities that must take place for each proposed change to occur.

  3. Accompany each activity with a date for completion, and assign a specific individual responsibility for carrying it out (a minimal sketch of such a plan follows this list).

  4. Update program participants on a regular basis, perhaps through a standing agenda item at regular program meetings.

  5. Include the changes made to the assessment system in the program’s annual reporting process as evidence of improvement.
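
As one way to make the plan concrete, the activities, completion dates, and responsible individuals from steps 2 and 3 can be captured as one record per proposed change. Below is a minimal sketch, assuming a Python representation; every name, date, and field shown is hypothetical.

    # Minimal sketch of an annual action plan: each proposed change is
    # broken into activities, each with a completion date and a specific
    # responsible individual (not a job title). All values are hypothetical.
    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass
    class Activity:
        description: str
        due: date
        responsible: str        # a specific individual, not a job title

    @dataclass
    class PlannedChange:
        change: str
        leverage_rank: int      # 1 = leverages the most improvement
        activities: List[Activity]

    plan = [
        PlannedChange(
            change="Clarify magnitude in the top five program goals",
            leverage_rank=1,
            activities=[
                Activity("Draft revised goal statements", date(2026, 2, 1), "J. Rivera"),
                Activity("Review drafts at program meeting", date(2026, 3, 2), "J. Rivera"),
            ],
        ),
    ]
    plan.sort(key=lambda c: c.leverage_rank)   # address highest-leverage changes first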

Concluding Thoughts

An assessment system must be healthy, dynamic, and continually advancing if it is to help align the program’s strategic plan with the institution’s strategic plan. The program should therefore assess its program assessment system once a year and invest two to three percent of its annual program resources in implementing assessment and improvements. The long-term result of assessing the assessment system is greater buy-in from program participants.

References

Accreditation Board for Engineering and Technology. (2002). <http://www.abet.org>

Higher Learning Commission. (2001). <http://www.ncahigherlearningcommission.org>

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.


Table 1  Tool for Assessing an Assessment System
(Column headings: Criteria | Strengths | Improvements | Insights)

1. Essence statement

  • Represents all of its stakeholders
  • Is comprehensive
  • Is concise
  • Values are identified and appropriate

2. Scope

  • Clarifies what is outside the program’s core
  • Clarifies what the program does do
  • Differentiates core aspects (current) from future aspects
  • Clarifies why misconceptions can occur (gray areas)

3. Top five current and future goals of the program

  • Are specific
  • Are measurable
  • Are clear in direction and magnitude
  • Represent a three-to-five-year time frame

4. Processes and systems

  • Are descriptive
  • Identify key processes
  • Provide intent, direction, and connections

5. Products, assets, and results

  • Are explicit
  • Are descriptive
  • Are important
  • Are obvious

6. Assessment report

  • Documents accomplishments
  • Provides strong evidence
  • Provides clear action plans
  • Documents the past year’s improvements made to the assessment system
  • Presents a professional image

7. Performance criteria

  • Are concise
  • Are free from jargon; understandable by multiple audiences
  • Provide context
  • Produce quality
  • Are valued by multiple stakeholders

8. Attributes (measurable characteristics)

  • Are not too small
  • Are not too large
  • Are single dimensional
  • Are measurable
  • Contain appropriate units

9. Weights

  • Sum to one hundred percent
  • Factors weighted less than five percent are not included
  • Are assigned an appropriate value
  • Are aligned to a factor

10. Means for collecting data

  • Are cost-effective
  • Are timely
  • Are obvious
  • Are reasonable
  • Capture performance data

11. Instruments

  • Are reliable
  • Are appropriate
  • Are valid
  • Are accurate

12. Benchmarks and future standards

  • Are related to criteria and factors
  • Are based on performance (rather than effort)
  • Define the level of success used for evaluation
  • Are benchmarked
  • Are challenging
  • Are attainable

13. Accountability

  • Is assigned to a specific individual (not just a title)
  • Is assigned to all internal stakeholders
  • Is distributed appropriately