Students, faculty, academic programs, and sponsored programs all engage in evaluation and assessment processes and, depending upon the circumstances, may serve in the roles of evaluator, assessor, or performer. In an effort to foster continuous improvement, actors operating in these varied roles may find it beneficial to convert an evaluation into an assessment. This module identifies the circumstances under which it is appropriate to convert evaluation to assessment and describes the institutional, personal, and programmatic changes necessary for efficient and effective conversion. It also offers a methodology for individuals and institutions seeking to convert evaluation-based processes into more assessment-based ones, while retaining evaluation as a mechanism for recognizing and rewarding success.
[Figure 1: Elements in the Evaluation-to-Assessment Continuum]
Uses of Evaluation and Assessment
According to Nachmias (1979), there are two distinct but interrelated types of evaluation: process evaluation and impact evaluation. Faculty teaching, for example, is generally evaluated using both methods, with student evaluations serving as the primary indicator of the effectiveness of the teaching process and student learning outcomes serving as the indicator of teaching impact.
Decision-oriented types of evaluation are applied to support efficient and effective outcomes in areas such as program management, success in a course, and human resources promotion (Greene, 1994; Scriven, 1993). Student success in a given course is evaluated by measures that typically examine performance on tests, research papers, portfolios, laboratory work, and class participation. A faculty member’s worthiness for promotion and tenure is evaluated through measures of effectiveness such as evaluations, publication history, and service contributions. Grant proposals are generally assessed and then evaluated by a panel of peers based on measures that may include the feasibility of the proposed program, the experiential and professional backgrounds of the proposed program’s associated personnel, and the appropriateness and feasibility of the budget. In each of these instances, rigorous evaluation standards are applied, generally with associated rubrics, and the performance of the individuals or items being evaluated is held to those standards and judged accordingly.
Ideally, assessment and evaluation should complement one another. The best way to synergize these often mutually exclusive processes is for the performer to treat his or her evaluation feedback as assessment feedback. The performer gains new insights about past performance and can produce higher-quality action plans for growing future performance. Figure 1 summarizes the elements in the evaluation-to-assessment continuum.
One key assumption in the widespread use of evaluation in higher education and other institutions is that it is being done well. When done well, evaluation can potentially contribute to quality improvement, cost reduction, accountability, decision support, and research (Scriven, 1993). When done poorly, evaluation often appears arbitrary and punitive, and neither informs nor inspires continuous improvement for the person or process being evaluated.
Good evaluation works backward from the conclusions the evaluator has reached: based on convincing evidence, the evaluator is able to make recommendations to the performer and other stakeholders. The implementation of a strong assessment process in response to evaluation generally helps to mitigate negative scenarios (4.1.3 Mindset for Assessment). In contrast to evaluation, which examines only the end-product of an endeavor, assessment gauges progress toward established goals. It is born from collaboration and fosters collegiality rather than resentment in its participants (4.1.1 Overview of Assessment). Assessment empowers both the assessor and the performer as it positively impacts five types of learning outcomes: competency, movement, experience, accomplishment, and integrated performance (2.4.5 Learning Outcomes).
When to Convert Evaluation Into Assessment
Although assessment is not a substitute for evaluation, it can be triggered by evaluation outcomes. When a program, product, or person is evaluated, the outcomes of the evaluation, particularly those that indicate poor performance, tend to prompt the performers to self-assess, often in hopes of ultimately improving future performance. Not every evaluated project, program, or performance can be converted into a subject for assessment. Therefore, it is important to examine the project, program, or performance very closely, including goals, timetable, metrics, and available performance data.
The best opportunity for turning an evaluation into an assessment occurs when the performer sees the value in the evaluation, accepts the legitimacy of its findings, and wants to grow his or her performance in the area studied. Assessment should not be conducted, therefore, to evade the proverbial “hammer” which may come down on programs or personnel as a result of poor performance. Rather, assessment is a catalyst for professional or programmatic development in the next evaluation period (Greene, 1994).
Methodology for Turning Evaluation Into Assessment
Converting an evaluation into an assessment requires several mental steps. It begins with adopting the changes in perspective suggested by Table 1, which involve shifts in mindset, environment, and audience. Deeper analysis of the nature of assessment and evaluation can be found in 1.4.8 Mindset for Evaluation, 4.1.3 Mindset for Assessment, and 4.1.2 Distinctions Between Assessment and Evaluation.
Once the performer is committed to moving away from an evaluation mindset toward an assessment mindset, the next step is to work through the process shown in Table 2. Note that most of these steps are performer-initiated and place greater responsibility on the performer than on the evaluator. The steps in this methodology are similar to those given in the Assessment Methodology (4.1.4), but have different starting and ending points.
Case Study
Janet Stone, a tenure-track associate professor in the University of the Americas’ Department of Psychology, recently underwent her third-year tenure review. Dr. Stone provided the university’s tenure and promotion committee with the requisite materials for its evaluation of her portfolio, including documentation and samples of her published scholarship, documentation of her record of academic service, professional contributions, and teaching evaluations. She currently chairs the psychology department, serves as a peer reviewer for federal grants, and chairs committees for research divisions of her professional organizations. She has also published numerous books and articles in several refereed journals.
When the committee completed its evaluation of Dr. Stone’s portfolio, it provided her with written feedback about the portfolio’s contents using a predetermined numeric rating system for each of the performance criteria. Dr. Stone received a relatively high numerical rating, though some broad areas for improvement in teaching and leadership were noted in the comment section of the evaluation form. Dr. Stone wanted to know more about her performance: her strengths and her areas for improvement. She resisted the temptation to self-evaluate by ruminating over the outcomes of the evaluation, which suggested that she needed to improve in the areas of teaching and leadership. Instead, she consulted the chair of the tenure and promotion committee, seeking further clarification and information related to the evaluation outcome in an effort to assess her past performance, grow her future performance, and ultimately impact her tenure review in a positive way.
During the meeting with the committee chair, Professor Stone learned that the committee viewed her academic record as highly accomplished. They felt that she was certainly on track for tenure should she sustain her current level of scholarly work and professional contributions. The committee members did, however, share a serious concern about the prevalence of negative student evaluations in her courses, as well as faculty feedback suggesting that her manifold outside obligations diminished her effectiveness as department chair. The committee chair suggested that Dr. Stone work to improve in these areas (teaching and administrative leadership) and find better ways to balance these demands with her seemingly overwhelming focus on research publication and professional service.
Dr. Stone learned a great deal about her performance from this meeting with the committee chair. She welcomed the feedback from the evaluator (the chair of the tenure and promotion committee) and, rather than resenting it, embraced it and began to develop a self-assessment plan for improving her future performance. She used the information garnered from the conversation to craft an assessment plan as well as an action plan for improving her teaching and leadership skills. She informally engaged the committee chair as an assessor of her self-assessment so that she could gain ongoing, proactive feedback to fuel her continuous improvement in the identified areas of deficiency.
Over the course of the academic year, Dr. Stone met with the chair to review her progress in the identified areas. She implemented immediate changes in her teaching practice, leadership style, and availability to her faculty colleagues and students. She developed and administered informal assessments to her students prior to the end-of-term course evaluation, requesting feedback about the course content, her teaching style, and her availability (i.e., office hours). She responded to this feedback by implementing immediate remedies to issues and complaints, and found creative ways to sustain the positive feedback. She also solicited informal and formal feedback from her colleagues in the psychology department regarding areas in which she could improve as department chair, and readily began applying their suggestions to her service.
Over the next two academic years, Dr. Stone’s tenure and promotion file became more robust, reflecting her prowess as a teacher in the classroom, a contributor within her discipline, and a leader among her colleagues. Her quest for continuous improvement through self-assessment and implementation of needed changes ultimately paid off: she was granted tenure.
Concluding Thoughts
When evaluation is used as the basis for an assessment, its utility increases dramatically. The evaluation is no longer the terminating point of the evaluator’s examination of the evaluatee’s performance. It no longer serves as a punitive measure or a justification document, but instead serves as an indicator of progress toward stated goals. The evaluation thus becomes a useful tool by which the performer’s progress is measured within the assessment paradigm, and evaluation and assessment begin to coexist peacefully where they had previously been perceived as mutually exclusive.
Turning evaluation into assessment requires high-quality evaluation data and serious investment on the part of the performer. The performer’s challenge is to translate standards-based evaluation language into learner-centered assessment planning. It is important to recognize the level of critical thinking involved in this translation; it prompts and empowers the evaluatee to become better at self-assessment, and the evaluatee can, by modeling this behavior, inspire a wider population to learn from evaluation.
References
Chatterji, M. (2004). Evidence on ‘what works’: An argument for extended-term mixed-method (ETMM) evaluation designs. Educational Researcher, 33(9), 3-13.
Greene, J. C. (1994). Qualitative program evaluation. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research. Thousand Oaks, CA: Sage.
Nachmias, D. (1979). Public policy evaluation: Approaches and methods. New York: St. Martin’s Press.
Scriven, M. (1993). Hard-won lessons in program evaluation. White paper for the Evaluation Center, Western Michigan University.
Smith, M. F. (1989). Evaluability assessment: A practical approach. Norwell, MA: Kluwer.
Table 1 Changes in Perspective Needed to Turn Evaluation Into Assessment (the necessary changes involve shifts in mindset, environment, and audience)

| Evaluation-Based Attributes | Assessment-Based Attributes |
| --- | --- |
| deadlines are inherent to the process | timelines are inherent to the process |
| judgmental | developmental |
| goals are pre-determined | goals are collaboratively determined |
| focused on outcomes achieved | focused on progress toward outcomes |
| highly objective | more subjectivity is allowed |
| rubric-based | benchmark-based |
| summative in nature | formative in nature |
| rigid, structured guidelines | process is inherently flexible |
| retroactive perspective | proactive perspective |
| deterministic orientation | holistic orientation |
| unilateral in nature | collaborative (bilateral) in nature |
| focused on one’s completed performance | focused on one’s continuous improvement and personal growth |
| focused on rewards and penalties | focused on the intrinsic value of the process rather than the product |
Table 2 Methodology for Turning Evaluation Into Assessment

| Process Owner | Process Details | Process |
| --- | --- | --- |
| Evaluator | 1. Conduct an evaluation of the performance and/or product. | Evaluation |
| Performer | 2. Review the evaluation feedback in a relaxed, non-defensive manner (pose questions to the evaluator for clarification as needed). | Mindset Change |
| Performer | 3. Consider the evaluator’s perspective and his or her motive for the evaluation (take an objective perspective). | Mindset Change |
| Performer | 4. Ask pertinent questions to determine whether it is appropriate to initiate assessment or whether further evaluation is necessary (Chatterji, 2004; Smith, 1989). | Mindset Change |
| Performer | 5. Use the outcomes of the evaluation to motivate self-assessment to improve future performance. | Assessment |
| Performer | 6. Use the data from the evaluation to create goals, objectives, and benchmarks to structure assessment planning. | Assessment |
| Performer | 7. Resist the trap of self-evaluation when initiating the self-assessment by focusing on improved performance rather than on the evaluation outcomes. | Assessment |
| Performer | 8. Conduct a needs analysis of the components of the evaluation to determine priorities for professional/program development. | Assessment |
| Performer | 9. Ask the evaluator to validate performance areas to be assessed as well as performance criteria to be used (map these against previous evaluation outcomes). | Assessment |
| Performer | 10. Ask the evaluator to validate the goals and objectives of the assessment based on desired evaluation outcomes. | Assessment |
| Performer | 11. Formulate realistic and attainable timelines for accomplishing established goals. | Assessment |
| Performer | 12. Craft a plan for action/change to achieve the performance outcomes. | Assessment |
| Performer | 13. Establish targets for growing performance and hold oneself accountable for meeting those goals. | Assessment |
| Performer | 14. Conduct a self-assessment, documenting observations and interpretations. | Assessment |
| Performer | 15. Seek an assessment of the self-assessment from the evaluator (4.1.10 Assessing Assessments), and document the feedback to grow assessed performance and to inform future evaluations. | Assessment |
| Evaluator | 16. Conduct an evaluability assessment to determine one’s readiness for formal evaluation (Chatterji, 2004; Smith, 1989). | Assessment |
| Evaluator | 17. Conduct the next round of evaluation of the performance or product. | Evaluation |