1.5.9 Annotated Bibliography—Program Assessment

by Kelli Parmley (Director of OBIA, University of Northern Colorado)

The following resources inform and support the development of vibrant program assessment systems. Each is described relative to its content as well as the themes in this section with which it is most closely aligned. While a variety of academic publications provide theoretical background for assessment system design, comprehensive “how to” publications are rare. Many of the “how to” publications are institution-specific manuals aimed at assessment of student learning outcomes rather than broader aspects of institutional effectiveness.

Books

Banta, T. W., Lund, J. P., Black, K. E., & Oblander, F. W. (Eds.). (1996). Assessment in practice: Putting principles to work on college campuses. San Francisco: Jossey-Bass.

The authors of this book use the “Principles of Good Practice for Assessing Student Learning” developed by the American Association for Higher Education to analyze 82 case studies across the country. While the text does not focus on the process of developing an assessment system or the specific elements of a quality assessment system, the cases are extremely valuable to consider when contemplating alternative performance standards, measurable attributes, and means by which assessment information can be gathered. A strength of the text is its exploration of assessment techniques across a wide range of institutional types from community colleges to research universities.

Palomba, C. A., & Banta, T. W. (Eds.). (2001). Assessing student competence in accredited disciplines: Pioneering approaches to assessment in higher education. Sterling, VA: Stylus.

This text emerged in response to the trend in standards in accredited fields requiring direct measures of student learning. While this is not inconsistent with trends in regional accrediting body standards (e.g., Middle States), the authors focus on assessment systems in specific disciplines. A series of chapters address the specific accreditation and student learning considerations for teacher education, nursing, engineering, visual arts, business, social work, pharmacy, and computer science. However, the case examples are applicable to and illustrative of the development of any academic assessment system, particularly in the development of student learning performance standards, measures, and means. Lastly, the text emphasizes an important aspect of assessment systems also described in the module on Writing an Annual Assessment Report (1.5.7): “closing the loop” by analyzing assessment results in order to identify actions for program improvement.

Diamond, R. M. (1997). Designing and assessing courses and curricula: A practical guide. San Francisco: Jossey-Bass.

While this text is a stronger resource for modules on curriculum design and development, its illumination of assessment techniques that align with instructional design makes it a valuable resource for formulating program-level performance criteria that complement course-level assessment and evaluation systems. Chapter 10 of the text provides efficient and effective tools for gathering information related to curriculum coherence, diversity, and pedagogical emphases (e.g., active learning). The strength of these methods is that they are collaborative and facilitate the investment of larger numbers of program participants (e.g., faculty) in the assessment system. The appendices illustrate these techniques in practice through a series of examples in a range of disciplines.

Nichols, J. O. (1991). A practitioner’s handbook for institutional effectiveness and student outcomes assessment implementation. New York: Agathon Press.

This book connects assessment systems in academic and non-academic settings to broader institutional mission, goals, and strategic planning. In addition to identifying the importance of placing program-level and unit-level assessment within a broader institutional mission and goals, the author provides specific examples of information gathering techniques at each of these levels. Therefore, the first part of this publication is informative for defining a program and for identifying key performance measures. The second part of the text identifies various information gathering strategies, which are useful after performance standards and measurable attributes have been identified. These strategies are also useful when considering the means by which assessment information will be gathered.

Nichols, K. W., & Nichols, J. O. (2000). The department head’s guide to assessment implementation in administrative and educational support units. New York: Agathon Press.

The process for developing a program assessment system for administrative and academic support units is the same as that for academic programs; however, this text identifies and provides examples of how these systems differ for non-academic programs. Examples are drawn from the office of the registrar, the library, the career center, and the accounting office. The opening chapter underscores the importance of assessment systems in these units not only for internal improvement, but also for linking assessment to their support role for academic units, strategic planning, and accountability to regional accrediting bodies. The subsequent chapters show how administrative and academic support units can link to institutional and divisional mission and goals in creating objectives that may involve student learning outcomes but are more process oriented. The figures and examples provided include instruments that non-academic units might use to collect assessment information and describe the types of actions that these units might contemplate as a result of the data they collect.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

This text by two preeminent authors in the field is organized around the primary steps in an assessment system, providing illustrations from institutions such as Alverno College and Truman State University. Chapter 3 emphasizes not only the importance of and strategies for involving faculty in assessment activities, but also provides ideas for student involvement, which is further elaborated on in Chapter 7. Chapters 4, 5, and 6 identify important issues and examples for locating potential measures by discussing validity and reliability, developing local versus national measures, and using student portfolios. Chapter 9 addresses assessment of general education skills such as critical thinking, writing, oral communication, problem-solving, and valuing. The authors conclude the text by highlighting the importance of closing the assessment “loop” through interim reports and the use of assessment results in decision-making.

Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.

This practitioner guide leads the reader through important considerations for assessing many areas of student affairs and for developing broader measures of institutional effectiveness for student experiences outside the classroom. The authors identify key dimensions for assessment in student affairs, drawing important distinctions among student needs, student satisfaction, campus environment, and student culture. Further, they focus on important changes in accrediting body standards that shift the focus from measures of student satisfaction toward measures of student learning and performance outside the classroom. Drawing heavily on Alexander Astin’s work, assessment is framed around the I-E-O model (inputs, environment, and outcomes) to assist key stakeholders in deciding what to measure and how to measure institutional effectiveness. The guide also emphasizes the importance of regularly analyzing and reporting assessment results, and identifies strategies for doing so.