2.4.6 Methodology for Program Design

by Chris Davis (Director of Assessment, Baker College)

This module introduces a method for instructional design at the program level, the highest level of instructional design, which in turn provides the framework for course and activity design. Program instructional design begins with the identification of program outcomes, which are derived from the intentions of the program and the professional behaviors related to it. These program outcomes drive the design, development, and sequencing of both courses and extracurricular learning experiences that make up the overall learning experience. The program outcomes also provide the basis for program assessment and evaluation systems that ensure the continuous improvement of the program and its components.
 
Table 1   Methodology for Program Design

Analysis: Learning-Outcome Driven Instructional Design

Step 1—Identify professional behaviors.

Step 2—Identify program intentions.

Step 3—Construct measurable program outcomes.

Step 4—Construct a meta-knowledge table.

Design: Activities and Knowledge to Support Learning Outcomes

Step 5—Choose themes.

Step 6—Create the appropriate methodologies.

Step 7—Identify a set of experiences.

Step 8—Identify a set of specific learning skills for the program.

Development: Construction and Selection

Step 9—Identify experience preference types.

Step 10—Match the experience types with the chosen experiences.

Step 11—Choose the formal course and extracurricular experiences.

Step 12—Allocate time across the themes.

Step 13—Sequence the experiences across the program.

Step 14—Create individual experiences from a prioritized list.

Step 15—Enhance experiences by using technology.

Step 16—Ask peers to review the experiences you create.

Step 17—Produce key performance criteria.

Step 18—Locate or build key performance measures.

Step 19—Design a program assessment system.

Step 20—Design a program evaluation system.

Step 21—Design a program description and schedule.

Implementation: Facilitating Learning

Evaluation and Assessment: Instruction that Learns from Itself

Systematic Design of Instruction and Instructional Design for Process Education

Virtually all models of instructional design follow the ADDIE model (analysis, design, development, implementation, and evaluation) (Kruse & Keil, 2000; Reiser, 2001).

Consistent with the instructional design model, the Methodology for Program Design (Table 1) presents the steps taken in an effective program design process. A discussion of the sections and subsequent steps of the methodology follows.

Analysis: Learning-Outcome Driven Instructional Design

The analysis stage of instructional design addresses what the learner is to learn. The results of the analysis should drive the rest of the instructional design (Dick, Carey & Carey, 2004). All content, methodologies, activities, sequencing, and assessment of the learning experience should be traceable to the results of the analysis. The analysis must consider what the learner should be expected to know prior to instruction (Gagné, Briggs, Wager, Golas, & Keller, 2005) as well as what the learner will need to know in the future. For programs, this means that to avoid either duplication or gaps in knowledge or practice, the analysis must address what is covered in previous courses and what will be addressed in future courses. The program prerequisite knowledge and abilities required of students prior to starting the program must also be analyzed. A program analysis must also examine how the program fits within the context of the larger institution. Thus, the behaviors, objectives, and learning outcomes of the program should align with the behaviors, objectives, and learning outcomes of the discipline and of the institution as a whole. Steps 1-4 are included in the analysis stage.

Step 1—Identify professional behaviors.

The professional behaviors of a program are the behaviors that graduates of the program should practice throughout their lives and professional careers. They include working knowledge, performance skills, and attitudes. The professional behaviors of the program will be reflected in the long-term behaviors of the courses and other experiences provided within the program. An important source guiding the professional behaviors required of program graduates is the set of performance expectations held by stakeholders.

Step 2—Identify program intentions.

Program intentions describe the intended results of the program. These can take the form of key learning objectives that identify the essential content of the program, including significant performance skills and attitudes. Other program intentions are not direct learning objectives for the learners; examples include increased graduation rates and improved job placement.

Step 3—Construct measurable program outcomes.

Program learning outcomes connect program intentions and professional behaviors. The program outcomes describe what knowledge and working expertise the student should have at the conclusion of the program. Measurable program learning outcomes are critical for the development of assessment and evaluation systems: because the observation of long-term behaviors will generally be beyond the scope of a program of study, the outcomes serve as the indicators that professional behaviors are being developed. Program learning outcomes can be determined both with direct measures of student performance (e.g., student portfolios and capstone projects) and indirect measures (e.g., job placement data and alumni surveys).

Step 4—Construct a meta-knowledge table.

Once the program outcomes are determined, a meta-knowledge table should be created for the program. The meta-knowledge table describes the most important concepts, processes, tools, contexts, and “ways of being” that the student must master in order to achieve the program learning outcomes. The meta-knowledge table should also show the sequential dependency between these different components.
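A meta-knowledge table lends itself to being recorded as structured data so its dependencies can be checked mechanically. The sketch below is purely illustrative and is not part of the methodology; all item names and categories are hypothetical, and the only point is that each row carries a category and a list of prerequisite items that can be validated against the table.

```python
# Illustrative sketch (hypothetical items): a meta-knowledge table recorded as
# data, with a check that every listed dependency refers to an item actually
# present in the table.

meta_knowledge = {
    # item: {"category": ..., "depends_on": [prerequisite items]}
    "problem solving methodology": {"category": "process", "depends_on": []},
    "statistical software":        {"category": "tool",    "depends_on": []},
    "research design":             {"category": "concept",
                                    "depends_on": ["problem solving methodology"]},
    "data analysis":               {"category": "process",
                                    "depends_on": ["research design",
                                                   "statistical software"]},
}

def dangling_dependencies(table):
    """Return (item, prerequisite) pairs whose prerequisite is missing from the table."""
    return [(item, dep)
            for item, row in table.items()
            for dep in row["depends_on"]
            if dep not in table]
```

A table with no dangling dependencies can then be sequenced reliably; a nonempty result flags rows that reference knowledge the program never develops.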

Design: Activities and Knowledge to Support Learning Outcomes

Once the learning outcomes have been determined through analysis, the design process develops a plan for how the learner will achieve these learning outcomes. The design stage is a creative and generative process in which one envisions how the instruction might look. Steps 5-8 are included in the design stage.

Step 5—Choose themes.

The themes of a program focus on specific processes, tools, or ways of being to support the development of professional behaviors. The themes provide a continuous infrastructure through the program connecting multiple courses and course learning outcomes to help improve performance in these areas. A program will typically include 10-15 themes.

Step 6—Create the appropriate methodologies.

Each key process that is to be included in a program must have a corresponding methodology that is identified or developed. Key processes extend beyond the scope of a single course or sequence of courses. A methodology explicitly models those practices that are essential for a novice to learn and shows how the process is practiced by experts. Key processes are introduced in a foundation course and are integrated throughout the entire program, both in formal courses and in extracurricular activities.

Step 7—Identify a set of experiences.

Experiences include both what happens formally in classes and what the student does informally outside of class through extracurricular activities. Each of the items in the meta-knowledge table must be supported by a learning experience appropriate for the type and level of knowledge of that item. At this step, the goal is to generate as many potential experiences as possible without fully developing the specifics of those experiences.

Step 8—Identify a set of specific learning skills for the program.

In addition to providing domain content, the program should also incorporate key learning skills on which to focus. Learning skills come from four domains: cognitive, social, affective, and psychomotor. These skills support the learning outcomes of the program but are also transferable to other disciplines and environments. Program design should include the top 50 skills for graduates in the discipline. These might include skills such as risk-taking, articulating an idea, and making connections.

Development: Construction and Selection

The design and development phases are tightly intertwined, highly iterative, and often indistinguishable. At a certain stage in an instructional design project, the activities of the designers will shift away from brainstorming and generating possibilities to making selections and constructing materials and activities. Steps 9-21 are included in the development stage.

Step 9—Identify experience preference types.

To assist in selecting what experiences should be incorporated into the program design, one should review both student and instructor preferences for different types of experiences. Types of learning experiences include the structure of courses (lecture, lab, seminar), integrated off-campus experiences such as study-abroad and internships, and extracurricular experiences.

Step 10—Match the experience types with the chosen experiences.

Collect all the possible experiences that were identified earlier and organize them by the type of experience. No single type of experience should account for more than twenty-five percent of the total time in the program.
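The twenty-five percent guideline is easy to audit once time estimates are attached to each experience type. The sketch below is illustrative only; the experience types and hour figures are hypothetical, and the cap is simply the guideline stated above expressed as a fraction of total program time.

```python
# Illustrative sketch (hypothetical hours): flag any experience type that
# accounts for more than 25% of total student time in the program.

hours_by_type = {
    "lecture": 400,
    "lab": 240,
    "seminar": 160,
    "internship": 200,
    "extracurricular": 200,
}

def overloaded_types(hours, cap=0.25):
    """Return the experience types whose share of total time exceeds the cap."""
    total = sum(hours.values())
    return [t for t, h in hours.items() if h / total > cap]
```

With these hypothetical figures, lecture time is one third of the program and would be flagged for redistribution across other experience types.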

Step 11—Choose the formal course and extracurricular experiences.

Items on the meta-knowledge table need to be mapped, either to experiences that take place as part of a formal course or to those that are provided through a supporting extracurricular experience. In general, the course work should focus on experiences that are the most critical and challenging for the student. When possible, programs should be flexible enough to allow the student to tailor the program to his or her specific needs and interests through electives and other opportunities.

Step 12—Allocate time across the themes.

The time for each theme needs to be allocated across both formal courses and extracurricular experiences, with a percentage of total student learning time allocated to each theme.

Step 13—Sequence the experiences across the program.

The sequence of experiences should provide a progression across the program learning outcomes and the prerequisite knowledge needed to achieve those learning outcomes. In addition, the sequencing needs to provide a variety of experiences for the students.
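Sequencing experiences so that prerequisite knowledge always comes first is, at bottom, an ordering problem over a dependency graph. The sketch below is an illustrative example only; the experience names and prerequisite relationships are hypothetical, and a real program design would layer variety and scheduling constraints on top of this basic ordering.

```python
# Illustrative sketch (hypothetical experiences): order experiences so that
# each one appears after the experiences that supply its prerequisite
# knowledge. Assumes the prerequisite relationships contain no cycles.

prerequisites = {
    "Foundations of Inquiry": [],
    "Research Methods": ["Foundations of Inquiry"],
    "Statistics I": ["Foundations of Inquiry"],
    "Capstone Project": ["Research Methods", "Statistics I"],
}

def sequence(prereqs):
    """Return experiences in an order where prerequisites always come first."""
    ordered, placed = [], set()
    remaining = dict(prereqs)
    while remaining:
        # An experience is ready once all of its prerequisites are placed.
        ready = [e for e, deps in remaining.items()
                 if all(d in placed for d in deps)]
        if not ready:
            raise ValueError("circular prerequisites")
        for e in sorted(ready):  # sorted only to make the order deterministic
            ordered.append(e)
            placed.add(e)
            del remaining[e]
    return ordered
```

The same ordering idea applies whether the units are courses, extracurricular experiences, or items from the meta-knowledge table.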

Step 14—Create individual experiences from a prioritized list.

The experiences that have been selected need to be developed and documented. At a minimum, the documentation must justify the reason for the inclusion of the experience in the program and describe the components of the meta-knowledge table and themes that it addresses. The actual design of each experience should follow the steps of the instructional design process for classes.

Step 15—Enhance experiences by using technology.

The first role of technology is to ensure that students develop the technical and information literacy skills required by the program. All disciplines require different skills with computers and other information processing tools. These skills need to be integrated into the experiences of the program.

The second role of technology in program design is in the area of distance and other forms of flexible delivery. Distance delivery can take many forms and can make a program accessible to a population of students that otherwise would not be able to participate due to schedule and/or geographic constraints. For maximum effectiveness, the implementation of technology should be approached systematically as part of the program design rather than on a piecemeal, course-by-course basis.

Step 16—Ask peers to review the experiences you create.

The quality of the instructional design increases when the experiences are reviewed by peers. Students also often give helpful feedback.

Step 17—Produce key performance criteria.

Comprehensive and integrative performance criteria should be established for the set of learning objectives and outcomes. The performance criteria describe the expectations for student performance at the end of the program and are used in the design of assessment and evaluation systems.

Step 18—Locate or build key performance measures.

For each of the key performance criteria, identify or create instrument(s) to measure different levels of performance for assessment and evaluation. The performance measures should also be used to assess student performance of the learning skills.

Step 19—Design a program assessment system.

The program assessment system provides a mechanism for both the student and faculty member to track student performance in the program and identify opportunities for performance improvement. The assessment system should relate to the performance measures and address how students can improve their performance. This step focuses on the design of the student assessment embedded within the program, not on the assessment/evaluation of the program itself (that is a separate stage of instructional design). Portfolios and capstone experiences are tools that support program assessment.

Step 20—Design a program evaluation system.

The program evaluation system is based on performance measures and criteria; unlike the program assessment system, the evaluation system measures the student’s performance relative to standard benchmarks and results in a grade. This step focuses on the design of the student evaluation that is embedded within the program; it is not concerned with evaluating the program itself (that is a separate stage of instructional design). Common forms of program evaluation include student portfolios, dissertations, theses, comprehensive exams, certification/licensure exams, and capstone courses.

Step 21—Design a program description and schedule.

The program description and schedule should capture the results of the other steps of the design process. It should include a description of the program and its outcomes, a listing of courses and other experiences, and a sample schedule to provide students with a road map to completion.

Implementation: Facilitating Learning

Implementation takes the materials and experiences created during the design and development stages and puts them into practice with learners. Implementation is the delivery stage of instructional design: it is the end result of the design process, combined with the teaching and facilitation practices of the instructor. The Facilitation section of the Faculty Guidebook contains more information related to the implementation stage.

Evaluation and Assessment: Instruction that Learns from Itself

Traditionally in instructional design, the evaluation component involves a summative evaluation that reviews whether the instruction achieved the goals determined during the analysis stage (Reiser, 2001). A more effective way to approach this phase is to shift from an evaluation model to an assessment model that reviews what aspects of the instruction did not work and asks how the instruction might be improved the next time. Ultimately, effective assessment leads to instruction that learns, improves, and adjusts from itself. The assessment process provides a feedback loop to inform the previous stages of the process to improve the next iteration of analysis, design, development, and implementation for continuous improvement of the instructional design (see Chapter 1.5, Added Value through Program Assessment for additional information).

Evaluation and assessment activities at this stage of instructional design are not the same as the assessment or evaluation of learner performance within the class. Evaluation and assessment of the program should examine whether or not learners achieved the established learning outcomes, but it should also look at other aspects of the entire program for opportunities to increase the effectiveness and quality of the learning experience. This stage should provide feedback into any and all of the previous steps with guidance on how to continuously improve the instructional design.

Concluding Thoughts

Program design provides the overall structure that guides lower levels of instructional design. The design of an entire program includes the identification of program outcomes that define the required course outcomes. The program design also specifies what formal curricular and informal extracurricular experiences will be included in the total learning experience. Finally, the program evaluation and assessment systems provide feedback on the quality of the entire program and its effectiveness at achieving the program outcomes. The results from this assessment process can have an important impact on the ongoing design and implementation of courses and other components of the program. In a similar fashion, courses include multiple learning activities (or learning objects) to address the specific set of learning outcomes for that course. Learning activities are the smallest unit of instructional design and target singular learning outcomes.

The instructional design process uses a structured approach that begins with an analysis to determine the program learning outcomes, followed by the design and development of learning experiences that enable students to achieve those outcomes. Each of those learning experiences will have an associated set of course outcomes that support the larger program outcomes, and each includes a variety of learning activities to support each outcome in the experience. This structured approach fosters learning on purpose rather than learning by accident or chance. Without a clear road map for how learning is to occur, the learning event cannot be repeated, nor can it be reviewed and assessed for continuous improvement. Program design in particular is essential to creating a holistic and integrated learning program, one that ensures the student achieves the program outcomes in a systematic fashion rather than in a haphazard or piecemeal way that can lead to knowledge gaps or confusion.

References

Dick, W. O., Carey, L., & Carey, J. O. (2004). The systematic design of instruction. Boston: Allyn & Bacon.

Gagné, R. M., Briggs, L. J., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design. Belmont, CA: Wadsworth.

Kruse, K., & Keil, J. (2000). Technology-based training: The art and science of design, development, and delivery. San Francisco: Jossey-Bass.

Reiser, R. (2001). A history of instructional design and technology: Part 2: A history of instructional design. Educational Technology Research and Development, 49(2), 57-67.