2.4.2 Instructional Systems Design Model, History, & Application

by Temba C. Bassoppo-Moyo (Technology & Design, Illinois State University)

Instructional Systems Design (ISD) is a systematic process of translating general principles of learning and instruction into plans for instructional materials and the leadership of learning activities. It involves creating detailed specifications for the development, implementation, evaluation, and maintenance of resources that facilitate learning. The process is intended to be iterative: it initially culminates in developmental testing and/or expert review, and it frequently concludes with field testing and the marketing of finished products. This module traces the evolution of ISD as a discipline and highlights best practices for program, course, and learning activity design.

The Historical Perspective of ISD

The roots of Instructional Systems Design reach back to classical philosophers such as Socrates, Plato, and Aristotle, who contributed to our knowledge of learning theory through their development, implementation, and refinement of instructional strategies. Principles of questioning, argumentation, modeling, repeated practice, and timely feedback remain central to modern instruction.

One can trace the development of ISD over the past century through several distinct movements (Anglin, 1991). In the early 1900s, the behavioral approach to educational psychology dominated the field; Hull and Thorndike provided guidelines for using stimulus-response exercises as the basis for instructional materials. The Second World War spurred instructional techniques geared toward rapidly training thousands of military and civilian personnel, needs that gave rise to a variety of teaching machines and mediated training modules. The 1950s saw the emergence of models for measuring behavioral outcomes inspired by Bloom’s taxonomy. In the 1960s, Robert Gagné sought to differentiate the instructional approaches used to promote psychomotor skills, verbal skills, reasoning skills, and attitudinal dispositions. Numerous other instructional design models were introduced in the 1970s and 1980s in conjunction with innovations in instructional technology; despite their variety, virtually all of them reflect the five distinct iterative processes shown in Figure 1.

Figure 1  The Generic Instructional Design Model

Needs Analysis

The needs analysis component of the design process serves several goals. It establishes long-term expectations for the participants in the design process, describes the criteria for the intended instructional environment, and sets out the requirements, measurable goals, timelines, and potential roadblocks learners can anticipate. During this phase, the designer analyzes the instructional goals, the instruction itself, and the context in which learning will take place.

Goal Analysis: This initial step in the needs analysis phase is analogous to fixing one’s destination before setting out with a map. At this stage, one defines what the instructor wants learners to know and what students should be able to do when they have completed the instruction. Dick, Carey, and Carey (2004) suggest that instructional goal statements may be derived from a combination of the following:

  • List of goals

  • Description of long-term behaviors

  • Needs assessment and strategic plan

  • Practical experience in the subject matter

  • Someone who has delivered similar instruction

After identifying the instructional goal, the designer must determine not only who the learners are in terms of their characteristics, but also what they will be able to do when they demonstrate knowledge, skills, abilities, or attitudes related to the instructional goal. These expectations should be translated into performance indicators that serve as proof that the desired learning destination has been reached.

Instructional Analysis: Once the instructor/designer has a clear conception of what he or she wants the target audience to know and be able to do, he or she needs to identify the specific skills, knowledge, abilities, and attitudes necessary to achieve the learning goals. Again using the road map analogy, one must ask, “How will we get from our starting point through the intermediate locations to the final destination? What are the logistics involved in the whole trip?” The results of this analysis prescribe the subordinate skills and knowledge that must be developed to ensure that the desired learner performance is achieved.

Learner and Context Analysis: In educational settings, teachers often assume that they “know” the characteristics of their student population well enough to design instruction without verifying whether their hunches are valid and reliable (Hirumi, 2002). In a traditional instructor-led, teacher-centered course this may suffice, but teachers attempting to apply more student-centered techniques often need to gather data about each individual student in order to address individual needs and interests. In business and industry, the instructional designer often knows little about the target audience. Learner and context analysis is therefore critical for defining entry-level and prerequisite skills and knowledge, identifying learning styles, discerning learning motivation, and revealing the physical and social aspects of the site where learning will take place. Insights from this analysis are valuable for selecting and customizing instructional strategies as well as assessment methods.

Design Specifications

Traditionally in instructional design, the design specifications phase spells out at least three important elements of the process: the relationship between the final product and the target learners, the performance criteria by which successful learners will be assessed, and how that assessment will take place.

Performance Objectives: Compelling evidence in the education literature suggests that students perform better when they have a clear understanding of what they are expected to learn (Bassoppo-Moyo, 1998). Taking the road map analogy one step further, fuzzy directions increase the probability of getting lost. In instructional design, performance objectives serve as a clearly labeled set of directions pointing toward a predefined destination.

Writing performance objectives is therefore both an art and a science, and it is the process that establishes how one will determine whether the goals or vision of the activity have been met. Since the 1960s, thousands of educators have been trained to write what were termed “behavioral” objectives. However, as noted by Dick et al. (2004), two major difficulties emerged from this approach.

First, without an overall systematic design process, instructors found it difficult to define objectives. With little or no training in analysis techniques (e.g., instructional, content, and task analyses), instructors often turned to textbooks to identify topics for which to write objectives; as a result, most designers and instructors tended to produce performance objectives that were drill-and-practice oriented. Second, without an overall systematic design process, instructors did not know what to do with the objectives once they were written. Instructors often listed objectives as part of the instruction but did little else to align instruction or assessment with them. Furthermore, educators have raised objections to using behavioral objectives as the standard measures of performance: critics argue that behavioral objectives reduce instruction to such small, discrete components that learners lose the forest for the trees; that they limit class discussion and student learning; and that they cannot be written for some areas, such as the humanities, or for complex skills (Hirumi, 2002).

Performance Indicators: Performance indicators are the guideposts or milestones showing what needs to be achieved. They are an important component of our instructional road map, and in principle it should be easy to distinguish among indicators of inputs, outputs, intermediate outcomes, and end outcomes. In practice, these four concepts represent a continuum along which indicators blend into one another and interlock (Zook, 2002). A simplified description of training nurses at the community level illustrates the point: community college nurses represent a final output of the junior college, an input to colleges and universities, and an input to the employers who hire them directly.

Materials Development

The materials development phase is an important link between the designer/instructor and the learners. It is at this stage that a designer is judged on whether he or she can connect with the target audience. Through a creative approach and interesting, relevant materials, the instructional designer must articulate the instruction to the learner. The designer should be able to determine the course level (basic, intermediate, or advanced) as well as the learning styles and characteristics of the target audience.

It is equally important that the instructor/designer create valid and reliable measurement instruments to assess the learning outcomes. The instruments should reflect, through performance-driven criteria, what the learners should be able to accomplish.

Hirumi (2002) and Oosterhof (2000) suggest that criterion-referenced (a.k.a. outcome-referenced) assessments be designed to measure an explicit set of learning outcomes. Traditionally, these tests have included multiple-choice, true-or-false, fill-in-the-blank, and short-answer items. Such tests have become pervasive because they can be mass-produced, administered, and scored relatively quickly, and they provide a fairly precise outcome-based method for determining whether learners have achieved specific learning goals. Over the past decade, however, there has been a movement in education toward the use of performance, or portfolio, assessments.

The method of portfolio assessment differs from traditional paper-and-pencil exams in two key respects. First, unlike traditional measures, which tend to evaluate students’ possession of knowledge, portfolio assessments judge students’ ability to apply knowledge in real-life circumstances. Second, portfolio assessments are used as an integral part of learning: they tell students and their instructors how well skills and knowledge are developing in an incremental fashion. Portfolio assessments thus serve as diagnostic tools that give students profiles of their emerging skills and help them become increasingly independent learners. It is important to note that performance assessment techniques are not advocated as more appropriate than traditional criterion-referenced assessments; they are offered as an alternative means of assessing student learning and performance. Generally, research suggests a multi-faceted approach to the assessment of learning outcomes in order to increase the reliability and validity of the overall learner evaluation process. It is critical that the stated objectives correspond with the evaluation plan and strategies.
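To make the congruence point concrete, here is a minimal Python sketch of one way to represent an evaluation plan as a mapping from stated objectives to the assessment items that measure them, with a simple criterion-referenced mastery decision for each objective. The objectives, item identifiers, and 80% cutoff are hypothetical illustrations, not values drawn from the literature discussed here.

```python
# Hypothetical sketch: check that every stated objective is measured by at
# least one assessment item, then make a criterion-referenced mastery
# decision per objective. Objectives, item IDs, and the 80% cutoff are
# illustrative assumptions, not values prescribed by the models cited above.

MASTERY_CUTOFF = 0.80  # assumed criterion: 80% of an objective's items correct

# Evaluation plan: each performance objective maps to the items that measure it.
evaluation_plan = {
    "obj-1: identify the phases of the generic ISD model": ["q1", "q2", "q3"],
    "obj-2: write a measurable performance objective": ["q4", "q5"],
}

# One learner's scored responses: item ID -> answered correctly or not.
responses = {"q1": True, "q2": True, "q3": False, "q4": True, "q5": True}

def unmeasured_objectives(plan):
    """Return objectives that no assessment item measures (misalignment)."""
    return [objective for objective, items in plan.items() if not items]

def mastery_report(plan, answers):
    """Return (proportion correct, mastered?) for each objective."""
    report = {}
    for objective, items in plan.items():
        proportion = sum(answers.get(item, False) for item in items) / len(items)
        report[objective] = (proportion, proportion >= MASTERY_CUTOFF)
    return report

if __name__ == "__main__":
    assert not unmeasured_objectives(evaluation_plan), "plan is misaligned"
    for objective, (p, mastered) in mastery_report(evaluation_plan, responses).items():
        print(f"{objective}: {p:.0%} correct, mastered={mastered}")
```

Run against the sample responses, the report shows the learner mastering the second objective but not the first, which is exactly the kind of per-objective diagnostic information a criterion-referenced plan is meant to yield.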

Course Implementation

Typically, the implementation process seeks to determine the degree to which the goals of the instruction are being met. It is also at this stage that developmental testing and expert review often take place, during which the merit and worth of the instruction can be established. A wide range of strategies is generally employed to judge whether the instruction is performing to its standards. The questions answered here relate to the final goals of the instruction: What is working at this stage of the process? What obstacles or challenges exist? And what improvements might be made, viewed from a number of different perspectives, including the application of different instructional strategies?

Instructional Strategy: Instruction is a deliberate arrangement of events to facilitate a learner’s acquisition of some outcome-based performance objective (Driscoll, 2004). Research on training and instruction indicates that students who actively participate in the learning process are likely to perform better and remember more than students who remain passive (Hirumi, 2002). The problem is that many current forms of instruction are based solely on prior experience, trial and error, opinions, fads, and/or political agendas; they often fail to take into account what we know about learning and instruction. It is argued that instruction should instead be designed from a combination of prior experience, learning theory, and research (Gagné & Dick, 1983). Gagné’s events of instruction play a major role in developing instructional strategies by providing a systematic approach to creating the learning environment.

Media Selection: Selecting media and materials is crucial to the whole instructional design process. An increasing variety of media is available to support the delivery of instruction, including, but not limited to, a live instructor, print, audio-visual media, computer-based interactive systems, and Internet and World Wide Web resources.

In selecting media, the question is not which technology is best but rather which combination of media is most appropriate given the learner and instructor characteristics, the instructional goals, the environment, the instructional strategy, and the availability of resources (Hirumi, 2002). In addition, it is important not to use a medium simply because it is available; each medium should serve a specific purpose.
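One informal way to operationalize this advice is a weighted decision matrix, sketched below in Python. The criteria, weights, candidate media, and scores are all hypothetical; the point is simply that each medium is judged against the factors named above rather than chosen for availability alone.

```python
# Hypothetical decision-matrix sketch for media selection. Criteria, weights,
# and scores are invented; in practice they would come from the learner,
# context, and goal analyses described earlier.

CRITERIA_WEIGHTS = {
    "learner/instructor fit": 0.30,
    "instructional goal fit": 0.30,
    "environment/strategy fit": 0.20,
    "resource availability": 0.20,
}

# Candidate media scored 1-5 on each criterion (illustrative numbers only).
candidates = {
    "live instructor": {
        "learner/instructor fit": 5, "instructional goal fit": 4,
        "environment/strategy fit": 3, "resource availability": 2,
    },
    "web-based module": {
        "learner/instructor fit": 4, "instructional goal fit": 4,
        "environment/strategy fit": 4, "resource availability": 5,
    },
}

def weighted_score(scores):
    """Combine a medium's criterion scores using the assumed weights."""
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())

# Rank the candidate media by weighted score, highest first.
for medium in sorted(candidates, key=lambda m: weighted_score(candidates[m]), reverse=True):
    print(f"{medium}: {weighted_score(candidates[medium]):.2f}")
```

A matrix like this does not replace designer judgment; it simply documents the trade-offs so that the final media mix can be defended in terms of the analysis rather than habit.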

Evaluation

Generally, evaluation links the lesson objectives to the actual outcomes. It is a way of determining the extent to which the instructional design process has been a success, and this phase can be accomplished in a number of ways.

Formative Evaluation: Teachers and trainers frequently produce, distribute, and implement the initial draft of their instruction. In such instances, many problems occur, and either the instructor is blamed for poor teaching or the learners are blamed for insufficient studying when, in fact, the instructional materials were not well developed. In the 1960s, the problem of untested instructional materials was accentuated by the advent of large curriculum projects (Hirumi, 2002). At that time, “evaluation” was defined as the determination of instructional effectiveness as compared with other products, and studies revealed relatively low student achievement with the new curriculum materials. Cronbach and Scriven (as cited in Dick et al., 2004) concluded that designers needed to expand their concept of evaluation, and they proposed that instructional developers conduct what is now known as “formative evaluation”: the collection of data during the development of instruction in order to improve its effectiveness.

Evaluation Plan & Instrument: Dick et al. (2004) suggest that summative evaluation be conducted in two phases: expert review and field trials. Expert judgments, or reviews, are designed to determine whether currently used instruction or other instructional programs or materials meet content, structure, and organizational needs. Field evaluations should also be conducted to document the effectiveness of the instruction with the target population in its intended setting. Field trials can in turn consist of two components: outcome analysis and management analysis (Dick et al., 2004).

Outcome analysis is conducted to determine the impact of instruction on learners, on the job (transfer), and on the organization (need resolution). Management analysis is conducted to determine instructor and supervisor attitudes related to learner performance, implementation feasibility, and costs. An alternative view of summative evaluation posits four levels of training evaluation: reactions, learning, behavior, and impact (Dick et al., 2004). Reactions refer to learner attitudes toward the instruction; learning measures students’ acquisition of the specified skills and knowledge; behavior examines the extent to which skills and knowledge learned during training are applied in the performance context; and impact refers to the effect those behaviors have on the organization.
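As a rough illustration of how the four levels yield distinct measures, the hypothetical Python sketch below aggregates reaction surveys, pre/post learning scores, supervisor ratings of on-the-job behavior, and one organizational impact figure into a single summative report. All field names, scales, and sample numbers are invented.

```python
# Hypothetical sketch of a four-level summative evaluation report
# (reactions, learning, behavior, impact). All scales, field names, and
# sample data are invented for illustration only.

from dataclasses import dataclass
from statistics import mean

@dataclass
class TrainingEvaluation:
    reaction_ratings: list   # learner attitude survey, 1-5 scale
    pretest_scores: list     # percent correct before instruction
    posttest_scores: list    # percent correct after instruction
    transfer_ratings: list   # supervisor ratings of on-the-job use, 1-5
    impact_metric: float     # e.g., change in an organizational measure

    def summary(self):
        return {
            "reactions (mean rating)": mean(self.reaction_ratings),
            "learning (mean gain)": mean(self.posttest_scores) - mean(self.pretest_scores),
            "behavior (mean transfer rating)": mean(self.transfer_ratings),
            "impact (organizational metric)": self.impact_metric,
        }

evaluation = TrainingEvaluation(
    reaction_ratings=[4.2, 3.8, 4.5],
    pretest_scores=[55.0, 60.0, 48.0],
    posttest_scores=[82.0, 88.0, 75.0],
    transfer_ratings=[3.9, 4.1, 3.5],
    impact_metric=0.12,  # e.g., a 12% improvement in a workplace indicator
)

for level, value in evaluation.summary().items():
    print(f"{level}: {value:.2f}")
```

Keeping the four levels as separate fields makes the point of the framework visible in the data itself: favorable reactions do not imply learning gains, and learning gains do not imply on-the-job transfer or organizational impact.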

Peer Review: Hirumi (2002) observes that a significant difference between a novice and an expert is that an expert can analyze his or her own work, or the work of others, and accurately judge its quality, while a novice cannot. Peer evaluation is designed to help learners develop this expertise and to expose them to alternative examples of how others apply the systematic design process. It also helps learners see where their own work needs improvement and builds good working rapport with others.

Concluding Thoughts

A number of historical assumptions underlie the process of instructional design. All of them, however, illustrate that those involved in developing instruction gain distinct advantages from using the instructional design process. Generally, these advantages include a focus on learning outcomes or performance behaviors and assurance that instruction is efficient and effective. The ISD process also facilitates congruence among performance objectives, instructional activities, and the assessment of learning outcomes.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.

Anglin, G. (1991). Instructional technology: Past, present, and future. Englewood, CO: Libraries Unlimited.

Bassoppo-Moyo, T. C. (1998). The effects of preinstructional activities in enhancing learner recall and conceptual learning of prose materials for preservice teachers in Zimbabwe. International Journal of Instructional Media, 24(3), 1-14.

Dick, W., Carey, L., & Carey, J. O. (2004). The systematic design of instruction. Boston: Allyn & Bacon.

Driscoll, M. P. (2004). Psychology of learning for instruction. Boston: Allyn & Bacon.

Gagné, R. M., & Dick, W. (1983). Instructional psychology. Annual Review of Psychology, 34, 261-295.

Hirumi, A. (2002). The design and sequencing of e-learning: A grounded approach. International Journal on E-learning, 1(1), 19-27.

Oosterhof, A. (2000). Classroom applications of educational measurement. Upper Saddle River, NJ: Prentice Hall.