Landa's theory is concerned with identifying mental processes -- conscious and especially unconscious -- that underlie expert learning, thinking and performance in any area. His methods represent a system of techniques for getting inside the mind of expert learners and performers, enabling one to uncover the processes involved. Once uncovered, they are broken down into their relatively elementary components -- mental operations and knowledge units which can be viewed as a kind of psychological "atoms" and "molecules". Performing a task or solving a problem always requires a certain system of elementary knowledge units and operations.
There are classes of problems for which it is necessary to execute operations in a well structured, predefined sequence (algorithmic problems). For such problem classes, it is possible to formulate a set of precise unambiguous instructions (algorithms) as to what one should do mentally and/or physically in order to successfully solve any problem belonging to that class. There are also classes of problems (creative or heuristic problems) for which precise and unambiguous sets of instructions cannot be formulated. For such classes of problems, it is possible to formulate instructions that contain a certain degree of uncertainty (heuristics). Landa also describes semi-algorithmic and semi-heuristic problems, processes and instructions.
The theory suggests that all cognitive activities can be analyzed into operations of an algorithmic, semi-algorithmic, heuristic, or semi-heuristic nature. Once discovered, these operations and their systems can serve as the basis for instructional strategies and methods. The theory specifies that students ought to be taught not only knowledge but the algorithms and heuristics of experts as well. They also have to be taught how to discover algorithms and heuristics on their own. Special emphasis is placed on teaching students cognitive operations, algorithms and heuristics which make up general methods of thinking (i.e., intelligence).
With respect to sequencing of instruction, Landa proposes a number of strategies, the most important of which is the "snowball" method. This method applies to teaching a system of cognitive operations by teaching the first operation, then the second which is practiced with the first, and so on.
While this is a general theory of learning, it is illustrated primarily in the context of mathematics and foreign language instruction. In recent years, Landa has applied his theory to training settings under the name "Landamatics" (Educational Technology, 1993).
Landa (1976) provides the following example of an algorithm for teaching a foreign speaker how to choose among the English verbs "to offer", "to suggest" and "to propose":
Check to see whether something that one presents to another person is a tangible object or viewed as tangible. If yes, use "offer". If no, it is an idea about some action to be performed. Check to see if this idea is presented formally. If yes, use "propose", otherwise use "suggest".
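Because this algorithm is literally a two-step decision procedure, it can be sketched as a small function. The rendering below is for illustration only; the function name and boolean parameters are assumptions, not Landa's notation:

```python
def choose_verb(tangible: bool, formal: bool = False) -> str:
    """Choose among 'offer', 'suggest', 'propose' following Landa's algorithm."""
    # Step 1: is the thing presented a tangible object (or viewed as tangible)?
    if tangible:
        return "offer"
    # Step 2: it is an idea about an action; is it presented formally?
    if formal:
        return "propose"
    return "suggest"
```

Each `if` corresponds to one condition-checking action in the algorithm, which is what the snowball method would teach and practice one at a time.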
Applying the snowball method would involve teaching the student the action of checking the first condition and then the action of checking the second condition followed by practice that requires both conditions to be checked. Landa explains that after sufficient practice the application of the algorithm would become automatic and unconscious.
1. It is more important to teach algo-heuristic processes to students than prescriptions (knowledge of processes); on the other hand, teachers need to know both.
2. Processes can be taught through prescriptions and demonstrations of operations.
3. Teaching students how to discover processes on their own is more valuable than providing processes already formulated.
4. Break processes down into elementary operations of size and length suitable for each student (individualization of instruction).
Educational Technology (1993). Landamatics ten years later. Educational Technology, 33(6), 7-18.
Landa, L. (1974). Algorithmization in Learning and Instruction. Englewood Cliffs, NJ: Educational Technology Publications.
Landa, L. (1976). Instructional Regulation and Control: Cybernetics, Algorithmization, and Heuristics in Education. Englewood Cliffs, NJ: Educational Technology Publications.
Knowles' theory of andragogy is an attempt to develop a theory specifically for adult learning. Knowles emphasizes that adults are self-directed and expect to take responsibility for decisions. Adult learning programs must accommodate this fundamental aspect.
Andragogy makes the following assumptions about the design of learning: (1) Adults need to know why they need to learn something, (2) Adults need to learn experientially, (3) Adults approach learning as problem-solving, and (4) Adults learn best when the topic is of immediate value.
In practical terms, andragogy means that instruction for adults needs to focus more on the process and less on the content being taught. Strategies such as case studies, role playing, simulations, and self-evaluation are most useful. Instructors adopt a role of facilitator or resource rather than lecturer or grader.
Andragogy applies to any form of adult learning and has been used extensively in the design of organizational training programs (especially for "soft skill" domains such as management development).
Knowles (1984, Appendix D) provides an example of applying andragogy principles to the design of personal computer training:
1. There is a need to explain why specific things are being taught (e.g., certain commands, functions, operations, etc.)
2. Instruction should be task-oriented instead of memorization -- learning activities should be in the context of common tasks to be performed.
3. Instruction should take into account the wide range of different backgrounds of learners; learning materials and activities should allow for different levels/types of previous experience with computers.
4. Since adults are self-directed, instruction should allow learners to discover things for themselves, providing guidance and help when mistakes are made.
1. Adults need to be involved in the planning and evaluation of their instruction.
2. Experience (including mistakes) provides the basis for learning activities.
3. Adults are most interested in learning subjects that have immediate relevance to their job or personal life.
4. Adult learning is problem-centered rather than content-oriented.
Knowles, M. (1975). Self-Directed Learning. Chicago: Follet.
Knowles, M. (1984). The Adult Learner: A Neglected Species (3rd Ed.). Houston, TX: Gulf Publishing.
Knowles, M. (1984). Andragogy in Action. San Francisco: Jossey-Bass.
Anchored instruction is a major paradigm for technology-based learning that has been developed by the Cognition & Technology Group at Vanderbilt (CTGV) under the leadership of John Bransford. While many people have contributed to the theory and research of anchored instruction, Bransford is the principal spokesperson and hence the theory is attributed to him.
The initial focus of the work was on the development of interactive videodisc tools that encouraged students and teachers to pose and solve complex, realistic problems. The video materials serve as "anchors" (macro-contexts) for all subsequent learning and instruction. As explained by CTGV (1993, p. 52): "The design of these anchors was quite different from the design of videos that were typically used in education...our goal was to create interesting, realistic contexts that encouraged the active construction of knowledge by learners. Our anchors were stories rather than lectures and were designed to be explored by students and teachers." The use of interactive videodisc technology makes it possible for students to easily explore the content.
The primary application of anchored instruction has been to elementary reading, language arts and mathematics skills. The CTGV has developed a set of interactive videodisc programs called the "Jasper Woodbury Problem Solving Series". These programs involve adventures in which mathematical concepts are used to solve problems. However, the anchored instruction paradigm is based upon a general model of problem-solving (Bransford & Stein, 1993).
One of the early anchored instruction activities involved the use of the film, "Young Sherlock Holmes" in interactive videodisc form. Students were asked to examine the film in terms of causal connections, motives of the characters, and authenticity of the settings in order to understand the nature of life in Victorian England. The film provides the anchor for an understanding of story-telling and a particular historical era.
1. Learning and teaching activities should be designed around an "anchor" which should be some sort of case study or problem situation.
2. Curriculum materials should allow exploration by the learner (e.g., interactive videodisc programs).
This theory suggests that learning happens best under conditions that are aligned with human cognitive architecture. The structure of human cognitive architecture, while not known precisely, is discernible through the results of experimental research. Recognizing George Miller's research showing that short term memory is limited in the number of elements it can contain simultaneously, Sweller builds a theory that treats schemas, or combinations of elements, as the cognitive structures that make up an individual's knowledge base (Sweller, 1988).
The contents of long term memory are "sophisticated structures that permit us to perceive, think, and solve problems," rather than a group of rote learned facts. These structures, known as schemas, are what permit us to treat multiple elements as a single element. They are the cognitive structures that make up the knowledge base (Sweller, 1988). Schemas are acquired over a lifetime of learning, and may have other schemas contained within themselves.
The difference between an expert and a novice is that a novice hasn't acquired the schemas of an expert. Learning requires a change in the schematic structures of long term memory and is demonstrated by performance that progresses from clumsy, error-prone, slow and difficult to smooth and effortless. The change in performance occurs because as the learner becomes increasingly familiar with the material, the cognitive characteristics associated with the material are altered so that it can be handled more efficiently by working memory.
From an instructional perspective, information contained in instructional material must first be processed by working memory. For schema acquisition to occur, instruction should be designed to reduce working memory load. Cognitive load theory is concerned with techniques for reducing working memory load in order to facilitate the changes in long term memory associated with schema acquisition.
Sweller's theories are best applied in the area of instructional design of cognitively complex or technically challenging material. His concentration is on the reasons that people have difficulty learning material of this nature. Cognitive load theory has many implications in the design of learning materials which must, if they are to be effective, keep cognitive load of learners at a minimum during the learning process. While in the past the theory has been applied primarily to technical areas, it is now being applied to more language-based discursive areas.
In combining an illustration of blood flow through the heart with text and labels, the separation of the text from the illustration forces the learner to look back and forth between the specified parts of the illustration and the text. If the diagram is self-explanatory, research data indicates that processing the text unnecessarily increases working memory load. If the information could be replaced with numbered arrows in the labeled illustration, the learner could concentrate better on learning the content from the illustration alone. Alternatively, if the text is essential to intelligibility, placing it on the diagram rather than separated will reduce cognitive load associated with searching for relations between the text and the diagram (Sweller, 1999).
Specific recommendations relative to the design of instructional material include:
1. Change problem solving methods to avoid means-ends approaches that impose a heavy working memory load, by using goal-free problems or worked examples.
2. Eliminate the working memory load associated with having to mentally integrate several sources of information by physically integrating those sources of information.
3. Eliminate the working memory load associated with unnecessarily processing repetitive information by reducing redundancy.
4. Increase working memory capacity by using auditory as well as visual information under conditions where both sources of information are essential (i.e. non-redundant) to understanding.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257-285.
Sweller, J. (1999). Instructional Design in Technical Areas. Camberwell, Victoria, Australia: Australian Council for Educational Research.
This article was provided by Howard Soloman.
Conditions of Learning (R. Gagne)
This theory stipulates that there are several different types or levels of learning. The significance of these classifications is that each different type requires different types of instruction. Gagne identifies five major categories of learning: verbal information, intellectual skills, cognitive strategies, motor skills and attitudes. Different internal and external conditions are necessary for each type of learning. For example, for cognitive strategies to be learned, there must be a chance to practice developing new solutions to problems; to learn attitudes, the learner must be exposed to a credible role model or persuasive arguments.
Gagne suggests that learning tasks for intellectual skills can be organized in a hierarchy according to complexity: stimulus recognition, response generation, procedure following, use of terminology, discriminations, concept formation, rule application, and problem solving. The primary significance of the hierarchy is to identify prerequisites that should be completed to facilitate learning at each level. Prerequisites are identified by doing a task analysis of a learning/training task. Learning hierarchies provide a basis for the sequencing of instruction.
In addition, the theory outlines nine instructional events and corresponding cognitive processes:
(1) gaining attention (reception) (2) informing learners of the objective (expectancy) (3) stimulating recall of prior learning (retrieval) (4) presenting the stimulus (selective perception) (5) providing learning guidance (semantic encoding) (6) eliciting performance (responding) (7) providing feedback (reinforcement) (8) assessing performance (retrieval) (9) enhancing retention and transfer (generalization).
These events should satisfy or provide the necessary conditions for learning and serve as the basis for designing instruction and selecting appropriate media (Gagne, Briggs & Wager, 1992).
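Since the nine events form an ordered checklist, each paired with a cognitive process, a lesson can be sketched as a simple data structure. This is an illustrative rendering, not part of Gagne's formulation; the `lesson_plan` helper is hypothetical:

```python
# The nine instructional events, each paired with the cognitive process
# it is meant to engage (after Gagne, Briggs & Wager, 1992).
EVENTS = [
    ("gaining attention", "reception"),
    ("informing learners of the objective", "expectancy"),
    ("stimulating recall of prior learning", "retrieval"),
    ("presenting the stimulus", "selective perception"),
    ("providing learning guidance", "semantic encoding"),
    ("eliciting performance", "responding"),
    ("providing feedback", "reinforcement"),
    ("assessing performance", "retrieval"),
    ("enhancing retention and transfer", "generalization"),
]

def lesson_plan(activities):
    """Pair each of the nine events with a concrete classroom activity."""
    assert len(activities) == len(EVENTS), "a full lesson covers all nine events"
    return [(event, process, act)
            for (event, process), act in zip(EVENTS, activities)]
```

Designing a lesson then amounts to supplying one concrete activity per event, as in the equilateral-triangle example that follows.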
While Gagne's theoretical framework covers all aspects of learning, the focus of the theory is on intellectual skills. The theory has been applied to the design of instruction in all domains (Gagne & Driscoll, 1988). In its original formulation (Gagne, 1962), special attention was given to military training settings. Gagne (1987) addresses the role of instructional technology in learning.
The following example illustrates a teaching sequence corresponding to the nine instructional events for the objective, Recognize an equilateral triangle:
1. Gain attention - show variety of computer generated triangles
2. Identify objective - pose question: "What is an equilateral triangle?"
3. Recall prior learning - review definitions of triangles
4. Present stimulus - give definition of equilateral triangle
5. Guide learning - show example of how to create equilateral
6. Elicit performance - ask students to create 5 different examples
7. Provide feedback - check all examples as correct/incorrect
8. Assess performance - provide scores and remediation
9. Enhance retention/transfer - show pictures of objects and ask students to identify equilaterals
Gagne (1985, chapter 12) provides examples of events for each category of learning outcomes.
1. Different instruction is required for different learning outcomes.
2. Events of learning operate on the learner in ways that constitute the conditions of learning.
3. The specific operations that constitute instructional events are different for each different type of learning outcome.
4. Learning hierarchies define what intellectual skills are to be learned and a sequence of instruction.
Gagne, R. (1962). Military training and principles of learning. American Psychologist, 17, 263-276.
Gagne, R. (1985). The Conditions of Learning (4th ed.). New York: Holt, Rinehart & Winston.
Gagne, R. (1987). Instructional Technology Foundations. Hillsdale, NJ: Lawrence Erlbaum Assoc.
Gagne, R. & Driscoll, M. (1988). Essentials of Learning for Instruction (2nd Ed.). Englewood Cliffs, NJ: Prentice-Hall.
Gagne, R., Briggs, L. & Wager, W. (1992). Principles of Instructional Design (4th Ed.). Fort Worth, TX: HBJ College Publishers.
The learning theory of Thorndike represents the original S-R framework of behavioral psychology: Learning is the result of associations forming between stimuli and responses. Such associations or "habits" become strengthened or weakened by the nature and frequency of the S-R pairings. The paradigm for S-R theory was trial and error learning in which certain responses come to dominate others due to rewards. The hallmark of connectionism (like all behavioral theory) was that learning could be adequately explained without referring to any unobservable internal states.
Thorndike's theory consists of three primary laws: (1) law of effect - responses to a situation which are followed by a rewarding state of affairs will be strengthened and become habitual responses to that situation, (2) law of readiness - a series of responses can be chained together to satisfy some goal which will result in annoyance if blocked, and (3) law of exercise - connections become strengthened with practice and weakened when practice is discontinued. A corollary of the law of effect was that responses that reduce the likelihood of achieving a rewarding state (i.e., punishments, failures) will decrease in strength.
The theory suggests that transfer of learning depends upon the presence of identical elements in the original and new learning situations; i.e., transfer is always specific, never general. In later versions of the theory, the concept of "belongingness" was introduced; connections are more readily established if the person perceives that stimuli or responses go together (c.f. Gestalt principles). Another concept introduced was "polarity" which specifies that connections occur more easily in the direction in which they were originally formed than the opposite. Thorndike also introduced the "spread of effect" idea, i.e., rewards affect not only the connection that produced them but temporally adjacent connections as well.
Connectionism was meant to be a general theory of learning for animals and humans. Thorndike was especially interested in the application of his theory to education including mathematics (Thorndike, 1922), spelling and reading (Thorndike, 1921), measurement of intelligence (Thorndike et al., 1927) and adult learning (Thorndike at al., 1928).
The classic example of Thorndike's S-R theory was a cat learning to escape from a "puzzle box" by pressing a lever inside the box. After much trial and error behavior, the cat learns to associate the puzzle box situation (S) with the lever-pressing that opens the door (R). This S-R connection is established because it results in a satisfying state of affairs (escape from the box). The connection was strengthened because the S-R pairing occurred many times (law of exercise), was rewarded (law of effect), and formed a single action sequence (law of readiness).
1. Learning requires both practice and rewards (laws of exercise/effect).
2. A series of S-R connections can be chained together if they belong to the same action sequence (law of readiness).
3. Transfer of learning occurs because the new situation shares identical elements with previously encountered situations.
4. Intelligence is a function of the number of connections learned.
Thorndike, E. (1913). Educational Psychology: The Psychology of Learning. New York: Teachers College Press.
Thorndike, E. (1921). The Teacher's Word Book. New York: Teachers College.
Thorndike, E. (1922). The Psychology of Arithmetic. New York: Macmillan.
Thorndike, E. (1932). The Fundamentals of Learning. New York: Teachers College Press.
Thorndike, E. et al. (1927). The Measurement of Intelligence. New York: Teachers College Press.
Thorndike, E. et al. (1928). Adult Learning. New York: Macmillan.
A major theme in the theoretical framework of Bruner is that learning is an active process in which learners construct new ideas or concepts based upon their current/past knowledge. The learner selects and transforms information, constructs hypotheses, and makes decisions, relying on a cognitive structure to do so. Cognitive structure (i.e., schema, mental models) provides meaning and organization to experiences and allows the individual to "go beyond the information given".
As far as instruction is concerned, the instructor should try to encourage students to discover principles by themselves. The instructor and student should engage in an active dialog (i.e., Socratic learning). The task of the instructor is to translate information to be learned into a format appropriate to the learner's current state of understanding. Curriculum should be organized in a spiral manner so that students continually build upon what they have already learned.
Bruner (1966) states that a theory of instruction should address four major aspects: (1) predisposition towards learning, (2) the ways in which a body of knowledge can be structured so that it can be most readily grasped by the learner, (3) the most effective sequences in which to present material, and (4) the nature and pacing of rewards and punishments. Good methods for structuring knowledge should result in simplifying, generating new propositions, and increasing the manipulation of information.
In his more recent work, Bruner (1986, 1990, 1996) has expanded his theoretical framework to encompass the social and cultural aspects of learning as well as the practice of law.
Bruner's constructivist theory is a general framework for instruction based upon the study of cognition. Much of the theory is linked to child development research (especially Piaget). The ideas outlined in Bruner (1960) originated from a conference focused on science and math learning. Bruner illustrated his theory in the context of mathematics and social science programs for young children (see Bruner, 1973). The original development of the framework for reasoning processes is described in Bruner, Goodnow & Austin (1956). Bruner (1983) focuses on language learning in young children.
"The concept of prime numbers appears to be more readily grasped when the child, through construction, discovers that certain handfuls of beans cannot be laid out in completed rows and columns. Such quantities have either to be laid out in a single file or in an incomplete row-column design in which there is always one extra or one too few to fill the pattern. These patterns, the child learns, happen to be called prime. It is easy for the child to go from this step to the recognition that a multiple table , so called, is a record sheet of quantities in completed mutiple rows and columns. Here is factoring, multiplication and primes in a construction that can be visualized."
1. Instruction must be concerned with the experiences and contexts that make the student willing and able to learn (readiness).
2. Instruction must be structured so that it can be easily grasped by the student (spiral organization).
3. Instruction should be designed to facilitate extrapolation and/or fill in the gaps (going beyond the information given).
Bruner, J. (1960). The Process of Education. Cambridge, MA: Harvard University Press.
Bruner, J. (1966). Toward a Theory of Instruction. Cambridge, MA: Harvard University Press.
Bruner, J. (1973). Going Beyond the Information Given. New York: Norton.
Bruner, J. (1983). Child's Talk: Learning to Use Language. New York: Norton.
Bruner, J. (1986). Actual Minds, Possible Worlds. Cambridge, MA: Harvard University Press.
Bruner, J. (1990). Acts of Meaning. Cambridge, MA: Harvard University Press.
Bruner, J. (1996). The Culture of Education. Cambridge, MA: Harvard University Press.
Bruner, J., Goodnow, J., & Austin, G. (1956). A Study of Thinking. New York: Wiley.
Cross (1981) presents the Characteristics of Adults as Learners (CAL) model in the context of her analysis of lifelong learning programs. The model attempts to integrate other theoretical frameworks for adult learning such as andragogy (Knowles), experiential learning (Rogers), and lifespan psychology.
The CAL model consists of two classes of variables: personal characteristics and situational characteristics. Personal characteristics include: aging, life phases, and developmental stages. These three dimensions have different characteristics as far as lifelong learning is concerned. Aging results in the deterioration of certain sensory-motor abilities (e.g., eyesight, hearing, reaction time) while intelligence abilities (e.g., decision-making skills, reasoning, vocabulary) tend to improve. Life phases and developmental stages (e.g., marriage, job changes, retirement) involve a series of plateaus and transitions which may or may not be directly related to age.
Situational characteristics consist of part-time versus full-time learning, and voluntary versus compulsory learning. The administration of learning (i.e., schedules, locations, procedures) is strongly affected by the first variable; the second pertains to the self-directed, problem-centered nature of most adult learning.
The CAL model is intended to provide guidelines for adult education programs. There is no known research to support the model.
Consider three adults: a nursing student, a new parent, and a middle-aged social worker about to take a course on child development. Each of these individuals differs in age (20, 30, 40) and life/developmental phases (adolescent/searching, young/striving, mature/stable). They also differ in terms of situational characteristics: for the nursing student, the course is full-time and compulsory; for the parent, it is part-time and optional; for the social worker, it is part-time but required. According to the CAL model, a different learning program might be necessary for these three individuals to accommodate the differences in personal and situational characteristics.
1. Adult learning programs should capitalize on the experience of participants.
2. Adult learning programs should adapt to the aging limitations of the participants.
3. Adults should be challenged to move to increasingly advanced stages of personal development.
4. Adults should have as much choice as possible in the availability and organization of learning programs.
Cross, K.P. (1981). Adults as Learners. San Francisco: Jossey-Bass.
Cross, K.P. (1976). Accent on Learning. San Francisco: Jossey-Bass.
Guthrie's contiguity theory specifies that "a combination of stimuli which has accompanied a movement will on its recurrence tend to be followed by that movement". According to Guthrie, all learning was a consequence of association between a particular stimulus and response. Furthermore, Guthrie argued that stimuli and responses affect specific sensory-motor patterns; what is learned are movements, not behaviors.
In contiguity theory, rewards or punishment play no significant role in learning since they occur after the association between stimulus and response has been made. Learning takes place in a single trial (all or none). However, since each stimulus pattern is slightly different, many trials may be necessary to produce a general response. One interesting principle that arises from this position is called "postremity" which specifies that we always learn the last thing we do in response to a specific stimulus situation.
Contiguity theory suggests that forgetting is due to interference rather than the passage of time; stimuli become associated with new responses. Previous conditioning can also be changed by being associated with inhibiting responses such as fear or fatigue. The role of motivation is to create a state of arousal and activity which produces responses that can be conditioned.
Contiguity theory is intended to be a general theory of learning, although most of the research supporting the theory was done with animals. Guthrie did apply his framework to personality disorders (e.g. Guthrie, 1938).
The classic experimental paradigm for Contiguity theory is cats learning to escape from a puzzle box (Guthrie & Horton, 1946). Guthrie used a glass paneled box that allowed him to photograph the exact movements of cats. These photographs showed that cats learned to repeat the same sequence of movements associated with the preceding escape from the box. Improvement comes about because irrelevant movements are unlearned or not included in successive associations.
1. In order for conditioning to occur, the organism must actively respond (i.e., do things).
2. Since learning involves the conditioning of specific movements, instruction must present very specific tasks.
3. Exposure to many variations in stimulus patterns is desirable in order to produce a generalized response.
4. The last response in a learning situation should be correct since it is the one that will be associated.
Guthrie, E.R. (1930). Conditioning as a principle of learning. Psychological Review, 37, 412-428.
Guthrie, E.R. (1935). The Psychology of Learning. New York: Harper.
Guthrie, E.R. (1938). The Psychology of Human Conflict. New York: Harper.
Guthrie, E.R. & Horton, G.P. (1946). Cats in a Puzzle Box. New York: Rinehart.
Conversation Theory (G. Pask)
The Conversation Theory developed by G. Pask originated from a cybernetics framework and attempts to explain learning in both living organisms and machines. The fundamental idea of the theory was that learning occurs through conversations about a subject matter which serve to make knowledge explicit. Conversations can be conducted at a number of different levels: natural language (general discussion), object languages (for discussing the subject matter), and metalanguages (for talking about learning/language).
In order to facilitate learning, Pask argued that subject matter should be represented in the form of entailment structures which show what is to be learned. Entailment structures exist in a variety of different levels depending upon the extent of relationships displayed (e.g., super/subordinate concepts, analogies).
The critical method of learning according to conversation theory is "teachback" in which one person teaches another what they have learned. Pask identified two different types of learning strategies: serialists who progress through an entailment structure in a sequential fashion and holists who look for higher order relations.
Conversation theory applies to the learning of any subject matter. Pask (1975) provides an extensive discussion of the theory applied to the learning of statistics (probability).
Pask (1975, Chapter 9) discusses the application of conversation theory to a medical diagnosis task (diseases of the thyroid). In this case, the entailment structure represents relationships between pathological conditions of the thyroid and treatment/tests. The student is encouraged to learn these relationships by changing the parameter values of a variable (e.g., iodine intake level) and investigating the effects.
1. To learn a subject matter, students must learn the relationships among the concepts.
2. Explicit explanation or manipulation of the subject matter facilitates understanding (e.g., use of teachback technique).
3. Individuals differ in their preferred manner of learning relationships (serialists versus holists).
Pask, G. (1975). Conversation, Cognition, and Learning. New York: Elsevier.
Criterion Referenced Instruction (R. Mager)
The Criterion Referenced Instruction (CRI) framework developed by Robert Mager is a comprehensive set of methods for the design and delivery of training programs. Some of the critical aspects include: (1) goal/task analysis -- to identify what needs to be learned, (2) performance objectives -- exact specification of the outcomes to be accomplished and how they are to be evaluated (the criterion), (3) criterion referenced testing -- evaluation of learning in terms of the knowledge/skills specified in the objectives, (4) development of learning modules tied to specific objectives.
Training programs developed in CRI format tend to be self-paced courses involving a variety of different media (e.g., workbooks, videotapes, small group discussions, computer-based instruction). Students learn at their own pace and take tests to determine if they have mastered a module. A course manager administers the program and helps students with problems.
CRI is based upon the ideas of mastery learning and performance-oriented instruction. It also incorporates many of the ideas found in Gagne's theory of learning (e.g., task hierarchies, objectives) and is compatible with most theories of adult learning (e.g., Knowles, Rogers) because of its emphasis on learner initiative and self-management.
Criterion referenced instruction is applicable to any form of learning; however, it has been applied most extensively in technical training including troubleshooting.
CRI has been applied to a workshop that Mager gives about CRI. The workshop consists of a series of modules (mostly print materials) with well-defined objectives, practice exercises, and mastery tests. Participants have some freedom to choose the order in which they complete the modules, provided they satisfy the prerequisites shown on the course map. For example, in one module on Objectives, the student must learn the three primary components of an objective, recognize correctly formed objectives (practice exercises), and be able to draft correct objectives for specified tasks. This module has one pre-requisite and is the pre-requisite to most other modules in the course.
1. Instructional objectives are derived from job performance and reflect the competencies (knowledge/skills) that need to be learned.
2. Students study and practice only those skills not yet mastered to the level required by the objectives.
3. Students are given opportunities to practice each objective and obtain feedback about the quality of their performance.
4. Students should receive repeated practice in skills that are used often or are difficult to learn.
5. Students are free to sequence their own instruction within the constraints imposed by the pre-requisites and progress is controlled by their own competence (mastery of objectives).
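The prerequisite constraint in principle 5 can be sketched as a simple check over a hypothetical course map (the module names below are invented for illustration): a student's chosen sequence is valid only when every module's prerequisites appear earlier in it.

```python
# Hypothetical CRI course map: module -> set of prerequisite modules.
course_map = {
    "Course Orientation": set(),
    "Objectives": {"Course Orientation"},
    "Criterion Tests": {"Objectives"},
    "Module Design": {"Objectives"},
}

def valid_sequence(sequence, prerequisites):
    """A sequence is valid if each module is attempted only after
    all of its prerequisites have been completed."""
    completed = set()
    for module in sequence:
        if not prerequisites[module] <= completed:
            return False
        completed.add(module)
    return True
```

Within this constraint the student is free to reorder: here "Criterion Tests" and "Module Design" may be taken in either order, but neither before "Objectives".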
Mager, R. (1975). Preparing Instructional Objectives (2nd Edition). Belmont, CA: Lake Publishing Co.
Mager, R. & Pipe, P. (1984). Analyzing Performance Problems, or You Really Oughta Wanna (2nd Edition). Belmont, CA: Lake Publishing Co.
Mager, R. (1988). Making Instruction Work. Belmont, CA: Lake Publishing Co.
Double Loop Learning (C. Argyris)
Argyris (1976) proposes double loop learning theory which pertains to learning to change underlying values and assumptions. The focus of the theory is on solving problems that are complex and ill-structured and which change as problem-solving advances.
Double loop theory is based upon a "theory of action" perspective outlined by Argyris & Schon (1974). This perspective examines reality from the point of view of human beings as actors. Changes in values, behavior, leadership, and helping others, are all part of, and informed by, the actors' theory of action. An important aspect of the theory is the distinction between an individual's espoused theory and their "theory-in-use" (what they actually do); bringing these two into congruence is a primary concern of double loop learning. Typically, interaction with others is necessary to identify the conflict.
There are four basic steps in the action theory learning process: (1) discovery of espoused theory and theory-in-use, (2) invention of new meanings, (3) production of new actions, and (4) generalization of results. Double loop learning involves applying each of these steps to itself. In double loop learning, assumptions underlying current views are questioned and hypotheses about behavior tested publicly. The end result of double loop learning should be increased effectiveness in decision-making and better acceptance of failures and mistakes.
In recent years, Argyris has focused on a methodology for implementing action theory on a broad scale called "action science" (see Argyris, Putnam & Smith, 1985) and the role of learning at the organizational level (e.g., Argyris, 1993; Schon & Argyris, 1996).
Double loop learning is a theory of personal change that is oriented towards professional education, especially leadership in organizations. It has been applied in the context of management development.
Here are two examples from Argyris (1976, p. 16). A teacher who believes that she has a class of "stupid" students will communicate expectations such that the children behave stupidly. She confirms her theory by asking them questions and eliciting stupid answers, or puts them in situations where they behave stupidly. The theory-in-use is self-fulfilling. Similarly, a manager who believes his subordinates are passive, dependent and require authoritarian guidance rewards dependent and submissive behavior. He tests his theory by posing challenges for employees and eliciting dependent outcomes. In order to break this congruency, the teacher or manager would need to engage in open loop learning in which they deliberately disconfirm their theory-in-use.
1. Effective problem-solving about interpersonal or technical issues requires frequent public testing of theories-in-use.
2. Double loop learning requires learning situations in which participants can examine and experiment with their theories of action.
Related web sites:
While not directly about Argyris or his theory, there are many web sites that focus on management development and organizational learning which are related to his work. Relevant resources include the Society for Organizational Learning and the web pages of Yogesh Malhotra.
Argyris, C. (1976). Increasing Leadership Effectiveness. New York: Wiley.
Argyris, C. (1993). On Organizational Learning. Cambridge, MA: Blackwell.
Argyris, C. & Schon, D. (1974). Theory in Practice. San Francisco: Jossey-Bass.
Argyris, C. (1982). Reasoning, Learning and Action: Individual and Organizational. San Francisco: Jossey-Bass.
Argyris, C. (1993). Knowledge for Action. San Francisco: Jossey-Bass.
Argyris, C., Putnam, R. & Smith, D. (1985). Action Science. San Francisco: Jossey-Bass.
Drive Reduction Theory (C. Hull)
Hull developed a version of behaviorism in which the stimulus (S) affects the organism (O) and the resulting response (R) depends upon characteristics of both O and S. In other words, Hull was interested in studying intervening variables that affected behavior such as initial drive, incentives, inhibitors, and prior training (habit strength). Like other forms of behavior theory, reinforcement is the primary factor that determines learning. However, in Hull's theory, drive reduction or need satisfaction plays a much more important role in behavior than in other frameworks (i.e., Thorndike, Skinner).
Hull's theoretical framework consisted of many postulates stated in mathematical form. They include: (1) organisms possess a hierarchy of needs which are aroused under conditions of stimulation and drive, (2) habit strength increases with activities that are associated with primary or secondary reinforcement, (3) habit strength aroused by a stimulus other than the one originally conditioned depends upon the closeness of the second stimulus in terms of discrimination thresholds, (4) stimuli associated with the cessation of a response become conditioned inhibitors, (5) the more the effective reaction potential exceeds the reaction threshold, the shorter the latency of response. As these postulates indicate, Hull proposed many types of variables that accounted for generalization, motivation, and variability (oscillation) in learning.
One of the most important concepts in Hull's theory was the habit strength hierarchy: for a given stimulus, an organism can respond in a number of ways. The likelihood of a specific response has a probability which can be changed by reward and is affected by various other variables (e.g., inhibition). In some respects, habit strength hierarchies resemble components of cognitive theories such as schema and production systems.
Hull's theory is meant to be a general theory of learning. Most of the research underlying the theory was done with animals, except for Hull et al. (1940) which focused on verbal learning. Miller & Dollard (1941) represents an attempt to apply the theory to a broader range of learning phenomena. As an interesting aside, Hull began his career researching hypnosis, an area that landed him in some controversy at Yale (Hull, 1933).
Here is an example described by Miller & Dollard (1941): A six year old girl who is hungry and wants candy is told that there is candy hidden under one of the books in a bookcase. The girl begins to pull out books in a random manner until she finally finds the correct book (210 seconds). She is sent out of the room and a new piece of candy is hidden under the same book. In her next search, she is much more directed and finds the candy in 86 seconds. By the ninth repetition of this experiment, the girl finds the candy immediately (2 seconds). The girl exhibited a drive for the candy and looking under books represented her responses to reduce this drive. When she eventually found the correct book, this particular response was rewarded, forming a habit. On subsequent trials, the strength of this habit was increased until it became a single stimulus-response connection in this setting.
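The girl's learning curve can be loosely modeled with a negatively accelerated habit-strength growth function. This is only an illustrative sketch with assumed parameter values and an invented latency mapping, not Hull's actual quantitative postulates: habit strength grows toward a maximum with each reinforced trial, and response latency falls as habit strength rises.

```python
import math

def habit_strength(trial, maximum=1.0, rate=0.5):
    """Habit strength grows with each reinforced trial: negatively
    accelerated growth toward an asymptote (illustrative parameters)."""
    return maximum * (1 - math.exp(-rate * trial))

def latency(strength, scale=200.0, floor=2.0):
    """Illustrative only: search time shrinks toward a floor as
    habit strength approaches its maximum."""
    return floor + scale * (1 - strength)

for trial in (1, 3, 9):
    s = habit_strength(trial)
    print(f"trial {trial}: habit strength {s:.2f}, latency ~{latency(s):.0f}s")
```

With these assumed parameters the simulated latencies fall steeply over the early trials and flatten near the floor, mirroring the 210-second to 2-second pattern in the candy experiment.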
1. Drive is essential in order for responses to occur (i.e., the student must want to learn).
2. Stimuli and responses must be detected by the organism in order for conditioning to occur (i.e., the student must be attentive).
3. A response must be made in order for conditioning to occur (i.e., the student must be active).
4. Conditioning only occurs if the reinforcement satisfies a need (i.e., the learning must satisfy the learner's wants).
Elaboration Theory (C. Reigeluth)
According to elaboration theory, instruction should be organized in increasing order of complexity for optimal learning. For example, when teaching a procedural task, the simplest version of the task is presented first; subsequent lessons present additional versions until the full range of tasks are taught. In each lesson, the learner should be reminded of all versions taught so far (summary/synthesis). A key idea of elaboration theory is that the learner needs to develop a meaningful context into which subsequent ideas and skills can be assimilated.
Elaboration theory proposes seven major strategy components: (1) an elaborative sequence, (2) learning prerequisite sequences, (3) summary, (4) synthesis, (5) analogies, (6) cognitive strategies, and (7) learner control. The first component is the most critical as far as elaboration theory is concerned. The elaborative sequence is defined as a simple to complex sequence in which the first lesson epitomizes (rather than summarizes or abstracts) the ideas and skills that follow. Epitomizing should be done on the basis of a single type of content (concepts, procedures, principles), although two or more types may be elaborated simultaneously, and should involve the learning of just a few fundamental or representative ideas or skills at the application level.
It is claimed that the elaboration approach results in the formation of more stable cognitive structures and therefore better retention and transfer, increased learner motivation through the creation of meaningful learning contexts, and the provision of information about the content that allows informed learner control. Elaboration theory is an extension of the work of Ausubel (advance organizers) and Bruner (spiral curriculum).
Elaboration theory applies to the design of instruction for the cognitive domain. The theoretical framework has been applied to a number of settings in higher education and training (English & Reigeluth, 1996; Reigeluth, 1992). Hoffman (1997) considers the relationship between elaboration theory and hypermedia.
Reigeluth (1983) provides the following summary of a theoretical epitome for an introductory course in economics:
1. Organizing content (principles): the law of supply and demand
a) An increase in price causes an increase in the quantity supplied and a decrease in the quantity demanded.
b) A decrease in price causes a decrease in the quantity supplied and an increase in the quantity demanded.
Practically all principles of economics can be viewed as elaborations of the law of supply and demand, including monopoly, regulation, price fixing, and planned economies.
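The epitome itself can be worked through with a toy linear model (the curves and numbers below are invented for illustration): quantity supplied rises with price, quantity demanded falls, and the market clears where the two are equal.

```python
def equilibrium(supply_slope, supply_intercept, demand_slope, demand_intercept):
    """Solve a + b*P = c - d*P for the market-clearing price and
    quantity, where supply is Q = a + b*P and demand is Q = c - d*P."""
    price = (demand_intercept - supply_intercept) / (supply_slope + demand_slope)
    quantity = supply_intercept + supply_slope * price
    return price, quantity

# Illustrative curves: supply Q = 2 + 3P, demand Q = 20 - 1P.
price, quantity = equilibrium(3, 2, 1, 20)
print(price, quantity)  # 4.5 15.5
```

Raising the demand intercept (a surge in demand) moves the equilibrium price up, which is exactly the qualitative behavior stated in (a) and (b) above.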
1. Instruction will be more effective if it follows an elaboration strategy, i.e., the use of epitomes containing motivators, analogies, summaries, and syntheses.
2. There are four types of relationships important in the design of instruction: conceptual, procedural, theoretical and learning pre-requisites.
English, R.E. & Reigeluth, C.M. (1996). Formative research on sequencing instruction with the elaboration theory. Educational Technology Research & Development, 44(1), 23-42.
Hoffman, S. (1997). Elaboration theory and hypermedia: Is there a link? Educational Technology, 37(1), 57-64.
Reigeluth, C. & Stein, F. (1983). The elaboration theory of instruction. In C. Reigeluth (ed.), Instructional Design Theories and Models. Hillsdale, NJ: Erlbaum Associates.
Reigeluth, C. (1987). Lesson blueprints based upon the elaboration theory of instruction. In C. Reigeluth (ed.), Instructional Design Theories in Action. Hillsdale, NJ: Erlbaum Associates.
Reigeluth, C. (1992). Elaborating the elaboration theory. Educational Technology Research & Development, 40(3), 80-86.
Experiential Learning (C. Rogers)
Rogers distinguished two types of learning: cognitive (meaningless) and experiential (significant). The former corresponds to academic knowledge such as learning vocabulary or multiplication tables and the latter refers to applied knowledge such as learning about engines in order to repair a car. The key to the distinction is that experiential learning addresses the needs and wants of the learner. Rogers lists these qualities of experiential learning: personal involvement, self-initiated, evaluated by learner, and pervasive effects on learner.
To Rogers, experiential learning is equivalent to personal change and growth. Rogers feels that all human beings have a natural propensity to learn; the role of the teacher is to facilitate such learning. This includes: (1) setting a positive climate for learning, (2) clarifying the purposes of the learner(s), (3) organizing and making available learning resources, (4) balancing intellectual and emotional components of learning, and (5) sharing feelings and thoughts with learners but not dominating.
According to Rogers, learning is facilitated when: (1) the student participates completely in the learning process and has control over its nature and direction, (2) it is primarily based upon direct confrontation with practical, social, personal or research problems, and (3) self-evaluation is the principal method of assessing progress or success. Rogers also emphasizes the importance of learning to learn and an openness to change.
Rogers' theory of learning evolved as part of the humanistic education movement (e.g., Patterson, 1973; Valett, 1977).
Rogers' theory of learning originates from his views about psychotherapy and his humanistic approach to psychology. It applies primarily to adult learners and has influenced other theories of adult learning such as those of Knowles and Cross. Combs (1982) examines the significance of Rogers' work to education. Rogers & Freiberg (1994) discuss applications of the experiential learning framework to the classroom.
A person interested in becoming rich might seek out books or classes on economics, investment, great financiers, banking, etc. Such an individual would perceive (and learn) any information provided on this subject in a much different fashion than a person who is assigned a reading or class.
1. Significant learning takes place when the subject matter is relevant to the personal interests of the student.
2. Learning which is threatening to the self (e.g., new attitudes or perspectives) is more easily assimilated when external threats are at a minimum.
3. Learning proceeds faster when the threat to the self is low.
4. Self-initiated learning is the most lasting and pervasive.
Combs, A.W. (1982). Affective education or none at all. Educational Leadership, 39(7), 494-497.
Functional Context (T. Sticht)
The functional context approach to learning stresses the importance of making learning relevant to the experience of learners and their work context. The learning of new information is facilitated by making it possible for the learner to relate it to knowledge already possessed and transform old knowledge into new knowledge. By using materials that the learner will use after training, transfer of learning from the classroom to the "real world" will be enhanced.
The model of the cognitive system underlying this approach emphasizes the interaction of three components: (1) a knowledge base (i.e., long term memory) of what the individual knows, (2) processing skills including language, problem-solving, and learning strategies, and (3) information displays that present information. The performance of a task requires knowledge about what one is reading or writing, processing skills for comprehension and communication, and displays of information to be processed.
The functional context approach also proposes new assessment methods. Instead of using grade level scores, tests should measure content knowledge gained and distinguish between functional learning and academic learning. For example, an assessment of reading should measure both reading-to-do (e.g., looking up information in a manual) and reading-to-learn (e.g., information needed for future decisions).
Functional context theory shares a similar emphasis with Situated Learning theory which also stresses the importance of context during learning.
The functional context approach was developed specifically for adult technical and literacy training (reading/writing/mathematics) in military programs, but it has implications for learning of basic skills in general (e.g., Sticht, 1976) and reading in particular (Sticht, 1975). Sticht's functional context framework has been the basis for major workplace training and literacy programs sponsored by the U.S. Department of Labor and Department of Education.
The Experimental Functional Skills Program in Reading (XFSP/Read) was developed by Sticht and colleagues for the Navy. The purpose of the program was to improve the reading and mathematics skills of enlisted personnel using the functional context approach. A job/task analysis was performed to identify the reading-to-do and reading-to-learn skills needed in Navy jobs. On the basis of this analysis, print and computer-based instructional materials were developed for the program that involved Navy content (such as technical manuals). In addition, a Navy-related reading test was created in order to measure achievement in the program.
1. Instruction should be made as meaningful as possible to the learner in terms of the learner's prior knowledge.
2. Use materials and equipment that the learner will actually use after training.
3. Literacy can be improved by improving content knowledge, information processing skills, or the design of the learning materials.
4. Valid assessment of learning requires context/content specific measurement.
Carnevale, A., Gainer, L. & Meltzer, A. (1990). Workplace Basics: The Essential Skills Employers Want. San Francisco: Jossey-Bass.
Sticht, T.G. (1975). Applications of the audread model to reading evaluation and instruction. In L. Resnick & P. Weaver (Eds.), Theory and Practice of Early Reading, Volume 1. Hillsdale, NJ: Erlbaum.
Sticht, T.G. (1976). Comprehending reading at work. In M. Just & P. Carpenter (eds.), Cognitive Processes in Comprehension. Hillsdale, NJ: Erlbaum.
Sticht, T. (1988). Adult literacy education. Review of Research in Education, Volume 15. Washington, DC: American Education Research Association.
Sticht, T., et al. (1987). Cast-off Youth: Policy and Training Methods from the Military Experience. New York: Praeger.
Genetic Epistemology (J. Piaget)
Over a period of six decades, Jean Piaget conducted a program of naturalistic research that has profoundly affected our understanding of child development. Piaget called his general theoretical framework "genetic epistemology" because he was primarily interested in how knowledge developed in human organisms. Piaget had a background in both biology and philosophy, and concepts from both of these disciplines influenced his theories and research on child development.
The concept of cognitive structure is central to his theory. Cognitive structures are patterns of physical or mental action that underlie specific acts of intelligence and correspond to stages of child development (see Schemas). There are four primary cognitive structures (i.e., development stages) according to Piaget: sensorimotor, preoperations, concrete operations, and formal operations. In the sensorimotor stage (0-2 years), intelligence takes the form of motor actions. Intelligence in the preoperation period (3-7 years) is intuitive in nature. The cognitive structure during the concrete operational stage (8-11 years) is logical but depends upon concrete referents. In the final stage of formal operations (12-15 years), thinking involves abstractions.
Cognitive structures change through the processes of adaptation: assimilation and accommodation. Assimilation involves the interpretation of events in terms of existing cognitive structure whereas accommodation refers to changing the cognitive structure to make sense of the environment. Cognitive development consists of a constant effort to adapt to the environment in terms of assimilation and accommodation. In this sense, Piaget's theory is similar in nature to other constructivist perspectives of learning (e.g., Bruner, Vygotsky).
While the stages of cognitive development identified by Piaget are associated with characteristic age spans, they vary for every individual. Furthermore, each stage has many detailed structural forms. For example, the concrete operational period has more than forty distinct structures covering classification and relations, spatial relationships, time, movement, chance, number, conservation and measurement. Similar detailed analysis of intellectual functions is provided by theories of intelligence such as Guilford, Gardner, and Sternberg.
Piaget explored the implications of his theory to all aspects of cognition, intelligence and moral development. Many of Piaget's experiments were focused on the development of mathematical and logical concepts. The theory has been applied extensively to teaching practice and curriculum design in elementary education (e.g., Bybee & Sund, 1982; Wadsworth, 1978). Piaget's ideas have been very influential on others, such as Seymour Papert (see computers).
Applying Piaget's theory results in specific recommendations for a given stage of cognitive development. For example, with children in the sensorimotor stage, teachers should try to provide a rich and stimulating environment with ample objects to play with. On the other hand, with children in the concrete operational stage, learning activities should involve problems of classification, ordering, location, conservation using concrete objects.
1. Children will provide different explanations of reality at different stages of cognitive development.
2. Cognitive development is facilitated by providing activities or situations that engage learners and require adaptation (i.e., assimilation and accommodation).
3. Learning materials and activities should involve the appropriate level of motor or mental operations for a child of a given age; avoid asking students to perform tasks that are beyond their current cognitive capabilities.
4. Use teaching methods that actively involve students and present challenges.
Brainerd, C. (1978). Piaget's Theory of Intelligence. Englewood Cliffs, NJ: Prentice-Hall.
Bybee, R.W. & Sund, R.B. (1982). Piaget for Educators (2nd Ed). Columbus, OH: Charles Merrill.
Flavell, J. H. (1963). The Developmental Psychology of Jean Piaget. NY: Van Nostrand Reinhold.
Gallagher, J.M. & Reid, D.K. (1981). The Learning Theory of Piaget and Inhelder. Monterey, CA: Brooks/Cole.
Piaget, J. (1929). The Child's Conception of the World. NY: Harcourt, Brace Jovanovich.
Piaget, J. (1932). The Moral Judgement of the Child. NY: Harcourt, Brace Jovanovich.
Piaget, J. (1969). The Mechanisms of Perception. London: Routledge & Kegan Paul.
Piaget, J. (1970). The Science of Education and the Psychology of the Child. NY: Grossman.
Piaget, J. & Inhelder, B. (1969). The Psychology of the Child. NY: Basic Books.
Piaget, J. & Inhelder, B. (1973). Memory and intelligence. NY: Basic Books.
Wadsworth, B. (1978). Piaget for the Classroom Teacher. NY: Longman.
Gestalt Theory (M. Wertheimer)
Along with Kohler and Koffka, Max Wertheimer was one of the principal proponents of Gestalt theory which emphasized higher-order cognitive processes in the midst of behaviorism. The focus of Gestalt theory was the idea of "grouping", i.e., characteristics of stimuli cause us to structure or interpret a visual field or problem in a certain way (Wertheimer, 1922). The primary factors that determine grouping were: (1) proximity - elements tend to be grouped together according to their nearness, (2) similarity - items similar in some respect tend to be grouped together, (3) closure - items are grouped together if they tend to complete some entity, and (4) simplicity - items will be organized into simple figures according to symmetry, regularity, and smoothness. These factors were called the laws of organization and were explained in the context of perception and problem-solving.
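The proximity law lends itself to a simple computational sketch (the one-dimensional stimuli, threshold value, and function name are invented for illustration): elements nearer to each other than some threshold are perceived as belonging to one group.

```python
def group_by_proximity(points, threshold=1.5):
    """Illustrative proximity grouping: a point joins an existing
    group if it lies within the threshold of any member; otherwise
    it starts a new group."""
    groups = []
    for p in sorted(points):
        for group in groups:
            if any(abs(p - q) < threshold for q in group):
                group.append(p)
                break
        else:
            groups.append([p])
    return groups

# Three dots, a gap, then three more dots: perceived as two groups.
print(group_by_proximity([0, 1, 2, 10, 11, 12]))  # [[0, 1, 2], [10, 11, 12]]
```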
Wertheimer was especially concerned with problem-solving. Wertheimer (1959) provides a Gestalt interpretation of problem-solving episodes of famous scientists (e.g., Galileo, Einstein) as well as children presented with mathematical problems. The essence of successful problem-solving behavior according to Wertheimer is being able to see the overall structure of the problem: "A certain region in the field becomes crucial, is focused; but it does not become isolated. A new, deeper structural view of the situation develops, involving changes in functional meaning, the grouping, etc. of the items. Directed by what is required by the structure of a situation for a crucial region, one is led to a reasonable prediction, which like the other parts of the structure, calls for verification, direct or indirect. Two directions are involved: getting a whole consistent picture, and seeing what the structure of the whole requires for the parts." (p. 212).
Gestalt theory applies to all aspects of human learning, although it applies most directly to perception and problem-solving. The work of Gibson was strongly influenced by Gestalt theory.
The classic example of Gestalt principles provided by Wertheimer is children finding the area of parallelograms. As long as the parallelograms are regular figures, a standard procedure can be applied (making lines perpendicular from the corners of the base). However, if a parallelogram with a novel shape or orientation is provided, the standard procedure will not work and children are forced to solve the problem by understanding the true structure of a parallelogram (i.e., the figure can be bisected anywhere if the ends are joined).
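The structural insight can be stated computationally: the area of a parallelogram depends only on its edge vectors (base times height, or equivalently the magnitude of their cross product), not on its slant or orientation, which is exactly what the rote perpendicular-line procedure obscures. A brief sketch (function name invented for illustration):

```python
def parallelogram_area(edge_a, edge_b):
    """Area as the absolute cross product of the two edge vectors:
    the figure's slant and orientation do not matter, only its
    base-height structure."""
    (ax, ay), (bx, by) = edge_a, edge_b
    return abs(ax * by - ay * bx)

# A 4x3 rectangle and a heavily slanted parallelogram with the same
# base and height have the same area.
print(parallelogram_area((4, 0), (0, 3)))   # 12
print(parallelogram_area((4, 0), (10, 3)))  # 12
```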
1. The learner should be encouraged to discover the underlying nature of a topic or problem (i.e., the relationship among the elements).
2. Gaps, incongruities, or disturbances are an important stimulus for learning.
3. Instruction should be based upon the laws of organization: proximity, closure, similarity and simplicity.
Ellis, W.D. (1938). A Source Book of Gestalt Psychology. New York: Harcourt, Brace & World.
Wertheimer, M. (1923). Laws of organization in perceptual forms. First published as Untersuchungen zur Lehre von der Gestalt II, in Psychologische Forschung, 4, 301-350. Translation published in Ellis, W. (1938). A source book of Gestalt psychology (pp. 71-88). London: Routledge & Kegan Paul. [available at http://psy.ed.asu.edu/~classics/Wertheimer/Forms/forms.htm ]
Wertheimer, M. (1959). Productive Thinking (Enlarged Ed.). New York: Harper & Row.
Thanks to Gerhard Stemberger for his help with this page.
GPS (A. Newell & H. Simon)
The General Problem Solver (GPS) was a theory of human problem solving stated in the form of a simulation program (Ernst & Newell, 1969; Newell & Simon, 1972). This program and the associated theoretical framework had a significant impact on the subsequent direction of cognitive psychology. It also introduced the use of productions as a method for specifying cognitive models.
The theoretical framework was information processing and attempted to explain all behavior as a function of memory operations, control processes and rules. The methodology for testing the theory involved developing a computer simulation and then comparing the results of the simulation with human behavior in a given task. Such comparisons also made use of protocol analysis (Ericsson & Simon, 1984), in which the verbal reports of a person solving a task are used as indicators of cognitive processes (see http://www.rci.rutgers.edu/~cfs/472_html/CogArch/Protocol.html).
GPS was intended to provide a core set of processes that could be used to solve a variety of different types of problems. The critical step in solving a problem with GPS is the definition of the problem space in terms of the goal to be achieved and the transformation rules. Using a means-ends analysis approach, GPS would divide the overall goal into subgoals and attempt to solve each of those. Some of the basic solution rules include: (1) transform one object into another, (2) reduce the difference between two objects, and (3) apply an operator to an object. One of the key elements needed by GPS to solve problems was an operator-difference table that specified what transformations were possible.
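A minimal sketch of means-ends analysis in the spirit of GPS (the set-based state representation, the operators, and the difference table below are invented toy examples, not GPS's actual logic rules): find the difference between the current state and the goal, select an operator relevant to that difference, and recursively set up the operator's own preconditions as subgoals.

```python
def means_ends(state, goal, operators, depth=6):
    """Means-ends analysis: reduce the difference between state and
    goal by picking a relevant operator and recursing on its
    preconditions. Returns a plan (list of operator names) or None."""
    if goal <= state:                      # no difference remains
        return []
    if depth == 0:                         # guard against runaway recursion
        return None
    difference = goal - state
    for name, (adds, needs) in operators.items():
        if adds & difference:              # operator relevant to the difference
            # Subgoal: satisfy this operator's preconditions first.
            plan = means_ends(state, state | needs, operators, depth - 1)
            if plan is not None:
                return plan + [name]
    return None

# Toy operator-difference table: operator -> (conditions it produces,
# conditions it requires).
operators = {
    "boil water":  ({"hot water"}, {"filled kettle"}),
    "fill kettle": ({"filled kettle"}, set()),
    "brew tea":    ({"tea"}, {"hot water"}),
}

print(means_ends(set(), {"tea"}, operators))
# ['fill kettle', 'boil water', 'brew tea']
```

The recursion mirrors the transform/reduce/apply cycle in the trace below: each "apply" goal spawns a "transform" subgoal to establish the operator's conditions.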
While GPS was intended to be a general problem-solver, it could only be applied to "well-defined" problems such as proving theorems in logic or geometry, word puzzles and chess. However, GPS was the basis for other theoretical work by Newell et al. such as SOAR and GOMS. Newell (1990) provides a summary of how this work evolved.
Here is a trace of GPS solving the logic problem to transform L1 = R*(-P => Q) into L0 = (Q \/ P)*R (Newell & Simon, 1972, p. 420):
Goal 1: Transform L1 into L0
    Goal 2: Reduce difference between L1 and L0
        Goal 3: Apply R1 to L1
            Goal 4: Transform L1 into condition(R1)
            Produce L2: (-P => Q)*R
    Goal 5: Transform L2 into L0
        Goal 6: Reduce difference between left(L2) and left(L0)
            Goal 7: Apply R5 to left(L2)
                Goal 8: Transform left(L2) into condition(R5)
                    Goal 9: Reduce difference between left(L2) and condition(R5)
                    Rejected: No easier than Goal 6
            Goal 10: Apply R6 to left(L2)
                Goal 11: Transform left(L2) into condition(R6)
                Produce L3: (P \/ Q)*R
    Goal 12: Transform L3 into L0
        Goal 13: Reduce difference between left(L3) and left(L0)
            Goal 14: Apply R1 to left(L3)
                Goal 15: Transform left(L3) into condition(R1)
                Produce L4: (Q \/ P)*R
    Goal 16: Transform L4 into L0
    Identical, QED
1. Problem-solving behavior involves means-ends analysis, i.e., breaking a problem down into subcomponents (subgoals) and solving each of those.
Ericsson, K. & Simon, H. (1984). Protocol Analysis. Cambridge, MA: MIT Press.
Ernst, G. & Newell, A. (1969). GPS: A Case Study in Generality and Problem Solving. New York: Academic Press.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Newell, A. & Simon, H. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
The theory of information pickup suggests that perception depends entirely upon information in the "stimulus array" rather than sensations that are influenced by cognition. Gibson proposes that the environment consists of affordances (such as terrain, water, vegetation, etc.) which provide the clues necessary for perception. Furthermore, the ambient array includes invariants such as shadows, texture, color, convergence, symmetry and layout that determine what is perceived. According to Gibson, perception is a direct consequence of the properties of the environment and does not involve any form of sensory processing.
Information pickup theory stresses that perception requires an active organism. The act of perception depends upon an interaction between the organism and the environment. All perceptions are made in reference to body position and functions (proprioception). Awareness of the environment derives from how it reacts to our movements.
Information pickup theory opposes most traditional theories of cognition that assume past experience plays a dominant role in perceiving. It is based upon Gestalt theories that emphasize the significance of stimulus organization and relationships.
Information pickup theory is intended as a general theory of perception, although it has been developed most completely for the visual system. Gibson (1979) discusses the implications of the theory for still and motion picture research. Neisser (1976) presents a theory of cognition that is strongly influenced by Gibson.
Many of Gibson's ideas about perception were developed and applied in the context of aviation training during WWII. The critical concept is that pilots orient themselves according to characteristics of the ground surface rather than through vestibular/kinesthetic senses. In other words, it is the invariants of terrain and sky that determine perception while flying, not sensory processing per se. Therefore, training sequences and materials for pilots should always include this kind of information.
1. To facilitate perception, realistic environmental settings should be used in instructional materials.
2. Since perception is an active process, the individual should have an unconstrained learning environment.
3. Instruction should emphasize the stimulus characteristics that provide perceptual cues.
Gibson, J.J. (1966). The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
Gibson, J.J. (1977). The theory of affordances. In R. Shaw & J. Bransford (eds.), Perceiving, Acting and Knowing. Hillsdale, NJ: Erlbaum.
Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Neisser, U. (1976). Cognition and Reality. San Francisco: W.H. Freeman.
Edward de Bono has written extensively about the process of lateral thinking -- the generation of novel solutions to problems. The point of lateral thinking is that many problems require a different perspective to solve successfully.
De Bono identifies four critical factors associated with lateral thinking: (1) recognizing the dominant ideas that polarize perception of a problem, (2) searching for different ways of looking at things, (3) relaxing the rigid control of thinking, and (4) using chance to encourage other ideas. This last factor has to do with the fact that lateral thinking involves low-probability ideas which are unlikely to occur in the normal course of events.
Lateral thinking applies to human problem-solving. DeBono (1971a) discusses the application of lateral thinking to management development and DeBono (1971b) provides an interesting study of lateral thinking in children. Some of his recent work has focused on schools (e.g., DeBono, 1991).
The following anecdote is provided by DeBono (1967). A merchant who owes money to a moneylender agrees to settle the debt based upon the choice of one of two stones (one black, one white) from a money bag. If his daughter chooses the white stone, the debt is canceled; if she picks the black stone, the moneylender gets the merchant's daughter. However, the moneylender "fixes" the outcome by putting two black stones in the bag. The daughter sees this and, when she picks a stone out of the bag, immediately drops it onto the path full of other stones. She then points out that the stone she picked must have been the opposite color of the one remaining in the bag. Unwilling to be exposed as dishonest, the moneylender must agree and cancel the debt. The daughter has solved an intractable problem through the use of lateral thinking.
1. To get a different perspective on a problem, try breaking the elements up and recombining them in a different way (perhaps randomly).
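This recombination tactic can be sketched trivially; the function name and the problem elements below are invented for illustration.

```python
import random

# Break a problem into elements and generate random recombinations,
# each of which may suggest a fresh perspective on the problem.
def random_recombination(elements, n=3, seed=None):
    rng = random.Random(seed)        # seed for reproducibility
    views = []
    for _ in range(n):
        shuffled = elements[:]       # copy so the original is untouched
        rng.shuffle(shuffled)
        views.append(" + ".join(shuffled))
    return views

problem = ["stone", "bag", "path", "choice"]
for view in random_recombination(problem, seed=1):
    print(view)
```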
DeBono, E. (1967). New Think: The Use of Lateral Thinking in the Generation of New Ideas. New York: Basic Books.
DeBono, E. (1971a). Lateral Thinking for Management. New York: McGraw-Hill.
DeBono, E. (1971b). The Dog Exercising Machine. London: Penguin Books.
DeBono, E. (1991). Teaching Thinking. London: Penguin Books.
Levels of Processing (F. Craik & R. Lockhart)
The levels of processing framework was presented by Craik & Lockhart (1972) as an alternative to theories of memory that postulated separate stages for sensory, working and long-term memory. According to the levels of processing framework, stimulus information is processed at multiple levels simultaneously depending upon its characteristics. Furthermore, the "deeper" the processing, the more that will be remembered. For example, information that involves strong visual images or many associations with existing knowledge will be processed at a deeper level. Similarly, information that is being attended to receives more processing than other stimuli/events. The theory also supports the finding that we remember things that are meaningful to us because this requires more processing than meaningless stimuli.
Processing of information at different levels is unconscious and automatic unless we attend to that level. For example, we are normally not aware of the sensory properties of stimuli, or what we have in working memory, unless we are asked to specifically identify such information. This suggests that the mechanism of attention is an interruption in processing rather than a cognitive process in its own right.
D'Agostino, O'Neill & Paivio (1977) discuss the relationship between the dual coding theory and the levels of processing framework. Other theories of memory related to levels of processing are those of Rumelhart & Norman and Soar.
The primary application of the levels of processing framework was to verbal learning settings (i.e., memorization of word lists); however, it has been applied to reading and language learning (e.g., Cermak & Craik, 1979).
Perfetti (in Cermak & Craik, 1979, pp. 159-180) extends the levels of processing framework to language comprehension. He proposes seven levels: acoustic, phonological, syntactic, semantic, referential, thematic, and functional. The first three levels are normally transparent, while the fourth level (semantic) is the conscious interpretation of the utterance or sentence. Processing of the last three levels depends upon context and will result in comprehension provided there is no ambiguity. Note that any level can be made conscious if a problem arises (e.g., a strong accent or poor handwriting).
1. The greater the processing of information during learning, the more it will be retained and remembered.
2. Processing will be automatic unless attention is focused on a particular level.
Cermak, L. & Craik, F. (1979). Levels of Processing in Human Memory. Hillsdale, NJ: Erlbaum.
Craik, F. & Lockhart, R. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning & Verbal Behavior, 11, 671-684.
D'Agostino, P. R., O'Neill, B. J., & Paivio, A. (1977). Memory for pictures and words as a function of level of processing: Depth or dual coding? Memory & Cognition, 5, 252-256.
Mathematical Learning Theory (R. C. Atkinson)
Mathematical learning theory is an attempt to describe and explain behavior in quantitative terms. A number of psychologists have attempted to develop such theories (e.g., Hull; Estes; Restle & Greeno, 1970). The work of R. C. Atkinson is particularly interesting because he applied mathematical learning theory to the design of a language arts curriculum.
Atkinson (1972) discusses the problem of optimizing instruction. He outlined four possible strategies: (1) maximize the mean performance of the whole class, (2) minimize the variance in performance for the whole class, (3) maximize the number of students who score at grade level, or (4) maximize the mean performance for each individual. Atkinson shows that while alternative (1) produces the largest gain scores, it also produces the greatest variance since it increases the spread between the most and least successful students. Alternative (4) produces an overall gain but without increased variability. This is accomplished by giving each student variable amounts of time depending upon performance.
Atkinson's research has primarily focused on simple language learning in the context of computer based instruction. Atkinson & Shiffrin (1968) discuss a model of memory based upon quantitative principles.
Atkinson (1972) reports the results of an experiment in which college students learned German vocabulary via (1) random presentation of words, (2) learner selection of words, or (3) response-sensitive presentation based upon student performance. The response-sensitive strategy resulted in the best scores on a delayed test. The response-sensitive strategy was based upon a mathematical model that predicted the changes from one state of memory to another.
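The response-sensitive idea can be sketched as follows. The update rule and parameter below are invented for illustration and are far simpler than Atkinson's actual Markov model of memory states; the point is only the selection policy of always presenting the item the learner is judged least likely to know.

```python
# Response-sensitive presentation: track a per-item estimate of the
# probability that the item has been learned, and on each trial present
# the item with the lowest estimate.
def study_session(n_items, trials, c=0.3):
    p = [0.0] * n_items                              # estimated P(learned)
    for _ in range(trials):
        item = min(range(n_items), key=lambda i: p[i])   # weakest item
        p[item] += c * (1 - p[item])     # study trial: chance of learning
    return p

# With 5 items and 25 trials, each item is presented exactly 5 times.
print([round(x, 3) for x in study_session(5, 25)])   # → all items ≈ 0.832
```

Because the policy always targets the weakest item, study time equalizes across items, echoing Atkinson's strategy (4) of maximizing each individual's performance rather than the class mean.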
1. It is possible to develop an optimal instructional strategy for a given individual provided that a detailed model of the learning process is available.
2. Optimal learning performance can be achieved by giving each individual sufficient time to learn.
Atkinson, R. C. (1972). Ingredients for a theory of instruction. American Psychologist, 27, 921-931.
Atkinson, R. C., Bower, G., & Crothers, E.J. (1965). An Introduction to Mathematical Learning Theory. New York: Wiley.
Atkinson, R. C. & Shiffrin, R.M. (1968). Human memory: A proposed system and its control processes. In K.W. Spence & J.T. Spence (Eds.), The Psychology of Learning and Motivation, Vol 2. New York: Academic Press.
Restle, F. & Greeno, J. (1970). Introduction to Mathematical Psychology. Reading, MA: Addison-Wesley.
Mathematical Problem Solving (A. Schoenfeld)
Alan Schoenfeld presents the view that understanding and teaching mathematics should be approached as a problem-solving domain. According to Schoenfeld (1985), four categories of knowledge/skills are needed to be successful in mathematics: (1) resources - propositional and procedural knowledge of mathematics, (2) heuristics - strategies and techniques for problem solving such as working backwards or drawing figures, (3) control - decisions about when and what resources and strategies to use, and (4) beliefs - a mathematical "world view" that determines how someone approaches a problem.
Schoenfeld's theory is supported by extensive protocol analysis of students solving problems. The theoretical framework is based upon much other work in cognitive psychology, particularly the work of Newell & Simon. Schoenfeld (1987) places more emphasis on the importance of metacognition and the cultural components of learning mathematics (i.e., belief systems) than in his original formulation.
Schoenfeld's research and theory applies primarily to college level mathematics.
Schoenfeld (1985, Chapter 1) uses the following problem to illustrate his theory: Given two intersecting straight lines and a point P marked on one of them, show how to construct a circle that is tangent to both lines and has point P as its point of tangency to the lines. Examples of resource knowledge include the procedure to draw a perpendicular line from P to the center of the circle and the significance of this action. An important heuristic for solving this problem is to construct a diagram of the problem. A control strategy might involve the decision to construct an actual circle and line segments using a compass and protractor. A belief that might be relevant to this problem is that solutions should be empirical (i.e., constructed) rather than derived.
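A numerical check of the construction can make the resource knowledge concrete. The coordinates below are invented for illustration (Schoenfeld states the problem in general terms): take line 1 as the x-axis, line 2 as y = x, and P = (2, 0) on line 1. The center must lie on the perpendicular to line 1 at P and, by symmetry of the two tangents, on the bisector of the angle between the lines.

```python
import math

# Lines meet at the origin at 45 degrees; P = (px, 0) lies on line 1.
half_angle = math.radians(45) / 2         # bisector direction: 22.5 deg
px = 2.0
center = (px, px * math.tan(half_angle))  # perpendicular at P is x = px
radius = center[1]                        # distance from center to line 1

# Verify: the distance from the center to line 2 (y = x) equals the
# radius, so the circle is indeed tangent to both lines.
dist = abs(center[0] - center[1]) / math.sqrt(2)
print(round(radius, 4), round(dist, 4))   # → 0.8284 0.8284
```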
1. Successful solution of mathematics problems depends upon a combination of resource knowledge, heuristics, control processes, and beliefs, all of which must be learned and taught.
Schoenfeld, A. (1985). Mathematical Problem Solving. New York: Academic Press.
Schoenfeld, A. (1987). Cognitive Science and Mathematics Education. Hillsdale, NJ: Erlbaum Assoc.
The Minimalist theory of J.M. Carroll is a framework for the design of instruction, especially training materials for computer users. The theory suggests that (1) all learning tasks should be meaningful and self-contained activities, (2) learners should be given realistic projects as quickly as possible, (3) instruction should permit self-directed reasoning and improvising by increasing the number of active learning activities, (4) training materials and activities should provide for error recognition and recovery, and (5) there should be a close linkage between the training and the actual system.
Minimalist theory emphasizes the necessity to build upon the learner's experience (cf. Knowles, Rogers). Carroll (1990) states: "Adult learners are not blank slates; they don't have funnels in their heads; they have little patience for being treated as "don't knows"... New users are always learning computer methods in the context of specific preexisting goals and expectations." (p. 11) Carroll also identifies the roots of minimalism in the constructivism of Bruner and Piaget.
The critical idea of minimalist theory is to minimize the extent to which instructional materials obstruct learning and focus the design on activities that support learner-directed activity and accomplishment. Carroll feels that training developed on the basis of other instructional theories (e.g., Gagne, Merrill) is too passive and fails to exploit the prior knowledge of the learner or use errors as learning opportunities.
Minimalist theory is based upon studies of people learning to use a diverse range of computer applications including word processing, databases, and programming. It has been extensively applied to the design of computer documentation (e.g., Nowaczyk & James, 1993; van der Meij & Carroll, 1995). Carroll (1998) includes a survey of applications as well as analysis of the framework in practice and theory.
Carroll (1990, chapter 5) describes an example of a guided exploration approach to learning how to use a word processor. The training materials involved a set of 25 cards to replace a 94 page manual. Each card corresponded to a meaningful task, was self-contained and included error recognition/recovery information for that task. Furthermore, the information provided on the cards was not complete, step-by-step specifications but only the key ideas or hints about what to do. In an experiment that compared the use of the cards versus the manual, users learned the task in about half the time with the cards, supporting the effectiveness of the minimalist design.
1. Allow learners to start immediately on meaningful tasks.
2. Minimize the amount of reading and other passive forms of training by allowing users to fill in the gaps themselves.
3. Include error recognition and recovery activities in the instruction.
4. Make all learning activities self-contained and independent of sequence.
Carroll, J.M. (1990). The Nurnberg Funnel. Cambridge, MA: MIT Press.
Carroll, J.M. (1998). Minimalism beyond the Nurnberg Funnel. Cambridge, MA: MIT Press.
Nowaczyk, R. & James, E. (1993). Applying minimal manual principles for documentation of graphical user interfaces. Journal of Technical Writing and Communication, 23(4), 379-388.
van der Meij, H. & Carroll, J.M. (1995). Principles and heuristics for designing minimalist instruction. Technical Communications, 42(2), 243-261.
D. Rumelhart & D. Norman (1978) proposed that there are three modes of learning: accretion, structuring and tuning. Accretion is the addition of new knowledge to existing memory. Structuring involves the formation of new conceptual structures or schema. Tuning is the adjustment of knowledge to a specific task usually through practice. Accretion is the most common form of learning; structuring occurs much less frequently and requires considerable effort; tuning is the slowest form of learning and accounts for expert performance.
Restructuring involves some form of reflection or insight (i.e., metacognition) and may correspond to a plateau in performance. On the other hand, tuning often represents automatic behavior that is not available to reflection (e.g., learning procedures).
Rumelhart & Norman (1981) extended their model to include analogical processes: a new schema is created by modeling it on an existing schema and then modifying it based upon further experiences.
This is a general model for human learning, although it was originally proposed in the context of language learning.
Norman (1982) discusses the example of learning Morse code. Initial learning of the code is the process of accretion. Learning to recognize sequences or full words represents restructuring. The gradual increase in translation or transmission speed indicates the process of tuning.
1. Instruction must be designed to accommodate different modes of learning.
2. Practice activities affect the refinement of skills but not necessarily the initial acquisition of knowledge.
Norman, D. (1982). Learning and Memory. San Francisco: Freeman.
Rumelhart, D. & Norman, D. (1978). Accretion, tuning and restructuring: Three modes of learning. In J.W. Cotton & R. Klatzky (eds.), Semantic Factors in Cognition. Hillsdale, NJ: Erlbaum.
Rumelhart, D. & Norman, D. (1981). Analogical processes in learning. In J.R. Anderson (ed.), Cognitive Skills and their Acquisition. Hillsdale, NJ: Erlbaum.
Multiple Intelligences (H. Gardner)
The theory of multiple intelligences suggests that there are a number of distinct forms of intelligence that each individual possesses in varying degrees. Gardner proposes seven primary forms: linguistic, musical, logical-mathematical, spatial, body-kinesthetic, intrapersonal (e.g., insight, metacognition) and interpersonal (e.g., social skills).
According to Gardner, the implication of the theory is that learning/teaching should focus on the particular intelligences of each person. For example, if an individual has strong spatial or musical intelligences, they should be encouraged to develop these abilities. Gardner points out that the different intelligences represent not only different content domains but also learning modalities. A further implication of the theory is that assessment of abilities should measure all forms of intelligence, not just linguistic and logical-mathematical.
Gardner also emphasizes the cultural context of multiple intelligences. Each culture tends to emphasize particular intelligences. For example, Gardner (1983) discusses the high spatial abilities of the Puluwat people of the Caroline Islands, who use these skills to navigate their canoes in the ocean. Gardner also discusses the balance of personal intelligences required in Japanese society.
The theory of multiple intelligences has been focused mostly on child development although it applies to all ages. While there is no direct empirical support for the theory, Gardner (1983) presents evidence from many domains including biology, anthropology, and the creative arts and Gardner (1993a) discusses application of the theory to school programs. Gardner (1982, 1993b) explores the implications of the framework for creativity (see also Marks-Tarlow, 1995).
Gardner (1983, p. 390) describes how learning to program a computer might involve multiple intelligences:
"Logical-mathematical intelligence seems central, because programming depends upon the deployment of strict procedures to solve a problem or attain a goal in a finite number of steps. Linguistic intelligence is also relevant, at least as long as manual and computer languages make use of ordinary language...an individual with a strong musical bent might best be introduced to programming by attempting to program a simple musical piece (or to master a program that composes). An individual with strong spatial abilities might be initiated through some form of computer graphics -- and might be aided in the task of programming through the use of a flowchart or some other spatial diagram. Personal intelligences can play important roles. The extensive planning of steps and goals carried out by the individual engaged in programming relies on intrapersonal forms of thinking, even as the cooperation needed for carrying a complex task or for learning new computational skills may rely on an individual's ability to work with a team. Kinesthetic intelligence may play a role in working with the computer itself, by facilitating skill at the terminal..."
1. Individuals should be encouraged to use their preferred intelligences in learning.
2. Instructional activities should appeal to different forms of intelligence.
3. Assessment of learning should measure multiple forms of intelligence.
Gardner, H. (1982). Art, Mind and Brain. New York: Basic Books.
Gardner, H. (1983). Frames of Mind. New York: Basic Books.
Gardner, H. (1993a). Multiple Intelligences: The Theory in Practice. NY: Basic Books.
Gardner, H. (1993b). Creating Minds. NY: Basic Books.
Marks-Tarlow, T. (1995). Creativity inside out: Learning through multiple intelligences. Reading, MA: Addison-Wesley.
The theory of B.F. Skinner is based upon the idea that learning is a function of change in overt behavior. Changes in behavior are the result of an individual's response to events (stimuli) that occur in the environment. A response produces a consequence such as defining a word, hitting a ball, or solving a math problem. When a particular Stimulus-Response (S-R) pattern is reinforced (rewarded), the individual is conditioned to respond. The distinctive characteristic of operant conditioning relative to previous forms of behaviorism (e.g., Thorndike, Hull) is that the organism can emit responses, instead of only eliciting a response due to an external stimulus.
Reinforcement is the key element in Skinner's S-R theory. A reinforcer is anything that strengthens the desired response. It could be verbal praise, a good grade, or a feeling of increased accomplishment or satisfaction. The theory also covers negative reinforcers -- any stimulus that results in the increased frequency of a response when it is withdrawn (different from aversive stimuli -- punishment -- which result in reduced responses). A great deal of attention was given to schedules of reinforcement (e.g., interval versus ratio) and their effects on establishing and maintaining behavior.
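The ratio schedules mentioned above can be contrasted in a toy simulation (function name and parameters invented): a fixed-ratio schedule reinforces every Nth response, while a variable-ratio schedule reinforces each response with probability 1/N, so the mean ratio is the same but reinforcement is unpredictable.

```python
import random

# Count reinforcements delivered over a run of responses under a
# fixed-ratio (FR) or variable-ratio (VR) schedule with mean ratio N.
def reinforcements(n_responses, schedule, ratio=5, seed=0):
    rng = random.Random(seed)
    count = 0
    for r in range(1, n_responses + 1):
        if schedule == "FR" and r % ratio == 0:
            count += 1                       # every Nth response, exactly
        elif schedule == "VR" and rng.random() < 1 / ratio:
            count += 1                       # each response, with P = 1/N
    return count

print(reinforcements(100, "FR"))   # → 20 (perfectly predictable)
print(reinforcements(100, "VR"))   # about 20, but unpredictable
```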
One of the distinctive aspects of Skinner's theory is that it attempted to provide behavioral explanations for a broad range of cognitive phenomena. For example, Skinner explained drive (motivation) in terms of deprivation and reinforcement schedules. Skinner (1957) tried to account for verbal learning and language within the operant conditioning paradigm, although this effort was strongly rejected by linguists and psycholinguists. Skinner (1971) deals with the issue of free will and social control.
Operant conditioning has been widely applied in clinical settings (i.e., behavior modification) as well as teaching (i.e., classroom management) and instructional development (e.g., programmed instruction). Parenthetically, it should be noted that Skinner rejected the idea of theories of learning (see Skinner, 1950).
By way of example, consider the implications of reinforcement theory as applied to the development of programmed instruction (Markle, 1969; Skinner, 1968):
1. Practice should take the form of question (stimulus) - answer (response) frames which expose the student to the subject in gradual steps.
2. Require that the learner make a response for every frame and receive immediate feedback.
3. Try to arrange the difficulty of the questions so the response is always correct and hence a positive reinforcement.
4. Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal praise, prizes and good grades.
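The four points above can be sketched as a minimal drill loop. The frames themselves are invented examples, not drawn from Skinner or Markle.

```python
# Minimal programmed-instruction sketch: small frames, a response to
# every frame, and immediate feedback on each response.
frames = [
    ("A reinforcer is anything that ___ the desired response.", "strengthens"),
    ("In S-R notation, S stands for ___.", "stimulus"),
]

def run_frames(frames, answers):
    score = 0
    for (prompt, correct), given in zip(frames, answers):
        if given.strip().lower() == correct:
            print("Correct!")        # immediate positive reinforcement
            score += 1
        else:
            print(f"Not quite -- the answer is '{correct}'.")
        # a fuller program would branch to an easier frame after an error
    return score

print(run_frames(frames, ["strengthens", "response"]))   # → returns 1
```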
1. Behavior that is positively reinforced will reoccur; intermittent reinforcement is particularly effective.
2. Information should be presented in small amounts so that responses can be reinforced ("shaping").
3. Reinforcements will generalize across similar stimuli ("stimulus generalization") producing secondary conditioning.
Markle, S. (1969). Good Frames and Bad (2nd ed.). New York: Wiley.
Skinner, B.F. (1950). Are theories of learning necessary? Psychological Review, 57(4), 193-216.
Skinner, B.F. (1953). Science and Human Behavior. New York: Macmillan.
Skinner, B.F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24(2), 86-97.
Skinner, B.F. (1957). Verbal Behavior. New York: Appleton-Century-Crofts.
Skinner, B.F. (1968). The Technology of Teaching. New York: Appleton-Century-Crofts.
Skinner, B.F. (1971). Beyond Freedom and Dignity. New York: Knopf.
Irving Maltzman conducted a number of studies that demonstrated that originality could be increased. According to Maltzman, originality refers to behavior that occurs relatively infrequently, is uncommon under given conditions, and is relevant to those conditions. Maltzman distinguished originality from creativity, the latter referring to the consequences of original behavior (including the reaction of society to the behavior).
Maltzman (1960) describes three methods that can increase original responses: (1) present an uncommon stimulus situation for which conventional responses may not be readily available, (2) evoke different responses to the same situation, and (3) evoke uncommon responses as textual responses. Maltzman used the latter approach and mentions Osborn (1957) as an example of the first two.
Maltzman's research is distinctive because he was one of the few behaviorists who attempted to deal with creative behavior. He provided a simple definition and methodology for studying originality. He also examined the relationship between originality and problem solving.
Maltzman conducted his studies using word association tasks. Thus his findings are most directly applicable to originality that involves verbalization or language.
In a typical experiment, participants would be asked to give free associations to lists of words. After the first list, the experimental group would receive instructions to give uncommon responses. On the final list (with no instructions), the experimental group gave more unusual responses than the control group. In addition, the experimental group scored higher on a creativity test given at the conclusion of the experiment.
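Maltzman's notion of an "uncommon response" can be operationalized by scoring each response against the group's pooled response frequencies. The data and scoring rule below are invented for illustration.

```python
from collections import Counter

# Pooled free associations from a (fictional) group given the cue "dog".
pooled = ["dog", "dog", "dog", "cat", "leash", "dog", "cat", "bone"]
freq = Counter(pooled)
total = len(pooled)

def originality(response):
    # lower relative frequency in the pool -> higher originality score
    return 1 - freq[response] / total

print(originality("dog"))     # → 0.5   (common response)
print(originality("leash"))   # → 0.875 (uncommon response)
```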
1. Originality can be increased through instructions or practice to produce uncommon responses.
Maltzman, I. (1960). On the training of originality. Psychological Review, 67(4), 229-242.
Phenomenography (F. Marton & N. Entwistle)
This conceptual framework focuses on the experience of learning from the student's perspective and is based upon a phenomenological approach to research. Entwistle explains: "Our task is thus to describe more clearly how learning takes place in higher education and to point out how teaching and assessment affect the quality of learning. From these descriptions teachers should be able to draw their own lessons about how to facilitate their students' learning" (Marton, Hounsell & Entwistle, 1984, p.1).
The most important element of this framework is that data be collected directly from learners themselves through self-reports and interviews. Furthermore, the content and setting should be those actually involved in learning. Research based upon the phenomenographic approach has been conducted by a number of individuals at universities in Sweden and the United Kingdom, of which F. Marton and N. Entwistle are leading proponents.
Phenomenography is related to the work of Pask on learning styles and that of Craik & Lockhart on levels of processing.
The scope of phenomenographic research is focused on learning in higher education. Initial studies focused on student learning experience in reading articles, attending lectures, writing essays, solving problems, and studying; more recent work has examined the cross-cultural aspects of student learning experiences (e.g., papers presented at the 6th Annual EARLI conference). Ramsden (1992) provides practical guidelines for teaching based upon this research approach, and Frantz, Ferreira, & Thambiratam (no date) discuss an application to engineering.
The original study conducted by Marton at the University of Gothenburg involved having students read an academic article and then asking them questions designed to reveal how they understood what they read, such as: "Could you describe how you went about reading the text?", "Was there anything you found difficult?", "Did you find it interesting or not?". Student responses were transcribed and these transcriptions formed the basis for analysis. On the basis of this study, Marton concluded that students differed in the way they related to the information they read (deep versus surface understanding) and how they tried to organize their learning (holistic/atomistic).
1. Researchers should seek an understanding of the phenomenon of learning by examining the students' experiences.
2. Research about learning needs to be conducted in a naturalistic setting involving the actual content and settings people learn with.
Entwistle, N. & Ramsden, P. (1983). Understanding Student Learning. London: Croom Helm.
Repair theory is an attempt to explain how people learn procedural skills, with particular attention to how and why they make mistakes (i.e., bugs). The theory suggests that when a procedure cannot be performed, an impasse occurs and the individual applies various strategies to overcome the impasse. These strategies (meta-actions) are called repairs. Some repairs result in correct outcomes whereas others generate incorrect results and hence "buggy" procedures. Repair theory has been implemented in the form of a computer model called Sierra.
Repair theory has been developed from extensive study of children solving arithmetic problems (Brown & VanLehn, 1980). Even with simple subtraction problems, many types of bugs were found, often occurring in combinations. Such systematic errors are not to be confused with "slips" (cf. Norman, 1981) or random mistakes since they reoccur regularly in a particular student's work. On the other hand, bugs are not totally consistent:
"Students' bugs, unlike bugs in computer programs, are unstable. Students shift back and forth among bugs, a phenomenon called bug migration. The theory's explanation for bug migration is that the student has a stable underlying procedure but that the procedure is incomplete in such a way that the student reaches impasses on some problems. Students can apply any repair they can think of. Sometimes they choose one repair and sometimes another. The different repairs manifest themselves as different bugs. So bug migration comes from varying the choice of repairs to a stable, underlying impasse." (VanLehn, 1990) p 26.
Repair theory assumes that people primarily learn procedural tasks by induction and that bugs occur because of biases that are introduced in the examples provided or the feedback received during practice (as opposed to mistakes in memorizing formulas or instructions). Therefore, the implication of repair theory is that problem sets should be chosen to eliminate the bias likely to cause specific bugs. Another implication is that bugs are often introduced when students try to extend procedures beyond the initial examples provided.
Repair theory applies to any procedural knowledge, although to date it has only been fully developed in the domain of children solving subtraction problems. Nevertheless, elements of repair theory appear in VanLehn's subsequent work on intelligent tutoring systems and problem solving.
If a student learns subtraction with two-digit numbers and is then presented with the following problem: 365 - 109 = ?, they must generate a new rule for borrowing from the left column. Unlike in a two-digit problem, the left-adjacent column and the left-most column are different, creating an impasse. To resolve the impasse, the student needs to repair their current rule (Always-Borrow-Left) by making it Always-Borrow-Left-Adjacent. Alternatively, the student could skip the borrowing entirely, generating a different bug called Borrow-No-Decrement-Except-Last.
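The borrowing example above can be sketched in code. The following is an illustrative toy, not VanLehn's Sierra model; the function and parameter names are invented. It shows why the buggy repair stays hidden on two-digit problems (where the left-adjacent and left-most columns coincide) and surfaces only on wider problems:

```python
def subtract(top, bottom, repair="left-adjacent"):
    """Column-by-column subtraction with a choice of borrowing rule.
    repair="left-most" applies the buggy Always-Borrow-Left repair:
    decrement the left-most column instead of the adjacent one."""
    t = [int(d) for d in str(top)]
    b = [int(d) for d in str(bottom).rjust(len(t), "0")]
    digits = []
    for col in range(len(t) - 1, -1, -1):       # work right to left
        if t[col] < b[col]:                     # impasse: must borrow
            t[col] += 10
            src = 0 if repair == "left-most" else col - 1
            t[src] -= 1                         # the chosen repair
        digits.append(str(t[col] - b[col]))
    return int("".join(reversed(digits)))

# On two-digit problems both rules coincide, so the bug stays hidden:
assert subtract(42, 17) == subtract(42, 17, repair="left-most") == 25
# On 365 - 109 the buggy repair surfaces: 166 instead of 256.
assert subtract(365, 109) == 256
assert subtract(365, 109, repair="left-most") == 166
```

This mirrors the theory's claim that a bug is not a broken procedure but a locally reasonable repair to a stable impasse.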
1. Bugs that cause errors in procedural tasks are systematic and can be identified.
2. Once the bugs associated with a particular task are known, they can be used to improve student performance and the examples used to teach the procedure.
Brown, J.S. & VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
The central focus of Schank's theory has been the structure of knowledge, especially in the context of language understanding. Schank (1975) outlined conceptual dependency theory, which deals with the representation of meaning in sentences. Building upon this framework, Schank & Abelson (1977) introduced the concepts of scripts, plans and themes to handle story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the theory to encompass other aspects of cognition.
The key element of conceptual dependency theory is the idea that all conceptualizations can be represented in terms of a small number of primitive acts performed by an actor on an object. For example, the concept "John read a book" could be represented as: John MTRANS (information) to LTM from book, where MTRANS is the primitive act of mental transfer. In Schank's theory, all memory is episodic, i.e., organized around personal experiences rather than semantic categories. Generalized episodes are called scripts -- specific memories are stored as pointers to scripts plus any unique events for a particular episode. Scripts allow individuals to make inferences needed for understanding by filling in missing information (i.e., schema).
Schank (1986) uses script theory as the basis for a dynamic model of memory. This model suggests that events are understood in terms of scripts, plans and other knowledge structures as well as relevant previous experiences. An important aspect of dynamic memory is explanatory processes (XPs) that represent stereotyped answers to events that are anomalous or unusual. Schank proposes that XPs are a critical mechanism of creativity.
Script theory is primarily intended to explain language processing and higher thinking skills. A variety of computer programs have been developed to demonstrate the theory. Schank (1991) applies his theoretical framework to storytelling and the development of intelligent tutors. Schank & Cleary (1995) describe the application of these ideas to educational software.
The classic example of Schank's theory is the restaurant script. The script has the following characteristics:
Scene 1 (Entering): S PTRANS S into restaurant, S ATTEND eyes to tables, S MBUILD where to sit, S PTRANS S to table, S MOVE S to sitting position
Scene 2 (Ordering): S PTRANS menu to S (menu already on table), S MBUILD choice of food, S MTRANS signal to waiter, waiter PTRANS to table, S MTRANS 'I want food' to waiter, waiter PTRANS to cook
Scene 3 (Eating): cook ATRANS food to waiter, waiter PTRANS food to S, S INGEST food
Scene 4 (Exiting): waiter MOVE write check, waiter PTRANS to S, waiter ATRANS check to S, S ATRANS money to waiter, S PTRANS out of restaurant
There are many variations possible on this general script having to do with different types of restaurants or procedures. For example, the script above assumes that the waiter takes the money; in some restaurants, the check is paid to a cashier. Such variations are opportunities for misunderstandings or incorrect inferences.
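The restaurant script can be sketched as a data structure to show how scripts support inference. This is an illustrative toy, not Schank's actual programs; the representation (scenes as lists of actor/act/object triples) and the function names are assumptions:

```python
# A minimal sketch of a script: scenes are ordered lists of
# (actor, primitive_act, object) triples.
RESTAURANT_SCRIPT = {
    "Entering": [("S", "PTRANS", "S into restaurant"),
                 ("S", "MBUILD", "where to sit")],
    "Ordering": [("S", "MTRANS", "'I want food' to waiter"),
                 ("waiter", "PTRANS", "to cook")],
    "Eating":   [("waiter", "PTRANS", "food to S"),
                 ("S", "INGEST", "food")],
    "Exiting":  [("S", "ATRANS", "money to waiter"),
                 ("S", "PTRANS", "out of restaurant")],
}

def infer_missing(script, observed):
    """Return the script events not explicitly mentioned in a story --
    the inferences a reader fills in from the script."""
    return [event for scene in script.values()
            for event in scene if event not in observed]

# "John went to a restaurant and ate a steak" mentions only two events;
# the script lets a reader infer that he ordered, paid, and left.
story = [("S", "PTRANS", "S into restaurant"), ("S", "INGEST", "food")]
inferred = infer_missing(RESTAURANT_SCRIPT, story)
```

A variant script (e.g., paying a cashier rather than the waiter) would change the Exiting scene, which is exactly where misunderstandings or incorrect inferences can arise.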
1. Conceptualization is defined as an act, or doing something to an object in a direction.
2. All conceptualizations can be analyzed in terms of a small number of primitive acts.
3. All memory is episodic and organized in terms of scripts.
4. Scripts allow individuals to make inferences and hence understand verbal/written discourse.
5. Higher level expectations are created by goals and plans.
Schank, R.C. (1975). Conceptual Information Processing. New York: Elsevier.
Schank, R.C. (1982a). Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge: Cambridge University Press.
Schank, R.C. (1982b). Reading and Understanding. Hillsdale, NJ: Erlbaum.
Schank, R.C. (1991). Tell Me a Story: A New Look at Real and Artificial Intelligence. New York: Simon & Schuster.
Schank, R.C. & Abelson, R. (1977). Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Erlbaum.
Schank, R.C. & Cleary, C. (1995). Engines for Education. Hillsdale, NJ: Erlbaum.
Sign Learning (E. Tolman)
Tolman's theorizing has been called purposive behaviorism and is often considered the bridge between behaviorism and cognitive theory. According to Tolman's theory of sign learning, an organism learns by pursuing signs to a goal, i.e., learning is acquired through meaningful behavior. Tolman emphasized the organized aspect of learning: "The stimuli which are allowed in are not connected by just simple one-to-one switches to the outgoing responses. Rather the incoming impulses are usually worked over and elaborated in the central control room into a tentative cognitive-like map of the environment. And it is this tentative map, indicating routes and paths and environmental relationships, which finally determines what responses, if any, the animal will finally make." (Tolman, 1948, p. 192)
Tolman (1932) proposed five types of learning: (1) approach learning, (2) escape learning, (3) avoidance learning, (4) choice-point learning, and (5) latent learning. All forms of learning depend upon means-end readiness, i.e., goal-oriented behavior, mediated by expectations, perceptions, representations, and other internal or environmental variables.
Tolman's version of behaviorism emphasized the relationships between stimuli rather than stimulus-response (Tolman, 1922). According to Tolman, a new stimulus (the sign) becomes associated with already meaningful stimuli (the significate) through a series of pairings; there was no need for reinforcement in order to establish learning. For this reason, Tolman's theory was closer to the connectionist framework of Thorndike than the drive reduction theory of Hull or other behaviorists.
Although Tolman intended his theory to apply to human learning, almost all of his research was done with rats and mazes. Tolman (1942) examined motivation towards war, but this work is not directly related to his learning theory.
Much of Tolman's research was done in the context of place learning. In the most famous experiments, one group of rats was placed at random starting locations in a maze but the food was always in the same location. Another group of rats had the food placed in different locations which always required exactly the same pattern of turns from their starting location. The group that had the food in the same location performed much better than the other group, supposedly demonstrating that they had learned the location rather than a specific sequence of turns.
1. Learning is always purposive and goal-directed.
2. Learning often involves the use of environmental factors to achieve a goal (e.g., means-ends analysis).
3. Organisms will select the shortest or easiest path to achieve a goal.
Tolman, E.C. (1932). Purposive Behavior in Animals and Men. New York: Appleton-Century-Crofts.
Tolman, E.C. (1942). Drives Towards War. New York: Appleton-Century-Crofts.
Tolman, E.C. (1948). Cognitive maps in rats and men. Psychological Review, 55, 189-208.
Situated Learning (J. Lave)
Lave argues that learning as it normally occurs is a function of the activity, context and culture in which it occurs (i.e., it is situated). This contrasts with most classroom learning activities which involve knowledge which is abstract and out of context. Social interaction is a critical component of situated learning -- learners become involved in a "community of practice" which embodies certain beliefs and behaviors to be acquired. As the beginner or newcomer moves from the periphery of this community to its center, they become more active and engaged within the culture and hence assume the role of expert or old-timer. Furthermore, situated learning is usually unintentional rather than deliberate. These ideas are what Lave & Wenger (1991) call the process of "legitimate peripheral participation."
Other researchers have further developed the theory of situated learning. Brown, Collins & Duguid (1989) emphasize the idea of cognitive apprenticeship: "Cognitive apprenticeship supports learning in a domain by enabling students to acquire, develop and use cognitive tools in authentic domain activity. Learning, both outside and inside school, advances through collaborative social interaction and the social construction of knowledge." Brown et al. also emphasize the need for a new epistemology for learning -- one that emphasizes active perception over concepts and representation. Suchman (1988) explores the situated learning framework in the context of artificial intelligence.
Situated learning has antecedents in the work of Gibson (theory of affordances) and Vygotsky (social learning). In addition, the theory of Schoenfeld on mathematical problem solving embodies some of the critical elements of the situated learning framework.
Situated learning is a general theory of knowledge acquisition. It has been applied in the context of technology-based learning activities for schools that focus on problem-solving skills (Cognition & Technology Group at Vanderbilt, 1993). McLellan (1995) provides a collection of articles that describe various perspectives on the theory.
Lave & Wenger (1991) provide an analysis of situated learning in five different settings: Yucatec midwives, native tailors, navy quartermasters, meat cutters and alcoholics. In all cases, there was a gradual acquisition of knowledge and skills as novices learned from experts in the context of everyday activities.
1. Knowledge needs to be presented in an authentic context, i.e., settings and applications that would normally involve that knowledge.
2. Learning requires social interaction and collaboration.
Soar is an architecture for human cognition expressed in the form of a production system. It was developed through the collaboration of researchers at a number of institutions, including Allen Newell, John Laird, and Paul Rosenbloom. The theory builds upon earlier efforts involving Newell such as GPS (Newell & Simon) and GOMS (Card, Moran & Newell). Like the latter model, Soar is capable of simulating actual responses and response times.
The principal element in Soar is the idea of a problem space: all cognitive acts are some form of search task. Memory is unitary and procedural; there is no distinction between procedural and declarative memory. Chunking is the primary mechanism for learning and represents the conversion of problem-solving acts into long-term memory. Chunking occurs when an impasse in the problem-solving process is resolved (i.e., when production rules are satisfied). Newell states that Soar suggests a reconstructive view of memory (cf. Bartlett).
Soar exhibits a variety of different types or levels of learning: operators (e.g., create, call), search control (e.g., operator selection, plans), declarative data (e.g., recognition/recall), and tasks (e.g., identify problem spaces, initial/goal states). Soar is capable of transfer within or across trials or tasks.
Newell (1990) has positioned Soar as the basis for a unified theory of cognition and attempts to show how it explains a wide range of past results and phenomena. For example, he provides interpretations for response time data, verbal learning tasks, reasoning tasks, mental models and skill acquisition. In addition, versions of Soar have been developed that perform as intelligent systems for configuring computer systems and formulating algorithms.
Newell (1990; pp 335-336) provides the following description of how Soar would handle a simple recognition task:
Study trial: given item, become familiar (recognition problem space)
    Operator: recognize next element
    Fails if subitem is not recognized; get an impasse
    Subgoal: learn to recognize subitem
    Assign a name (recognize item)
    Chunk is created to assign name if item is recognized
Test trial: given an item
    If chunk fires, name is assigned and item is recognized
    If chunk fails to fire, item is not recognized
The key aspect of this example is that Soar treats recognition as a problem-solving activity in which it tries to recursively identify the components of the item and reaches an impasse when it fails.
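The impasse-and-chunk cycle in the recognition example can be sketched as a toy program. This is an illustrative simplification, not the actual Soar architecture; the class and method names are invented:

```python
# Toy sketch of impasse-driven chunking: recognizing an item means
# recognizing its subitems; an unrecognized subitem is an impasse,
# resolved in a subgoal that creates a chunk naming it.
class Recognizer:
    def __init__(self):
        self.chunks = set()              # long-term recognition memory

    def study(self, item):
        """Study trial: walk the item's parts, chunking at each impasse."""
        for subitem in item:
            if subitem not in self.chunks:   # impasse: subitem unknown
                self.chunks.add(subitem)     # subgoal result -> new chunk
        self.chunks.add(item)                # chunk naming the whole item

    def recognizes(self, item):
        """Test trial: the item is recognized only if its chunk fires."""
        return item in self.chunks

r = Recognizer()
r.study(("X", "Y"))                  # item with two subitems
assert r.recognizes(("X", "Y"))      # chunk fires
assert not r.recognizes(("X", "Z"))  # no chunk for this item
```

The point of the sketch is that learning (chunk creation) happens only where problem solving hits an impasse, matching the principle that learning occurs at the rate at which impasses occur.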
As a theory of learning, Soar specifies (or confirms) a number of principles:
1. All learning arises from goal-directed activities; specific knowledge is acquired in order to satisfy goals (needs)
2. Learning occurs at a constant rate -- the rate at which impasses occur while problem solving (average of 0.5 chunk/second)
3. Transfer occurs by identical elements and is highly specific (cf. Thorndike). Transfer can be general if the productions are abstract.
4. Rehearsal helps learning provided it involves active processing (i.e., creation of chunks)
5. Chunking is the basis for the organization of memory
Laird, J.E., Newell, A., & P.S. Rosenbloom. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Social Development Theory (L. Vygotsky)
The major theme of Vygotsky's theoretical framework is that social interaction plays a fundamental role in the development of cognition. Vygotsky (1978) states: "Every function in the child's cultural development appears twice: first, on the social level, and later, on the individual level; first, between people (interpsychological) and then inside the child (intrapsychological). This applies equally to voluntary attention, to logical memory, and to the formation of concepts. All the higher functions originate as actual relationships between individuals." (p. 57)
A second aspect of Vygotsky's theory is the idea that the potential for cognitive development depends upon the "zone of proximal development" (ZPD): the distance between what children can attain on their own and what they can achieve when engaged in social behavior. Full development of the ZPD depends upon full social interaction. The range of skill that can be developed with adult guidance or peer collaboration exceeds what can be attained alone.
Vygotsky's theory was an attempt to explain consciousness as the end product of socialization. For example, in the learning of language, our first utterances with peers or adults are for the purpose of communication but once mastered they become internalized and allow "inner speech".
Vygotsky's theory is complementary to the work of Bandura on social learning and a key component of situated learning theory. Because Vygotsky's focus was on cognitive development, it is interesting to compare his views with those of Bruner and Piaget.
This is a general theory of cognitive development. Most of the original work was done in the context of language learning in children (Vygotsky, 1962), although later applications of the framework have been broader (see Wertsch, 1985).
Vygotsky (1978, p. 56) provides the example of pointing a finger. Initially, this behavior begins as a meaningless grasping motion; however, as people react to the gesture, it becomes a movement that has meaning. In particular, the pointing gesture represents an interpersonal connection between individuals.
1. Cognitive development is limited to a certain range at any given age.
2. Full cognitive development requires social interaction.
Vygotsky, L.S. (1962). Thought and Language. Cambridge, MA: MIT Press.
Vygotsky, L.S. (1978). Mind in Society. Cambridge, MA: Harvard University Press.
The social learning theory of Bandura emphasizes the importance of observing and modeling the behaviors, attitudes, and emotional reactions of others. Bandura (1977) states: "Learning would be exceedingly laborious, not to mention hazardous, if people had to rely solely on the effects of their own actions to inform them what to do. Fortunately, most human behavior is learned observationally through modeling: from observing others one forms an idea of how new behaviors are performed, and on later occasions this coded information serves as a guide for action." (p. 22). Social learning theory explains human behavior in terms of continuous reciprocal interaction between cognitive, behavioral, and environmental influences. The component processes underlying observational learning are: (1) Attention, including modeled events (distinctiveness, affective valence, complexity, prevalence, functional value) and observer characteristics (sensory capacities, arousal level, perceptual set, past reinforcement); (2) Retention, including symbolic coding, cognitive organization, symbolic rehearsal, and motor rehearsal; (3) Motor Reproduction, including physical capabilities, self-observation of reproduction, and accuracy of feedback; and (4) Motivation, including external, vicarious and self reinforcement.
Because it encompasses attention, memory and motivation, social learning theory spans both cognitive and behavioral frameworks. Bandura's theory improves upon the strictly behavioral interpretation of modeling provided by Miller & Dollard (1941). Bandura's work is related to the theories of Vygotsky and Lave, which also emphasize the central role of social learning.
Social learning theory has been applied extensively to the understanding of aggression (Bandura, 1973) and psychological disorders, particularly in the context of behavior modification (Bandura, 1969). It is also the theoretical foundation for the technique of behavior modeling which is widely used in training programs. In recent years, Bandura has focused his work on the concept of self-efficacy in a variety of contexts (e.g., Bandura, 1997).
The most common (and pervasive) examples of social learning situations are television commercials. Commercials suggest that drinking a certain beverage or using a particular hair shampoo will make us popular and win the admiration of attractive people. Depending upon the component processes involved (such as attention or motivation), we may model the behavior shown in the commercial and buy the product being advertised.
1. The highest level of observational learning is achieved by first organizing and rehearsing the modeled behavior symbolically and then enacting it overtly. Coding modeled behavior into words, labels or images results in better retention than simply observing.
2. Individuals are more likely to adopt a modeled behavior if it results in outcomes they value.
3. Individuals are more likely to adopt a modeled behavior if the model is similar to the observer and has admired status and the behavior has functional value.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman.
Bandura, A. (1986). Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1973). Aggression: A Social Learning Analysis. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1977). Social Learning Theory. New York: General Learning Press.
Bandura, A. (1969). Principles of Behavior Modification. New York: Holt, Rinehart & Winston.
Bandura, A. & Walters, R. (1963). Social Learning and Personality Development. New York: Holt, Rinehart & Winston.
Miller, N. & Dollard, J. (1941). Social Learning and Imitation. New Haven, CT: Yale University Press.
Aptitude-Treatment Interaction (L. Cronbach & R. Snow)
Aptitude-Treatment Interaction (ATI) is the concept that some instructional strategies (treatments) are more or less effective for particular individuals depending upon their specific abilities. As a theoretical framework, ATI suggests that optimal learning results when the instruction is exactly matched to the aptitudes of the learner. It is consistent with theories of intelligence (e.g., Gardner, Guilford, Sternberg) that suggest a multidimensional view of ability.
According to Snow (1989), the aim of ATI research is to predict educational outcomes from combinations of aptitudes and treatments. He summarizes the main conclusions of Cronbach & Snow (1977) as: (1) aptitude-treatment interactions are very common in education, (2) many ATI combinations are complex and difficult to demonstrate clearly, and (3) no particular ATI effect is sufficiently understood to be the basis for instructional practice. Furthermore, Snow identifies the lack of attention to the social aspects of learning as a serious deficiency of ATI research. He states: "Learning style differences can be linked to relatively stable person or aptitude variables, but they also vary within individuals as a function of task and situation variables." (p. 51)
ATI research covers a broad range of aptitudes and instructional variables; it has been used to explore new teaching strategies and curriculum design, especially in mathematics and reading.
Snow (1989) states that the best supported ATI effect involves treatments that differ in the structure and completeness of instruction and high or low "general" ability measures. Highly structured treatments (e.g., high level of external control, well-defined sequences/components) seem to help students with low ability but hinder those with high abilities (relative to low structure treatments).
1. Aptitudes and instructional treatments interact in complex patterns and are influenced by task and situation variables.
2. Highly structured instructional environments tend to be most successful with students of lower ability; conversely, low structure environments may result in better learning for high ability students.
3. Anxious or conforming students tend to learn better in highly structured instructional environments; non-anxious or independent students tend to prefer low structure.
Cronbach, L. & Snow, R. (1977). Aptitudes and Instructional Methods: A Handbook for Research on Interactions. New York: Irvington.
Snow, R. (1989). Aptitude-Treatment Interaction as a framework for research on individual differences in learning. In P. Ackerman, R.J. Sternberg, & R. Glaser (ed.), Learning and Individual Differences. New York: W.H. Freeman.
Snow, R., Federico, P., & Montague, W. (1980). Aptitude, Learning, and Instruction, Vols 1 & 2. Hillsdale, NJ: Erlbaum.
Stimulus sampling theory (SST), first proposed by Estes in 1950, was an attempt to develop a statistical explanation for learning phenomena. The theory suggested that a particular stimulus-response association is learned on a single trial; however, the overall learning process is a continuous one consisting of the accumulation of discrete S-R pairings. On any given learning trial, a number of different responses can be made but only the portion that is effective (i.e., rewarded) forms associations. Thus, learned responses are a sample of all possible stimulus elements experienced. Variations (random or systematic) in stimulus elements are due to environmental factors or changes in the organism.
A key feature of SST was the probability of a certain stimulus occurring in any trial and of being paired with a given response. SST resulted in many forms of mathematical models, principally linear equations, that predicted learning curves. Indeed, SST was able to account for a wide variety of learning paradigms including: free recall, paired-associates, stimulus generalization, concept identification, preferential choice, and operant conditioning. SST also formed the basis for mathematical models of memory (e.g., Norman, 1970) and instruction (e.g., Atkinson).
Most of the research on SST was conducted using probability or verbal learning experiments, limiting its application to other types of learning. Furthermore, SST did not really take into account the cognitive strategies used by participants in these experiments (such as hypothesis testing or the "gambler's fallacy"), which could affect the results.
The SST explanation of forgetting (as well as spontaneous recovery) is as follows. Over time, different stimulus elements become available or unavailable for sampling due to external or internal variations. Hence, some of the stimuli that have been conditioned in S-R pairs (i.e., memory traces) may not be available at the time we wish to make use of the pairing. On the other hand, something we have temporarily forgotten may be remembered when the relevant stimuli happen to be included in the sample. The stronger the memory (i.e., the more pairings created), the higher the likelihood that relevant stimuli are included in the current sample.
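The kind of linear-equation learning curve associated with SST can be sketched briefly. This is a generic linear-operator model in the Estes tradition, offered as an illustration rather than any specific model from the SST literature; the parameter `theta` stands for the proportion of sampled stimulus elements conditioned on each rewarded trial:

```python
# Linear-operator learning curve: on each rewarded trial a fraction
# theta of the still-unconditioned stimulus elements becomes
# conditioned, so p(correct) grows as p_{n+1} = p_n + theta * (1 - p_n).
def learning_curve(p0, theta, trials):
    ps, p = [p0], p0
    for _ in range(trials):
        p = p + theta * (1 - p)   # conditioned fraction accumulates
        ps.append(p)
    return ps

curve = learning_curve(p0=0.1, theta=0.2, trials=20)
# Closed form of the same recurrence: p_n = 1 - (1 - p0) * (1 - theta)**n
```

The curve rises steeply at first and flattens as it approaches 1.0, the classic negatively accelerated learning curve such models were built to predict.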
1. While learning of a particular instance is all or none, the overall learning process is gradual and cumulative.
2. Fluctuations in environmental and internal factors will cause variability in learning progress.
Estes, W.K. (1950). Toward a statistical theory of learning. Psychological Review, 57, 94-107.
Estes, W.K. (1970). Learning Theory and Mental Development. New York: Academic Press.
Norman, D. (1970). Models of Memory. New York: Academic Press.
According to structural learning theory, what is learned are rules which consist of a domain, range, and procedure. There may be alternative rule sets for any given class of tasks. Problem solving may be facilitated when higher order rules are used, i.e., rules that generate new rules. Higher order rules account for creative behavior (unanticipated outcomes) as well as the ability to solve complex problems by making it possible to generate (learn) new rules.
Unlike information processing theories which often assume more complex control mechanisms and production rules, structural learning theory postulates a single, goal-switching control mechanism with minimal assumptions about the processor and allows more complex rule structures. Structural learning theory also assumes that "working memory" holds both rules and data (i.e., rules which do not act on other rules); the memory load associated with a task depends upon the rule(s) used for the task at hand.
Structural analysis is a methodology for identifying the rules to be learned for a given topic or class of tasks and breaking them down into their atomic components. The major steps in structural analysis are: (1) select a representative sample of problems, (2) identify a solution rule for each problem, (3) convert each solution rule into a higher order problem whose solution is that rule, (4) identify a higher order solution rule for solving the new problems, (5) eliminate redundant solution rules from the rule set (i.e., those which can be derived from other rules), and (6) notice that steps 3 and 4 are essentially the same as steps 1 and 2, and continue the process iteratively with each newly-identified set of solution rules. The result of repeatedly identifying higher order rules, and eliminating redundant rules, is a succession of rule sets, each consisting of rules which are individually simpler but collectively more powerful than the ones before.
Structural learning prescribes teaching the simplest solution path for a problem and then teaching more complex paths until the entire rule has been mastered. The theory proposes that we should teach as many higher-order rules as possible as replacements for lower order rules. The theory also suggests a strategy for individualizing instruction by analyzing which rules a student has/has not mastered and teaching only the rules, or portions thereof, that have not been mastered.
Structural learning theory has been applied extensively to mathematics and also provides an interpretation of Piagetian theory (Scandura & Scandura, 1980). The primary focus of the theory is problem solving instruction (Scandura, 1977). Scandura has applied the theoretical framework to the development of authoring tools and software engineering.
Here is an example of structural learning theory in the context of subtraction provided by Scandura (1977):
1. The first step involves selecting a representative sample of problems such as 9-5, 248-13, or 801-302.
2. The second step is to identify the rules for solving each of the selected problems. To achieve this step, it is necessary to determine the minimal capabilities of the students (e.g., they can recognize the digits 0-9, the minus sign, columns and rows). Then the detailed operations involved in solving each of the representative problems must be worked out in terms of the minimum capabilities of the students. For example, one subtraction rule students might learn is the "borrowing" procedure, which specifies that if the top number in a column is less than the bottom number, the top number in the column to the left must be made smaller by 1.
3. The next step is to identify any higher order rules and eliminate any lower order rules they subsume. In the case of subtraction , we could replace a number of partial rules with a single rule for borrowing that covers all cases.
4. The last step is to test and refine the resulting rule(s) using new problems and extend the rule set if necessary so that it accounts for all problems in the domain. In the case of subtraction, we would use problems with varying combinations of columns and perhaps different bases.
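The central idea of the steps above, that a rule has a domain and a procedure and that a higher-order rule takes rules as input and generates new rules, can be sketched in code. This is a minimal illustration of the concept, not Scandura's notation; the rule names and the composition example are invented:

```python
# A rule pairs a domain with a procedure; a higher-order rule operates
# on rules themselves, so applying it generates a new lower-order rule.
def make_rule(domain, procedure):
    return {"domain": domain, "procedure": procedure}

add_ten = make_rule(domain=int, procedure=lambda x: x + 10)
negate  = make_rule(domain=int, procedure=lambda x: -x)

def compose(rule_a, rule_b):
    """Higher-order rule: given two rules, derive the rule that applies
    rule_a and then rule_b. The learner who has this higher-order rule
    need not be taught each composed rule separately."""
    return make_rule(rule_a["domain"],
                     lambda x: rule_b["procedure"](rule_a["procedure"](x)))

add_ten_then_negate = compose(add_ten, negate)
assert add_ten_then_negate["procedure"](5) == -15
```

This is why the theory prescribes teaching higher-order rules where possible: one such rule can replace, by generation, a whole family of lower-order rules.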
1. Whenever possible, teach higher order rules that can be used to derive lower order rules.
2. Teach the simplest solution path first and then teach more complex paths or rule sets.
3. Rules must be composed of the minimum capabilities possessed by the learners.
Scandura, J.M. (1970). The role of rules in behavior: Toward an operational definition of what (rule) is learned. Psychological Review, 77, 516-533.
Scandura, J.M. (1973). Structural Learning I: Theory and Research. London: Gordon & Breach.
Scandura, J.M. (1976). Structural Learning II: Issues and Approaches. London: Gordon & Breach.
Scandura, J.M. (1977). Problem Solving: A Structural/Process Approach with Instructional Applications. NY: Academic Press.
Scandura, J.M. & Scandura, A. (1980). Structural Learning and Concrete Operations: An Approach to Piagetian Conservation. NY: Praeger.
Scandura, J.M. (1984). Structural (cognitive task) analysis: A method for analyzing content. Part II: Precision, objectivity, and systematization. Journal of Structural Learning, 8, 1-28.
Ausubel's theory is concerned with how individuals learn large amounts of meaningful material from verbal/textual presentations in a school setting (in contrast to theories developed in the context of laboratory experiments). According to Ausubel, learning is based upon the kinds of superordinate, representational, and combinatorial processes that occur during the reception of information. A primary process in learning is subsumption in which new material is related to relevant ideas in the existing cognitive structure on a substantive, non-verbatim basis. Cognitive structures represent the residue of all learning experiences; forgetting occurs because certain details get integrated and lose their individual identity.
A major instructional mechanism proposed by Ausubel is the use of advance organizers:
"These organizers are introduced in advance of learning itself, and are also presented at a higher level of abstraction, generality, and inclusiveness; and since the substantive content of a given organizer or series of organizers is selected on the basis of its suitability for explaining, integrating, and interrelating the material they precede, this strategy simultaneously satisfies the substantive as well as the programming criteria for enhancing the organization strength of cognitive structure." (1963, p. 81).
Ausubel emphasizes that advance organizers are different from overviews and summaries which simply emphasize key ideas and are presented at the same level of abstraction and generality as the rest of the material. Organizers act as a subsuming bridge between new learning material and existing related ideas.
Ausubel's theory has commonalities with Gestalt theories and those that involve schema (e.g., Bartlett) as a central principle. There are also similarities with Bruner's "spiral learning" model, although Ausubel emphasizes that subsumption involves reorganization of existing cognitive structures, not the development of new structures as constructivist theories suggest. Ausubel was apparently influenced by the work of Piaget on cognitive development.
Ausubel clearly indicates that his theory applies only to reception (expository) learning in school settings. He distinguishes reception learning from rote and discovery learning; the former because it doesn't involve subsumption (i.e., meaningful materials) and the latter because the learner must discover information through problem solving. A large number of studies have been conducted on the effects of advance organizers in learning (see Ausubel, 1968, 1978).
Ausubel (1963, p. 80) cites Boyd's textbook of pathology as an example of progressive differentiation because the book presents information according to general processes (e.g., inflammation, degeneration) rather than by describing organ systems in isolation. He also cites the Physical Science Study Committee curriculum, which organizes material according to the major ideas of physics instead of a piece-meal discussion of principles or phenomena (p. 78).
1. The most general ideas of a subject should be presented first and then progressively differentiated in terms of detail and specificity.
2. Instructional materials should attempt to integrate new material with previously presented information through comparisons and cross-referencing of new and old ideas.
Ausubel, D. (1963). The Psychology of Meaningful Verbal Learning. New York: Grune & Stratton.
Ausubel, D. (1978). In defense of advance organizers: A reply to the critics. Review of Educational Research, 48, 251-257.
Ausubel, D., Novak, J., & Hanesian, H. (1978). Educational Psychology: A Cognitive View (2nd Ed.). New York: Holt, Rinehart & Winston.
Symbol Systems (G. Salomon)
The symbol systems theory developed by Salomon is intended to explain the effects of media on learning. Salomon (1977) states: "To summarize, the symbol systems of media affect the acquisition of knowledge in a number of ways. First, they highlight different aspects of content. Second, they vary with respect to ease of recoding. Third, specific coding elements can save the learner from difficult mental elaborations by overtly supplanting or short-circuiting specific elaboration. Fourth, symbol systems differ with respect to how much processing they demand or allow. Fifth, symbol systems differ with respect to the kinds of mental processes they call on for recoding and elaboration. Thus, symbol systems partly determine who will acquire how much knowledge from what kinds of messages." (pp. 226-227)
According to Salomon, each medium is capable of conveying content via certain inherent symbol systems. For example, Salomon suggests that television requires less mental processing than reading and that the meanings secured from viewing television tend to be less elaborate than those secured from reading (i.e., different levels of processing are involved). However, the meaning extracted from a given medium depends upon the learner. Thus, a person may acquire information about a subject they are familiar with equally well from different media but be significantly influenced by different media for novel information.
Salomon (1981) focuses on the reciprocal nature of instructional communications, the instructional setting, and the learner. Salomon argues that schema play a major role in determining how messages are perceived -- in terms of creating an anticipatory bias that influences what information is selected and how it is interpreted. Furthermore, media create new schema which affect subsequent cognitive processing.
Salomon's theory is supported primarily by research conducted with film and television (especially "Sesame Street"). More recent work has extended the framework to computers (e.g., Salomon, Perkins & Globerson, 1991).
One of the critical concepts of Salomon's theory is that the effectiveness of a medium depends upon its match with the learner, the context and the task. Salomon (1977, p. 112) explains: "Learning can be facilitated to the extent that the activated skills are relevant to the demands of the learning task. Thus, when the task calls for some act of analytic comparison and the coded message activates imagery instead, the learning may be debilitated. For effective instructional communication, a match needs to be established between the cognitive demands of a learning task, the skills that are required by the codes of the message, and the learner's level of mastery of these skills."
1. The symbolic coding elements of particular media require different mental transformations and hence affect the mastery of specific skills.
2. The level of knowledge and skill that an individual possesses will affect the impact of specific media sequences.
3. The nature of the learning/information processing tasks can affect the impact of specific media sequences.
4. The social context of media presentations can influence what message is perceived.
5. There is a reciprocal relationship between media and learner; each can influence the other.
Salomon, G. (1979). Interaction of Media, Cognition, and Learning. San Francisco: Jossey-Bass.
Salomon, G. (1981). Communication and Education. Beverly Hills, CA: Sage.
Salomon, G., Perkins, D., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(4), 2-9.
Attribution theory is concerned with how individuals interpret events and how this relates to their thinking and behavior. Heider (1958) was the first to propose a psychological theory of attribution, but Weiner and colleagues (e.g., Jones et al., 1972; Weiner, 1974, 1986) developed a theoretical framework that has become a major research paradigm of social psychology. Attribution theory assumes that people try to determine why people do what they do, i.e., attribute causes to behavior. A person seeking to understand why another person did something may attribute one or more causes to that behavior. A three-stage process underlies an attribution: (1) the person must perceive or observe the behavior, (2) then the person must believe that the behavior was intentionally performed, and (3) then the person must determine if they believe the other person was forced to perform the behavior (in which case the cause is attributed to the situation) or not (in which case the cause is attributed to the other person).
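The three-stage process can be rendered as a simple decision sketch; the function name and return labels below are illustrative, not part of the theory's formal vocabulary:

```python
def attribute_cause(observed, intentional, forced):
    """Toy sketch of the three-stage attribution process."""
    if not observed:
        return "no attribution"   # stage 1: the behavior must be perceived
    if not intentional:
        return "no attribution"   # stage 2: the behavior must be seen as deliberate
    # stage 3: forced behavior -> situational cause; free behavior -> personal cause
    return "situation" if forced else "person"

print(attribute_cause(observed=True, intentional=True, forced=False))  # person
```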
Weiner focused his attribution theory on achievement (Weiner, 1974). He identified ability, effort, task difficulty, and luck as the most important factors affecting attributions for achievement. Attributions are classified along three causal dimensions: locus of control, stability, and controllability. The locus of control dimension has two poles: internal versus external locus of control. The stability dimension captures whether causes change over time or not. For instance, ability can be classified as a stable, internal cause, and effort as an unstable, internal cause. Controllability contrasts causes one can control, such as skill or efficacy, with causes one cannot control, such as aptitude, mood, others' actions, and luck.
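Weiner's classification of the four achievement factors on the three causal dimensions can be laid out as a small lookup table (a hypothetical encoding of the standard classification, not Weiner's own notation):

```python
# The four achievement factors classified on Weiner's three causal dimensions.
CAUSAL_DIMENSIONS = {
    "ability":         {"locus": "internal", "stability": "stable",   "controllable": False},
    "effort":          {"locus": "internal", "stability": "unstable", "controllable": True},
    "task difficulty": {"locus": "external", "stability": "stable",   "controllable": False},
    "luck":            {"locus": "external", "stability": "unstable", "controllable": False},
}

print(CAUSAL_DIMENSIONS["effort"]["locus"])  # internal
```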
Attribution theory is closely associated with the concept of motivation. It also relates to the work on scripts and inferencing done by Schank.
Weiner’s theory has been widely applied in education, law, clinical psychology, and the mental health domain. There is a strong relationship between self-concept and achievement. Weiner (1980) states: "Causal attributions determine affective reactions to success and failure. For example, one is not likely to experience pride in success, or feelings of competence, when receiving an ‘A’ from a teacher who gives only that grade, or when defeating a tennis player who always loses...On the other hand, an ‘A’ from a teacher who gives few high grades or a victory over a highly rated tennis player following a great deal of practice generates great positive affect." (p. 362). Students with higher ratings of self-esteem and higher school achievement tend to attribute success to internal, stable, uncontrollable factors such as ability, while they attribute failure to either internal, unstable, controllable factors such as effort, or external, uncontrollable factors such as task difficulty. For example, students who experience repeated failures in reading are likely to see themselves as less competent in reading. This self-perception is reflected in children's expectations of success on reading tasks and in their explanations of success or failure in reading. Similarly, students with learning disabilities seem less likely than non-disabled peers to attribute failure to effort, an unstable, controllable factor, and more likely to attribute failure to ability, a stable, uncontrollable factor.
Lewis & Daltroy (1990) discuss applications of attribution theory to health care. An interesting example of attribution theory applied to career development is provided by Daly (1996) who examined the attributions that employees held as to why they failed to receive promotions.
Attribution theory has been used to explain the difference in motivation between high and low achievers. According to attribution theory, high achievers will approach rather than avoid tasks related to succeeding because they believe success is due to high ability and effort, of which they are confident. Failure is thought to be caused by bad luck or a poor exam, i.e., not their fault. Thus, failure doesn't affect their self-esteem, but success builds pride and confidence. On the other hand, low achievers avoid success-related chores because they tend to (a) doubt their ability and/or (b) assume success is related to luck, to "who you know," or to other factors beyond their control. Thus, even when successful, it isn't as rewarding to the low achiever because he/she doesn't feel responsible, i.e., it doesn't increase his/her pride and confidence.
1. Attribution is a three stage process: (1) behavior is observed, (2) behavior is determined to be deliberate, and (3) behavior is attributed to internal or external causes.
2. Achievement can be attributed to (1) effort, (2) ability, (3) level of task difficulty, or (4) luck.
3. Causal dimensions of behavior are (1) locus of control, (2) stability, and (3) controllability.
Daly, D. (1996). Attribution Theory and the Glass Ceiling: Career Development Among Federal Employees. Public Administration & Management: An Interactive Journal. [http://www.hbg.psu.edu/faculty/jxr11/glass1sp.html]
Heider, F. (1958). The Psychology of Interpersonal Relations. New York: Wiley.
Jones, E.E., Kannouse, D.E., Kelley, H.H., Nisbett, R.E., Valins, S., & Weiner, B. (Eds.). (1972). Attribution: Perceiving the Causes of Behavior. Morristown, NJ: General Learning Press.
Harvey, J.H.& Weary, G. (1985). Attribution: Basic Issues and Applications, Academic Press, San Diego.
Lewis, F.M. & Daltroy, L.H. (1990). How causal explanations influence health behavior: Attribution theory. In K. Glanz, F.M. Lewis, & B.K. Rimer (Eds.), Health Education and Health Behavior: Theory, Research, and Practice. San Francisco, CA: Jossey-Bass.
Weiner, B. (1974). Achievement Motivation and Attribution Theory. Morristown, NJ: General Learning Press.
Weiner, B. (1980). Human Motivation. NY: Holt, Rinehart & Winston.
Weiner, B. (1986). An attributional theory of motivation and emotion. New York: Springer-Verlag.
Thanks to John Cherry for suggesting the inclusion of Weiner in the TIP database.
Cognitive Dissonance (L. Festinger)
According to cognitive dissonance theory, there is a tendency for individuals to seek consistency among their cognitions (i.e., beliefs, opinions). When there is an inconsistency between attitudes or behaviors (dissonance), something must change to eliminate the dissonance. In the case of a discrepancy between attitudes and behavior, it is most likely that the attitude will change to accommodate the behavior.
Two factors affect the strength of the dissonance: the number of dissonant beliefs, and the importance attached to each belief. There are three ways to eliminate dissonance: (1) reduce the importance of the dissonant beliefs, (2) add more consonant beliefs that outweigh the dissonant beliefs, or (3) change the dissonant beliefs so that they are no longer inconsistent.
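Festinger gives no numeric formula, but the two strength factors and the first two reduction routes can be mimicked with a toy weighted model; the weights and the ratio below are entirely illustrative assumptions, not part of the theory:

```python
def dissonance(dissonant, consonant):
    """Toy magnitude sketch: each list holds importance weights (0-1)
    for dissonant/consonant beliefs; strength is the weight of the
    dissonant beliefs relative to all relevant beliefs."""
    d, c = sum(dissonant), sum(consonant)
    return d / (d + c) if d + c else 0.0

before = dissonance([0.8, 0.6], [0.4])            # two important dissonant beliefs
route1 = dissonance([0.2, 0.6], [0.4])            # reduce importance of a dissonant belief
route2 = dissonance([0.8, 0.6], [0.4, 0.9, 0.7])  # add more consonant beliefs
assert route1 < before and route2 < before        # both routes lower dissonance
```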
Dissonance occurs most often in situations where an individual must choose between two incompatible beliefs or actions. The greatest dissonance is created when the two alternatives are equally attractive. Furthermore, attitude change is more likely in the direction of less incentive since this results in lower dissonance. In this respect, dissonance theory is contradictory to most behavioral theories which would predict greater attitude change with increased incentive (i.e., reinforcement).
Dissonance theory applies to all situations involving attitude formation and change. It is especially relevant to decision-making and problem-solving.
Consider someone who buys an expensive car but discovers that it is not comfortable on long drives. Dissonance exists between their belief that they have bought a good car and their belief that a good car should be comfortable. Dissonance could be eliminated by deciding that it does not matter, since the car is mainly used for short trips (reducing the importance of the dissonant belief), or by focusing on the car's strengths, such as safety, appearance, and handling (thereby adding more consonant beliefs). The dissonance could also be eliminated by getting rid of the car, but this behavior is a lot harder to achieve than changing beliefs.
1. Dissonance results when an individual must choose between attitudes and behaviors that are contradictory.
2. Dissonance can be eliminated by reducing the importance of the conflicting beliefs, acquiring new beliefs that change the balance, or removing the conflicting attitude or behavior.
Brehm, J. & Cohen, A. (1962). Explorations in Cognitive Dissonance. New York: Wiley.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
Cognitive Flexibility Theory (R. Spiro, P. Feltovitch & R. Coulson)
Cognitive flexibility theory focuses on the nature of learning in complex and ill-structured domains. Spiro & Jehng (1990, p. 165) state: "By cognitive flexibility, we mean the ability to spontaneously restructure one's knowledge, in many ways, in adaptive response to radically changing situational demands...This is a function of both the way knowledge is represented (e.g., along multiple rather than single conceptual dimensions) and the processes that operate on those mental representations (e.g., processes of schema assembly rather than intact schema retrieval)."
The theory is largely concerned with transfer of knowledge and skills beyond their initial learning situation. For this reason, emphasis is placed upon the presentation of information from multiple perspectives and use of many case studies that present diverse examples. The theory also asserts that effective learning is context-dependent, so instruction needs to be very specific. In addition, the theory stresses the importance of constructed knowledge; learners must be given an opportunity to develop their own representations of information in order to properly learn.
Cognitive flexibility theory builds upon other constructivist theories (e.g., Bruner, Ausubel, Piaget) and is related to the work of Salomon in terms of media and learning interaction.
Cognitive flexibility theory is especially formulated to support the use of interactive technology (e.g., videodisc, hypertext). Its primary applications have been literary comprehension, history, biology and medicine.
Jonassen, Ambruso & Olesen (1992) describe an application of cognitive flexibility theory to the design of a hypertext program on transfusion medicine. The program provides a number of different clinical cases which students must diagnose and treat using various sources of information available (including advice from experts). The learning environment presents multiple perspectives on the content, is complex and ill-defined, and emphasizes the construction of knowledge by the learner.
1. Learning activities must provide multiple representations of content.
2. Instructional materials should avoid oversimplifying the content domain and support context-dependent knowledge.
3. Instruction should be case-based and emphasize knowledge construction, not transmission of information.
4. Knowledge sources should be highly interconnected rather than compartmentalized.
Jonassen, D., Ambruso, D., & Olesen, J. (1992). Designing hypertext on transfusion medicine using cognitive flexibility theory. Journal of Educational Multimedia and Hypermedia, 1(3), 309-322.
Spiro, R.J., Coulson, R.L., Feltovich, P.J., & Anderson, D. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. In V. Patel (ed.), Proceedings of the 10th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum. [available at http://www.ilt.columbia.edu/ilt/papers/Spiro.html]
Spiro, R.J., Feltovich, P.J., Jacobson, M.J., & Coulson, R.L. (1992). Cognitive flexibility, constructivism and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T. Duffy & D. Jonassen (Eds.), Constructivism and the Technology of Instruction. Hillsdale, NJ: Erlbaum.
Spiro, R.J. & Jehng, J. (1990). Cognitive flexibility and hypertext: Theory and technology for the non-linear and multidimensional traversal of complex subject matter. In D. Nix & R. Spiro (Eds.), Cognition, Education, and Multimedia. Hillsdale, NJ: Erlbaum.